1. Discussion
Read: Chapter 4, "Organizational Learning Theories and Technology" (Information Technology and Organizational Learning)
This week we focus on the social and organizational issues that accompany technological change, with the goal of better understanding why changes occur. Beginning on page 96 of the Information Technology and Organizational Learning text, the author presents three phases of maturation with technology; a summary table of these phases appears on page 99. Using what you understand from these phases, consider which phase your current (or a previous) organization is in and what challenges it faces in moving to the next phase.
Your response should be 250-300 words.
2. Article Review
Read: Chapters 7 and 8, Interaction Design 5th Edition
Big Data has undoubtedly changed the way business is conducted. In many industries, when a product is created, especially a website, mobile application, or enterprise system, there is a desire to understand how data is presented in that environment. Data has in fact become a usability measurement, whether under the scope of information, system, or service quality. For this assignment, you'll find ONE SCHOLARLY ARTICLE that discusses data in the context of a usability evaluation. Your article should review data from the perspective of an industry vertical (e.g., healthcare, education, supply chain).
You'll submit a three-page synopsis of the article that addresses the following items:
- Article Title
- Article Subject Area
- Type of Data Set Used/Evaluated
- Synopsis of Article
- Three business takeaways on how data impacts the industry with respect to usability
- Source Citation
Interaction Design continues to be the standard textbook in the field. Seasoned practitioners will find it useful when they need a reference to best practices or to explain a concept to a colleague. Students can turn to
Interaction Design for an easy-to-understand description of the basics or in-depth how-tos. From personas and
disabilities to the design of UX organizations and working in Agile, if you’re going to pick one book to bring into
the office, it should be this one.
Jofish Kaye, Principal Research Scientist, Mozilla, USA
This is the perfect textbook for a wide range of user interface/user experience design courses. For an undergraduate, it provides a variety of compelling examples that illustrate best practice in interaction design. For a graduate student, it provides a foundational overview of advanced topics. This book is also essential for the professional who wants to know the state of the art in interaction design. I use this textbook and recommend it widely.
Rosa I. Arriaga, Ph.D., Senior Research Scientist, School of Interactive Computing
Georgia Institute of Technology, USA
The Interaction Design book has immensely contributed to a growing Namibian HCI skilled community over
the last decade. Exposing students, academics and practitioners to the basic principles and theories as well as
most recent trends and technologies, with global and local case studies, in the latest edition, allows for reflective
applications within very specific contexts. This book remains our number one reference in the education of future
generations of interaction designers in Namibia, promoting the creation of thoughtful user experiences for responsible citizens.
Heike Winschiers-Theophilus, Professor, Faculty of Computing and Informatics,
Namibia University of Science and Technology, Africa
Throughout my teaching of user experience and interaction design, the book by Rogers, Preece and Sharp has
been an absolute cornerstone textbook for students. The authors bring together their own wealth of knowledge of
academic HCI with a deep understanding of industry practice to provide what must be the most comprehensive
introduction to the key areas of interaction design and user experience work, now an established field of practice. I
put this book in the “essential reading” section of many of the reading lists I give to students.
Simon Attfield, Associate Professor in Human Centred Technology, Middlesex University, UK
Interaction design has gone through tremendous changes in the last few years—for example, the rising importance of "big" data streams to design, and the growing prevalence of everyday ubiquitous computing issues of sensing and blending gracefully and ethically into people's daily lives. This is an important and timely update to a text that has long been considered the gold standard in our field. I'm looking forward to using it with my students to help
prepare them for the design challenges they will face in today’s industrial practice.
Katherine Isbister, Professor, Computational Media, University of California Santa Cruz, USA
More than ever, designing effective human-computer interactions is crucial for modern technological systems.
As digital devices become smaller, faster and smarter, the interface and interaction challenges become ever more
complex. Vast quantities of data are often accessed on handheld screens, or no screens at all through voice com-
mands; and AI systems have interfaces that “bite back” with sophisticated dialogue structures. What are the best
interaction metaphors for these technologies? What are the best tools for creating interfaces that are enjoyable and
universally accessible? How do we ensure emerging technologies remain relevant and respectful of human values?
In this book, you’ll find detailed analysis of these questions and much more. (It is a valuable resource for both the
mature student and the reflective professional.)
Frank Vetere, Professor of Interaction Design, School of Computing and Information Systems,
University of Melbourne, Australia
This is at the top of my recommended reading list for undergraduate and master’s students as well as professionals
looking to change career paths. Core issues to interaction design are brought to life through compelling vignettes
and contemporary case examples from leading experts. What has long been a comprehensive resource for interaction design now incorporates timely topics in computing, such as data at scale, artificial intelligence, and ethics,
making it essential reading for anyone entering the field of interaction design.
Anne-Marie Piper, PhD, Associate Professor, Departments of Communication Studies,
Electrical Engineering and Computer Science, Northwestern University, USA
I have been using Interaction Design as a textbook since its first edition for both my undergraduate and graduate
introductory HCI courses. This is a must-read seminal book which provides a thorough coverage of the discipline
of HCI and the practice of user-centered design. The fifth edition lives up to its phenomenal reputation by including updated content on the process of interaction design, the practice of interaction design (e.g., technical debt in
UX, Lean UX), design ethics, new types of interfaces, etc. I always recommend Interaction Design to students and
practitioners who want to gain a comprehensive overview of the fields of HCI and UX.
Olivier St-Cyr, Assistant Professor, Teaching Stream, University of Toronto, Canada
Interaction design is a practice that spans many domains. The authors acknowledge this by providing a tremendous amount of information across a wide spectrum of disciplines. This book has evolved from a simple textbook for HCI students into an encyclopedia of design practices, examples, discussions of related topics, suggestions for
further reading, exercises, interviews with practitioners, and even a bit of interesting history here and there. I see it
as one of the few sources effectively bridging the gulf between theory and practice. A copy has persistently occupied my desk since the first edition, and I regularly find myself revisiting various sections for inspiration on how to
communicate the reasoning behind my own decisions to colleagues and peers.
William R. Hazlewood, PhD, Principal Design Technologist, Retail Experience
Design Concept Lab, Amazon, USA
For years Interaction Design has been my favourite book, not only for supporting my classes but also as my primary source for preparing UX studies in industrial and academic settings. The chapters engage readers with easy-to-read content while harmoniously presenting theories, examples, and case studies that touch on multidisciplinary aspects of the construction and evaluation of interactive products. The fifth edition again maintains the tradition of being an up-to-date book on HCI, and it includes new discussions on Lean UX, emotional interaction, social
and cognitive aspects, and ethics in human studies, which are certainly contemporary topics of utmost relevance
for practitioners and academics in interaction design.
Luciana Zaina, Senior Lecturer, Federal University of São Carlos, Brazil
This book is always my primary recommendation for newcomers to human-computer interaction. It addresses the
subject from several perspectives: understanding of human behaviour in context, the challenges of ever-changing
technology, and the practical processes involved in interaction design and evaluation. The new edition again shows
the authors’ dedication to keeping both the primary content and representative examples up to date.
Robert Biddle, Professor of Human–Computer Interaction, Carleton University, Ottawa, Canada
This fifth edition provides a timely update to one of the must-have classics on interaction design. The changes in
our field, including how to deal with emerging sensing technology and the volumes of data it provides, are well
addressed in this volume. This is a book for those new to and experienced in interaction design.
Jodi Forlizzi, Professor and Geschke Director, Human-Computer Interaction Institute,
The School of Computer Science, CMU, USA
The milieu of digital life surrounds us. However, how we choose to design and create our experiences and
interactions with these emerging technologies remains a significant challenge. This book provides both a roadmap of essential skills and methodologies to tackle these designs confidently as well as the critical deeper history,
literature, and poetry of interaction design. You will return to this book throughout your career to operationalize,
ground and inspire your creative practice of interaction design.
Eric Paulos, Professor, Electrical Engineering and Computer Sciences, UC Berkeley, USA
Preece, Sharp and Rogers offer once again an engaging excursion through the world of interaction design. This
series is always up-to-date and offers a fresh view on a broad range of topics needed for students in the field of
interaction design, human-computer interaction, information design, web design, or ubiquitous computing. It should be the book every student has in their backpack. It is a "survival guide"! It guides one
through the jungle of information and the dark technological forests of our digital age. It also helps to develop
a critical view on developing novel technologies as our computing research community needs to confront much
more seriously the negative impacts of our innovations. The online resources are a great help for me to create good
classes and remove some weight from the backpacks of my students.
Johannes Schöning, Professor of Computer Science, Bremen University, Germany
Nearly 20 years have passed since the release of the first edition of Interaction Design, with massive changes to
technology and thus the science and practice of interaction design. The new edition combines the brilliance of the
first book with the wisdom of the lessons learned in the meantime, and the excitement of new technological frontiers. Complex concepts are elegantly and beautifully explained, and the reader is left with little doubt as to how
to put them into practice. The book is an excellent resource for those new to interaction design, as well as a guidebook and reference for practitioners.
Dana McKay, UX Researcher, Practitioner and Academic, University of Melbourne, Australia
Computers are ubiquitous and embedded in virtually every new device and system, ranging from the omnipresent
cellphone to the complex web of sociotechnical systems that envelop nearly every sphere of personal and professional life. They connect our activities to ever-expanding information resources with previously unimaginable
computational power. To ensure interface design respects human needs and augments our abilities is an intellectual
challenge of singular importance. It involves not only complex theoretical and methodological issues of how to
design effective representations and mechanisms of interaction but also confronts complex social, cultural, and
political issues such as those of privacy, control of attention, and ownership of information. The new edition of
Interaction Design continues to be the introductory book I recommend to my students and to anyone interested in
this crucially important area.
Jim Hollan, Distinguished Professor of Cognitive Science, University of California San Diego, USA
Interaction Design continues to be my favorite textbook on HCI. We even named our undergraduate and postgraduate programmes at Aalborg University after it. In its fifth edition, it captures the newest developments in the field's
cumulative body of knowledge, and continues to be the most updated and accessible work available. As always, it
serves as a clear pointer to emerging trends in interactive technology design and use.
Jesper Kjeldskov, Professor and Head of Department of Computer Science, Aalborg University, Denmark
I got to learn about the field of HCI and interaction design when I came across the first edition of this book at the
library in my junior year of college. As an HCI researcher and educator, I have had the pleasure of introducing the subject to undergraduates and professional master's students using the previous editions. I thank the authors for their studious efforts to update and add new content that is relevant for students, academics, and professionals, helping them learn this ever-evolving field of HCI and interaction design in a delightful manner.
Eun Kyoung Choe, Professor of Human-Computer Interaction, College of Information Studies,
University of Maryland, USA
This new edition is, without competition, the most comprehensive and authoritative source in the field when it
comes to modern interaction design. It is highly accessible and it is a pleasure to read. The authors of this book
have once again delivered what the field needs!
Erik Stolterman, Professor in Informatics, School of Informatics and Computing,
Indiana University, Bloomington, USA
This book illuminates the interaction design field like no other. Interaction design is such a vast, multidisciplinary
field that you might think it would be impossible to synthesize the most relevant knowledge in one book. This
book does not only that, but goes even further: it eloquently brings contemporary examples and diverse voices to
make the knowledge concrete and actionable, so it is useful for students, researchers, and practitioners alike. This
new edition includes invaluable discussions about the current challenges we now face with data at scale, embracing the ethical design concerns our society needs so much in this era.
Simone D. J. Barbosa, Professor of Computer Science, PUC-Rio,
and Co-Editor-in-Chief of ACM Interactions, Brazil
My students like this book a lot! It provides comprehensive coverage of the essential aspects of HCI/UX, which is key to the success of any software application. I also like many aspects of the book, particularly the examples and videos (some of which are provided as hyperlinks), because they not only help to illustrate the HCI/UX concepts and principles but also relate very well to readers. I highly recommend this book to anyone who wants to learn more about HCI/UX.
Fiona Fui-Hoon Nah, Editor-in-Chief of AIS Transactions on Human-Computer Interaction,
Professor of Business and Information Technology, Missouri University of Science
and Technology, Rolla, Missouri, USA
I have been using the book for several years in my Human-Computer Interaction class. It helps me not only with teaching but also with thesis supervision. I really appreciate the authors' efforts in maintaining the relevance and currency of the Interaction Design book; for example, they added Data at Scale and AgileUX to the new edition. I really love the book!
Harry B. Santoso, PhD, Instructor of Interaction System (HCI) course at Faculty of Computer Science,
Universitas Indonesia, Indonesia
Already during my PhD, the first edition of Interaction Design: beyond human-computer interaction (2002) quickly became my preferred reference book. Seventeen years later, and now in its fifth edition, I commend the authors for their meticulous and consistent effort in updating and enriching what has become the field's standard introductory textbook. No longer just about objects and artefacts, design today is increasingly recognized as a sophisticated and holistic approach to systems thinking. Similarly, Preece, Sharp, and Rogers have kept the book with the times by providing comprehensive, compelling, and accessible coverage of concepts, methods
and cases of interaction design across many domains such as experience design, ubiquitous computing, and urban
informatics.
Marcus Foth, Professor of Urban Informatics, QUT Design Lab, Brisbane, Australia
"Interaction Design" has long been my textbook of choice for general HCI courses. The latest edition has introduced a stronger practitioner focus that should add value for students transitioning into practice, for practitioners, and also for others interested in interaction design and its role in product development. It manages to be an engaging read while also being "snackable", to cover the basics and also to inspire. I still find it a great read, and I believe others will too.
Ann Blandford, Professor of Human-Computer Interaction, University College London, UK
Very clear style, with plenty of active learning material and pointers to further reading. I found that it works very
well with engineering students.
Albert Ali Salah, Professor at Utrecht University, the Netherlands
INTERACTION DESIGN
beyond human-computer
interaction
Fifth Edition
Interaction Design: beyond human-computer interaction, Fifth Edition
Published by
John Wiley & Sons, Inc.
10475 Crosspoint Boulevard
Indianapolis, IN 46256
www.wiley.com
Copyright © 2019 by John Wiley & Sons, Inc., Indianapolis, Indiana
Published simultaneously in Canada
ISBN: 978-1-119-54725-9
ISBN: 978-1-119-54735-8 (ebk)
ISBN: 978-1-119-54730-3 (ebk)
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except
as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either
the prior written permission of the Publisher, or authorization through payment of the appropriate
per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978)
750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the
Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201)
748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or
warranties with respect to the accuracy or completeness of the contents of this work and specifically
disclaim all warranties, including without limitation warranties of fitness for a particular purpose.
No warranty may be created or extended by sales or promotional materials. The advice and strategies
contained herein may not be suitable for every situation. This work is sold with the understanding
that the publisher is not engaged in rendering legal, accounting, or other professional services. If
professional assistance is required, the services of a competent professional person should be sought.
Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an
organization or Web site is referred to in this work as a citation and/or a potential source of further
information does not mean that the author or the publisher endorses the information the organization
or website may provide or recommendations it may make. Further, readers should be aware that
Internet websites listed in this work may have changed or disappeared between when this work was
written and when it is read.
For general information on our other products and services please contact our Customer Care
Department within the United States at (877) 762-2974, outside the United States at (317) 572-3993
or fax (317) 572-4002.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material
included with standard print versions of this book may not be included in e-books or in print-on-
demand. If this book refers to media such as a CD or DVD that is not included in the version you
purchased, you may download this material at http://booksupport.wiley.com. For more information
about Wiley products, visit www.wiley.com.
Library of Congress Control Number: 2019932998
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley &
Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without
written permission. All other trademarks are the property of their respective owners. John Wiley &
Sons, Inc. is not associated with any product or vendor mentioned in this book.
The authors are senior academics with a background in teaching, researching, and consulting
in the United Kingdom, United States, Canada, India, Australia, South Africa, and Europe.
Having worked together on four previous editions of this book, as well as an earlier textbook
on human-computer interaction, they bring considerable experience in curriculum development using a variety of media for online learning as well as face-to-face teaching. They have
considerable knowledge in creating learning texts and websites that motivate and support
learning for a range of students. All three authors are specialists in interaction design and
human-computer interaction (HCI). In addition, they bring skills from other disciplines; for
instance, Yvonne Rogers started off as a cognitive scientist, Helen Sharp is a software engineer, and Jenny Preece works in information systems. Their complementary knowledge and
skills enable them to cover the breadth of concepts in interaction design and HCI to produce
an interdisciplinary text and website.
Helen Sharp is a Professor of Software Engineering and Associate Dean in the Faculty of
Science, Technology, Engineering, and Mathematics at the Open University. Originally trained as a software engineer, she was inspired to investigate HCI, user-centered design, and the other related disciplines that now underpin the field of interaction design by watching users' frustration and the clever "workarounds" they developed. Her research
focuses on the study of professional software practice and the effect of human and social
aspects on software development, leveraging her expertise in the intersection between inter-
action design and software engineering and working closely with practitioners to support
practical impact. She is active in both the software engineering and CHI communities, and
she has had a long association with practitioner-related conferences. Helen is on the editorial
board of several software engineering journals, and she is a regular invited speaker at academic and practitioner venues.
Yvonne Rogers is the Director of the Interaction Centre at University College London, a
Professor of Interaction Design, and a deputy head of department for Computer Science.
She is internationally renowned for her work in HCI and ubiquitous computing and, in
particular, for her pioneering approach to innovation and ubiquitous learning. Yvonne is
widely published, and she is the author of two recent books: Research in the Wild (2017,
co-authored with Paul Marshall) and The Secrets of Creative People (2014). She is also a
regular keynote speaker at computing and HCI conferences worldwide. Former positions
include Professor of Interaction Design at the Open University (2006–2011), Professor of
Human-Computer Interaction at the School of Informatics and Computing at Indiana
University (2003–2006), and Professor in the former School of Cognitive and Computing
Sciences at Sussex University (1992–2003). She has also been a Visiting Professor at UCSC,
University of Cape Town, Melbourne University, Stanford, Apple, Queensland University,
and UCSD. She has been elected as a Fellow of the ACM, the British Computer Society, and
the ACM’s CHI Academy.
About the Authors
Jennifer Preece is Professor and Dean Emerita in the College of Information Studies—
Maryland's iSchool—at the University of Maryland. Jenny's research focuses on the intersection of information, community, and technology. She is particularly interested in community
participation online and offline. She has researched ways to support empathy and social
support online, patterns of online participation, reasons for not participating (for example,
lurking and infrequent participation), strategies for supporting online communication, development of norms, and the attributes of successful technology-supported communities.
Currently, Jenny focuses on how technology can be used to educate and motivate citizens to
engage and contribute quality data to citizen science projects. This research contributes to the
broader need for the collection of data about the world’s flora and fauna at a time when
many species are in rapid decline due to habitat loss, pollution, and climate change. She was
the author of one of the first books on online communities, Online Communities: Designing Usability, Supporting Sociability (2000), published by John Wiley & Sons Ltd, as well as several other HCI texts. Jenny is also widely published, a regular keynote speaker, and a member of
the ACM’s CHI Academy.
Associate Publisher
Jim Minatel
Editorial Manager
Pete Gaughan
Production Manager
Katie Wisor
Project Editor
Gary Schwartz
Production Editor
Barath Kumar Rajasekaran
Technical Editors
Danelle Bailey
Jill L. H. Reed
Copy Editor
Kim Wimpsett
Proofreader
Nancy Bell
Indexer
Johnna VanHoose Dinse
Cover Designer
Wiley
Cover Image
© Wiley; Jennifer Preece photo courtesy of
Craig Allan Taylor
Credits
Many people have helped us over the years in writing the four previous editions of this book.
We have benefited from the advice and support of our many professional colleagues across
the world and from our students, friends, and families. We especially would like to thank
everyone who generously contributed their ideas and time to help make all of the editions of
this book successful.
These include our colleagues and students at the College of Information Studies—
“Maryland’s iSchool”—at University of Maryland, the Human-Computer Interaction
Laboratory (HCIL) and Center for the Advanced Study of Communities and Information
(CASCI), the Open University, and University College London. We would especially like to
thank (in alphabetical first name order) all of the following individuals who have helped us
over the years:
Alex Quinn, Alice Robbin, Alice Siempelkamp, Alina Goldman, Allison Druin, Ana
Javornik, Anijo Mathew, Ann Blandford, Ann Jones, Anne Adams, Ben Bederson, Ben Shnei-
derman, Blaine Price, Carol Boston, Cathy Holloway, Clarisse Sieckenius de Souza, Connie
Golsteijn, Dan Green, Dana Rotman, danah boyd, Debbie Stone, Derek Hansen, Duncan
Brown, Edwin Blake, Eva Hornecker, Fiona Nah, Gill Clough, Godwin Egbeyi, Harry Brignull,
Janet van der Linden, Jeff Rick, Jennifer Ferreira, Jennifer Golbeck, Jeremy Mayes, Joh Hunt,
Johannes Schöning, Jon Bird, Jonathan Lazar, Judith Segal, Julia Galliers, Kent
Norman, Laura Plonka, Leeann Brumby, Leon Reicherts, Mark Woodroffe, Michael Wood,
Nadia Pantidi, Nick Dalton, Nicolai Marquardt, Paul Cairns, Paul Marshall, Philip “Fei”
Wu, Rachael Bradley, Rafael Cronin, Richard Morris, Richie Hazlewood, Rob Jacob, Rose
Johnson, Stefan Kreitmayer, Steve Hodges, Stephanie Wilson, Tamara Clegg, Tammy Toscos,
Tina Fuchs, Tom Hume, Tom Ventsias, Toni Robertson, and Youn-Kyung Lim.
In addition, we wish to thank the many students, instructors, researchers, and practitioners who have contacted us over the years with stimulating comments, positive feedback, and provocative questions.
We are particularly grateful to Vikram Mehta, Nadia Pantidi, and Mara Balestrini for
filming, editing, and compiling a series of on-the-spot “talking heads” videos, where they
posed probing questions to the diverse set of attendees at CHI’11, CHI’14, and CHI’18,
including a variety of CHI members from across the globe. The questions included asking
about the future of interaction design and whether HCI has gone too wild. There are about
75 of these videos, which can be viewed on our website at www.id-book.com. We are also
indebted to danah boyd, Harry Brignull, Leah Buechley, Albrecht Schmidt, Ellen Gottesdiener, and Jon Froehlich for generously contributing in-depth, text-based interviews in the
book. We would like to thank Rien Sach, who has been our webmaster for several years, and
Deb Yuill, who did a thoughtful and thorough job of editing the old reference list.
Danelle Bailey and Jill Reed provided thoughtful critiques and suggestions on all the
chapters in the fifth edition, and we thank them.
Finally, we would like to thank our editor and the production team at Wiley who have
been very supportive and encouraging throughout the process of developing this fifth edition:
Jim Minatel, Pete Gaughan, Gary Schwartz, and Barath Kumar Rajasekaran.
Acknowledgments
Contents
What’s Inside? xvii
1 WHAT IS INTERACTION DESIGN? 1
1.1 Introduction 1
1.2 Good and Poor Design 2
1.3 What Is Interaction Design? 9
1.4 The User Experience 13
1.5 Understanding Users 15
1.6 Accessibility and Inclusiveness 17
1.7 Usability and User Experience Goals 19
Interview with Harry Brignull 34
2 THE PROCESS OF INTERACTION DESIGN 37
2.1 Introduction 37
2.2 What Is Involved in Interaction Design? 38
2.3 Some Practical Issues 55
3 CONCEPTUALIZING INTERACTION 69
3.1 Introduction 69
3.2 Conceptualizing Interaction 71
3.3 Conceptual Models 74
3.4 Interface Metaphors 78
3.5 Interaction Types 81
3.6 Paradigms, Visions, Theories, Models, and Frameworks 88
Interview with Albrecht Schmidt 97
4 COGNITIVE ASPECTS 101
4.1 Introduction 101
4.2 What Is Cognition? 102
4.3 Cognitive Frameworks 123
5 SOCIAL INTERACTION 135
5.1 Introduction 135
5.2 Being Social 136
5.3 Face-to-Face Conversations 139
5.4 Remote Conversations 143
5.5 Co-presence 150
5.6 Social Engagement 158
6 EMOTIONAL INTERACTION 165
6.1 Introduction 165
6.2 Emotions and the User Experience 166
6.3 Expressive Interfaces and Emotional Design 172
6.4 Annoying Interfaces 174
6.5 Affective Computing and Emotional AI 179
6.6 Persuasive Technologies and Behavioral Change 182
6.7 Anthropomorphism 187
7 INTERFACES 193
7.1 Introduction 193
7.2 Interface Types 194
7.3 Natural User Interfaces and Beyond 252
7.4 Which Interface? 253
Interview with Leah Buechley 257
8 DATA GATHERING 259
8.1 Introduction 259
8.2 Five Key Issues 260
8.3 Data Recording 266
8.4 Interviews 268
8.5 Questionnaires 278
8.6 Observation 287
8.7 Choosing and Combining Techniques 300
9 DATA ANALYSIS, INTERPRETATION, AND PRESENTATION 307
9.1 Introduction 307
9.2 Quantitative and Qualitative 308
9.3 Basic Quantitative Analysis 311
9.4 Basic Qualitative Analysis 320
9.5 Which Kind of Analytic Framework to Use? 329
9.6 Tools to Support Data Analysis 341
9.7 Interpreting and Presenting the Findings 342
10 DATA AT SCALE 349
10.1 Introduction 349
10.2 Approaches to Collecting and Analyzing Data 351
10.3 Visualizing and Exploring Data 366
10.4 Ethical Design Concerns 375
11 DISCOVERING REQUIREMENTS 385
11.1 Introduction 385
11.2 What, How, and Why? 386
11.3 What Are Requirements? 387
11.4 Data Gathering for Requirements 395
11.5 Bringing Requirements to Life: Personas and Scenarios 403
11.6 Capturing Interaction with Use Cases 415
Interview with Ellen Gottesdiener 418
12 DESIGN, PROTOTYPING, AND CONSTRUCTION 421
12.1 Introduction 421
12.2 Prototyping 422
12.3 Conceptual Design 434
12.4 Concrete Design 445
12.5 Generating Prototypes 447
12.6 Construction 457
Interview with Jon Froehlich 466
13 INTERACTION DESIGN IN PRACTICE 471
13.1 Introduction 471
13.2 AgileUX 473
13.3 Design Patterns 484
13.4 Open Source Resources 489
13.5 Tools for Interaction Design 491
14 INTRODUCING EVALUATION 495
14.1 Introduction 495
14.2 The Why, What, Where, and When of Evaluation 496
14.3 Types of Evaluation 500
14.4 Evaluation Case Studies 507
14.5 What Did We Learn from the Case Studies? 514
14.6 Other Issues to Consider When Doing Evaluation 516
15 EVALUATION STUDIES: FROM CONTROLLED TO NATURAL SETTINGS 523
15.1 Introduction 523
15.2 Usability Testing 524
15.3 Conducting Experiments 533
15.4 Field Studies 536
Interview with danah boyd 546
16 EVALUATION: INSPECTIONS, ANALYTICS, AND MODELS 549
16.1 Introduction 549
16.2 Inspections: Heuristic Evaluation and Walk-Throughs 550
16.3 Analytics and A/B Testing 567
16.4 Predictive Models 576
References 581
Index 619
What’s Inside?
Welcome to the fifth edition of Interaction Design: beyond human-computer interaction and
our interactive website at www.id-book.com. Building on the success of the previous edi-
tions, we have substantially updated and streamlined the material in this book to provide a
comprehensive introduction to the fast-growing and multi-disciplinary field of interaction
design. Rather than let the book expand, however, we have again made a conscious effort to
keep it at the same size.
Our textbook is aimed at both professionals who want to find out more about inter-
action design and students from a range of backgrounds studying introductory classes in
human-computer interaction, interaction design, information and communications technol-
ogy, web design, software engineering, digital media, information systems, and information
studies. It will appeal to practitioners, designers, and researchers who want to discover what
is new in the field or to learn about a specific design approach, method, interface, or topic.
It is also written to appeal to a general audience interested in design and technology.
It is called Interaction Design: beyond human-computer interaction because interaction
design has traditionally been concerned with a broader scope of issues, topics, and methods than
was originally the scope of human-computer interaction (HCI)—although nowadays, the two
increasingly overlap in scope and coverage of topics. We define interaction design as follows:
Designing interactive products to support the way people communicate and interact in
their everyday and working lives.
Interaction design requires an understanding of the capabilities and desires of people
and the kinds of technology that are available. Interaction designers use this knowledge to
discover requirements and develop and manage them to produce a design. Our textbook pro-
vides an introduction to all of these areas. It teaches practical techniques to support develop-
ment as well as discussing possible technologies and design alternatives.
The number of different types of interface and applications available to today’s interac-
tion designers continues to increase steadily, so our textbook, likewise, has been expanded to
cover these new technologies. For example, we discuss and provide examples of brain, smart,
robotic, wearable, shareable, augmented reality, and multimodal interfaces, as well as more
traditional desktop, multimedia, and web-based interfaces. Interaction design in practice is
changing fast, so we cover a range of processes, issues, and examples throughout the book.
The book has 16 chapters, and it includes discussion of the different design approaches
in common use; how cognitive, social, and affective issues apply to interaction design; and
how to gather, analyze, and present data for interaction design. A central theme is that design
and evaluation are interwoven, highly iterative processes, with some roots in theory but that
rely strongly on good practice to create usable products. The book has a hands-on orienta-
tion and explains how to carry out a variety of techniques used to design and evaluate the
wide range of new applications coming onto the market. It has a strong pedagogical design
and includes many activities (with detailed comments) and more complex activities that can
form the basis for student projects. There are also “Dilemmas,” which encourage readers to
weigh the pros and cons of controversial issues.
The style of writing throughout the book is intended to be accessible to a range of read-
ers. It is largely conversational in nature and includes anecdotes, cartoons, and case studies.
Many of the examples are intended to relate to readers’ own experiences. The book and the
associated website are also intended to encourage readers to be active when reading and to
think about seminal issues. The goal is for readers to understand that much of interaction
design needs consideration of the issues, and that they need to learn to weigh the pros and
cons and be prepared to make trade-offs. There is rarely a right or wrong answer, although
there is a world of difference between a good design and a poor design.
This book is accompanied by a website (www.id-book.com), which provides a variety of
resources, including slides for each chapter, comments on chapter activities, and a number
of in-depth case studies written by researchers and designers. There are video interviews
with a wide range of experts from the field, including professional interaction designers and
university professors. Pointers to respected blogs, online tutorials, YouTube videos, and other
useful materials are also provided.
Tasters
We address topics and questions about the what, why, and how of interaction design. These
include the following:
• Why some interfaces are good and others are poor
• Whether people can really multitask
• How technology is transforming the way people communicate with one another
• What users’ needs are, and how we can design for them
• How interfaces can be designed to change people’s behavior
• How to choose between the many different kinds of interactions that are now available
(for example, talking, touching, and wearing)
• What it means to design accessible and inclusive interfaces
• The pros and cons of carrying out studies in the lab versus in the field and in the wild
• When to use qualitative and quantitative methods
• How to construct informed consent forms
• How the type of interview questions posed affects the conclusions that can be drawn from
the answers given
• How to move from a set of scenarios and personas to initial low-fidelity prototypes
• How to visualize the results of data analysis effectively
• How to collect, analyze, and interpret data at scale
• Why it is that what people say can be different from what they do
• The ethics of monitoring and recording people’s activities
• What Agile UX and Lean UX are, and how they relate to interaction design
• How Agile UX can be practically integrated with interaction design throughout different
stages of the design process
Changes from Previous Editions
To reflect the dynamic nature of the field, the fifth edition has been thoroughly updated, and
new examples, images, case studies, dilemmas, and so on, have been included to illustrate the
changes. Included in this edition is a new chapter called “Data at Scale.” Collecting data has
never been easier. However, knowing what to do with it when designing new user experiences
is much more difficult. The chapter introduces key methods for collecting data at scale, dis-
cusses how to transform data at scale to be meaningful, and reviews a number of methods for
visualizing and exploring data at scale while introducing fundamental design principles
for making data at scale ethical. It is positioned just after the two chapters on data
gathering and data analysis, which introduce the fundamental methods.
In this edition, the chapter on the Process of Interaction Design has been re-located to
Chapter 2 in order to better frame the discussion of interaction design. It has been updated
with new process models and modified to fit its new location in the book structure. This means
that the other chapters have been renumbered to accommodate this and the new chapter.
Chapter 13, “Interaction Design in Practice,” has been updated to reflect recent devel-
opments in the use of practical UX methods. Old examples and methods no longer used in
the field have been removed to make way for the new material. Some chapters have been
completely rewritten, while others have been extensively revised. For example, Chapters 4, 5,
and 6 have been substantially updated to reflect new developments in social media and emo-
tional interaction, while also covering the new interaction design issues they raise, such as
privacy and addiction. Many examples of new interfaces and technologies have been added
to Chapter 7. Chapter 8 and Chapter 9 on data gathering and analysis have also been sub-
stantially updated. New case studies and examples have been added to Chapters 14–16 to
illustrate how evaluation methods have changed for use with the continuously evolving tech-
nology that is being developed for today’s users. The interviews accompanying the chapters
have been updated, and two new ones are included with leading figures involved in innova-
tive research, state-of-the-art design, and contemporary practice.
We have decided to continue to provide both a print-based version of the book and an
e-book. Both are in full color. The e-book supports note sharing, annotating, contextualized
navigating, powerful search features, inserted videos, links, and quizzes.
Chapter 1
What Is Interaction Design?
Objectives
The main goals of this chapter are to accomplish the following:
• Explain the difference between good and poor interaction design.
• Describe what interaction design is and how it relates to human-computer interaction
and other fields.
• Explain the relationship between the user experience and usability.
• Introduce what is meant by accessibility and inclusiveness in relation to human-
computer interaction.
• Describe what and who is involved in the process of interaction design.
• Outline the different forms of guidance used in interaction design.
• Enable you to evaluate an interactive product and explain what is good and bad about
it in terms of the goals and core principles of interaction design.
1.1 Introduction
How many interactive products are there in everyday use? Think for a minute about what you
use in a typical day: a smartphone, tablet, computer, laptop, remote control, coffee machine,
ticket machine, printer, GPS, smoothie maker, e-reader, smart TV, alarm clock, electric tooth-
brush, watch, radio, bathroom scales, fitness tracker, game console . . . the list is endless. Now
think for a minute about how usable they are. How many are actually easy, effortless, and
enjoyable to use? Some, like the iPad, are a joy to use, where tapping an app and flicking
through photos is simple, smooth, and enjoyable. Others, like working out how to buy the
cheapest train ticket from a ticket machine that does not recognize your credit card after
completing a number of steps and then makes you start again from scratch, can be very frus-
trating. Why is there a difference?
Many products that require users to interact with them, such as smartphones and fit-
ness trackers, have been designed primarily with the user in mind. They are generally easy
and enjoyable to use. Others have not necessarily been designed with the users in mind;
rather, they have been engineered primarily as software systems to perform set functions.
An example is setting the time on a stove that requires a combination of button presses
that are not obvious as to which ones to press together or separately. While they may work
effectively, it can be at the expense of how easily they will be learned and therefore used in a
real-world context.
Alan Cooper (2018), a well-known user experience (UX) guru, bemoans the fact that
much of today’s software suffers from the same interaction errors that were around 20 years
ago. Why is this still the case, given that interaction design has been in existence for more
than 25 years and that there are far more UX designers now in industry than ever before?
He points out how many interfaces of new products do not adhere to the interaction design
principles validated in the 1990s. For example, he notes that many apps do not follow even
the most basic of UX principles, such as offering an “undo” option. He exclaims that it is
“inexplicable and unforgivable that these violations continue to resurface in new prod-
ucts today.”
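Cooper's point about "undo" can be made concrete. The sketch below shows the classic command-stack approach that many toolkits use to support undo; the class and names are illustrative, not taken from the book or any particular framework.

```python
# A minimal undo stack: every change is recorded as a (do, undo) pair,
# so the most recent change can always be reversed.
class UndoStack:
    def __init__(self):
        self._undos = []  # functions that reverse applied changes, newest last

    def apply(self, do, undo):
        do()                      # perform the change
        self._undos.append(undo)  # remember how to reverse it

    def undo(self):
        if self._undos:           # ignore undo when there is nothing to reverse
            self._undos.pop()()

# Example: editing a document represented as a list of lines.
doc = ["hello"]
stack = UndoStack()
stack.apply(lambda: doc.append("world"), lambda: doc.pop())
stack.undo()  # doc is back to ["hello"]
```

The design cost of supporting undo is exactly this small bookkeeping step, which is part of why its continued absence from new products is so hard to excuse.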
How can we rectify this situation so that the norm is that all new products are designed
to provide good user experiences? To achieve this, we need to be able to understand how
to reduce the negative aspects (such as frustration and annoyance) of the user experience
while enhancing the positive ones (for example, enjoyment and efficacy). This entails devel-
oping interactive products that are easy, effective, and pleasurable to use from the users’
perspective.
In this chapter, we begin by examining the basics of interaction design. We look at the
difference between good and poor design, highlighting how products can differ radically in
how usable and enjoyable they are. We then describe what and who is involved in the process
of interaction design. The user experience, which is a central concern of interaction design,
is then introduced. Finally, we outline how to characterize the user experience in terms of
usability goals, user experience goals, and design principles. An in-depth activity is presented
at the end of the chapter in which you have the opportunity to put into practice what you
have read by evaluating the design of an interactive product.
1.2 Good and Poor Design
A central concern of interaction design is to develop interactive products that are usable. By
this we mean products that are generally easy to learn, effective to use, and provide an enjoy-
able user experience. A good place to start thinking about how to design usable interactive
products is to compare examples of well-designed and poorly designed ones. Through identi-
fying the specific weaknesses and strengths of different interactive products, we can begin to
understand what it means for something to be usable or not. Here, we describe two examples
of poorly designed products that have persisted over the years—a voice-mail system used in
hotels and the ubiquitous remote control—and contrast these with two well-designed exam-
ples of the same products that perform the same function.
1.2.1 Voice-Mail System
Imagine the following scenario. You are staying at a hotel for a week while on a business
trip. You see a blinking red light on the landline phone beside the bed. You are not sure what
this means, so you pick up the handset. You listen to the tone and it goes “beep, beep, beep.”
Maybe this means that there is a message for you. To find out how to access the message,
you have to read a set of instructions next to the phone. You read and follow the first step:
1. Touch 41.
The system responds: “You have reached the Sunny Hotel voice message center. Please
enter the room number for which you would like to leave a message.”
You wait to hear how to listen to a recorded message. But there are no further instruc-
tions from the phone. You look down at the instruction sheet again and read:
2. Touch*, your room number, and #.
You do so and the system replies: “You have reached the mailbox for room 106. To leave
a message, type in your password.”
You type in the room number again, and the system replies: “Please enter room number
again and then your password.”
You don’t know what your password is. You thought it was the same as your room num-
ber, but clearly it is not. At this point, you give up and call the front desk for help. The person at
the desk explains the correct procedure for listening to messages. This involves typing in,
at the appropriate times, the room number and the extension number of the phone (the latter is
the password, which is different from the room number). Moreover, it takes six steps to access
a message. You give up.
What is problematic with this voice-mail system?
• It is infuriating.
• It is confusing.
• It is inefficient, requiring you to carry out a number of steps for basic tasks.
• It is difficult to use.
• It has no means of letting you know at a glance whether any messages have been left or
how many there are. You have to pick up the handset to find out and then go through a
series of steps to listen to them.
• It is not obvious what to do: The instructions are provided partially by the system and
partially by a card beside the phone.
Now compare it to the phone answering machine shown in Figure 1.1. Incoming messages
are represented using marbles. The number of marbles that have moved into the pinball-like chute indicates
the number of messages. Placing one of these marbles into a dent on the machine causes the
recorded message to play. Dropping the same marble into a different dent on the phone dials
the caller who left the message.
How does the marble answering machine differ from the voice-mail system?
• It uses familiar physical objects that indicate visually at a glance how many messages have
been left.
• It is aesthetically pleasing and enjoyable to use.
• It requires only one-step actions to perform core tasks.
• It is a simple but elegant design.
• It offers less functionality and allows anyone to listen to any of the messages.
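The gulf between the two designs can be roughly quantified with a keystroke-level model (KLM), the kind of predictive model introduced in Chapter 16. The operator times below are the commonly cited Card, Moran, and Newell values; the action sequences are a hypothetical reconstruction of the two procedures, not figures from the book.

```python
# Rough keystroke-level model (KLM): estimate task time by summing
# standard operator times for each elementary action.
OPERATORS = {
    "K": 0.2,   # press a key or button (seconds)
    "H": 0.4,   # home hands onto a device (e.g., pick up the handset)
    "M": 1.35,  # mentally prepare for the next step
}

def klm_time(sequence):
    """Total estimated time for a sequence of KLM operators."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical voice-mail procedure: pick up the handset, then six prompted
# steps, each needing mental preparation plus roughly three key presses.
voicemail = ["H"] + ["M", "K", "K", "K"] * 6
# Marble machine: reach for a marble and drop it into the playback dent.
marble = ["H", "M", "K"]

print(round(klm_time(voicemail), 2))  # about 12.1 seconds
print(round(klm_time(marble), 2))     # about 1.95 seconds
```

Even under these generous assumptions, the six-step dialog costs roughly six times as long per message, before counting the errors and restarts described in the scenario.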
The marble answering machine is considered a design classic. It was created by Durrell
Bishop while he was a student at the Royal College of Art in London (described by Crampton
Smith, 1995). One of his goals was to design a messaging system that represented its basic
functionality in terms of the behavior of everyday objects. To do this, he capitalized on people’s
everyday knowledge of how the physical world works. In particular, he made use of the ubiq-
uitous everyday action of picking up a physical object and putting it down in another place.
This is an example of an interactive product designed with the users in mind. The focus is
on providing them with a pleasurable experience but one that also makes efficient the activity
of receiving messages. However, it is important to note that although the marble answering
machine is an elegant and usable design, it would not be practical in a hotel setting. One
of the main reasons is that it is not robust enough to be used in public places; for instance,
the marbles could easily get lost or be taken as souvenirs. Also, the need to identify the user
before allowing the messages to be played is essential in a hotel setting.
Therefore, when considering the design of an interactive product, it is important to con-
sider where it is going to be used and who is going to use it. The marble answering machine
would be more suitable in a home setting—provided that there were no children around who
might be tempted to play with the marbles!
Video Durrell Bishop’s answering machine: http://vimeo.com/19930744.
Figure 1.1 The marble answering machine
Source: Adapted from Crampton Smith (1995)
1.2.2 Remote Control
Every home entertainment system, be it the smart TV, set-top box, stereo system, and so
forth, comes with its own remote control. Each one is different in terms of how it looks and
works. Many have been designed with a dizzying array of small, multicolored, and double-
labeled buttons (one on the button and one above or below it) that often seem arbitrarily
positioned in relation to one another. Many viewers, especially when sitting in their living
rooms, find it difficult to locate the right ones, even for the simplest of tasks, such as pausing
or finding the main menu. It can be especially frustrating for those who need to put on their
reading glasses each time to read the buttons. The remote control appears to have been put
together very much as an afterthought.
In contrast, much effort and thought went into the design of the classic TiVo remote con-
trol with the user in mind (see Figure 1.2). TiVo is a digital video recorder that was originally
developed to enable the viewer to record TV shows. The remote control was designed with
large buttons that were clearly labeled and logically arranged, making them easy to locate
and use in conjunction with the menu interface that appeared on the TV screen. In terms of
its physical form, the remote device was designed to fit into the palm of a hand, having a
peanut shape. It also has a playful look and feel about it: colorful buttons and cartoon icons
are used that are distinctive, making it easy to identify them.
Figure 1.2 The TiVo remote control
Source: https://business.tivo.com/
How was it possible to create such a usable and appealing remote device where so many
others have failed? The answer is simple: TiVo invested the time and effort to follow a user-
centered design process. Specifically, TiVo’s director of product design at the time involved
potential users in the design process, getting their feedback on everything from the feel of the
device in the hand to where best to place the batteries, making them easy to replace but not
prone to falling out. He and his design team also resisted the trap of “buttonitis” to which so
many other remote controls have fallen victim; that is, one where buttons breed like rabbits,
with a button for every new function. They did this by restricting the number of control buttons
embedded in the device to the essential ones. Other functions were then represented as part of
the menu options and dialog boxes displayed on the TV screen, which could then be selected
via the core set of physical control buttons. The result was a highly usable and pleasing device
that has received much praise and numerous design awards.
DILEMMA
What Is the Best Way to Interact with a Smart TV?
A challenge facing smart TV providers is how to enable users to interact with online
content. Viewers can select a whole range of content via their TV screens, but it involves
scrolling through lots of menus and screens. In many ways, the TV interface has become
more like a computer interface. This raises the question of whether the remote control is
the best input device to use for someone who sits on a sofa or chair that is some distance
from the wide TV screen. Smart TV developers have addressed this challenge in a num-
ber of ways.
An early approach was to provide an on-screen keyboard and numeric keypad that pre-
sented a grid of alphanumeric characters (see Figure 1.3a), which were selected by pressing
a button repeatedly on a remote control. However, entering the name of a movie or an email
address and password using this method can be painstakingly slow; it is also easy to overshoot
and select the wrong letter or number when holding a button down on the remote to reach a
target character.
More recent remote controls, such as those provided by Apple TV, incorporate a
touchpad to enable swiping akin to the control commonly found on laptops. While this
form of touch control expedites skipping through a set of letters displayed on a TV screen,
it does not make it any easier to type in an email address and password. Each letter,
number, or special character still has to be selected. Swiping is also prone to overshoot-
ing when aiming for a target letter, number, or character. Instead of providing a grid, the
Apple TV interface displays two single lines of letters, numbers, and special characters
to swipe across (see Figure 1.3b). While this can make it quicker for someone to reach a
character, it is still tedious to select a sequence of characters in this way. For example, if
you select a Y and the next letter is an A, you have to swipe all the way back to the begin-
ning of the alphabet.
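The tedium of these selection schemes can be sketched with a toy cost model that counts how far the selection highlight must travel between consecutive characters. The layouts and the travel measure are illustrative assumptions, not the actual logic of any TV interface.

```python
ROW = "abcdefghijklmnopqrstuvwxyz"  # single-line layout (Apple TV style)
GRID_COLS = 6                       # 6-column grid layout (classic on-screen keypad)

def row_cost(text):
    """Highlight travel on a single row: distance between consecutive letters."""
    pos = [ROW.index(c) for c in text.lower() if c in ROW]
    return sum(abs(b - a) for a, b in zip(pos, pos[1:]))

def grid_cost(text):
    """Highlight travel on a grid: Manhattan distance between consecutive letters."""
    pos = [divmod(ROW.index(c), GRID_COLS) for c in text.lower() if c in ROW]
    return sum(abs(r2 - r1) + abs(c2 - c1)
               for (r1, c1), (r2, c2) in zip(pos, pos[1:]))

print(row_cost("ya"))   # 24: swipe from Y all the way back to A
print(grid_cost("ya"))  # 4: Y sits directly below A in a 6-wide grid
```

The model makes the trade-off visible: a single row keeps every character in view but makes worst-case moves (Y to A) span nearly the whole alphabet, while a grid bounds travel at the cost of a busier display.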
Might there be a better way to interact with a smart TV while sitting on the sofa? An
alternative is to use voice control. Some remote controls, such as Apple TV's Siri Remote or
TiVo's, have a speech button that, when pressed, allows viewers to ask for movies by name or
more generally by category, for instance, "What are the best sci-fi movies on Netflix?" Smart
speakers, such as Amazon Echo, can also be connected to a smart TV via an HDMI port, and,
similarly, the user can ask for something general or more specific, for example, "Alexa, play
Big Bang Theory, Season 6, Episode 5, on the TV." On recognizing the command, it will switch
on the TV, switch to the right HDMI channel, open Netflix, and begin streaming the specific
episode. Some TV content, however, requires the viewer to confirm that they are over a
certain age by checking a box on the TV display. If the TV could ask the viewer and check
that they are over 18, then that would be really smart! Also, if the TV needs the viewer to
provide a password to access on-demand content, they won't want to say it out loud,
character by character, especially in front of others who might also be in the room with
them. The use of biometrics, then, may be the answer.
Figure 1.3 Typing on a TV screen (a) by selecting letters and numbers from a square matrix
and (b) by swiping along a single line of letters and numbers
Source: (b) https://support.apple.com/en-us/HT200107
1.2.3 What to Design
Designing interactive products requires considering who is going to be using them, how
they are going to be used, and where they are going to be used. Another key concern is
to understand the kind of activities people are doing when interacting with these products.
The appropriateness of different kinds of interfaces and arrangements of input and
output devices depends on what kinds of activities are to be supported. For example,
if the activity is to enable people to bank online, then an interface that is secure,
trustworthy, and easy to navigate is essential. In addition, an interface that allows the
user to find out information about new services offered by the bank without it being
intrusive would be useful.
The world is becoming suffused with technologies that support increasingly diverse
activities. Just think for a minute about what you can currently do using digital technology:
send messages, gather information, write essays, control power plants, program, draw, plan,
calculate, monitor others, and play games—to name but a few. Now think about the
types of interfaces and interactive devices that are available. They too are equally diverse:
multitouch displays, speech-based systems, handheld devices, wearables, and large interactive
displays—again, to name but a few. There are also many ways of designing how users can
interact with a system, for instance, via the use of menus, commands, forms, icons, gestures,
and so on. Furthermore, ever more innovative everyday artifacts are being created using
novel materials, such as e-textiles and wearables (see Figure 1.4).
The Internet of Things (IoT) now means that many products and sensors can be con-
nected to each other via the Internet, which enables them to talk to each other. Popular
household IoT-enabled products include smart heating and lighting and home security sys-
tems where users can change the controls from an app on their phone or check out who is
knocking on their door via a doorbell webcam. Other apps that are being developed are
meant to make life easier for people, like finding a car parking space in busy areas.
The interfaces for everyday consumer items, such as cameras, microwave ovens, toasters,
and washing machines, which used to be physical and the realm of product design, are now
predominantly digitally based (an area known as consumer electronics), requiring interaction design. The
move toward transforming human-human transactions into solely interface-based ones has
also introduced a new kind of customer interaction. Self-checkouts at grocery stores and librar-
ies are now the norm where it is commonplace for customers to check out their own goods or
books themselves, and at airports, where passengers check in their own luggage. While more
cost-effective and efficient, it is impersonal and puts the onus on the person to interact with the
system. Furthermore, accidentally pressing the wrong button or standing in the wrong place at
a self-service checkout can result in a frustrating, and sometimes mortifying, experience.
Figure 1.4 Turn signal biking jacket using e-textiles developed by Leah Buechley
Source: Used courtesy of Leah Buechley
What this all amounts to is a multitude of choices and decisions that interaction design-
ers have to make for an ever-increasing range of products. A key question for interaction
design is this: “How do you optimize the users’ interactions with a system, environment, or
product so that they support the users’ activities in effective, useful, usable and pleasurable
ways?” One could use intuition and hope for the best. Alternatively, one can be more prin-
cipled in deciding which choices to make by basing them on an understanding of the users.
This involves the following:
• Considering what people are good and bad at
• Considering what might help people with the way they currently do things
• Thinking through what might provide quality user experiences
• Listening to what people want and getting them involved in the design
• Using user-centered techniques during the design process
The aim of this book is to cover these aspects with the goal of showing you how to carry
out interaction design. In particular, it focuses on how to identify users’ needs and the context
of their activities. From this understanding, we move on to consider how to design usable,
useful, and pleasurable interactive products.
1.3 What Is Interaction Design?
By interaction design, we mean the following:
Designing interactive products to support the way people communicate and interact in their
everyday and working lives
Put another way, it is about creating user experiences that enhance and augment the
way people work, communicate, and interact. More generally, Terry Winograd originally
described it as “designing spaces for human communication and interaction” (1997, p. 160).
John Thackara viewed it as “the why as well as the how of our daily interactions using com-
puters” (2001, p. 50), while Dan Saffer emphasized its artistic aspects: “the art of facilitating
interactions between humans through products and services” (2010, p. 4).
A number of terms have been used since to emphasize different aspects of what is being
designed, including user interface design (UI), software design, user-centered design, product
design, web design, user experience design, and interactive system design. Interaction design
is generally used as the overarching term to describe the field, including its methods, theories,
and approaches. UX is used more widely in industry to refer to the profession. However, the
terms are often used interchangeably; which one a company adopts also depends on its ethos and brand.
1.3.1 The Components of Interaction Design
We view interaction design as fundamental to many disciplines, fields, and approaches that
are concerned with researching and designing computer-based systems for people. Figure 1.5
presents the core ones along with interdisciplinary fields that comprise one or more of these,
such as cognitive ergonomics. It can be confusing to try to work out the differences between
them as many overlap. The main differences between interaction design and the other
approaches referred to in the figure come largely down to which methods, philosophies, and
lenses they use to study, analyze, and design products. Another way they vary is in terms of
the scope and problems they address. For example, information systems is concerned with the
application of computing technology in domains such as business, health, and education,
whereas ubiquitous computing is concerned with the design, development, and deployment
of pervasive computing technologies (for example, IoT) and how they facilitate social inter-
actions and human experiences.
BOX 1.1
Is Interaction Design Beyond HCI?
We see the main difference between interaction design (ID) and human-computer interaction
(HCI) as one of scope. Historically, HCI had a narrow focus on the design and usability of
computing systems, while ID was seen as being broader, concerned with the theory, research,
and practice of designing user experiences for all manner of technologies, systems, and prod-
ucts. That is one of the reasons why we chose to call our book Interaction Design: beyond
human-computer interaction, to reflect this wider range. However, nowadays, HCI has greatly
expanded in its scope (Churchill et al., 2013), so much so that it overlaps much more with ID
(see Figure 1.6).
[Figure 1.5 is a diagram placing interaction design at the center of three rings: academic disciplines (ergonomics; psychology/cognitive science; informatics; design; engineering; computer science/software engineering; social sciences such as sociology and anthropology), design practices (film industry, industrial design, artist-design, product design, graphic design), and interdisciplinary overlapping fields (ubiquitous computing, human factors, cognitive engineering, human-computer interaction, cognitive ergonomics, information systems, and computer-supported cooperative work).]
Figure 1.5 Relationship among contributing academic disciplines, design practices, and interdisci-
plinary fields concerned with interaction design (double-headed arrows mean overlapping)
1.3.2 Who Is Involved in Interaction Design?
Figure 1.5 also shows that many people are involved in performing interaction design, rang-
ing from social scientists to movie-makers. This is not surprising given that technology has
become such a pervasive part of our lives. But it can all seem rather bewildering to the
onlooker. How does the mix of players work together?
Designers need to know many different things about users, technologies, and the interac-
tions among them to create effective user experiences. At the least, they need to understand
how people act and react to events and how they communicate and interact with each other.
To be able to create engaging user experiences, they also need to understand how emotions
work, what is meant by aesthetics, desirability, and the role of narrative in human experi-
ence. They also need to understand the business side, technical side, manufacturing side, and
marketing side. Clearly, it is difficult for one person to be well versed in all of these diverse
areas and also know how to apply the different forms of knowledge to the process of interac-
tion design.
Interaction design is ideally carried out by multidisciplinary teams, where the skill sets
of engineers, designers, programmers, psychologists, anthropologists, sociologists, marketing
people, artists, toy makers, product managers, and others are drawn upon. It is rarely the case,
however, that a design team would have all of these professionals working together.
Figure 1.6 HCI out of the box: broadening its reach to cover more areas
Who to
include in a team will depend on a number of factors, including a company’s design philoso-
phy, size, purpose, and product line.
One of the benefits of bringing together people with different backgrounds and training
is the potential of many more ideas being generated, new methods developed, and more crea-
tive and original designs being produced. However, the downside is the costs involved. The
more people there are with different backgrounds in a design team, the more difficult it can
be to communicate and make progress with the designs being generated. Why? People with
different backgrounds have different perspectives and ways of seeing and talking about the
world. What one person values as important others may not even see (Kim, 1990). Similarly,
a computer scientist’s understanding of the term representation is often very different from
that of a graphic designer or psychologist.
What this means in practice is that confusion, misunderstanding, and communication
breakdowns can surface in a team. The various team members may have different ways
of talking about design and may use the same terms to mean quite different things. Other
problems can arise when a group of people who have not previously worked as a team are
thrown together. For example, Aruna Balakrishnan et al. (2011) found that integration across
different disciplines and expertise is difficult in many projects, especially when it comes to
agreeing on and sharing tasks. The more disparate the team members—in terms of culture,
background, and organizational structures—the more complex this is likely to be.
ACTIVITY 1.1
In practice, the makeup of a given design team depends on the kind of interactive product
being built. Who do you think should be involved in developing
• A public kiosk providing information about the exhibits available in a science museum?
• An interactive educational website to accompany a TV series?
Comment
Ideally, each team will have a number of different people with different skill sets. For example,
the first interactive product would include the following individuals:
• Graphic and interaction designers, museum curators, educational advisers, software engi-
neers, software designers, and ergonomists
The second project would include these types of individuals:
• TV producers, graphic and interaction designers, teachers, video experts, software engi-
neers, and software designers
In addition, as both systems are being developed for use by the general public, representa-
tive users, such as school children and parents, should be involved.
In practice, design teams often end up being quite large, especially if they are working on
a big project to meet a fixed deadline. For example, it is common to find teams of 15 or more
people working on a new product like a health app. This means that a number of people from
each area of expertise are likely to be working as part of the project team.
1.3.3 Interaction Design Consultancies
Interaction design is now widespread in product and services development. In particular,
website consultants and the computing industries have realized its pivotal role in successful
interactive products. But it is not just IT companies that are realizing the benefits of having
UXers on board. Financial services, retail, governments, and the public sector have realized
too the value of interaction design. The presence or absence of good interaction design can
make or break a company. Getting noticed in the highly competitive field of web products
requires standing out. Being able to demonstrate that your product is easy, effective, and
engaging to use is seen as central to this. Marketing departments are also realizing how
branding, the number of hits, the customer return rate, and customer satisfaction are greatly
affected by the usability of a website.
There are many interaction design consultancies now. These include established compa-
nies, such as Cooper, NielsenNorman Group, and IDEO, and more recent ones that specialize
in a particular area, such as job board software (for example, Madgex), digital media (think of
Cogapp), or mobile design (such as CXpartners). Smaller consultancies, such as Bunnyfoot and
Dovetailed, promote diversity, interdisciplinarity, and scientific user research, having psycholo-
gists, researchers, interaction designers, and usability and customer experience specialists on board.
Many UX consultancies have impressive websites, providing case studies, tools, and
blogs. For example, Holition publishes an annual glossy booklet as part of its UX Series
(Javornik et al., 2017) to disseminate the outcomes of their in-house research to the wider
community, with a focus on the implications for commercial and cultural aspects. This shar-
ing of UX knowledge enables them to contribute to the discussion about the role of technol-
ogy in the user experience.
1.4 The User Experience
The user experience refers to how a product behaves and is used by people in the real world.
Jakob Nielsen and Don Norman (2014) define it as encompassing “all aspects of the end-
user’s interaction with the company, its services, and its products.” As stressed by Jesse Gar-
rett (2010, p. 10), “Every product that is used by someone has a user experience: newspapers,
ketchup bottles, reclining armchairs, cardigan sweaters.” More specifically, it is about how
people feel about a product and their pleasure and satisfaction when using it, looking at it,
holding it, and opening or closing it. It includes their overall impression of how good it is
to use, right down to the sensual effect small details have on them, such as how smoothly a
switch rotates or the sound of a click and the touch of a button when pressing it. An impor-
tant aspect is the quality of the experience someone has, be it a quick one, such as taking a
photo; a leisurely one, such as playing with an interactive toy; or an integrated one, such as
visiting a museum (Law et al., 2009).
It is important to point out that one cannot design a user experience, only design for a
user experience. In particular, one cannot design a sensual experience, but only create the
design features that can evoke it. For example, the outside case of a smartphone can be
designed to be smooth, silky, and fit in the palm of a hand; when held, touched, looked at,
and interacted with, that can provoke a sensual and satisfying user experience. Conversely, if
it is designed to be heavy and awkward to hold, it is much more likely to end up providing a
poor user experience—one that is uncomfortable and unpleasant.
Designers sometimes refer to UX as UXD. The addition of the D to UX is meant to
encourage design thinking that focuses on the quality of the user experience rather than
on the set of design methods to use (Allanwood and Beare, 2014). As Don Norman (2004)
has stressed for many years, “It is not enough that we build products that function, that are
understandable and usable, we also need to build joy and excitement, pleasure and fun, and
yes, beauty to people’s lives.”
ACTIVITY 1.2
The iPod Phenomenon
Apple’s classic (and subsequent) generations of portable music players, called iPods, including
the iPod Touch, Nano, and Shuffle, released during the early 2000s were a phenomenal success.
Why do you think this occurred? Has there been any other product that has matched this quality
of experience? With the exception of the iPod Touch, Apple stopped production of them in 2017.
Playing music via a smartphone became the norm, superseding the need for a separate device.
Comment
Apple realized early on that successful interaction design involves creating interactive prod-
ucts that have a quality user experience. The sleek appearance of the iPod music player (see
Figure 1.7), its simplicity of use, its elegance in style, its distinct family of rainbow colors, a
novel interaction style that many people discovered was a sheer pleasure to learn and use,
and the catchy naming of its product and content (iTunes, iPod), among many other design
features, led to it becoming one of the greatest products of its kind and a must-have fashion
item for teenagers, students, and adults alike. While there were many competing players on
the market at the time—some with more powerful functionality, others that were cheaper and
easier to use, or still others with bigger screens, more memory, and so forth—the quality of the
overall user experience paled in comparison to that provided by the iPod.
Figure 1.7 The iPod Nano
Source: David Paul Morris / Getty Images
There are many aspects of the user experience that can be considered and many ways of
taking them into account when designing interactive products. Of central importance are the
usability, functionality, aesthetics, content, look and feel, and emotional appeal. In addition,
Jack Carroll (2004) stresses other wide-reaching aspects, including fun, health, social capital
(the social resources that develop and are maintained through social networks, shared values,
goals, and norms), and cultural identity, such as age, ethnicity, race, disability, family status,
occupation, and education.
Several researchers have attempted to describe the experiential aspect of a user experi-
ence. Kasper Hornbæk and Morten Hertzum (2017) note how it is often described in terms
of the way that users perceive a product, such as whether a smartwatch is seen as sleek or
chunky, and their emotional reaction to it, such as whether people have a positive experi-
ence when using it. Marc Hassenzahl’s (2010) model of the user experience is the most well-
known, where he conceptualizes it in terms of pragmatic and hedonic aspects. By pragmatic,
it is meant how simple, practical, and obvious it is for the user to achieve their goals. By
hedonic, it is meant how evocative and stimulating the interaction is to them. In addition
to a person’s perceptions of a product, John McCarthy and Peter Wright (2004) discuss the
importance of their expectations and the way they make sense of their experiences when
using technology. Their Technology as Experience framework accounts for the user experi-
ence largely in terms of how it is felt by the user. They recognize that defining experience
is incredibly difficult because it is so nebulous and ever-present to us, just as swimming in
water is to a fish. Nevertheless, they have tried to capture the essence of human experience
by describing it in both holistic and metaphorical terms. These comprise a balance of sensual,
cerebral, and emotional threads.
How does one go about producing quality user experiences? There is no secret sauce
or magical formula that can be readily applied by interaction designers. However, there are
numerous conceptual frameworks, tried and tested design methods, guidelines, and relevant
research findings, which are described throughout the book.
1.5 Understanding Users
A main reason for having a better understanding of people in the contexts in which they live,
work, and learn is that it can help designers understand how to design interactive products
that provide good user experiences or match a user’s needs. A collaborative planning tool
for a space mission, intended to be used by teams of scientists working in different parts of
the world, will have quite different needs from one targeted at customer and sales agents,
to be used in a furniture store to draw up kitchen layout plans.
The nearest overall user experience that has all of the above is not so much for a product
but for a physical store. The design of the Apple Store as a completely new customer experience
for buying technology has been very successful in how it draws people in and what they
do when browsing, discovering, and purchasing goods in the store. The products are laid out
in a way to encourage interaction.
Understanding individual
differences can also help designers appreciate that one size does not fit all; what works for
one user group may be totally inappropriate for another. For example, children have different
expectations than adults about how they want to learn or play. They may find having inter-
active quizzes and cartoon characters helping them along to be highly motivating, whereas
most adults find them annoying. Conversely, adults often like talking-head discussions about
topics, but children find them boring. Just as everyday objects like clothes, food, and games
are designed differently for children, teenagers, and adults, so too should interactive products
be designed for different kinds of users.
Learning more about people and what they do can also reveal incorrect assumptions
that designers may have about particular user groups and what they need. For example, it
is often assumed that because of deteriorating vision and dexterity, old people want things
to be big—be it text or graphical elements appearing on a screen or the physical controls,
like dials and switches, used to control devices. This may be true for some elderly people,
but studies have shown that many people in their 70s, 80s, and older are perfectly capa-
ble of interacting with standard-size information and even small interfaces, for example,
smartphones, just as well as those in their teens and 20s, even though, initially, some might
think they will find it difficult (Siek et al., 2005). It is increasingly the case that as people
get older, they do not like to consider themselves as lacking in cognitive and manual skills.
Being aware of people’s sensitivities, such as aging, is as important as knowing how to
design for their capabilities (Johnson and Finn, 2017). In particular, while many older adults
now feel comfortable with and use a range of technologies (for instance, email, online shop-
ping, online games, or social media), they may resist adopting new technologies. This is not
because they don’t perceive them as being useful to their lives but because they don’t want
to waste their time getting caught up by the distractions that digital life brings (Knowles and
Hanson, 2018), for example, not wanting to be “glued to one’s mobile phone” like younger
generations.
Being aware of cultural differences is also an important concern for interaction design,
particularly for products intended for a diverse range of user groups from different countries.
An example of a cultural difference is the dates and times used in different countries. In the
United States, for example, the date is written as month, day, year (05/21/20), whereas in
other countries, it is written in the sequence of day, month, year (21/05/20). This can cause
problems for designers when deciding on the format of online forms, especially if intended
for global use. It is also a concern for products that have time as a function, such as operating
systems, digital clocks, or car dashboards. To which cultural group do they give preference?
How do they alert users to the format that is set as default? This raises the question of how
easily an interface designed for one user group can be used and accepted by another. Why is
it that certain products, like a fitness tracker, are universally accepted by people from all parts
of the world, whereas websites are designed differently and reacted to differently by people
from different cultures?
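To make the date-format issue concrete, here is a minimal sketch (not from the book) of how an interface might render the same date for different regional conventions; the region codes and format table are illustrative assumptions rather than a full localization solution:

```python
from datetime import date

# Illustrative mapping of region codes to date conventions (assumed,
# not an exhaustive localization table).
DATE_FORMATS = {
    "US": "%m/%d/%y",    # month, day, year: 05/21/20
    "intl": "%d/%m/%y",  # day, month, year: 21/05/20
    "ISO": "%Y-%m-%d",   # unambiguous ISO 8601: 2020-05-21
}

def format_date(d: date, region: str) -> str:
    """Render a date in the convention the user expects."""
    return d.strftime(DATE_FORMATS[region])

d = date(2020, 5, 21)
print(format_date(d, "US"))    # 05/21/20
print(format_date(d, "intl"))  # 21/05/20
print(format_date(d, "ISO"))   # 2020-05-21
```

In practice, a full localization library would supply the per-locale formats; defaulting to an unambiguous ISO format is one way to avoid privileging a single cultural group.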
To understand more about users, we have included three chapters (Chapters 4–6) that
explain in detail how people act and interact with one another, with information, and with
various technologies, together with describing their abilities, emotions, needs, desires, and
what causes them to get annoyed, frustrated, lose patience, and get bored. We draw upon
relevant psychological theory and social science research. Such knowledge enables designers
to determine which solutions to choose from the many design alternatives available and how to
develop and test these further.
1.6 Accessibility and Inclusiveness
Accessibility refers to the extent to which an interactive product is accessible by as many
people as possible. Companies like Google and Apple provide tools for their developers to
promote this. The focus is on people with disabilities. For example, Android OS provides a
range of tools for those with disabilities, from hearing aid compatibility to a built-in screen
reader, while Apple VoiceOver lets the user know what’s happening on its devices, so they
can easily navigate and even know who is in a selfie just taken, by listening to the phone.
Inclusiveness means being fair, open, and equal to everyone. Inclusive design is an over-
arching approach where designers strive to make their products and services accommodate
the widest possible number of people. An example is ensuring that smartphones are being
designed for all and made available to everyone—regardless of their disability, education,
age, or income.
Whether or not a person is considered to be disabled changes over time with age, or
as recovery from an accident progresses throughout their life. In addition, the severity and
impact of an impairment can vary over the course of a day or in different environmental
conditions. Disability can result because technologies are often designed in such a way as to
necessitate a certain type of interaction that is impossible for someone with an impairment.
Disability in this context is viewed as the result of poor interaction design between a user and
the technology, not the impairment alone. Accessibility, on the other hand, opens up experi-
ences so that they are accessible to all. Technologies that are now mainstream once started out
as solutions to accessibility challenges. For example, SMS was designed for hearing-impaired
people before it became a mainstream technology. Furthermore, designing for accessibility
inherently results in inclusive design for all.
Accessibility can be achieved in two ways: first, through the inclusive design of
technology, and second, through the design of assistive technology. When designing for
accessibility, it is essential to understand the types of impairments that can lead to dis-
ability as they come in many forms. They are often classified by the type of impairment,
for example:
• Sensory impairment (such as loss of vision or hearing)
• Physical impairment (having loss of functions to one or more parts of the body, for exam-
ple, after a stroke or spinal cord injury)
• Cognitive (for instance, learning impairment or loss of memory/cognitive function due to
old age or a condition such as Alzheimer’s disease)
Within each type is a complex mix of people and capabilities. For example, a person
might have only peripheral vision, be color blind, or have no light perception (and be regis-
tered blind). All are forms of visual impairment, and all require different design approaches.
Color blindness can be overcome by an inclusive design approach. Designers can choose
colors that will appear as separate colors to everyone. However, peripheral vision loss or
complete blindness will often need an assistive technology to be designed.
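One possible sketch of this inclusive design approach in code follows. The colors are the widely used Okabe-Ito colorblind-safe palette; the function and marker names are illustrative assumptions, and pairing each color with a distinct marker shape ensures that color is never the only cue:

```python
# Okabe-Ito palette: eight colors chosen to remain distinguishable
# under the common forms of color vision deficiency.
OKABE_ITO = ["#000000", "#E69F00", "#56B4E9", "#009E73",
             "#F0E442", "#0072B2", "#D55E00", "#CC79A7"]

# Redundant encoding: pair every data series with a marker shape too.
MARKERS = ["circle", "square", "triangle", "diamond",
           "cross", "star", "plus", "hexagon"]

def assign_styles(series_names):
    """Give each chart series a unique (color, marker) pair."""
    if len(series_names) > len(OKABE_ITO):
        raise ValueError("too many series to keep them distinguishable")
    return {name: (OKABE_ITO[i], MARKERS[i])
            for i, name in enumerate(series_names)}

styles = assign_styles(["control", "treatment"])
print(styles["control"])    # ('#000000', 'circle')
print(styles["treatment"])  # ('#E69F00', 'square')
```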
Impairment can also be categorized as follows:
• Permanent (for example, long-term wheelchair user)
• Temporary (such as after an accident or illness)
• Situational (for instance, a noisy environment means a person can’t hear)
The number of people living with permanent disability increases with age. Fewer than
20 percent of people are born with a disability, whereas 80 percent of people will have a
disability once they reach 85. As people age, their functional abilities diminish. For exam-
ple, people older than 50 often find it difficult to hear conversations in rooms with hard
surfaces and lots of background noise. This is a disability that will come to most of us at
some point.
People with permanent disabilities often use assistive technology in their everyday life,
which they consider to be life-essential and an extension of their self (Holloway and Dawes,
2016). Examples include wheelchairs (people now refer to “wearing their wheels,” rather
than “using a wheelchair”) and augmented and alternative communication aids. Much cur-
rent HCI research into disability explores how new technologies, such as IoT, wearables, and
virtual reality, can be used to improve upon existing assistive technologies.
Aimee Mullins is an athlete, actor, and fashion model who has shown how prosthetics
can be designed to move beyond being purely functional (and often ugly) to being desirable
and highly fashionable. Her legs were amputated below the knee when she was one year old,
making her a bilateral amputee. She has done much to blur the boundary between disabled and
nondisabled people, and she uses fashion as a tool to achieve this. Several prosthetic compa-
nies now incorporate fashion design into their products, including striking leg covers that are
affordable by all (see Figure 1.8).
Figure 1.8 Fashionable leg cover designed by Alleles Design Studio
Source: https://alleles.ca/. Used courtesy of Alison Andersen
1.7 Usability and User Experience Goals
Part of the process of understanding users is to be clear about the primary objective of devel-
oping an interactive product for them. Is it to design an efficient system that will allow them
to be highly productive in their work? Is it to design a learning tool that will be challenging
and motivating? Or, is it something else? To help identify the objectives, we suggest classify-
ing them in terms of usability and user experience goals. Traditionally, usability goals are
concerned with meeting specific usability criteria, such as efficiency, whereas user experience
goals are concerned with explicating the nature of the user experience, for instance, to be
aesthetically pleasing. It is important to note, however, that the distinction between the two
types of goals is not clear-cut since usability is often fundamental to the quality of the user
experience and, conversely, aspects of the user experience, such as how it feels and looks, are
inextricably linked with how usable the product is. We distinguish between them here to help
clarify their roles but stress the importance of considering them together when designing for
a user experience. Also, historically HCI was concerned primarily with usability, but it has
since become concerned with understanding, designing for, and evaluating a wider range of
user experience aspects.
1.7.1 Usability Goals
Usability refers to ensuring that interactive products are easy to learn, effective to use, and
enjoyable from the user’s perspective. It involves optimizing the interactions people have with
interactive products to enable them to carry out their activities at work, at school, and in
their everyday lives. More specifically, usability is broken down into the following six goals:
• Effective to use (effectiveness)
• Efficient to use (efficiency)
• Safe to use (safety)
• Having good utility (utility)
• Easy to learn (learnability)
• Easy to remember how to use (memorability)
Usability goals are typically operationalized as questions. The purpose is to provide the
interaction designer with a concrete means of assessing various aspects of an interactive
product and the user experience. Through answering the questions, designers can be alerted
very early on in the design process to potential design problems and conflicts that they might
not have considered. However, simply asking “Is the system easy to learn?” is not going to be
very helpful. Asking about the usability of a product in a more detailed way—for example,
“How long will it take a user to figure out how to use the most basic functions for a new
smartwatch; how much can they capitalize on from their prior experience; and how long
would it take the user to learn the whole set of functions?”—will elicit far more information.
The following are descriptions of the usability goals and a question for each one:
(i) Effectiveness is a general goal, and it refers to how good a product is at doing what it is
supposed to do.
Question: Is the product capable of allowing people to learn, carry out their work effi-
ciently, access the information that they need, or buy the goods that they want?
(ii) Efficiency refers to the way a product supports users in carrying out their tasks. The
marble answering machine described earlier in this chapter was considered efficient in
that it let the user carry out common tasks, for example, listening to messages, through
a minimal number of steps. In contrast, the voice-mail system was considered inefficient
because it required the user to carry out many steps and learn an arbitrary set of sequences
for the same common task. This implies that an efficient way of supporting common
tasks is to let the user use single button or key presses. An example of where this kind of
efficiency mechanism has been employed effectively is in online shopping. Once users
have entered all of the necessary personal details in an online form to make a purchase,
they can let the website save all of their personal details. Then, if they want to make
another purchase at that site, they don’t have to re-enter all of their personal details. A
highly successful mechanism patented by Amazon.com is the one-click option, which
requires users to click only a single button when they want to make another purchase.
Question: Once users have learned how to use a product to carry out their tasks, can they
sustain a high level of productivity?
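The saved-details mechanism described here can be sketched as follows. This is an illustrative toy model, not Amazon's actual implementation, and all names are hypothetical: the first purchase supplies the details, and every later purchase reuses them in a single step.

```python
class Store:
    """Toy model of the 'remember my details' efficiency mechanism."""

    def __init__(self):
        self._saved = {}  # user -> stored shipping/payment details

    def checkout(self, user, item, details=None):
        """First purchase supplies details; later ones reuse them."""
        if details is not None:
            self._saved[user] = details       # save for next time
        details = details or self._saved.get(user)
        if details is None:
            raise ValueError("details required for first purchase")
        return f"shipped {item} to {details['address']}"

shop = Store()
shop.checkout("ana", "book", {"address": "12 Elm St", "card": "****4242"})
print(shop.checkout("ana", "lamp"))  # shipped lamp to 12 Elm St
```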
(iii) Safety involves protecting the user from dangerous conditions and undesirable situa-
tions. In relation to the first ergonomic aspect, it refers to the external conditions where
people work. For example, where there are hazardous conditions—such as X-ray
machines or toxic chemicals—operators should be able to interact with and control
computer-based systems remotely. The second aspect refers to helping any kind of user
in any kind of situation to avoid the dangers of carrying out unwanted actions acciden-
tally. It also refers to the perceived fears that users might have of the consequences of
making errors and how this affects their behavior. Making interactive products safer in
this sense involves (1) preventing the user from making serious errors by reducing the
risk of wrong keys/buttons being mistakenly activated (an example is not placing the
quit or delete-file command right next to the save command on a menu) and (2) provid-
ing users with various means of recovery should they make errors, such as an undo func-
tion. Safe interactive systems should engender confidence and allow the user the
opportunity to explore the interface to try new operations (see Figure 1.9a). Another
safety mechanism is confirming dialog boxes that give users another chance to consider
their intentions (a well-known example is the appearance of a dialog box after issuing
the command to delete everything in the trash, saying: “Are you sure you want to remove
the items in the Trash permanently?”) (see Figure 1.9b).
Question: What is the range of errors that are possible using the product, and what
measures are there to permit users to recover easily from them?
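The two safety mechanisms just described, confirming destructive commands and providing a means of recovery such as undo, can be sketched in code. The following Python fragment is a minimal illustration; the class and method names are invented for this example and do not come from any particular toolkit.

```python
# Minimal sketch of two safety mechanisms: a confirmation step for a
# destructive command, and an undo stack for recovering from errors.
# All names here are hypothetical illustrations.

class SafeEditor:
    def __init__(self):
        self.text = ""
        self._history = []                    # stack of previous states for undo

    def insert(self, fragment):
        self._history.append(self.text)       # save state before changing it
        self.text += fragment

    def delete_all(self, confirm):
        """Destructive action: require explicit confirmation first."""
        if not confirm("Are you sure you want to delete everything?"):
            return                            # user backed out; nothing lost
        self._history.append(self.text)
        self.text = ""

    def undo(self):
        if self._history:                     # recover the previous state
            self.text = self._history.pop()

editor = SafeEditor()
editor.insert("hello ")
editor.insert("world")
editor.delete_all(confirm=lambda msg: True)   # simulated "yes" answer
editor.undo()                                 # recovery: text restored
print(editor.text)                            # -> hello world
```

Even in this toy version, the two mechanisms work together: the confirmation question reduces the chance of an accidental deletion, while the undo stack makes the deletion recoverable if it happens anyway.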
(iv) Utility refers to the extent to which the product provides the right kind of functionality
so that users can do what they need or want to do. An example of a product with high
utility is an accounting software package that provides a powerful computational tool
that accountants can use to work out tax returns. An example of a product with low
utility is a software drawing tool that does not allow users to draw freehand but forces
them to use a mouse to create their drawings, using only polygon shapes.
Question: Does the product provide an appropriate set of functions that will enable users
to carry out all of their tasks in the way they want to do them?
(v) Learnability refers to how easy a system is to learn to use. It is well known that people
don’t like spending a long time learning how to use a system. They want to get started
right away and become competent at carrying out tasks without too much effort. This is
1.7 USABILITY AND USER EXPERIENCE GOALS 21
especially true for interactive products intended for everyday use (for example social media,
email, or a GPS) and those used only infrequently (for instance, online tax forms). To a
certain extent, people are prepared to spend a longer time learning more complex systems
that provide a wider range of functionality, such as web authoring tools. In these situations,
pop-up tutorials can help by providing contextualized step-by-step material with hands-on
exercises. A key concern is determining how much time users are prepared to spend learn-
ing a product. It seems like a waste if a product provides a range of functionality that the
majority of users are unable or unprepared to spend the time learning how to use.
Question: Is it possible for the user to work out how to use the product by exploring the
interface and trying certain actions? How hard will it be to learn the whole set of func-
tions in this way?
(vi) Memorability refers to how easy a product is to remember how to use, once learned. This
is especially important for interactive products that are used infrequently. If users haven’t
used an operation for a few months or longer, they should be able to remember or at least
rapidly be reminded how to use it. Users shouldn’t have to keep relearning how to carry
out tasks.

Figure 1.9 (a) A safe and an unsafe menu. Which is which and why? (b) A warning dialog
box for Mac OS X

1 WHAT IS INTERACTION DESIGN? 22

Unfortunately, this tends to happen when the operations required to be learned
are obscure, illogical, or poorly sequenced. Users need to be helped to remember how to
do tasks. There are many ways of designing the interaction to support this. For example,
users can be helped to remember the sequence of operations at different stages of a task
through contextualized icons, meaningful command names, and menu options. Also,
structuring options and icons so that they are placed in relevant categories of options, for
example, placing all of the drawing tools in the same place on the screen, can help the
user remember where to look to find a particular tool at a given stage of a task.
Question: What types of interface support have been provided to help users remember
how to carry out tasks, especially for products and operations they use infrequently?
In addition to being couched in terms of specific questions, usability goals are turned into
usability criteria. These are specific objectives that enable the usability of a product to be
assessed in terms of how it can improve (or not improve) a user’s performance. Examples
of commonly used usability criteria are time to complete a task (efficiency), time to learn a
task (learnability), and the number of errors made when carrying out a given task over time
(memorability). These can provide quantitative indicators of the extent to which productivity
has increased, or how work, training, or learning have been improved. They are also useful
for measuring the extent to which personal, public, and home-based products support leisure
and information gathering activities. However, they do not address the overall quality of the
user experience, which is where user experience goals come into play.
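To make the criteria concrete, they can be computed from logged user sessions. The following Python sketch is hypothetical; the session data, field layout, and function names are invented for illustration.

```python
# Hypothetical sketch: turning logged task sessions into the usability
# criteria mentioned above (efficiency, learnability, memorability).
# The data and names are invented for illustration.

sessions = [
    # (attempt number, seconds to complete the task, errors made)
    (1, 210, 5),
    (2, 150, 3),
    (3, 95, 1),
    (4, 90, 1),
]

def efficiency(sessions):
    """Mean time to complete the task (time-to-complete criterion)."""
    return sum(t for _, t, _ in sessions) / len(sessions)

def learnability(sessions):
    """Drop in completion time from the first to the last attempt."""
    return sessions[0][1] - sessions[-1][1]

def error_trend(sessions):
    """Errors made per attempt over time (memorability criterion)."""
    return [e for _, _, e in sessions]

print(efficiency(sessions))    # -> 136.25
print(learnability(sessions))  # -> 120
print(error_trend(sessions))   # -> [5, 3, 1, 1]
```

Numbers like these are the quantitative indicators the text refers to: a falling completion time suggests learning, and a flattening error count over attempts suggests the task is being remembered.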
1.7.2 User Experience Goals
A diversity of user experience goals has been articulated in interaction design, which covers
a range of emotions and felt experiences. These include desirable and undesirable ones, as
shown in Table 1.1.
Desirable aspects: satisfying, enjoyable, engaging, pleasurable, exciting, entertaining,
helpful, motivating, challenging, enhancing sociability, supporting creativity, cognitively
stimulating, fun, provocative, surprising, rewarding, emotionally fulfilling, experiencing flow

Undesirable aspects: boring, frustrating, making one feel guilty, annoying, childish,
unpleasant, patronizing, making one feel stupid, cutesy, gimmicky

Table 1.1 Desirable and undesirable aspects of the user experience
Many of these are subjective qualities and are concerned with how a system feels to
a user. They differ from the more objective usability goals in that they are concerned with
how users experience an interactive product from their perspective, rather than assessing how
useful or productive a system is from its own perspective. Whereas the terms used to describe
usability goals comprise a small, distinct set, many more terms are used to describe the
multifaceted nature of the user experience, and these terms often overlap in what they refer to.
In so doing, they offer subtly different options for expressing the way an experience varies for
the same activity over time, technology, and place. For example, we may describe listening to
music in the shower as highly pleasurable, but consider it more apt to describe listening
to music in the car as enjoyable. Similarly, listening to music on a high-end powerful music
system may invoke exciting and emotionally fulfilling feelings, while listening to it on a
smartphone that has a shuffle mode may be serendipitously enjoyable, especially not know-
ing what tune is next. The process of selecting terms that best convey a user’s feelings, state
of being, emotions, sensations, and so forth when using or interacting with a product at a
given time and place can help designers understand the multifaceted and changing nature of
the user experience.
The concepts can be further defined in terms of elements that contribute to making
a user experience pleasurable, fun, exciting, and so on. They include attention, pace, play,
interactivity, conscious and unconscious control, style of narrative, and flow. The concept of
flow (Csikszentmihalyi, 1997) is popular in interaction design for informing the design of
user experiences for websites, video games, and other interactive products. It refers to a state
of intense emotional involvement that comes from being completely involved in an activity,
like playing music, and where time flies. Instead of designing web interfaces to cater to visi-
tors who know what they want, they can be designed to induce a state of flow, leading the
visitor to some unexpected place, where they become completely absorbed. In an interview
with Wired magazine, Mihaly Csikszentmihalyi (1996) uses the analogy of a gourmet meal
to describe how a user experience can be designed to be engrossing, “starting off with the
appetizers, moving on to the salads and entrées, and building toward dessert and not know-
ing what will follow.”
The quality of the user experience may also be affected by single actions performed
at an interface. For example, people can get much pleasure from turning a knob that has
the perfect level of gliding resistance; they may enjoy flicking their finger from the bottom
of a smartphone screen to reveal a new menu, with the effect that it appears by magic, or
enjoy the sound of trash being emptied from the trashcan on a screen. These one-off actions
can be performed infrequently or several times a day, yet the user never tires of doing them.
Dan Saffer (2014) has described these as micro-interactions and argues that designing these
moments of interaction at the interface—despite being small—can have a big impact on the
user experience.
ACTIVITY 1.3
There are more desirable than undesirable aspects of the user experience listed in Table 1.1.
Why do you think this is so? Should you consider all of these when designing a product?
Comment
The two lists we have come up with are not meant to be exhaustive. There are likely to be
more—both desirable and undesirable—as new products surface. The reason for there being
more of the former is that a primary goal of interaction design is to create positive experi-
ences. There are many ways of achieving this.
Not all usability and user experience goals will be relevant to the design and evaluation
of an interactive product being developed. Some combinations will also be incompatible. For
example, it may not be possible or desirable to design a process control system that is both
safe and fun. Recognizing and understanding the nature of the relationship between usability
and user experience goals is central to interaction design. It enables designers to become aware
of the consequences of pursuing different combinations when designing products and high-
lighting potential trade-offs and conflicts. As suggested by Jack Carroll (2004), articulating
the interactions of the various components of the user’s experience can lead to a deeper and
more significant interpretation of the role of each component.
BOX 1.3
Beyond Usability: Designing to Persuade
Eric Schaffer (2009) argues that we should be focusing more on the user experience and less
on usability. He points out how many websites are designed to persuade or influence rather
than enable users to perform their tasks in an efficient manner. For example, many online
shopping sites are in the business of selling services and products, where a core strategy is to
entice people to buy what they might not have thought they needed. Online shopping experi-
ences are increasingly about persuading people to buy rather than being designed to make
shopping easy. This involves designing for persuasion, emotion, and trust, which may or may
not be compatible with usability goals.
This entails determining what customers will do, whether it is to buy a product or renew
a membership, and it involves encouraging, suggesting, or reminding the user of things that
they might like or need. Many online travel sites try to lure visitors to purchase additional
items (such as hotels, insurance, car rental, car parking, or day trips) besides the flight they
originally wanted to book, and they will add a list full of tempting graphics to the visitor’s
booking form, which then has to be scrolled through before being able to complete the trans-
action. These opportunities need to be designed to be eye-catching and enjoyable, in the same
way that an array of products are attractively laid out in the aisles of a grocery store that one
is required to walk past before reaching one’s desired product.
Some online sites, however, have gone too far, for example, adding items to the cus-
tomer’s shopping basket (for example, insurance, special delivery, and care and handling)
that the shopper has to deselect if not desired or start all over again. This sneaky add-on
approach can often result in a negative experience. More generally, this deceptive approach
to UX has been described by Harry Brignull as dark patterns (see http://darkpatterns.org/).
Shoppers often become annoyed if they notice decisions that add cost to their purchase
have been made on their behalf without even being asked. For example, on clicking the
unsubscribe button on the website of a car rental company, as indicated in Figure 1.10,
the user is taken to another page where they have to uncheck additional boxes and then
click Update. They are then taken to yet another page where they are asked for their reason.
The next screen says “Your email preferences have been updated. Do you need to hire a
vehicle?” without letting the user know whether they have been unsubscribed from their
mailing list.
Figure 1.10 Dark pattern for a car rental company: email preference pages that require the
user to uncheck boxes and click Update, and then ask why they are unsubscribing

The key is to nudge people in subtle and pleasant ways that they can trust and feel
comfortable with. Natasha Lomas (2018) points out how dark pattern design is “deception and
dishonesty by design.” She describes in a TechCrunch article the many kinds of dark patterns
that are now used to deceive users. A well-known example that most of us have experienced
is unsubscribing from a marketing mailing list. Many sites go to great lengths to make it dif-
ficult for you to leave; you think you have unsubscribed, but then you discover that you need
to type in your email address and click several more buttons to reaffirm that you really want
to quit. Then, just when you think you are safe, they post a survey asking you to answer a few
questions about why you want to leave. Like Harry Brignull, she argues that companies
should adopt fair and ethical design where users have to opt in to any actions that benefit the
company at the expense of the users’ interests.

1.7.3 Design Principles
Design principles are used by interaction designers to aid their thinking when designing for
the user experience. These are generalizable abstractions intended to orient designers toward
thinking about different aspects of their designs. A well-known example is feedback: Products
should be designed to provide adequate feedback to the users that informs them about what
has already been done so that they know what to do next in the interface. Another one that is
important is findability (Morville, 2005). This refers to the degree to which a particular object
is easy to discover or locate—be it navigating a website, moving through a building, or finding
the delete image option on a digital camera. Related to this is the principle of navigability: Is
it obvious what to do and where to go in an interface? Are the menus structured in a way that
allows the user to move smoothly through them to reach the option they want?
Design principles are derived from a mix of theory-based knowledge, experience, and com-
mon sense. They tend to be written in a prescriptive manner, suggesting to designers what to
provide and what to avoid at the interface—if you like, the dos and don’ts of interaction design.
More specifically, they are intended to help designers explain and improve their designs (Thim-
bleby, 1990). However, they are not intended to specify how to design an actual interface, for
instance, telling the designer how to design a particular icon or how to structure a web portal, but
to act more like triggers for designers, ensuring that they provide certain features in an interface.
A number of design principles have been promoted. The best known are concerned with
how to determine what users should see and do when carrying out their tasks using an
interactive product. Here we briefly describe the most common ones: visibility, feedback,
constraints, consistency, and affordance.
Visibility
The importance of visibility is exemplified by our contrasting examples at the beginning of
the chapter. The voice-mail system made the presence and number of waiting messages invis-
ible, while the answering machine made both aspects highly visible. The more visible functions
are, the more likely it is that users will be able to know what to do next. Don Norman (1988)
describes the controls of a car to emphasize this point. The controls for different operations are
clearly visible, such as indicators, headlights, horn, and hazard warning lights, indicating what
can be done. The relationship between the way the controls have been positioned in the car and
what they do makes it easy for the driver to find the appropriate control for the task at hand.
In contrast, when functions are out of sight, it makes them more difficult to find and
to know how to use. For example, devices and environments that have become automated
through the use of sensor technology (usually for hygiene and energy-saving reasons)—like
faucets, elevators, and lights—can sometimes be more difficult for people to know how to
control, especially how to activate or deactivate them. This can result in people getting caught
short and frustrated. Figure 1.11 shows a sign that explains how to use the automatically
controlled faucet for what is normally an everyday and well-learned activity. It also states
that the faucets cannot be operated if wearing black clothing. It does not explain, however,
what to do if you are wearing black clothing! Increasingly, highly visible controlling devices,
like knobs, buttons, and switches, which are intuitive to use, have been replaced by invisible
and ambiguous activating zones where people have to guess where to move their hands, bod-
ies, or feet—on, into, or in front of—to make them work.
Feedback
Related to the concept of visibility is feedback. This is best illustrated by an analogy to what
everyday life would be like without it. Imagine trying to play a guitar, slice bread using a
knife, or write using a pen if none of the actions produced any effect for several seconds.
Figure 1.11 A sign in the restrooms at the Cincinnati airport
Source: http://www.baddesigns.com
There would be an unbearable delay before the music was produced, the bread was cut, or
the words appeared on the paper, making it almost impossible for the person to continue
with the next strum, cut, or stroke.
Feedback involves sending back information about what action has been done and what
has been accomplished, allowing the person to continue with the activity. Various kinds of
feedback are available for interaction design—audio, tactile, verbal, visual, and combinations
of these. Deciding which combinations are appropriate for different types of activities and
interactivities is central. Using feedback in the right way can also provide the necessary vis-
ibility for user interaction.
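As a rough illustration of the point above, an interface action can be written to report what it has done immediately, combining more than one feedback channel. The function and values below are hypothetical placeholders, not a real API.

```python
# Sketch: each user action immediately returns feedback about what was
# done, here combining a visual and an audio channel. The function name
# and values are invented placeholders.

def send_message(text):
    # Perform the action, then report what has been accomplished right
    # away so that the user can continue with the activity.
    feedback = {
        "visual": f'Message sent: "{text}"',  # e.g. a status banner
        "audio": "whoosh.wav",                # e.g. a send sound
    }
    return feedback

fb = send_message("hello")
print(fb["visual"])   # -> Message sent: "hello"
```

The design decision is not whether to give feedback but which combination of channels suits the activity; a silent visual banner may suit an office setting, while a sound confirms the action when the user is not looking at the screen.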
Constraints
The design concept of constraining refers to determining ways of restricting the kinds of user inter-
action that can take place at a given moment. There are various ways that this can be achieved.
A common design practice in graphical user interfaces is to deactivate certain menu options by
shading them gray, thereby restricting the user only to actions permissible at that stage of the
activity (see Figure 1.12). One of the advantages of this form of constraining is that it prevents
the user from selecting incorrect options and thereby reduces the chance of making a mistake.
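This graying-out practice amounts to a simple rule table that maps the current state of the activity to the set of permissible options. The Python sketch below is illustrative only; the states and option names are invented.

```python
# Sketch of logical constraining: menu options are deactivated (grayed
# out) unless they are permissible in the current state. The states and
# option names here are hypothetical.

MENU = ["Open", "Save", "Copy", "Paste"]

def available_options(document_open, clipboard_full):
    """Return each menu option with a flag saying whether it is active."""
    rules = {
        "Open":  True,                       # always permissible
        "Save":  document_open,              # needs an open document
        "Copy":  document_open,
        "Paste": document_open and clipboard_full,
    }
    return [(name, rules[name]) for name in MENU]

# Nothing open yet: only Open is active; the rest would be shown gray.
for name, active in available_options(document_open=False, clipboard_full=False):
    print(name, "active" if active else "grayed out")
```

Because the inactive options simply cannot be selected, the user is prevented from issuing a command that would fail or cause an error at that stage of the activity.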
The use of different kinds of graphical representations can also constrain a person’s
interpretation of a problem or information space. For example, flow chart diagrams show
which objects are related to which, thereby constraining the way that the information can be
perceived. The physical design of a device can also constrain how it is used; for example, the
external slots in a computer have been designed to allow a cable or card to be inserted in a
certain way only.

Figure 1.12 A menu showing restricted availability of options as an example of logical constraining.
Gray text indicates deactivated options.
Source: https://www.ucl.ac.uk

Sometimes, however, the physical constraint is ambiguous, as shown in Figure 1.13.
The figure shows part of the back of a computer. There are two sets of connectors;
the two on the right are for a mouse and a keyboard. They look identical and are physically
constrained in the same way. How do you know which is which? Do the labels help?
Consistency
This refers to designing interfaces to have similar operations and use similar elements for achiev-
ing similar tasks. In particular, a consistent interface is one that follows rules, such as using the
same operation to select all objects. For example, a consistent operation is using the same input
action to highlight any graphical object on the interface, such as always clicking the left mouse
button. Inconsistent interfaces, on the other hand, allow exceptions to a rule. An example is
where certain graphical objects (for example, email messages presented in a table) can be high-
lighted only by using the right mouse button, while all other operations are highlighted using
the left mouse button. The problem with this kind of inconsistency is that it is quite arbitrary,
making it difficult for users to remember and making its use more prone to mistakes.
One of the benefits of consistent interfaces, therefore, is that they are easier to learn and
use. Users have to learn only a single mode of operation that is applicable to all objects. This
principle works well for simple interfaces with limited operations, such as a portable radio
with a small number of operations mapped onto separate buttons. Here, all the user has to
do is to learn what each button represents and select accordingly. However, it can be more
problematic to apply the concept of consistency to more complex interfaces, especially when
many different operations need to be designed. For example, consider how to design an inter-
face for an application that offers hundreds of operations, such as a word-processing appli-
cation. There is simply not enough space for a thousand buttons, each of which maps to an
individual operation. Even if there were, it would be extremely difficult and time-consuming
for the user to search through all of them to find the desired operation. A much more effec-
tive design solution is to create categories of commands that can be mapped into subsets of
operations that can be displayed at the interface, for instance, via menus.
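The menu-based solution described above can be sketched as a small mapping from categories to subsets of operations. The categories and command names below are invented for illustration and do not describe any particular application.

```python
# Sketch of grouping many operations into a small set of categories
# presented as menus, instead of one button per operation. Category and
# command names are hypothetical.

menus = {
    "File": ["New", "Open", "Save", "Print"],
    "Edit": ["Undo", "Cut", "Copy", "Paste", "Find"],
    "Format": ["Bold", "Italic", "Paragraph Style"],
}

def find_menu(command):
    """Locate the menu category that a command lives under."""
    for menu, commands in menus.items():
        if command in commands:
            return menu
    return None

print(find_menu("Paste"))   # -> Edit
```

The consistency payoff is that the user searches a handful of predictable categories rather than hundreds of individual controls, and the same lookup rule works for every command.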
Figure 1.13 Ambiguous constraints on the back of a computer
Source: http://www.baddesigns.com
Affordance
This is a term used to refer to an attribute of an object that allows people to know how to
use it. For example, a mouse button invites pushing (in so doing, activating clicking) by the
way it is physically constrained in its plastic shell. At a simple level, to afford means “to give
a clue”’ (Norman, 1988). When the affordances of a physical object are perceptually obvious,
it is easy to know how to interact with it. For example, a door handle affords pulling, a cup
handle affords grasping, and a mouse button affords pushing. The term has since been much
popularized in interaction design, being used to describe how interfaces should make it obvi-
ous as to what can be done when using them. For example, graphical elements like buttons,
icons, links, and scrollbars are discussed with respect to how to make it appear obvious how
they should be used: icons should be designed to afford clicking, scrollbars to afford moving
up and down, and buttons to afford pushing.
Don Norman (1999) suggests that there are two kinds of affordance: perceived and real.
Physical objects are said to have real affordances, like grasping, that are perceptually obvious
and do not have to be learned. In contrast, user interfaces that are screen-based are virtual and
do not have these kinds of real affordances. Using this distinction, he argues that it does
not make sense to try to design for real affordances at the interface, except when designing
physical devices, like control consoles, where affordances like pulling and pressing are help-
ful in guiding the user to know what to do. Alternatively, screen-based interfaces are better
conceptualized as perceived affordances, which are essentially learned conventions. However,
watching a one-year-old swiping smartphone screens, zooming in and out on images with
their finger and thumb, and touching menu options suggests that this kind of learning comes
naturally.
Applying Design Principles in Practice
One of the challenges of applying more than one of the design principles in interaction
design is that trade-offs can arise among them. For example, the more you try to constrain
an interface, the less visible information becomes. The same can also happen when trying
to apply a single design principle. For example, the more an interface is designed to afford
through trying to resemble the way physical objects look, the more it can become clut-
tered and difficult to use. It can also be the case that the more an interface is designed to
be aesthetic, the less usable it becomes. Consistency can be a problematic design principle;
trying to design an interface to be consistent with something can make it inconsistent with
something else. Furthermore, sometimes inconsistent interfaces are actually easier to use
than consistent interfaces. This is illustrated by Jonathan Grudin’s (1989) classic use of the
analogy of where knives are stored in a house. Knives come in a variety of forms, including
butter knives, steak knives, table knives, and fish knives. An easy place to put them all and
subsequently locate them is in the top drawer by the sink. This makes it easy for everyone
to find them and follows a simple consistent rule. But what about the knives that don’t fit
or are too sharp to put in the drawer, like carving knives and bread knives? They are placed
in a wooden block. And what about the best knives kept only for special occasions? They
are placed in the cabinet in another room for safekeeping. And what about other knives
like putty knives and paint-scraping knives used in home improvement projects (kept in the
garage) and jack-knives (kept in one’s pockets or backpack)? Very quickly, the consistency
rule begins to break down.
Jonathan Grudin notes how, in extending the number of places where knives are kept,
inconsistency is introduced, which in turn increases the time needed to learn where they are
all stored. However, the placement of the knives in different places often makes it easier to
find them because they are at hand for the context in which they are used and are also next
to the other objects used for a specific task; for instance, all of the home improvement project
tools are stored together in a box in the garage. The same is true when designing interfaces:
introducing inconsistency can make it more difficult to learn an interface, but in the long run
it can make it easier to use.
ACTIVITY 1.4
One of the main design principles for website design is simplicity. Jakob Nielsen (1999) pro-
posed that designers go through all of their design elements and remove them one by one. If
a design works just as well without an element, then remove it. Do you think this is a good
design principle? If you have your own website, try doing this and seeing what happens. At
what point does the interaction break down?
Comment
Simplicity is certainly an important design principle. Many designers try to cram too much
into a screenful of space, making it unwieldy for people to find the element in which they are
interested. Removing design elements to see what can be discarded without affecting the over-
all function of the website can be a salutary lesson. Unnecessary icons, buttons, boxes, lines,
graphics, shading, and text can be stripped, leaving a cleaner, crisper, and easier-to-navigate
website. However, graphics, shading, coloring, and formatting can make a site aesthetically
pleasing and enjoyable to use. Plain vanilla sites consisting solely of lists of text and a few links
may not be as appealing and may put certain visitors off, never to return. Good interaction
design involves getting the right balance between aesthetic appeal and the optimal amount
and kind of information per page.
In-Depth Activity
This activity is intended for you to put into practice what you have studied in this chapter.
Specifically, the objective is to enable you to define usability and user experience goals and to
transform these and other design principles into specific questions to help evaluate an inter-
active product.
Find an everyday handheld device, for example, a remote control, digital camera, or
smartphone and examine how it has been designed, paying particular attention to how the
user is meant to interact with it.
(a) From your first impressions, write down what is good and bad about the way the
device works.
(b) Give a description of the user experience resulting from interacting with it.
(c) Outline some of the core micro-interactions that are supported by it. Are they pleasurable,
easy, and obvious?
(d) Based on your reading of this chapter and any other material you have come across about
interaction design, compile a set of usability and user experience goals that you think will
be most relevant in evaluating the device. Decide which are the most important ones and
explain why.
(e) Translate each of your sets of usability and user experience goals into two or three specific
questions. Then use them to assess how well your device fares.
(f) Repeat steps (c) and (d), but this time use the design principles outlined in the chapter.
(g) Finally, discuss possible improvements to the interface based on the answers obtained in
steps (d) and (e).
Summary
In this chapter, we have looked at what interaction design is and its importance when developing
apps, products, services, and systems. To begin, a number of good and bad designs were pre-
sented to illustrate how interaction design can make a difference. We described who and what
is involved in interaction design and the need to understand accessibility and inclusiveness. We
explained in detail what usability and user experience are, how they have been characterized,
and how to operationalize them to assess the quality of a user experience resulting from interact-
ing with an interactive product. The increasing emphasis on designing for the user experience
and not just products that are usable was stressed. A number of core design principles were also
introduced that provide guidance for helping to inform the interaction design process.
Key Points
• Interaction design is concerned with designing interactive products to support the way
people communicate and interact in their everyday and working lives.
• Interaction design is multidisciplinary, involving many inputs from wide-ranging disciplines
and fields.
• The notion of the user experience is central to interaction design.
• Optimizing the interaction between users and interactive products requires consideration
of a number of interdependent factors, including context of use, types of activity, UX goals,
accessibility, cultural differences, and user groups.
• Identifying and specifying relevant usability and user experience goals can help lead to the
design of good interactive products.
• Design principles, such as feedback and simplicity, are useful heuristics for informing, ana-
lyzing, and evaluating aspects of an interactive product.
Further Reading
Here we recommend a few seminal readings on interaction design and the user experience
(in alphabetical order).
COOPER, A., REIMANN, R., CRONIN, D. AND NOESSEL, C. (2014) About Face: The
Essentials of Interaction Design (4th ed.). John Wiley & Sons Inc. This fourth edition of
About Face provides an updated overview of what is involved in interaction design, and it is
written in a personable style that appeals to practitioners and students alike.
GARRETT, J. J. (2010) The Elements of User Experience: User-Centered Design for the Web
and Beyond (2nd ed.). New Riders Press. This is the second edition of the popular coffee-
table introductory book to interaction design. It focuses on how to ask the right questions
when designing for a user experience. It emphasizes the importance of understanding how
products work on the outside, that is, when a person comes into contact with those products
and tries to work with them. It also considers a business perspective.
LIDWELL, W., HOLDEN, K. AND BUTLER, J. (2010) Universal Principles of Design, Revised and Updated: 125 Ways to Enhance Usability, Influence Perception, Increase Appeal, Make Better Design Decisions and Teach Through Design. Rockport Publishers, Inc. This book presents classic design principles
such as consistency, accessibility, and visibility in addition to some lesser-known ones, such as
constancy, chunking, and symmetry. They are alphabetically ordered (for easy reference) with
a diversity of examples to illustrate how they work and can be used.
NORMAN, D.A. (2013) The Design of Everyday Things: Revised and Expanded Edition.
MIT Press. This book was first published in 1988 and became an international best seller,
introducing the world of technology to the importance of design and psychology. It covers
the design of everyday things, such as refrigerators and thermostats, providing much food for
thought in relation to how to design interfaces. This latest edition is comprehensively revised
showing how principles from psychology apply to a diversity of old and new technologies.
The book is highly accessible with many illustrative examples.
SAFFER, D. (2014) Microinteractions: Designing with Details. O’Reilly. This highly acces-
sible book provides many examples of the small things in interaction design that make a big
difference between a pleasant experience and a nightmare one. Dan Saffer describes how to
design them to be efficient, understandable, and enjoyable user actions. He goes into detail
about their structure and the different kinds, including many examples with lots of illustra-
tions. The book is a joy to dip into and enables you to understand right away why and how
it is important to get the micro-interactions right.
INTERVIEW with Harry Brignull
Harry Brignull is a user experience con-
sultant based in the United Kingdom. He
has a PhD in cognitive science, and his
work involves building better experiences
by blending user research and interaction
design. In his work, Harry has consulted
for companies including Spotify, Smart
Pension, The Telegraph, British Airways,
Vodafone, and many others. In his spare
time, Harry also runs a blog on interaction
design that has attracted a lot of eyeballs. It
is called 90percentofeverything.com, and it
is well worth checking out.
What are the characteristics of a good
interaction designer?
I think of interaction design, user expe-
rience design, service design, and user
research as a combined group of disci-
plines that are tricky to tease apart. Every
company has slightly different terminol-
ogy, processes, and approaches. I’ll let you
into a secret, though. They’re all making
it up as they go along. When you see any
organization portraying its design and
research publicly, they’re showing you
a fictionalized view of it for recruitment
and marketing purposes. The reality of the
work is usually very different. Research
and design is naturally messy. There’s a
lot of waste, false assumptions, and blind
alleys you have to go down before you
can define and understand a problem well
enough to solve it. If an employer doesn’t
understand this and they don’t give you
the space and time you need, then you
won’t be able to do a good job, regardless
of your skills and training.
A good interaction designer has skills
that work like expanding foam. You expand
to fill the skill gaps in your team. If you
don’t have a writer present, you need to be
able to step up and do it yourself, at least
to the level of a credible draft. If you don’t
have a researcher, you’ll need to step up
and do it yourself. The same goes for devel-
oping code-based prototypes, planning the
user journeys, and so on. You’ll soon learn
to become used to working outside of your
comfort zone and relish the new challenges
that each project brings.
How has interaction design changed in the
past few years?
In-housing of design teams is a big trend
at the moment. When I started my con-
sultancy career in the mid-2000s, the
main route to getting a career in industry
was to get a role at an agency, like a UX
consultancy, a research agency, or a full-
service agency. Big organizations didn’t
even know where to start with hiring and
building their own teams, so they paid
enormous sums to agencies to design and
build their products. This turned out to
be a pretty ineffective model—when the
agencies finish a project, they take all
the acquired expertise away with them to
their next clients.
These days, digital organizations have
wised up, and they’ve started building their
own in-house teams. This means that a big
theme in design these days is organizational
change. You can’t do good design in an
organization that isn’t set up for it. In fact,
in old, large organizations, the political
structure often seems to be set up to sab-
otage good design and development prac-
tices. It sounds crazy, but it’s very common
to walk into an organization to find a proj-
ect manager brandishing a waterfall Gantt
chart while ranting obsessively about Agile
(which is a contradiction in terms) or to
find a product owner saying in one breath
they value user research yet in the next
breath getting angry with researchers for
bringing them bad news. As well as “leg-
acy technology,” organizations naturally
end up with “legacy thinking.” It’s really
tricky to change it. Design used to be just
a department. Nowadays it’s understood
that good design requires the entire organi-
zation to work together in a cohesive way.
What projects are you working on now?
I’m currently head of UX at a FinTech start-
up called Smart Pension in London. Pen-
sions pose a really fascinating user-centered
design challenge. Consumers hate thinking
about pensions, but they desperately need
them. In a recent research session, one of
the participants said something that really
stuck with me: “Planning your pension
is like planning for your own funeral.”
Humans are pretty terrible at long-term
planning over multiple decades. Nobody
likes to think about their own mortality.
But this is exactly what you need to do if
you want to have a happy retirement.
The pension industry is full of jar-
gon and off-putting technical complexity.
Even fundamental financial concepts like
risk aren’t well understood by many con-
sumers. In some recent research, one of
our participants got really tongue-tied try-
ing to understand the idea that since they
were young, it would be “high risk” (in the
loose nontechnical definition of the word)
to put their money into a “low-risk” fund
(in the technical definition of the word)
since they’d probably end up with lower
returns when they got older. Investment
is confusing unless you’ve had training.
Then, there’s the problem that “a little
knowledge can hurt.” Some consumers
who think they know what they’re doing
can end up suffering when they think
they can beat the market by moving their
money around between funds every week.
Self-service online pension (retirement
plans) platforms don’t do anything to help
people make the right decisions because
that would count as advice, which they’re
not able to give because of the way it’s reg-
ulated. Giving an average person a self-
service platform and telling them to go sort
out their pension is like giving them a Unix
terminal and telling them to sort out their
own web server. A few PDF fact sheets just
aren’t going to help. If consumers want
advice, they have to go to a financial advi-
sor, which can be expensive and doesn’t
make financial sense unless you have a lot
of money in the first place. There’s a gap in
the market, and we're working on these sorts of challenges in my team at Smart Pension.
What would you say are the biggest chal-
lenges facing you and other consultants
doing interaction design these days?
A career in interaction design is one of con-
tinual education and training. The biggest
challenge is to keep this going. Even if you
feel that you’re at the peak of your skills,
the technology landscape will be shifting
under your feet, and you need to keep an
eye on what’s coming next so you don’t get
left behind. In fact, things move so quickly
in interaction design that by the time you
read this interview, it will already be dated.
If you ever find yourself in a “com-
fortable” role doing the same thing every
day, then beware—you’re doing yourself a
disservice. Get out there, stretch yourself,
and make sure you spend some time every
week outside your comfort zone.
If you’re asked to evaluate a prototype ser-
vice or product and you discover it is really
bad, how do you break the news?
It depends what your goal is. If you want
to just deliver the bad news and leave, then
by all means be totally brutal and don’t
pull any punches. But if you want to build
a relationship with the client, you’re going
to need to help them work out how to
move forward.
Remember, when you deliver bad news
to a client, you’re basically explaining to
them that they’re in a dark place and it’s
their fault. It can be quite embarrassing
and depressing. It can drive stakeholders
apart when really you need to bring them
together and give them a shared vision to
work toward. Discovering bad design is an
opportunity for improvement. Always pair
the bad news with a recommendation of
what to do next.
NOTE
We use the term interactive products generically to refer to all classes of interactive
systems, technologies, environments, tools, applications, services, and devices.
Chapter 2
THE PROCESS OF INTERACTION DESIGN
Objectives
The main goals of this chapter are to accomplish the following:
• Reflect on what interaction design involves.
• Explain some of the advantages of involving users in development.
• Explain the main principles of a user-centered approach.
• Introduce the four basic activities of interaction design and how they are related in a
simple lifecycle model.
• Ask some important questions about the interaction design process and provide the
answers.
• Consider how interaction design activities can be integrated into other development
lifecycles.
2.1 Introduction
Imagine that you have been asked to design a cloud-based service to enable people to share
and curate their photos, movies, music, chats, documents, and so on, in an efficient, safe, and
enjoyable way. What would you do? How would you start? Would you begin by sketching
how the interface might look, work out how the system architecture should be structured, or
just start coding? Or, would you start by asking users about their current experiences with
sharing files and examine the existing tools, for example, Dropbox and Google Drive, and
based on this begin thinking about how you were going to design the new service? What
would you do next? This chapter discusses the process of interaction design, that is, how to
design an interactive product.
There are many fields of design, such as graphic design, architectural design, industrial
design, and software design. Although each discipline has its own approach to design, there
are commonalities. The Design Council of the United Kingdom captures these in the double-
diamond of design, as shown in Figure 2.1. This approach has four phases which are iterated:
• Discover: Designers try to gather insights about the problem.
• Define: Designers develop a clear brief that frames the design challenge.
• Develop: Solutions or concepts are created, prototyped, tested, and iterated.
• Deliver: The resulting project is finalized, produced, and launched.
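The iterated nature of these four phases can be sketched in code. The following is a minimal illustrative sketch, not a methodology from the text: the phase names come from the double diamond, while the placeholder functions and the idea of threading each phase's output into the next are assumptions made purely for illustration.

```python
# Minimal sketch of the double diamond as an iterated four-phase loop.
# Phase names come from the Design Council model described in the text;
# the placeholder functions and data flow are invented for illustration.

PHASES = ["Discover", "Define", "Develop", "Deliver"]

def run_iteration(artifact, phase_fns):
    """One pass through the four phases, feeding each phase's output
    (insights, brief, prototype, product) into the next phase."""
    for phase in PHASES:
        artifact = phase_fns[phase](artifact)
    return artifact

# Placeholder phase functions: each simply records that it ran.
phase_fns = {p: (lambda a, p=p: a + [p]) for p in PHASES}

trace = run_iteration([], phase_fns)
print(trace)  # ['Discover', 'Define', 'Develop', 'Deliver']
```

In practice the loop body would be repeated, since the phases are iterated rather than traversed once.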
Interaction design also follows these phases, and it is underpinned by the philosophy of
user-centered design, that is, involving users throughout development. Traditionally, interac-
tion designers begin by doing user research and then sketching their ideas. But who are the
users to be researched, and how can they be involved in development? Will they know what
they want or need if we just ask them? From where do interaction designers get their ideas,
and how do they generate designs?
In this chapter, we raise and answer these kinds of questions, discuss user-centered
design, and explore the four basic activities of the interaction design process. We also intro-
duce a lifecycle model of interaction design that captures these activities and the relationships
among them.
2.2 What Is Involved in Interaction Design?
Interaction design has specific activities focused on discovering requirements for the prod-
uct, designing something to fulfill those requirements, and producing prototypes that are
then evaluated. In addition, interaction design focuses attention on users and their goals.
Figure 2.1 The double diamond of design. Two diamonds run from Problem, through the Problem Definition and Design Brief, to Solution. The four phases produce, in turn, insight into the problem (Discover), the area to focus upon (Define), potential solutions (Develop), and solutions that work (Deliver).
Source: Adapted from https://www.designcouncil.org.uk/news-opinion/design-process-what-double-diamond
For example, the artifact’s use and target domain are investigated by taking a user-centered
approach to development, users’ opinions and reactions to early designs are sought, and
users are involved appropriately in the development process itself. This means that users’
concerns direct the development rather than just technical concerns.
Design is also about trade-offs—about balancing conflicting requirements. One common
form of trade-off when developing a system to offer advice, for example, is deciding how
much choice will be given to the user and how much direction the system should offer. Often,
the division will depend on the purpose of the system, for example, whether it is for playing
music tracks or for controlling traffic flow. Getting the balance right requires experience, but
it also requires the development and evaluation of alternative solutions.
Generating alternatives is a key principle in most design disciplines and one that is also
central to interaction design. Linus Pauling, twice a Nobel Prize winner, once said, “The best
way to get a good idea is to get lots of ideas.” Generating lots of ideas is not necessarily hard,
but choosing which of them to pursue is more difficult. For example, Tom Kelley (2016)
describes seven secrets for successful brainstorms, including sharpening the focus (having a
well-honed problem statement), having playful rules (to encourage ideas), and getting physi-
cal (using visual props).
Involving users and others in the design process means that the designs and potential
solutions will need to be communicated to people other than the original designer. This
requires the design to be captured and expressed in a form that allows review, revision, and
improvement. There are many ways of doing this, one of the simplest being to produce a
series of sketches. Other common approaches are to write a description in natural language,
to draw a series of diagrams, and to build a prototype, that is, a limited version of the final
product. A combination of these techniques is likely to be the most effective. When users are
involved, capturing and expressing a design in a suitable format is especially important since
they are unlikely to understand jargon or specialist notations. In fact, a form with which
users can interact is most effective, so building prototypes is an extremely powerful approach.
ACTIVITY 2.1
This activity asks you to apply the double diamond of design to produce an innovative inter-
active product for your own use. By focusing on a product for yourself, the activity deliber-
ately de-emphasizes issues concerned with involving other users, and instead it emphasizes the
overall process.
Imagine that you want to design a product that helps you organize a trip. This might be
for a business or vacation trip, to visit relatives halfway around the world, or for a bike ride
on the weekend—whatever kind of trip you like. In addition to planning the route or booking
tickets, the product may help to check visa requirements, arrange guided tours, investigate the
facilities at a location, and so on.
1. Using the first three phases of the double diamond of design, produce an initial design
using a sketch or two, showing its main functionality and its general look and feel. This
activity omits the fourth phase, as you are not expected to deliver a working solution.
2. Now reflect on how your activities fell into these phases. What did you do first? What was
your instinct to do first? Did you have any particular artifacts or experiences upon which
to base your design?
Comment
1. The first phase focuses on discovering insights about the problem, but is there a problem?
If so, what is it? Although most of us manage to book trips and travel to destinations
with the right visas and in comfort, upon reflection the process and the outcome can be
improved. For example, dietary requirements are not always fulfilled, and the accommoda-
tion is not always in the best location. There is a lot of information available to support
organizing travel, and there are many agents, websites, travel books, and tourist boards
that can help. The problem is that it can be overwhelming.
The second phase is about defining the area on which to focus. There are many rea-
sons for travelling—both individual and family—but in my experience organizing business
trips to meetings worldwide is stressful, and minimizing the complexity involved in these
would be worthwhile. The experience would be improved if the product offers advice from
the many possible sources of information and tailors that advice to individual preferences.
The third phase focuses on developing solutions, which in this case is a sketch of the
design itself. Figure 2.2 shows an initial design. This has two versions of the product—one
as an app to run on a mobile device and one to run on a larger screen. The assumptions
underlying the choice to build two versions are based on my experience; I would normally
plan the details of the trip at my desk, while requiring updates and local information while
traveling. The mobile app has a simple interaction style that is easy to use on the go, while
the larger-screen version is more sophisticated and shows a lot of information and the vari-
ous choices available.
Figure 2.2 Initial sketches of the trip organizer showing (a) a large screen covering the entire
journey from home to Beerwah in Australia and (b) the smartphone screen available for the leg
of the journey at Paris (Charles de Gaulle) airport
2. Initially, it wasn't clear that there was a problem to address, but on reflection the complexity of the available information and the benefit of tailoring choices became clearer. The second phase guided me toward thinking about the area on which to focus. Worldwide business trips are the most difficult, and reducing the complexity of information sources through customization would definitely help. It would be good if the product learned about my preferences, for example, recommending flights from my favorite airline and finding places to have a vegan meal.

Developing solutions (the third phase) led me to consider how to interact with the product: seeing detail on a large screen would be useful, but a summary that can be shown on a mobile device is also needed. The type of support also depends on where the meeting is being held. Planning a trip abroad requires both a high-level view to check visas, vaccinations, and travel advice, as well as a detailed view about the proximity of accommodation to the meeting venue and specific flight times. Planning a local trip is much less complicated.

The exact steps taken to create a product will vary from designer to designer, from product to product, and from organization to organization (see Box 2.1). Capturing concrete ideas, through sketches or written descriptions, helps to focus the mind on what is being designed, the context of the design, and what user experience is to be expected. The sketches can capture only some elements of the design, however, and other formats are needed to capture everything intended. Throughout this activity, you have been making choices between alternatives, exploring requirements in detail, and refining your ideas about what the product will do.

2.2.1 Understanding the Problem Space
Deciding what to design is key, and exploring the problem space is one way in which to decide. This is the first phase in the double diamond, but it can be overlooked by those new to interaction design, as you may have discovered in Activity 2.1. In the process of creating an interactive product, it can be tempting to begin at the nuts and bolts level of design. By this we mean working out how to design the physical interface and what technologies and interaction styles to use, for example, whether to use multitouch, voice, graphical user interface, heads-up display, augmented reality, gesture-based, and so forth. The problem with starting here is that potential users and their context can be misunderstood, and usability and user experience goals can be overlooked, both of which were discussed in Chapter 1, "What Is Interaction Design?"

For example, consider the augmented reality displays and holographic navigation systems that are available in some cars nowadays (see Figure 2.3). They are the result of decades of research into human factors of information displays (for instance, Campbell et al., 2016), the driving experience itself (Perterer et al., 2013; Lee et al., 2005), and the suitability of different technologies (for example, Jose et al., 2016), as well as improvements in technology. Understanding the problem space has been critical in arriving at workable solutions that are safe and trusted. Having said that, some people may not be comfortable using a holographic navigation system and choose not to have one installed.
While it is certainly necessary at some point to choose which technology to employ and
decide how to design the physical aspects, it is better to make these decisions after articulat-
ing the nature of the problem space. By this we mean understanding what is currently the
user experience or the product, why a change is needed, and how this change will improve
the user experience. In the previous example, this involves finding out what is problem-
atic with existing support for navigating while driving. An example is ensuring that drivers
can continue to drive safely without being distracted when looking at a small GPS display
mounted on the dashboard to figure out on which road it is asking them to “turn left.” Even
when designing for a new user experience, it still requires understanding the context for
which it will be used and the possible current user expectations.
The process of articulating the problem space is typically done as a team effort. Invari-
ably, team members will have differing perspectives on it. For example, a project manager
is likely to be concerned about a proposed solution in terms of budgets, timelines, and
staffing costs, whereas a software engineer will be thinking about breaking it down into
specific technical concepts. The implications of pursuing each perspective need to be con-
sidered in relation to one another. Although time-consuming and sometimes resulting in
disagreements among the design team, the benefits of this process can far outweigh the
associated costs: there will be much less chance of incorrect assumptions and unsupported
claims creeping into a design solution that later turn out to be unusable or unwanted. Spending time enumerating and reflecting upon ideas during the early stages of the design process enables more options and possibilities to be considered. Furthermore, designers are increasingly expected to justify their choice of problems and to be able to present clearly and convincingly their rationale in business as well as design language. Being able to think and analyze, present, and argue is valued as much as the ability to create a product (Kolko, 2011).

Figure 2.3 (a) Example of the holographic navigation display from WayRay, which overlays GPS navigation instructions onto the road ahead and gathers and shares driver statistics; (b) an augmented reality navigation system available in some cars today
Sources: (a) Used courtesy of WayRay, (b) Used courtesy of Muhammad Saad
2.2.2 The Importance of Involving Users
Chapter 1 stressed the importance of understanding users, and the previous description
emphasizes the need to involve users in interaction design. Involving users in development is
important because it’s the best way to ensure that the end product is usable and that it indeed
will be used. In the past, it was common for developers to talk only to managers, experts, or
proxy users, or even to use their own judgment without reference to anyone else. While others involved in designing the product can provide useful information, they will not have the same perspective as the target user who performs the activity every day or who will use the intended product on a regular basis.

BOX 2.1
Four Approaches to Interaction Design
Dan Saffer (2010) suggests four main approaches to interaction design, each of which is based on a distinct underlying philosophy: User-centered design, Activity-centered design, Systems design, and Genius design.

Dan Saffer acknowledges that the purest form of any of these approaches is unlikely to be realized, and he takes an extreme view of each in order to distinguish among them. In user-centered design, the user knows best and is the guide to the designer; the designer's role is to translate the users' needs and goals into a design solution.

Activity-centered design focuses on the behavior surrounding particular tasks. Users still play a significant role, but it is their behavior rather than their goals and needs that is important. Systems design is a structured, rigorous, and holistic design approach that focuses on context and is particularly appropriate for complex problems. In systems design, it is the system (that is, the people, computers, objects, devices, and so on) that is the center of attention, while the users' role is to set the goals of the system.

Finally, genius design is different from the other three approaches because it relies largely on the experience and creative flair of a designer. Jim Leftwich, an experienced interaction designer interviewed by Dan Saffer (2010, pp. 44–45), prefers the term rapid expert design. In this approach, the users' role is to validate ideas generated by the designer, and users are not involved during the design process itself. Dan Saffer points out that this is not necessarily by choice, but it may be because of limited or no resources for user involvement.

Different design problems lend themselves more easily to different approaches, and different designers will tend to gravitate toward using the approach that suits them best. Although an individual designer may prefer a particular approach, it is important that the approach for any one design problem is chosen with that design problem in mind.
In commercial projects, a role called the product owner is common. The product owner’s
job is to filter user and customer input to the development cycle and to prioritize require-
ments or features. This person is usually someone with business and technical knowledge, but
not interaction design knowledge, and they are rarely (if ever) a direct user of the product.
Although the product owner may be called upon to assess designs, they are a proxy user at
best, and their involvement does not avoid the need for user involvement.
The best way to ensure that developers gain a good understanding of users' goals, leading to a more appropriate, more usable product, is to involve target users throughout development. However, two other aspects unrelated to functionality are equally important if the
product is to be usable and used: expectation management and ownership.
Expectation management is the process of making sure that the users’ expectations of
the new product are realistic. Its purpose is to ensure that there are no surprises for users
when the product arrives. If users feel they have been cheated by promises that have not been
fulfilled, then this will cause resistance and even rejection. Marketing of the new arrival must
be careful not to misrepresent the product, although it may be particularly difficult to achieve
with a large and complex system (Nevo and Wade, 2007). How many times have you seen an
advertisement for something that you thought would be really good to have, but when you
actually see one, you discover that the marketing hype was a little exaggerated? We expect
that you felt quite disappointed and let down. This is the kind of feeling that expectation
management tries to avoid.
Involving users throughout development helps with expectation management because they
can see the product’s capabilities from an early stage. They will also understand better how it
will affect their jobs and lives and why the features are designed that way. Adequate and timely
training is another technique for managing expectations. If users have the chance to work with
the product before it is released through training or hands-on demonstrations of a prerelease
version, then they will understand better what to expect when the final product is available.
A second reason for user involvement is ownership. Users who are involved and feel that
they have contributed to a product’s development are more likely to feel a sense of ownership
toward it and support its use (Bano et al., 2017).
How to involve users, in what roles, and for how long, needs careful planning, as dis-
cussed in the next Dilemma box.
DILEMMA
Too Much of a Good Thing?
Involving users in development is a good thing, but what evidence is there that user involve-
ment is productive? How much should users be involved and in what role(s)? Is it appropriate
for users to lead a technical development project, or is it more beneficial for them to focus on
evaluating prototypes?
2.2.3 Degrees of User Involvement
Different degrees of user involvement are possible, ranging from fully engaged throughout
all iterations of the development process to targeted participation in specific activities and
from small groups of individual users in face-to-face contexts to hundreds of thousands of
potential users and stakeholders online. Where available, individual users may be co-opted
onto the design team so that they are major contributors to the development. This has pros
and cons. On the downside, full-time involvement may mean that they become out of touch
with their user community, while part-time involvement might result in a high workload for
them. On the positive side, having a user engaged full or part-time does mean that the input
is available continually throughout development. On the other hand, users may take part in
specific activities to inform the development or to evaluate designs once they are available.
This is a valuable form of involvement, but the users’ input is limited to that particular activ-
ity. Where the circumstances around a project limit user involvement in this way, there are
techniques to keep users’ concerns uppermost in developers’ minds, such as through personas
(see Chapter 11, “Discovering Requirements”).
Uli Abelein et al. (2013) performed a detailed review of the literature in this area and
concluded that, overall, the evidence indicates that user involvement has a positive effect on
user satisfaction and system use. However, they also found that even though the data clearly
indicates this positive effect, some links have a large variation, suggesting that there is still no
clear way to measure the effects consistently. In addition, they found that most studies reporting
negative correlations between user involvement and system success were published more than 10 years
previously.
Ramanath Subramanyam et al. (2010) investigated the impact of user participation on
levels of satisfaction with the product by both developers and users. They found that for new
products, developer satisfaction increased as user participation increased. On the other hand,
user satisfaction was higher if their participation was low, and satisfaction dropped as their
participation increased. They also identified that high levels of user involvement can generate
conflicts and increased reworking. For maintenance projects, both developers and users were
most satisfied with a moderate level of participation (approximately 20 percent of overall
project development time). Based just on user satisfaction as an indicator of project success,
then, it seems that low user participation is most beneficial.
The kind of product being developed, the kind of user involvement possible, the activities
in which they are involved, and the application domain all have an impact on the effectiveness
of user input (Bano and Zowghi, 2015). Peter Richard et al. (2014) investigated the effect of
user involvement in transport design projects. They found that involving users at later stages
of development mainly resulted in suggestions for service improvement, whereas users
involved at earlier stages of innovation suggested more creative ideas.
Recent moves toward an agile way of working (see Chapter 13, “Interaction Design
in Practice”) have emphasized the need for feedback from customers and users, but this also
has its challenges. Kurt Schmitz et al. (2018) suggest that in tailoring their methods, teams
consider the distinction between frequent participation in activities and effective engagement.
User involvement is undoubtedly beneficial, but the levels and types of involvement
require careful consideration and balance.
2 The Process of Interaction Design
Initially, user involvement took the form of small groups or individuals taking part in
face-to-face information gathering, design, or evaluation sessions, but increasing online
connectivity has led to a situation in which many thousands of potential users can contribute
to product development. There is still a place for face-to-face user involvement and in situ
studies, but the range of possibilities for user involvement is now much wider. One example
of this is online feedback exchange (OFE) systems, which are increasingly used to test design
concepts with millions of target users before going to market (Foong et al., 2017).
In fact, design is becoming increasingly participative through crowdsourcing design ideas
and examples, for instance (Yu et al., 2016). Where crowdsourcing is used, a range of differ-
ent people are encouraged to contribute, and this can include any and all of the stakeholders.
This wide participation helps to bring different perspectives to the process, which enhances
the design itself, produces more user satisfaction with the final product, and engenders a
sense of ownership. Another example of involving users at scale is citizen engagement, the
goal of which is to engage a population—civic or otherwise—with the aim of promoting
empowerment through technology. The underlying aim is to involve members of the public
in helping them make a change in their lives where technology is often viewed as an integral
part of the process.
Participatory design, also sometimes referred to as cooperative design or co-design, is
an overarching design philosophy that places those for whom systems, technologies, and
services are being designed, as central actors in creation activities. The idea is that instead
of being passive receivers of new technological or industrial artifacts, end users and stake-
holders are active participants in the design process. Chapter 12, “Design, Prototyping, and
Construction,” provides more information on participatory design.
The individual circumstances of the project affect what is realistic and appropriate. If
the end-user groups are identifiable, for example, the product is for a particular company,
then it is easier to involve them. If, however, the product is intended for the open market, it
is unlikely that users will be available to join the design team. In this case, targeted activities
and online feedback systems may be employed. Box 2.2 outlines an alternative way to obtain
user input from an existing product, and Box 2.5 discusses A/B testing, which draws on user
feedback to choose between alternative designs.
BOX 2.2
User Involvement After Product Release
Once a product has been released, a different kind of user involvement is possible—one that
captures data and user feedback based on day-to-day use of the product. The prevalence of
customer reviews has grown considerably in recent years, and they significantly affect the
popularity and success of a product (Harman et al., 2012). These reviews provide useful
and far-ranging user feedback. For example, Hammad Khalid et al. (2015) studied reviews
of mobile apps to see what reviewers complained about. They identified 12 complaint types,
including privacy and ethics, interface, and feature removal. Customer reviews can provide
useful insight to help improve products, but detailed analysis of feedback gathered this way
is time-consuming.
2.2.4 What Is a User-Centered Approach?
Throughout this book, we emphasize the need for a user-centered approach to development.
By this we mean that the real users and their goals, not just technology, are the driving force
behind product development. As a consequence, a well-designed system will make the most
of human skill and judgment, will be directly relevant to the activity at hand, and will sup-
port rather than constrain the user. This is less of a technique and more of a philosophy.
Error reporting systems (ERSs, also called online crashing analysis) automatically collect
information from users that is used to improve applications in the longer term. This is done
with users’ permission, but with a minimal reporting burden. Figure 2.4 shows two dialog
boxes for the Windows error reporting system that is built into Microsoft operating systems.
This kind of reporting can have a significant effect on the quality of applications. For example,
29 percent of the errors fixed by the Windows XP (Service Pack 1) team were based on infor-
mation collected through their ERS (Kinshumann et al., 2011). While Windows XP is no
longer being supported, this statistic illustrates the impact ERSs can have. The system uses a
sophisticated approach to error reporting based on five strategies: automatic aggregation of
error reports; progressive data collection so that the data collected (such as abbreviated or full
stack and memory dumps) varies depending on the level of data needed to diagnose the error;
minimal user interaction; preserving user privacy; and providing solutions directly to users
where possible. By using these strategies, plus statistical analysis, effort can be focused on the
bugs that have the highest impact on the most users.
Figure 2.4 Two typical dialog boxes from the Windows error reporting system
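The aggregation and progressive data collection strategies can be sketched in a few lines. This is a toy illustration of the ideas described above, not Microsoft's actual implementation; the class, method, and signature names are hypothetical.

```python
from collections import Counter

class ErrorReportServer:
    """Toy sketch of progressive error-report aggregation: reports are
    bucketed by crash signature, and only under-diagnosed buckets trigger
    a request for fuller data, keeping the reporting burden minimal."""

    def __init__(self, detail_threshold=3):
        self.counts = Counter()               # aggregated reports per crash signature
        self.detail_threshold = detail_threshold

    def receive(self, signature):
        """Record one crash report and decide how much extra data to request."""
        self.counts[signature] += 1
        if self.counts[signature] <= self.detail_threshold:
            return "full-dump"                # still diagnosing: ask for more detail
        return "none"                         # well characterized: just count it

    def top_buckets(self, n=1):
        """Buckets with the highest impact, where fixing effort is focused."""
        return self.counts.most_common(n)
```

Statistical analysis over the aggregated buckets, rather than individual reports, is what lets effort be focused on the bugs affecting the most users.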
When the field of HCI was being established, John Gould and Clayton Lewis (1985) laid
down three principles that they believed would lead to a “useful and easy to use computer
system.” These principles are as follows:
1. Early focus on users and tasks. This means first understanding who the users will be by
directly studying their cognitive, behavioral, anthropometric, and attitudinal character-
istics. This requires observing users doing their normal tasks, studying the nature of those
tasks, and then involving users in the design process.
2. Empirical measurement. Early in development, the reactions and performance of intended
users to printed scenarios, manuals, and so forth, are observed and measured. Later,
users interact with simulations and prototypes, and their performance and reactions are
observed, recorded, and analyzed.
3. Iterative design. When problems are found in user testing, they are fixed, and then more
tests and observations are carried out to see the effects of the fixes. This means that design
and development are iterative, with cycles of design-test-measure-redesign being repeated
as often as necessary.
These three principles are now generally accepted as the basis for a user-centered
approach. When this paper was written, however, they were not accepted by most develop-
ers. We discuss these principles in more detail in the following sections.
Early Focus on Users and Tasks
This principle can be expanded and clarified through the following five further principles:
1. Users’ tasks and goals are the driving force behind the development.
While technology will inform design options and choices, it is not the driving force.
Instead of saying “Where can we deploy this new technology?” say “What technologies
are available to provide better support for users’ goals?”
2. Users’ behavior and context of use are studied, and the system is designed to support them.
This is not just about capturing users’ tasks and goals. How people perform their tasks
is also significant. Understanding behavior highlights priorities, preferences, and implicit
intentions.
3. Users’ characteristics are captured and designed for.
When things go wrong with technology, people often think it is their fault. People are
prone to making errors and have certain limitations, both cognitive and physical. Prod-
ucts designed to support people should take these limitations into account and try to
prevent mistakes from being made. Cognitive aspects, such as attention, memory, and per-
ception issues are introduced in Chapter 4, “Cognitive Aspects.” Physical aspects include
height, mobility, and strength. Some characteristics are general, such as color blindness,
which affects about 4.5 percent of the population, but some characteristics are associated
with a particular job or task. In addition to general characteristics, those traits specific to
the intended user group also need to be captured.
4. Users are consulted throughout development from earliest phases to the latest.
As discussed earlier, there are different levels of user involvement, and there are different
ways in which to consult users.
5. All design decisions are taken within the context of the users, their activities, and their
environment.
This does not necessarily mean that users are actively involved in design decisions, but
that is one option.
Empirical Measurement
Where possible, specific usability and user experience goals should be identified, clearly doc-
umented, and agreed upon at the beginning of the project. They can help designers choose
between alternative designs and check on progress as the product is developed. Identifying
specific goals up front means that the product can be empirically evaluated at regular stages
throughout development.
Iterative Design
Iteration allows designs to be refined based on feedback. As users and designers engage with
the domain and start to discuss requirements, needs, hopes, and aspirations, then different
insights into what is needed, what will help, and what is feasible will emerge. This leads to a
need for iteration—for the activities to inform each other and to be repeated. No matter how
good the designers are and however clear the users may think their vision is of the required
artifact, ideas will need to be revised in light of feedback, likely several times. This is particu-
larly true when trying to innovate. Innovation rarely emerges whole and ready to go. It takes
time, evolution, trial and error, and a great deal of patience. Iteration is inevitable because
designers never get the solution right the first time (Gould and Lewis, 1985).
ACTIVITY 2.2
Assume you are involved in developing a novel online experience for buying garden plants.
Although many websites exist for buying plants online, you want to produce a distinct expe-
rience to increase the organization’s market share. Suggest ways of applying the previous
principles in this task.
Comment
To address the first three principles, you would need to find out about the tasks and goals,
behavior, and characteristics of potential customers of the new experience, together with any
different contexts of use. Studying current users of existing online plant shops will provide
some information, and it will also identify some challenges to be addressed in the new experi-
ence. However, as you want to increase the organization’s market share, consulting existing
users alone would not be enough. Alternative avenues of investigation include physical shop-
ping situations—for example, shopping at the market, in the local corner shop, and so on,
and local gardening clubs, radio programs, or podcasts. These alternatives will help you find
the advantages and disadvantages of buying plants in different settings, and you will observe
different behaviors. By looking at these options, a new set of potential users and contexts can
be identified.
For the fourth principle, the set of new users will emerge as investigations progress, but
people who are representative of the user group may be accessible from the beginning. Work-
shops or evaluation sessions could be run with them, possibly in one of the alternative shop-
ping environments such as the market. The last principle could be supported through the
creation of a design room that houses all of the data collected, and it is a place where the devel-
opment team can go to find out more about the users and the product goals.
2.2.5 Four Basic Activities of Interaction Design
The four basic activities for interaction design are as follows:
1. Discovering requirements for the interactive product.
2. Designing alternatives that meet those requirements.
3. Prototyping the alternative designs so that they can be communicated and assessed.
4. Evaluating the product and the user experience it offers throughout the process.
Discovering Requirements
This activity covers the left side of the double diamond of design, and it is focused on dis-
covering something new about the world and defining what will be developed. In the case of
interaction design, this includes understanding the target users and the support an interactive
product could usefully provide. This understanding is gleaned through data gathering and
analysis, which are discussed in Chapters 8–10. It forms the basis of the product’s require-
ments and underpins subsequent design and development. The requirements activity is dis-
cussed further in Chapter 11.
Designing Alternatives
This is the core activity of designing and is part of the Develop phase of the double diamond:
proposing ideas for meeting the requirements. For interaction design, this activity can be viewed
as two subactivities: conceptual design and concrete design. Conceptual design involves pro-
ducing the conceptual model for the product, and a conceptual model describes an abstraction
outlining what people can do with a product and what concepts are needed to understand how
to interact with it. Concrete design considers the detail of the product including the colors,
sounds, and images to use, menu design, and icon design. Alternatives are considered at every
point. Conceptual design is discussed in Chapter 3, and more design issues for specific interface
types are in Chapter 7; more detail about how to design an interactive product is in Chapter 12.
Prototyping
Prototyping is also part of the Develop phase of the double diamond. Interaction design involves
designing the behavior of interactive products as well as their look and feel. The most effec-
tive way for users to evaluate such designs is to interact with them, and this can be achieved
through prototyping. This does not necessarily mean that a piece of software is required. There
are different prototyping techniques, not all of which require a working piece of software. For
example, paper-based prototypes are quick and cheap to build and are effective for identifying
problems in the early stages of design, and through role-playing users can get a real sense of
what it will be like to interact with the product. Prototyping is covered in Chapter 12.
Evaluating
Evaluating is also part of the Develop phase of the double diamond. It is the process of deter-
mining the usability and acceptability of the product or design measured in terms of a variety
of usability and user-experience criteria. Evaluation does not replace activities concerned
with quality assurance and testing to make sure that the final product is fit for its intended
purpose, but it complements and enhances them. Chapters 14–16 cover evaluation.
The activities to discover requirements, design alternatives, build prototypes, and evalu-
ate them are intertwined: alternatives are evaluated through the prototypes, and the results
are fed back into further design or to identify alternative requirements.
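As a rough illustration, the way these four intertwined activities feed one another can be sketched as a loop. Every argument here is a caller-supplied function, and the names are hypothetical placeholders for whatever methods a team actually uses, not a real API.

```python
def lifecycle(discover, design_alternatives, prototype, evaluate,
              meets_criteria, max_iterations=10):
    """Toy sketch of the four intertwined activities: requirements drive
    designs, designs are prototyped and evaluated, and evaluation feedback
    refines the requirements until the usability criteria are met."""
    requirements = discover(None)                     # discover requirements
    prototypes = []
    for _ in range(max_iterations):
        designs = design_alternatives(requirements)   # design alternatives
        prototypes = [prototype(d) for d in designs]  # build prototypes
        feedback = evaluate(prototypes)               # evaluate them
        if meets_criteria(feedback):
            return prototypes     # development ends with an evaluation
        requirements = discover(feedback)   # feedback refines requirements
    return prototypes
```

The point of the sketch is the feedback edge: evaluation results flow back into discovery and design rather than the process running once, front to back.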
2.2.6 A Simple Lifecycle Model for Interaction Design
Understanding what activities are involved in interaction design is the first step to being able
to do it, but it is also important to consider how the activities are related to one another.
The term lifecycle model (or process model) is used to represent a model that captures a set
of activities and how they are related. Existing models have varying levels of sophistication
and complexity and are often not prescriptive. For projects involving only a few experienced
developers, a simple process is adequate. However, for larger systems involving tens or hun-
dreds of developers with hundreds or thousands of users, a simple process just isn’t enough
to provide the management structure and discipline necessary to engineer a usable product.
Many lifecycle models have been proposed in fields related to interaction design. For
example, software engineering lifecycle models include the waterfall, spiral, and V models
(for more information about these models, see Pressman and Maxim [2014]). HCI has been
less associated with lifecycle models, but two well-known ones are the Star (Hartson and
Hix, 1989) and an international standard model ISO 9241-210. Rather than explaining the
details of these models, we focus on the classic lifecycle model shown in Figure 2.5. This
model shows how the four activities of interaction design are related, and it incorporates the
three principles of user-centered design discussed earlier.
Many projects start by discovering requirements from which alternative designs are gen-
erated. Prototype versions of the designs are developed and then evaluated. During prototyp-
ing or based on feedback from evaluations, the team may need to refine the requirements
or to redesign. One or more alternative designs may follow this iterative cycle in parallel.
Implicit in this cycle is that the final product will emerge in an evolutionary fashion from
an initial idea through to the finished product or from limited functionality to sophisticated
functionality. Exactly how this evolution happens varies from project to project. However
many times through the cycle the product goes, development ends with an evaluation activity
that ensures that the final product meets the prescribed user experience and usability criteria.
This evolutionary production is part of the Delivery phase of the double diamond.
In recent years, a wide range of lifecycle models has emerged, all of which encom-
pass these activities but with different emphases on activities, relationships, and outputs.
For example, Google Design Sprints (Box 2.3) emphasize problem investigation, solution
development, and testing with customers all in one week. This does not result in a robust
final product, but it does make sure that the solution idea is acceptable to customers. The
in-the-wild approach (Box 2.4) emphasizes the development of novel technologies that are
not necessarily designed for specific user needs but to augment people, places, and settings.
Further models are discussed in Chapter 13.
Figure 2.5 A simple interaction design lifecycle model, relating the four activities: discovering
requirements, designing alternatives, prototyping, and evaluating, leading to the final product
BOX 2.3
Google Design Sprints (Adapted from Knapp et al. (2016))
Google Ventures has developed a structured approach to design that supports rapid idea-
tion and testing of potential solutions to a design challenge. This is called the Google Design
Sprint. A sprint is divided into five phases, and each phase is completed in a day. This means
that in five days, you can go from a design challenge to a solution that has been tested with
customers. As the authors say, “You won’t finish with a complete, detailed, ready-to-ship
product. But you will make rapid progress, and know for sure if you’re headed in the right
direction” (Knapp et al., 2016, pp. 16–17). Teams are encouraged to iterate on the last two
phases and to develop and re-test prototypes. If necessary, the first idea can be thrown away
and the process started again at Phase 1. There is preparation to be done before the sprint
begins. This preparation and the five phases are described next (see Figure 2.6).
Figure 2.6 The five phases of the Google Design Sprint: Unpack, Sketch, Decide, Prototype,
Test (with iteration back through prototyping and testing)
Source: www.agilemarketing.net/google-design-sprints. Used courtesy of Agile Marketing
Setting the Stage
This time is used to choose the right design challenge, gather the right team, and organize the
time and space to run the sprint (that is, full-time for everyone for five days). The sprint can
help in high-stake challenges, when you’re running out of time, or if you’re just stuck. The
team composition depends on the product, but it has about seven people including a decider
(who chooses the design to show to the customer), customer expert, technical expert, and
anyone who will bring a disruptive perspective.
Unpack
Day 1 focuses on making a map of the challenge and choosing a target, that is, a part of the
challenge that can be achieved in a week.
Sketch Competing Solutions
Day 2 focuses on generating solutions, with an emphasis on sketching and individual creativ-
ity rather than group brainstorming.
Decide on the Best
Day 3 focuses on critiquing the solutions generated on Day 2, choosing the one most likely
to meet the sprint’s challenge, and producing a storyboard. Whichever solution is chosen, the
decider needs to support the design.
Build a Realistic Prototype
Day 4 focuses on turning the storyboard into a realistic prototype, that is, something on which
customers can provide feedback. We discuss prototyping further in Chapter 12.
Test with Target Customers
Day 5 focuses on getting feedback from five customers and learning from their reactions.
The Google Design Sprint is a process for answering critical business questions through design,
prototyping, and testing ideas with customers. Marta Rey-Babarro, who works at Google as
a staff UX researcher and was the cofounder of Google’s internal Sprint Academy, describes
how they used a sprint to improve the experience of traveling for business.
We wanted to see if we could improve the business travel experience. We started
by doing research with Googlers to find out what experiences and what needs
they had when they traveled. We discovered that there were some Googlers who
traveled over 300 days a year and others who traveled only once or twice a
year. Their travel experiences and needs were very different. After this research,
some of us did a sprint in which we explored the whole travel experience, from
the planning phase to coming back home and submitting receipts. Within five
days we came up with a vision of what that experience could be. On the fifth
day of the sprint, we presented that vision to higher-level execs. They loved it
and sponsored the creation of a new team at Google that has developed new
tools and experiences for the traveling Googler. Some of those internal online
experiences made it also to our external products and services outside of Google.
Marta Rey-Babarro
To see a more detailed description of the Google Design Sprint and to access a set of
five videos that describe what happens on each day of the sprint, go to
www.gv.com/sprint/#book.
BOX 2.4
Research in the Wild (Adapted from Rogers and Marshall (2017))
Research in the wild (RITW) develops technology solutions in everyday living by creating
and evaluating new technologies and experiences in situ. The approach supports designing
prototypes in which researchers often experiment with new technological possibilities that
can change and even disrupt behavior, rather than ones that fit in with existing practices. The
results of RITW studies can be used to challenge assumptions about technology and human
behavior in the real world and to inform the re-thinking of HCI theories. The perspective
taken by RITW studies is to observe how people react to technology and how they change and
integrate it into their everyday lives.
Figure 2.7 shows the framework for RITW studies. In terms of the four activities intro-
duced earlier, this framework focuses on designing, prototyping, and evaluating technology
and ideas and is one way in which requirements may be discovered. It also considers relevant
theory since often the purpose of an RITW study is to investigate a theory, idea, concept, or
observation. Any one RITW study may emphasize the elements of the framework to a differ-
ent degree.
Technology: Concerned with appropriating existing infrastructures/devices (e.g., Internet of
Things toolkit, mobile app) in situ or developing new ones for a given setting (e.g., a novel
public display).
Design: Covers the design space of an experience (e.g., iteratively creating a collaborative
travel planning tool for families to use or an augmented reality game for playing outdoors).
In situ study: Concerned with evaluating in situ an existing device/tool/service or a novel
research-based prototype when placed in various settings or given to someone to use over a
period of time.
Theory: Investigating a theory, idea, concept, or observation about a behavior, setting, or
other phenomenon, either using an existing theory, extending one, or developing a new one.
2.3 Some Practical Issues
The discussion so far has highlighted some issues about the practical application of user-
centered design and the simple lifecycle of interaction design introduced earlier. These issues
are listed here:
• Who are the users?
• What are the users’ needs?
• How to generate alternative designs
• How to choose among alternatives
• How to integrate interaction design activities with other lifecycle models
2.3.1 Who Are the Users?
Identifying users may seem like a straightforward activity, but it can be harder than you
think. For example, Sha Zhao et al. (2016) found a more diverse set of users for smartphones
than most manufacturers recognize. Based on an analysis of one month’s smartphone app
usage, they discovered 382 distinct types of users, including Screen Checkers and Young
Parents. Charlie Wilson et al. (2015) found that little is understood about who the users of
smart homes in general are expected to be, beyond those focused on health-related condi-
tions. In part, this is because many products nowadays are being developed for use by large
sections of the population, and so it can be difficult to determine a clear description. Some
products (such as a system to schedule work shifts) have more constrained user communities,
for example a specific role (shop assistant) within a particular industrial sector (retail). In this
case, there may be a range of users with different roles who relate to the product in differ-
ent ways. Examples are those who manage direct users, those who receive outputs from the
system, those who test the system, those who make the purchasing decision, and those who
use competitive products (Holtzblatt and Jones, 1993).
Figure 2.7 A framework for research in the wild studies
Source: Rogers and Marshall (2017), p. 6. Used courtesy of Morgan & Claypool
There is a surprisingly wide collection of people who all have a stake in the development
of a successful product. These people are called stakeholders. Stakeholders are the individu-
als or groups that can influence or be influenced by the success or failure of a project. Alan
Dix et al. (2004) made an observation that is pertinent to a user-centered view of development: “It will
frequently be the case that the formal ‘client’ who orders the system falls very low on the list
of those affected. Be very wary of changes which take power, influence or control from some
stakeholders without returning something tangible in its place.”
The group of stakeholders for a particular product will be larger than the group of users.
It will include customers who pay for it; users who interact with it; developers who design,
build, and maintain it; legislators who impose rules on the development and operation of it;
people who may lose their jobs because of its introduction; and so on (Sharp et al., 1999).
Identifying the stakeholders for a project helps to decide who to involve as users and to
what degree, but identifying relevant stakeholders can be tricky. Ian Alexander and Suzanne
Robertson (2004) suggest using an onion diagram to model stakeholders and their involve-
ment. This diagram shows concentric circles of stakeholder zones with the product being
developed sitting in the middle. Soo Ling Lim and Anthony Finkelstein (2012) developed a
method called StakeRare and supporting tool called StakeNet that relies on social networks
and collaborative filtering to identify and prioritize relevant stakeholders.
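One way to make the onion model concrete is as a simple data structure: concentric zones, innermost first, each holding the stakeholder groups placed in that zone. The zone and group names below are illustrative assumptions, not Alexander and Robertson’s exact taxonomy.

```python
# Sketch of an onion-diagram stakeholder model. The dictionary keys are
# the concentric zones (innermost first); the values are the stakeholder
# groups in each zone. Names are invented for illustration.
ONION = {
    "the product":           [],
    "our system":            ["direct users", "maintenance operators"],
    "the containing system": ["managers of direct users", "purchasers"],
    "the wider environment": ["regulators", "competitors' users"],
}

def stakeholders_by_zone(onion):
    """Flatten the zones into (zone, stakeholder) pairs, innermost first."""
    return [(zone, s) for zone, group in onion.items() for s in group]

for zone, s in stakeholders_by_zone(ONION):
    print(f"{zone}: {s}")
```

Walking the zones from the inside out gives a rough priority order for deciding whom to involve, and how closely, during development.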
ACTIVITY 2.3
Who are the stakeholders for an electricity smart meter for use in the home to help households
control their energy consumption?
Comment
First, there are the people who live in the house, such as older adults and young children,
with a range of abilities and backgrounds. To varying degrees, they will be users of the meter,
and their stake in its success and usability is fairly clear and direct. Householders want to
make sure that their bills are controlled, that they can easily access suppliers if they want to,
and that their electricity supply is not interrupted. On the other hand, the entire family will
want to continue to live in the house in comfort, for example, with enough heat and light.
Then there are the people who install and maintain the meter. They make sure that the meter
is installed correctly and that it continues to work effectively. Installers and maintainers want
the meter to be straightforward to install and to be robust and reliable to reduce the need for
return visits or maintenance calls. Outside of these groups are electricity suppliers and distributors
who also want to provide a competitive service so that the householders are satisfied
and to minimize maintenance costs. They also don’t want to lose customers and money
because the meters are faulty or are providing inaccurate information. Other people who will
be affected by the success of the meter include those who work on the powerlines and at
electricity generation plants, those who work in other energy industries, and ultimately the
government of the country that will want to maintain steady supply for its industry and
population.
2.3 SOME PRACTICAL ISSUES 57
2.3.2 What Are the Users’ Needs?
If you had asked someone in the street in the late 1990s what they needed, their answer
probably wouldn’t have included a smart TV, a ski jacket with an integrated smartphone, or
a robot pet. If you presented the same person with these possibilities and asked whether they
would buy them if they were available, then the answer may have been more positive.
Determining what product to build is not simply a question of asking people “What do you need?”
and then supplying it, because people don’t necessarily know what is possible. Suzanne and
James Robertson (2013) refer to “un-dreamed-of” needs, which are those that users are
unaware they might have. Instead of asking users, this is approached by exploring the problem
space, investigating the users and their activities to see what can be improved, or trying out
ideas with potential users to see whether the ideas are successful. In practice, a mixture of
these approaches is often taken—trying ideas in order to discover requirements and decide
what to build, but with knowledge of the problem space, potential users, and their activities.
If a product is a new invention, then identifying the users and representative tasks for
them may be harder. This is where in-the-wild studies or rapid design sprints that provide
authentic user feedback on early ideas are valuable. Rather than imagining who might want
to use a product and what they might want to do with it, it’s more effective to put it out there
and find out—the results might be surprising!
It may be tempting for designers simply to design what they would like to use themselves,
but their ideas would not necessarily coincide with those of the target user group,
because they have different experiences and expectations. Several practitioners and
commentators have observed that it’s an “eye-opening experience” when developers or designers
see a user struggling to complete a task that seemed so clear to them (Ratcliffe and McNeill,
2012, p. 125).
Focusing on people’s goals, usability goals, and user experience goals is a more promising
approach to interaction design than simply expecting stakeholders to be able to articulate the
requirements for a product.
2 THE PROCESS OF INTERACTION DESIGN 58
2.3.3 How to Generate Alternative Designs
A common human tendency is to stick with something that works. While recognizing that a
better solution may exist, it is easy to accept the one that works as being “good enough.” Set-
tling for a solution that is good enough may be undesirable because better alternatives may
never be considered, and considering alternative solutions is a crucial step in the process of
design. But where do these alternative ideas come from?
One answer to this question is that they come from the individual designer’s flair and cre-
ativity (the genius design described in Box 2.1). Although it is certainly true that some people
are able to produce wonderfully inspired designs while others struggle to come up with any
ideas at all, very little in this world is completely new. For example, the steam engine, com-
monly regarded as an invention, was inspired by the observation that steam from a kettle
boiling on the stove lifted the lid. An amount of creativity and engineering was needed to
make the jump from a boiling kettle to a steam engine, but the kettle provided inspiration to
translate this experience into a set of principles that could be applied in a different context.
Innovations often arise through cross-fertilization of ideas from different perspectives, indi-
viduals, and contexts; the evolution of an existing product through use and observation; or
straightforward copying of other, similar products.
Cross-fertilization may result from discussing ideas with other designers, while Bill Bux-
ton (2007) reports that different perspectives from users generated original ideas about alter-
native designs. As an example of evolution, consider the cell phone and its descendant, the
smartphone. The capabilities of the phone in your pocket have increased from the time they
first appeared. Initially, the cell phone simply made and received phone calls and texts, but
now the smartphone supports a myriad of interactions: it can take photos, record audio,
play movies and games, and record your exercise routine.
Creativity and invention are often wrapped in mystique, but a lot has been uncovered
about the process and about how creativity can be enhanced or inspired (for example, see
Rogers, 2014). For instance, browsing a collection of designs will inspire designers to con-
sider alternative perspectives and hence alternative solutions. As Roger Schank (1982, p. 22)
puts it, “An expert is someone who gets reminded of just the right prior experience to help
him in processing his current experiences.” And while those experiences may be the designer’s
own, they can equally well be others’.
Another approach to creativity has been adopted by Neil Maiden et al. (2007). They
ran creativity workshops to generate innovative requirements in an air traffic management
(ATM) application domain. Their idea was to introduce experts in different fields into the
workshop and then invite stakeholders to identify analogies between their own field and this
new one. For example, they invited an Indian textile expert, a musician, a TV program
scheduler, and a museum exhibit designer. Although not all of these domains are obviously analogous,
they sparked creative ideas for the air traffic management application. For example, partici-
pants reported that one textile design was elegant, that is, simple, beautiful, and symmetrical.
They then transferred these properties to a key area of the ATM domain—that of aircraft
conflict resolution. They explored the meaning of elegance within this context and realized
that elegance is perceived differently by different controllers. From this they generated the
requirement that the system should be able to accommodate different air traffic controller
styles during conflict resolution.
A more pragmatic answer to this question, then, is that alternatives come from seeking
different perspectives and looking at other designs. The process of inspiration and creativ-
ity can be enhanced by prompting a designer’s own experience and studying others’ ideas
and suggestions. Deliberately seeking out suitable sources of inspiration is a valuable step
in any design process. These sources may be very close to the intended new product, such as
competitors’ products; they may be earlier versions of similar systems; or they may be from
a completely different domain.
Under some circumstances, the scope to consider alternative designs is limited. Design is
a process of balancing constraints and trading off one set of requirements with another, and
the constraints may mean that there are few viable alternatives available. For example, when
designing software to run under the Windows operating system, the design must conform to
the Windows look and feel and to other constraints intended to make Windows programs
consistent for the user. When producing an upgrade to an existing system, keeping familiar
elements of it to retain the same user experience may be prioritized.
2.3.4 How to Choose Among Alternative Designs
Choosing among alternatives is mostly about making design decisions: Will the device use
keyboard entry or a touch screen? Will the product provide an automatic memory function
or not? These decisions will be informed by the information gathered about users and their
tasks and by the technical feasibility of an idea. Broadly speaking, though, the decisions
fall into two categories: those that are about externally visible and measurable features
ACTIVITY 2.4
Consider the product introduced in Activity 2.1. Reflecting on the process again, what inspired
your initial design? Are there any innovative aspects to it?
Comment
For our design, existing sources of information and their flaws were influential. For example,
there is so much information available about travel, destinations, hotel comparisons, and so
forth, that it can be overwhelming. However, travel blogs contain useful and practical insights,
and websites that compare alternative options are informative. We were also influenced by
some favorite mobile and desktop applications such as the United Kingdom’s National Rail
smartphone app for its real-time updating and by the Airbnb website for its mixture of sim-
plicity and detail.
Perhaps you were inspired by something that you use regularly, like a particularly enjoy-
able game or a device that you like to use? I’m not sure how innovative our ideas were, but
the main goal was for the application to tailor its advice for the user’s preferences. There are
probably other aspects that make your design unique and that may be innovative to a greater
or lesser degree.
and those that are about characteristics internal to the system that cannot be observed
or measured without dissecting it. For example, in a photocopier, externally visible and
measurable factors include the physical size of the machine, the speed and quality of copy-
ing, the different sizes of paper it can use, and so on. Underlying each of these factors are
BOX 2.4
A Box Full of Ideas
The innovative product design company IDEO was mentioned in Chapter 1. Underlying
some of its creative flair is a collection of weird and wonderful engineering housed in
a large flatbed filing cabinet called the TechBox. The TechBox holds hundreds of gizmos
and interesting materials, divided into categories such as Amazing Materials, Cool Mech-
anisms, Interesting Manufacturing Processes, Electronic Technologies, and Thermal and
Optical. Each item has been placed in the box because it represents a neat idea or a new
process. The staff at IDEO take along a selection of items from the TechBox to brainstorming
meetings. The items may be chosen because they provide useful visual props or possible solu-
tions to a particular issue or simply to provide some light relief.
Each item is clearly labeled with its name and category, but further information can be
found by accessing the TechBox’s online catalog. Each item has its own page detailing what
the item is, why it is interesting, where it came from, and who has used it or knows more
about it. Items in the box include an example of metal-coated wood and materials with and
without holes that stretch, bend, and change shape or color at different temperatures.
Each of IDEO’s offices has a TechBox, and each TechBox has its own curator who is
responsible for maintaining and cataloging the items and for promoting its use within
the office. Anyone can submit a new item for consideration. As items become common-
place, they are removed from the TechBox to make way for the next generation of
fascinating curios.
other considerations that cannot be observed or studied without dissecting the machine. For
example, the choice of materials used in a photocopier may depend on its friction rating and
how much it deforms under certain conditions. In interaction design, the user experience
is the driving force behind the design and so externally visible and measurable behavior is
the main focus. Detailed internal workings are still important to the extent that they affect
external behavior or features.
DILEMMA
Copying for Inspiration: Is It Legal?
Designers draw on their experience of design when approaching a new project. This includes
the use of previous designs that they know work—both designs that they have created them-
selves and those that others have created. Others’ creations often spark inspiration that also
leads to new ideas and innovation. This is well known and understood. However, the expres-
sion of an idea is protected by copyright, and people who infringe on that copyright can be
taken to court and prosecuted. Note that copyright covers the expression of an idea and not
the idea itself. This means, for example, that while there are numerous smartphones all with
similar functionality, this does not represent an infringement of copyright as the idea has
been expressed in different ways and it is the expression that has been copyrighted. Copy-
right is free and is automatically invested in the author, for instance, the writer of a book or
a programmer who develops a program, unless they sign the copyright over to someone else.
Employment contracts often include a statement that the copyright relating to anything pro-
duced in the course of that employment is automatically assigned to the employer and does
not remain with the employee.
Patenting is an alternative to copyright that does protect the idea rather than the expres-
sion of the idea. There are various forms of patenting, each of which is designed to allow the
inventor to capitalize on their idea. For example, Amazon patented its one-click purchasing
process, which allows regular users simply to choose a purchase and buy it with one mouse
click (US Patent No. 5960411, September 29, 1999). This is possible because the system stores
its customers’ details and recognizes them when they access the Amazon site again.
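The mechanism behind one-click purchasing can be sketched in a few lines: because payment and shipping details are already stored against the customer’s account, a single action completes the order. This is a hypothetical illustration only; the names are invented and it does not reflect Amazon’s actual implementation.

```python
# Toy one-click purchase flow: stored customer details make the
# purchase a single call, with no cart or checkout forms.
customers = {
    "alice": {"payment": "stored card token", "address": "stored address"},
}
orders = []

def one_click_buy(customer_id, item):
    """Complete a purchase in one action using details already on file."""
    profile = customers.get(customer_id)
    if profile is None:
        # No stored details means no one-click path; a real shop would
        # fall back to the normal checkout flow here.
        raise KeyError("no stored details for customer")
    order = {"customer": customer_id, "item": item,
             "ship_to": profile["address"]}
    orders.append(order)
    return order

one_click_buy("alice", "book")
```

The design choice the patent protects is exactly this shortcut: recognizing a returning customer and reusing stored details rather than asking again.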
In recent years, the creative commons community (https://creativecommons.org/) has
suggested more flexible licensing arrangements that allow others to reuse and extend a piece
of created work, thereby supporting collaboration. In the open source software development
movement, for example, software code is freely distributed and can be modified, incorporated
into other software, and redistributed under the same open source conditions. No royalty fees
are payable on any use of open source code. These movements do not replace copyright or
patent law, but they provide an alternative route for the dissemination of ideas.
So, the dilemma comes in knowing when it is OK to use someone else’s work as a source
of inspiration and when you are infringing copyright or patent law. The issues are complex
and detailed and well beyond the scope of this book, but Bainbridge (2014) is a good resource
to understand this area better.
One answer to the previous question is that choosing between alternative designs is
informed by letting users and stakeholders interact with them and by discussing their expe-
riences, preferences, and suggestions for improvement. To do this, the designs must be in
a form that can be reasonably evaluated by users, not in technical jargon or notation that
seems impenetrable to them. Documentation is one traditional way to communicate a design,
for example, a diagram showing the product’s components or a description of how it works.
But a static description cannot easily capture the dynamics of behavior, and for an interactive
product this needs to be communicated so that users can see what it will be like to operate it.
Prototyping is often used to overcome potential client misunderstandings and to test the
technical feasibility of a suggested design and its production. It involves producing a limited
version of the product with the purpose of answering specific questions about the design’s
feasibility or appropriateness. Prototypes give a better impression of the user experience
than simple descriptions; different kinds of prototyping are suitable for different stages of
development and for eliciting different kinds of feedback. When a deployable version of the
product is available, another way to choose between alternative designs is to deploy two dif-
ferent variations and collect data from actual use that is then used to inform the choice. This
is called A/B testing, and it is often used for alternative website designs (see Box 2.5).
Another basis for choosing between alternatives is quality, but that requires a clear
understanding of what quality means, and people’s views of quality vary. Everyone has a
notion of the level of quality that is expected, wanted, or needed from a product. Whether
this is expressed formally, informally, or not at all, it exists and informs the choice between
alternatives. For example, one smartphone design might make it easy to access a popular
music channel but restrict sound settings, while another requires more complicated key
sequences to access the channel but has a range of sophisticated sound settings. One user’s
view of quality may lean toward ease of use, while another may lean toward sophisticated
sound settings.
Most projects involve a range of different stakeholder groups, and it is common for each
of them to define quality differently and to have different acceptable limits for it. For exam-
ple, although all stakeholders may agree on goals for a video game such as “characters will be
appealing” or “graphics will be realistic,” the meaning of these statements can vary between
different groups. Disputes will arise if, later in development, it transpires that “realistic”
to a stakeholder group of teenage players is different from “realistic” to a group of parent
stakeholders or to developers. Capturing these different views explicitly clarifies expectations,
provides a benchmark against which products and prototypes can be compared, and forms a
basis on which to choose among alternatives.
The process of writing down formal, verifiable—and hence measurable—usability crite-
ria is a key characteristic of an approach to interaction design called usability engineering.
This has emerged over many years and with various proponents (Whiteside et al., 1988;
Nielsen, 1993). More recently, it has often been applied in health informatics (for example, see
Kushniruk et al., 2015). Usability engineering involves specifying quantifiable measures of
product performance, documenting them in a usability specification, and assessing the prod-
uct against them.
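A usability specification of this kind can be sketched as a set of measurable criteria with targets, against which observed values from a prototype are checked. The criteria, units, and numbers below are invented for illustration and are not from any published specification.

```python
# Minimal, hypothetical usability specification: each criterion pairs a
# measurable attribute with a target value (lower is better here).
spec = {
    "learnability: minutes for a novice to book a room": 10,
    "efficiency: taps to request a route recommendation": 4,
    "safety: data-loss incidents per 100 trips": 1,
}

def assess(spec, observed):
    """Return the criteria whose observed values miss their targets."""
    return [c for c, target in spec.items()
            if observed.get(c, float("inf")) > target]

observed = {
    "learnability: minutes for a novice to book a room": 8,
    "efficiency: taps to request a route recommendation": 6,
    "safety: data-loss incidents per 100 trips": 0,
}
print(assess(spec, observed))  # only the efficiency criterion fails
```

The point of writing criteria this way is that the product either meets the target or it does not, so disputes about “good enough” become checks against agreed numbers.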
BOX 2.5
A/B Testing
A/B testing is an online method to inform the choice between two alternatives. It is most
commonly used for comparing different versions of web pages or apps, but the principles
and mathematics behind it came about in the 1920s (Gallo, 2017). In an interaction design
context, different versions of web pages or apps are released for use by users performing their
everyday tasks. Typically, users are unaware that they are contributing to an evaluation. This
is a powerful way to involve users in choosing between alternatives because a huge number of
users can be involved and the situations are authentic.
On the one hand, it’s a simple idea—give one set of users one version and a second set the
other version and see which set scores more highly against the success criteria. But dividing up
the sets, choosing the success criteria, and working out the metrics to use are nontrivial (for
example, see Deng and Shi, 2016). Pushing this idea further, it is common to have “multivari-
ate” testing in which several options are tried at once, so you end up doing A/B/C testing or
even A/B/C/D testing.
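The comparison the box describes can be sketched as a two-proportion z-test over success counts from the two variants. The data and the conventional 1.96 threshold are illustrative; as the box notes, real deployments also need careful choices of success criteria, sample sizes, and metrics.

```python
# Toy A/B comparison: users are split between variants A and B, and the
# observed success rates are compared with a pooled two-proportion
# z-test. Counts here are invented for illustration.
import math

def ab_test(success_a, n_a, success_b, n_b):
    """Return (rate_a, rate_b, z) for two variants' success counts."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)   # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

p_a, p_b, z = ab_test(success_a=120, n_a=1000, success_b=150, n_b=1000)
# By convention, |z| > 1.96 corresponds roughly to p < 0.05 (two-sided).
print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}")
```

Multivariate (A/B/C/…) testing generalizes this to several variants at once, at the cost of larger samples and corrections for multiple comparisons.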
ACTIVITY 2.5
Consider your product from Activity 2.1. Suggest some usability criteria that could be applied
to determine its quality. Use the usability goals introduced in Chapter 1—effectiveness, effi-
ciency, safety, utility, learnability, and memorability. Be as specific as possible. Check the crite-
ria by considering exactly what to measure and how to measure its performance.
Then try to do the same thing for some of the user experience goals introduced in
Chapter 1. (These relate to whether a system is satisfying, enjoyable, motivating, rewarding,
and so on.)
Comment
Finding measurable characteristics for some of these is not easy. Here are some suggestions,
but there are others. Where possible, criteria that are measurable and specific are preferable.
• Effectiveness: Identifying measurable criteria for this goal is particularly difficult since it is
a combination of the other goals. For example, does the system support travel organiza-
tion, choosing transport routes, booking accommodation, and so on? In other words, is the
product used?
• Efficiency: Is it clear how to ask for recommendations from the product? How quickly does
it identify a suitable route or destination details?
• Safety: How often does data get lost or is the wrong option chosen? This may be measured,
for example, as the number of times this happens per trip.
• Utility: How many functions offered are used for every trip, how many every other trip, and
how many are not used at all? How many tasks are difficult to complete in a reasonable
time because functionality is missing or the right subtasks aren’t supported?
• Learnability: How long does it take for a novice user to be able to do a series of set tasks,
for example, to book a hotel room in Paris near the meeting venue for the meeting dates,
identify appropriate flights from Sydney to Wellington, or find out whether a visa is needed
to go to China?
• Memorability: If the product isn’t used for a month, how many functions can the user
remember how to perform? How long does it take to remember how to perform the most
frequent task?
Finding measurable characteristics for the user experience criteria is harder. How do you
measure satisfaction, fun, motivation, or aesthetics? What is entertaining to one person may
be boring to another; these kinds of criteria are subjective and so cannot be measured as
objectively.
2.3.5 How to Integrate Interaction Design Activities Within Other
Lifecycle Models
As illustrated in Chapter 1 (Figure 1.4), many other disciplines contribute to interaction
design, and some of these disciplines have lifecycles of their own. Prominent among them
are those associated with software development, and integrating interaction design activities
within software development has been discussed for many years; for example, see Carmelo
Ardito et al. (2014) and Ahmed Seffah et al. (2005).
The latest attempts to integrate these practices focus on agile software development.
Agile methods began to emerge in the late 1990s. The most well-known of these are eXtreme
Programming (Beck and Andres, 2005), Scrum (Schwaber and Beedle, 2002), and Kanban
(Anderson, 2010). The Dynamic Systems Development Method (DSDM) (DSDM, 2014),
although established before the current agile movement, also belongs to the agile family as
it adheres to the agile manifesto. These methods differ, but they all stress the importance of
iteration, early and repeated user feedback, being able to handle emergent requirements, and
striking a good balance between flexibility and structure. They also all emphasize collaboration,
face-to-face communication, streamlined processes to avoid unnecessary activities, and
the importance of practice over process, that is, of getting work done.
The opening statement for the Manifesto for Agile Software Development
(www.agilemanifesto.org/) reads as follows:
We are uncovering better ways of developing software by doing it and helping others do
it. Through this work we have come to value:
• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan
This manifesto is underpinned by a series of principles, which range from communica-
tion with the business to excellence of coding and maximizing the amount of work done. The
agile approach to development is particularly interesting from the point of view of interac-
tion design because it incorporates tight iterations and feedback and collaboration with the
customer. For example, in Scrum, each sprint is between one and four weeks, with a product
of value being delivered at the end of each sprint. Also, eXtreme Programming (XP)1 stipulates
that the customer should be on-site with developers. In practice, the customer role is
usually taken by a team rather than by one person (Martin et al., 2009), and integration is
far from straightforward (Ferreira et al., 2012). Many companies have integrated agile meth-
ods with interaction design practices to produce a better user experience and business value
(Loranger and Laubheimer, 2017), but it is not necessarily easy, as discussed in Chapter 13,
“Interaction Design in Practice.”
In-Depth Activity
These days, timepieces (such as clocks, wristwatches, and so on) have a variety of functions.
Not only do they tell the time and date, but they can speak to you, remind you when it’s time
to do something, and record your exercise habits among other things. The interface for these
devices, however, shows the time in one of two basic ways: as a digital number such as 11:40
or through an analog display with two or three hands—one to represent the hour, one for the
minutes, and one for the seconds.
This in-depth activity is to design an innovative timepiece. This could be in the form of
a wristwatch, a mantelpiece clock, a sculpture for a garden or balcony, or any other kind
of timepiece you prefer. The goal is to be inventive and exploratory by following these steps:
(a) Think about the interactive product that you are designing: What do you want it to do?
Find three to five potential users, and ask them what they would want. Write a list of
requirements for the clock, together with some usability criteria and user experience cri-
teria based on the definitions in Chapter 1.
(b) Look around for similar devices and seek out other sources of inspiration that you might
find helpful. Make a note of any findings that are interesting, useful, or insightful.
(c) Sketch some initial designs for the timepiece. Try to develop at least two distinct alterna-
tives that meet your set of requirements.
(d) Evaluate the two designs by using your usability criteria and by role-playing an interac-
tion with your sketches. Involve potential users in the evaluation, if possible. Does it do
what you want? Is the time or other information being displayed always clear? Design is
iterative, so you may want to return to earlier elements of the process before you choose
one of your alternatives.
1 The method is called extreme because it pushes a key set of good practices to the limit; that is, it is good practice
to test often, so in XP the development is test-driven, and a complete set of tests is executed many times a day. It is
good practice to talk to people about their requirements, so rather than having weighty documentation, XP reduces
documentation to a minimum, thus forcing communication, and so on.
Summary
In this chapter, we looked at user-centered design and the process of interaction design. That
is, what is user-centered design, what activities are required in order to design an interactive
product, and how are these activities related? A simple interaction design lifecycle model
consisting of four activities was introduced, and issues surrounding the involvement and
identification of users, generating alternative designs, evaluating designs, and integrating user-
centered concerns with other lifecycles were discussed.
Key Points
• Different design disciplines follow different approaches, but they have commonalities that
are captured in the double diamond of design.
• It is important to have a good understanding of the problem space before trying to
build anything.
• The interaction design process consists of four basic activities: discover requirements,
design alternatives that meet those requirements, prototype the designs so that they can be
communicated and assessed, and evaluate them.
• User-centered design rests on three principles: early focus on users and tasks, empirical
measurement, and iterative design. These principles are also key for interaction design.
• Involving users in the design process assists with expectation management and feelings of
ownership, but how and when to involve users requires careful planning.
• There are many ways to understand who users are and what their goals are in using a prod-
uct, including rapid iterations of working prototypes.
• Looking at others’ designs and involving other people in design provides useful inspiration
and encourages designers to consider alternative design solutions, which is key to effec-
tive design.
• Usability criteria, technical feasibility, and users’ feedback on prototypes can all be used to
choose among alternatives.
• Prototyping is a useful technique for facilitating user feedback on designs at all stages.
• Interaction design activities are becoming better integrated with lifecycle models from other
related disciplines such as software engineering.
Further Reading
ASHMORE, S. and RUNYAN, K. (2015) Introduction to Agile Methods, Addison Wesley.
This book introduces the basics of agile software development and the most popular agile
methods in an accessible way. It touches on usability issues and the relationship between
agile and marketing. It is a good place to start for someone new to the agile way of working.
KELLEY, T., with LITTMAN, J. (2016) The Art of Innovation, Profile Books. Tom Kelley is
a partner at IDEO. In this book, Kelley explains some of the innovative techniques used at
IDEO, but more importantly he talks about the culture and philosophy underlying IDEO’s
success. There are some useful practical hints in here as well as an informative story about
building and maintaining a successful design company.
PRESSMAN, R.S. and MAXIM, B.R. (2014) Software Engineering: A Practitioner’s Approach
(Int’l Ed), McGraw-Hill Education. If you are interested in pursuing the software engineering
aspects of the lifecycle models section, then this book provides a useful overview of the main
models and their purpose.
SIROKER, D. and KOOMEN, P. (2015) A/B Testing: The Most Powerful Way to Turn Clicks
into Customers, John Wiley. This book is written by two experienced practitioners who have
been using A/B testing with a range of organizations. It is particularly interesting because of the
example cases that show the impact that applying A/B testing successfully can have.
ROGERS, Y. (2014) Secrets of Creative People (PDF available from www.id-book.com/).
This short book summarizes the findings from a two-year research project into creativity.
It emphasizes the importance of different perspectives to creativity and describes how suc-
cessful creativity arises from sharing, constraining, narrating, connecting, and even sparring
with others.
Chapter 3
Conceptualizing Interaction
Objectives
The main goals of this chapter are to accomplish the following:
• Explain how to conceptualize interaction.
• Describe what a conceptual model is and how to begin to formulate one.
• Discuss the use of interface metaphors as part of a conceptual model.
• Outline the core interaction types for informing the development of a conceptual
model.
• Introduce paradigms, visions, theories, models, and frameworks informing interaction
design.
3.1 Introduction
3.2 Conceptualizing Interaction
3.3 Conceptual Models
3.4 Interface Metaphors
3.5 Interaction Types
3.6 Paradigms, Visions, Theories, Models, and Frameworks

3.1 Introduction
When coming up with new ideas as part of a design project, it is important to conceptualize
them in terms of what the proposed product will do. Sometimes, this is referred to as creat-
ing a proof of concept. In relation to the double diamond framework, it can be viewed as an
initial pass to help define the area and also when exploring solutions. One reason for needing
to do this is as a reality check where fuzzy ideas and assumptions about the benefits of the
proposed product are scrutinized in terms of their feasibility: How realistic is it to develop
what has been suggested, and how desirable and useful will it actually be? Another reason is
to enable designers to begin articulating what the basic building blocks will be when develop-
ing the product. From a user experience (UX) perspective, it can lead to better clarity, forcing
designers to explain how users will understand, learn about, and interact with the product.
For example, consider the bright idea that a designer has of creating a voice-assisted
mobile robot that can help waiters in a restaurant take orders and deliver meals to customers
(see Figure 3.1). The first question to ask is: why? What problem would this address? The
designer might say that the robot could help take orders and entertain customers by having
a conversation with them at the table. They could also make recommendations that can be
customized to different customers, such as restless children or fussy eaters. However, none of
these addresses an actual problem. Rather, they are couched in terms of the putative benefits
of the new solution. In contrast, an actual problem identified might be the following: “It is
difficult to recruit good wait staff who provide the level of customer service to which we have
become accustomed.”
Having worked through a problem space, it is important to generate a set of research
questions that need to be addressed, when considering how to design a robot voice interface
to wait on customers. These might include the following: How intelligent does it have to be?
How would it need to move to appear to be talking? What would the customers think of it?
Would they think it is too gimmicky and get easily tired of it? Or, would it always be a pleas-
ure for them to engage with the robot, not knowing what it would say on each new visit to
the restaurant? Could it be designed to be a grumpy extrovert or a funny waiter? What might
be the limitations of this voice-assisted approach?
Many unknowns need to be considered in the initial stages of a design project, especially
if it is a new product that is being proposed. As part of this process, it can be useful to show
where your novel ideas came from. What sources of inspiration were used? Is there any
theory or research that can be used to inform and support the nascent ideas?
Asking questions, reconsidering one’s assumptions, and articulating one’s concerns and
standpoints are central aspects of the early ideation process. Expressing ideas as a set of con-
cepts greatly helps to transform blue-sky and wishful thinking into more concrete models of
how a product will work, what design features to include, and the amount of functionality
that is needed. In this chapter, we describe how to achieve this through considering the dif-
ferent ways of conceptualizing interaction.
Figure 3.1 A nonspeaking robot waiter in Shanghai. What would be gained if it could also talk
with customers?
Source: ZUMA Press / Alamy Stock Photo
3.2 Conceptualizing Interaction
When beginning a design project, it is important to be clear about the underlying assump-
tions and claims. By an assumption, we mean taking something for granted that requires fur-
ther investigation; for example, people now want an entertainment and navigation system in
their cars. By a claim, we mean stating something to be true when it is still open to question.
For instance, a multimodal style of interaction for controlling this system—one that involves
speaking or gesturing while driving—is perfectly safe.
Writing down your assumptions and claims and then trying to defend and support them
can highlight those that are vague or wanting. In so doing, poorly constructed design ideas
can be reformulated. In many projects, this process involves identifying human activities and
interactivities that are problematic and working out how they might be improved through
being supported with a different set of functions. In others, it can be more speculative, requir-
ing thinking through how to design for an engaging user experience that does not exist.
Box 3.1 presents a hypothetical scenario of a team working through their assumptions
and claims; this shows how, in so doing, problems are explained and explored and leads to a
specific avenue of investigation agreed on by the team.
BOX 3.1
Working Through Assumptions and Claims
This is a hypothetical scenario of early design highlighting the assumptions and claims (itali-
cized) made by different members of a design team.
A large software company has decided that it needs to develop an upgrade of its web
browser for smartphones because its marketing team has discovered that many of the com-
pany’s customers have switched over to using another mobile browser. The marketing people
assume that something is wrong with their browser and that their rivals have a better product.
But they don’t know what the problem is with their browser.
The design team put in charge of this project assumes that they need to improve the
usability of a number of the browser’s functions. They claim that this will win back users by
making features of the interface simpler, more attractive, and more flexible to use.
The user researchers on the design team conduct an initial user study investigating how
people use the company’s web browser on a variety of smartphones. They also look at other
mobile web browsers on the market and compare their functionality and usability. They
observe and talk to many different users. They discover several things about the usability of
their web browser, some of which they were not expecting. One revelation is that many of their
customers have never actually used the bookmarking tool. They present their findings to the
rest of the team and have a long discussion about why each of them thinks it is not being used.
One member claims that the web browser’s function for organizing bookmarks is tricky
and error-prone, and she assumes that this is the reason why many users do not use it. Another
member backs her up, saying how awkward it is to use this method when wanting to move
bookmarks between folders. One of the user experience architects agrees, noting how several
of the users with whom he spoke mentioned how difficult and time-consuming they found it
when trying to move bookmarks between folders and how they often ended up accidentally
putting them into the wrong folders.
A software engineer reflects on what has been said, and he makes the claim that the book-
mark function is no longer needed since he assumes that most people do what he does, which
is to revisit a website by flicking through their history of previously visited pages. Another
member of the team disagrees with him, claiming that many users do not like to leave a trail
of the sites they have visited and would prefer to be able to save only the sites that they think
they might want to revisit. The bookmark function provides them with this option. Another
option discussed is whether to include most-frequently visited sites as thumbnail images or as
tabs. The software engineer agrees that providing all of the options could be a solution but
worries how this might clutter the small screen interface.
After much discussion on the pros and cons of bookmarking versus history lists, the team
decides to investigate further how to effectively support the saving, ordering, and retrieving of
websites using a mobile web browser. All agree that the format of the existing web browser’s
structure is too rigid and that one of their priorities is to see how they can create a simpler way
of revisiting websites on a smartphone.
Explaining people’s assumptions and claims about why they think something might be
a good idea (or not) enables the design team as a whole to view multiple perspectives on
the problem space and, in so doing, reveals conflicting and problematic ones. The following
framework is intended to provide a set of core questions to aid design teams in this process:
• Are there problems with an existing product or user experience? If so, what are they?
• Why do you think there are problems?
• What evidence do you have to support the existence of these problems?
• How do you think your proposed design ideas might overcome these problems?
ACTIVITY 3.1
Use the framework in the previous list to guess what the main assumptions and claims were
behind 3D TV. Then do the same for curved TV, which was designed to be bendy so as to
make the viewing experience more immersive. Are the assumptions similar? Why were they
problematic?
Comment
There was much hype and fanfare about the enhanced user experience 3D and curved TVs
would offer, especially when watching movies, sports events, and dramas (see Figure 3.2).
However, both never really took off. Why was this? One assumption for 3D TV was that
people would not mind wearing the glasses that were needed to see in 3D, nor would they
mind paying a lot more for a new 3D-enabled TV screen. A claim was that people would really
enjoy the enhanced clarity and color detail provided by 3D, based on the favorable feedback
received worldwide when viewing 3D films, such as Avatar, at a cinema. Similarly, an assump-
tion made about curved TV was that it would provide more flexibility for viewers to optimize
the viewing angles in someone’s living room.
The unanswered question for both concepts was this: Could the enhanced cinema view-
ing experience that both claimed become an actual desired living room experience? There was
no existing problem to overcome—what was being proposed was a new way of experiencing
TV. The problem they might have assumed existed was that the experience of viewing TV at
home was inferior to that of the cinema. The claim could have been that people would be
prepared to pay more for a better-quality viewing experience more akin to that of the cinema.
But were people prepared to pay extra for a new TV because of this enhancement? A
number of people did. However, a fundamental usability problem was overlooked—many
people complained of motion sickness when watching 3D TV. The glasses were also easily lost.
Moreover, wearing them made it difficult to do other things such as flicking through multiple
channels, texting, and tweeting. (Many people simultaneously use additional devices, such as
smartphones and tablets while watching TV.) Most people who bought 3D TVs stopped
watching them after a while because of these usability problems. While curved TV didn’t
require viewers to wear special glasses, it also failed because the actual benefits were not that
significant relative to the cost. While for some the curve provided a cool aesthetic look and an
improved viewing angle, for others it was simply an inconvenience.
Figure 3.2 A family watching 3D TV
Source: Andrey Popov/Shutterstock
Making clear what one’s assumptions are about a problem and the claims being made
about potential solutions should be carried out early on and throughout a project. Design
teams also need to work out how best to conceptualize the design space. Primarily, this
involves articulating the proposed solution as a conceptual model with respect to the user
experience. The benefits of conceptualizing the design space in this way are as follows:
Orientation Enabling the design team to ask specific kinds of questions about how the
conceptual model will be understood by the targeted users.
Open-Mindedness Allowing the team to explore a range of different ideas to address the
problems identified.
Common Ground Allowing the design team to establish a set of common terms that all can
understand and agree upon, reducing the chance of misunderstandings and confusion
arising later.
Once formulated and agreed upon, a conceptual model can then become a shared blue-
print leading to a testable proof of concept. It can be represented as a textual description and/
or in a diagrammatic form, depending on the preferred lingua franca used by the design team.
It can be used not just by user experience designers but also to communicate ideas to busi-
ness, engineering, finance, product, and marketing units. The conceptual model is used by the
design team as the basis from which they can develop more detailed and concrete aspects of
the design. In doing so, design teams can produce simpler designs that match up with users’
tasks, allow for faster development time, result in improved customer uptake, and need less
training and customer support (Johnson and Henderson, 2012).
3.3 Conceptual Models
A model is a simplified description of a system or process that helps describe how it works.
In this section, we look at a particular kind of model used in interaction design intended
to articulate the problem and design space—the conceptual model. In a later section, we
describe more generally how models have been developed to explain phenomena in human-
computer interaction.
Jeff Johnson and Austin Henderson (2002) define a conceptual model as “a high-level
description of how a system is organized and operates” (p. 26). In this sense, it is an abstrac-
tion outlining what people can do with a product and what concepts are needed to under-
stand how to interact with it. A key benefit of conceptualizing a design at this level is that
it enables “designers to straighten out their thinking before they start laying out their widg-
ets” (p. 28).
In a nutshell, a conceptual model provides a working strategy and a framework of gen-
eral concepts and their interrelations. The core components are as follows:
• Metaphors and analogies that convey to people how to understand what a product is used
for and how to use it for an activity (for example, browsing and bookmarking).
• The concepts to which people are exposed through the product, including the task-domain
objects they create and manipulate, their attributes, and the operations that can be per-
formed on them (such as saving, revisiting, and organizing).
• The relationships between those concepts (for instance, whether one object contains another).
• The mappings between the concepts and the user experience the product is designed to
support or invoke (for example, one can revisit a page through looking at a list of visited
sites, most-frequently visited, or saved websites).
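To make these components concrete, here is a minimal sketch, in Python, of a conceptual model for the hypothetical mobile web browser discussed in Box 3.1. All class and method names here are invented for illustration; they are not part of any real browser.

```python
from dataclasses import dataclass, field
from typing import List

# A task-domain object the user creates and manipulates, with its attributes.
@dataclass
class SavedPage:
    title: str
    url: str

# A relationship between concepts: a folder *contains* saved pages (containership).
@dataclass
class Folder:
    name: str
    pages: List[SavedPage] = field(default_factory=list)

# Operations on the objects: saving, revisiting, and organizing.
@dataclass
class Browser:
    folders: List[Folder] = field(default_factory=list)
    history: List[SavedPage] = field(default_factory=list)

    def save(self, page: SavedPage, folder: Folder) -> None:
        folder.pages.append(page)  # the "bookmarking" metaphor

    def visit(self, page: SavedPage) -> None:
        self.history.append(page)  # leaves the trail used for revisiting

    def revisit_options(self) -> List[SavedPage]:
        # Mapping between the concepts and the user experience: a page
        # can be revisited via the history list or via saved pages.
        saved = [p for f in self.folders for p in f.pages]
        return self.history + saved
```

Laying the model out this way makes the containership relationship and the two routes for revisiting a page explicit, which is precisely the trade-off the design team in Box 3.1 was debating.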
How the various metaphors, concepts, and their relationships are organized determines
the user experience. By explaining these, the design team can debate the merits of providing
different methods and how they support the main concepts, for example, saving, revisiting,
categorizing, reorganizing, and their mapping to the task domain. They can also begin dis-
cussing whether a new overall metaphor may be preferable that combines the activities of
browsing, searching, and revisiting. In turn, this can lead the design team to articulate the
kinds of relationships between them, such as containership. For example, what is the best
way to sort and revisit saved pages, and how many and what types of containers should
be used (for example, folders, bars, or panes)? The same enumeration of concepts can be
repeated for other functions of the web browser—both current and new. In so doing, the
design team can begin to work out systematically what will be the simplest and most effective
and memorable way of supporting users while browsing the Internet.
The best conceptual models are often those that appear obvious and simple; that is,
the operations they support are intuitive to use. However, sometimes applications can
end up being based on overly complex conceptual models, especially if they are the result
of a series of upgrades, where more and more functions and ways of doing something
are added to the original conceptual model. While tech companies often provide videos
showing what new features are included in an upgrade, users may not pay much attention
to them or skip them entirely. Furthermore, many people prefer to stick to the methods
they have always used and trusted and, not surprisingly, become annoyed when they find
one or more have been removed or changed. For example, when Facebook rolled out its
revised newsfeed a few years back, many users were unhappy, as evidenced by their post-
ings and tweets, preferring the old interface that they had gotten used to. A challenge for
software companies, therefore, is how best to introduce new features that they have added
to an upgrade—and explain their assumed benefits to users—while also justifying why
they removed others.
BOX 3.2
Design Concept
Another term that is sometimes used is a design concept. Essentially, it is a set of ideas for a
design. Typically, it is composed of scenarios, images, mood boards, or text-based documents.
For example, Figure 3.3 shows the first page of a design concept developed for an ambient
display that was aimed at changing people’s behavior in a building, that is, to take the stairs
instead of the elevator. Part of the design concept was envisioned as an animated pattern of
twinkly lights that would be embedded in the carpet near the entrance of the building with the
intention of luring people toward the stairs (Hazlewood et al., 2010).
Most interface applications are actually based on well-established conceptual models.
For example, a conceptual model based on the core aspects of the customer experience when
at a shopping mall underlies most online shopping websites. These include the placement of
items that a customer wants to purchase into a shopping cart or basket and proceeding to
checkout when they’re ready to make the purchase. Collections of patterns are now readily
available to help design the interface for these core transactional processes, together with
many other aspects of a user experience, meaning interaction designers do not have to start
from scratch every time they design or redesign an application. Examples include patterns for
online forms and navigation on mobile phones.
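As a minimal sketch, the cart-then-checkout conceptual model described above might be modeled as follows (the class and method names are invented for illustration, not taken from any particular site):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    name: str
    price: float

@dataclass
class ShoppingCart:
    items: List[Item] = field(default_factory=list)

    def add(self, item: Item) -> None:
        # Placing an item in the cart commits the customer to nothing;
        # they can carry on browsing, as in a physical store.
        self.items.append(item)

    def remove(self, item: Item) -> None:
        self.items.remove(item)

    def checkout(self) -> float:
        # Only checkout completes the purchase; until then the cart is
        # just a holding place, mirroring the physical-world metaphor.
        total = sum(item.price for item in self.items)
        self.items.clear()
        return total
```

Note how `add` deliberately commits to nothing: only `checkout` completes the purchase, mirroring the physical-store behavior that makes the metaphor feel safe.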
It is rare for completely new conceptual models to emerge that transform the way daily
and work activities are carried out at an interface. Those that did fall into this category
include the following three classics: the desktop (developed by Xerox in the late 1970s), the
digital spreadsheet (developed by Dan Bricklin and Bob Frankston in the late 1970s), and
the World Wide Web (developed by Tim Berners-Lee in the late 1980s). All of these inno-
vations made what was previously limited to a few skilled people accessible to all, while
greatly expanding what is possible. The graphical desktop dramatically changed how office
tasks could be performed (including creating, editing, and printing documents). Perform-
ing these tasks using the computers prevalent at the time was significantly more arduous,
having to learn and use a command language (such as DOS or UNIX). Digital spreadsheets
made accounting highly flexible and easier to accomplish, enabling a diversity of new com-
putations to be performed simply through filling in interactive boxes. The World Wide Web
allowed anyone to browse a network of information remotely. Since then, e-readers and digi-
tal authoring tools have introduced new ways of reading documents and books online, sup-
porting associated activities such as annotating, highlighting, linking, commenting, copying,
and tracking. The web has also enabled and made many other kinds of activities easier, such
as browsing for news, weather, sports, and financial information, as well as banking, shop-
ping, and learning online among other tasks. Importantly, all of these conceptual models
were based on familiar activities.
Figure 3.3 The first page of a design concept for an ambient display
BOX 3.3
A Classic Conceptual Model: The Xerox Star
The Star interface, developed by Xerox in 1981 (see Figure 3.4), revolutionized the way that
interfaces were designed for personal computing (Smith et al., 1982; Miller and Johnson,
1996) and is viewed as the forerunner of today’s Mac and Windows desktop interfaces. Origi-
nally, it was designed as an office system, targeted at workers not interested in computing per
se, and it was based on a conceptual model that included the familiar knowledge of an office.
Paper, folders, filing cabinets, and mailboxes were represented as icons on the screen and were
designed to possess some of the properties of their physical counterparts. Dragging a docu-
ment icon across the desktop screen was seen as equivalent to picking up a piece of paper
in the physical world and moving it (but this, of course, is a very different action). Similarly,
dragging a digital document into a digital folder was seen as being analogous to placing a
physical document into a physical cabinet. In addition, new concepts that were incorporated
as part of the desktop metaphor were operations that could not be performed in the physical
world. For example, digital files could be placed onto an icon of a printer on the desktop,
resulting in the computer printing them out.
Figure 3.4 The Xerox Star
Source: Used courtesy of Xerox
3.4 Interface Metaphors
Metaphors are considered to be a central component of a conceptual model. They provide
a structure that is similar in some way to aspects of a familiar entity (or entities), but they
also have their own behaviors and properties. More specifically, an interface metaphor is one
that is instantiated in some way as part of the user interface, such as the desktop metaphor.
Another well-known one is the search engine, originally coined in the early 1990s to refer
to a software tool that indexed and retrieved files remotely from the Internet using various
algorithms to match terms selected by the user. The metaphor invites comparisons between
a mechanical engine, which has several working parts, and the everyday action of looking in
different places to find something. The functions supported by a search engine also include
other features besides those belonging to an engine that searches, such as listing and prior-
itizing the results of a search. It also does these actions in quite different ways from how a
mechanical engine works or how a human being might search a library for books on a given
topic. The similarities implied by the use of the term search engine, therefore, are at a general
level. They are meant to conjure up the essence of the process of finding relevant information,
enabling the user to link these to less familiar aspects of the functionality provided.
Video: The history of the Xerox Star at http://youtu.be/Cn4vC80Pv6Q.
ACTIVITY 3.2
Go to a few online stores and see how the interface has been designed to enable the customer
to order and pay for an item. How many use the “add to shopping cart/basket” followed by the
“checkout” metaphor? Does this make it straightforward and intuitive to make a purchase?
Comment
Making a purchase online usually involves spending money by inputting one’s credit/debit
card details. People want to feel reassured that they are doing this correctly and do not get
frustrated with lots of forms to fill in. Designing the interface to have a familiar metaphor
(with an icon of a shopping cart/basket, not a cash register) makes it easier for people to know
what to do at the different stages of making a purchase. Most important, placing an item in
the basket does not commit the customer to purchase it there and then. It also enables them
to browse further and select other items, as they might in a physical store.
Interface metaphors are intended to provide familiar entities that enable people readily
to understand the underlying conceptual model and know what to do at the interface.
However, they can also contravene people’s expectations about how things should be, such
as the recycle bin (trash can) that sits on the desktop. Logically and culturally (meaning, in
the real world), it should be placed under the desk. But users would not have been able to see
it because it would have been hidden by the desktop surface. So, it needed to go on the desk-
top. While some users found this irksome, most did not find it to be a problem. Once they
understood why the recycle bin icon was on the desktop, they simply accepted it being there.
An interface metaphor that has become popular in the last few years is the card. Many
of the social media apps, such as Facebook, Twitter, and Pinterest, present their content on
cards. Cards have a familiar form, having been around for a long time. Just think of how
many kinds there are: playing cards, business cards, birthday cards, credit cards, and post-
cards to name a few. They have strong associations, providing an intuitive way of organizing
limited content that is “card sized.” They can easily be flicked through, sorted, and themed.
They structure content into meaningful chunks, similar to how paragraphs are used to chunk
a set of related sentences into distinct sections (Babich, 2016). In the context of the smart-
phone interface, the Google Now card provides short snippets of useful information. This
appears on and moves across the screen in the way people would expect a real card to do—in
a lightweight, paper-based sort of way. The elements are also structured to appear as if they
were on a card of a fixed size, rather than, say, in a scrolling web page (see Figure 3.5).
Figure 3.5 Google Now card for restaurant recommendation in Germany
Source: Used courtesy of Johannes Schöning
In many cases, new interface metaphors rapidly become integrated into common par-
lance, as witnessed by the way people talk about them. For example, parents talk about how
much screen time children are allowed each day in the same way they talk more generally
about spending time. As such, the interface metaphors are no longer talked about as familiar
terms to describe less familiar computer-based actions; they have become everyday terms
in their own right. Moreover, it is hard not to use metaphorical terms when talking about
technology use, as they have become so ingrained in the language that we use to express our-
selves. Just ask yourself or someone else to describe Twitter and Facebook and how people
use them. Then try doing it without using a single metaphor.
Albrecht Schmidt (2017) suggests a pair of glasses as a good metaphor for thinking
about future technologies, helping us think more about how to amplify human cognition.
Just as they are seen as an extension of ourselves that we are not aware of most of the time
(except when they steam up!), he asks whether we can design new technologies that enable
users to do things without having to think about how to use them. He contrasts this “amplify”
metaphor with the “tool” metaphor of a pair of binoculars, which is used for a specific task, where
someone consciously has to hold them up against their eyes while adjusting the lens to bring
what they are looking at into focus. Current devices, like mobile phones, are designed more
like binoculars, where people have to interact with them explicitly to perform tasks.
BOX 3.4
Why Are Metaphors So Popular?
People frequently use metaphors and analogies (here we use the terms interchangeably) as
a source of inspiration for understanding and explaining to others what they are doing, or
trying to do, in terms that are familiar to them. They are an integral part of human language
(Lakoff and Johnson, 1980). Metaphors are commonly used to explain something that is
unfamiliar or hard to grasp by way of comparison with something that is familiar and easy
to grasp. For example, they are frequently employed in education, where teachers use them
to introduce something new to students by comparing the new material with something they
already understand. An example is the comparison of human evolution with a game. We are
all familiar with the properties of a game: there are rules, each player has a goal to win (or
lose), there are heuristics to deal with situations where there are no rules, there is the propen-
sity to cheat when the other players are not looking, and so on. By conjuring up these proper-
ties, the analogy helps us begin to understand the more difficult concept of evolution—how it
happens, what rules govern it, who cheats, and so on.
It is not surprising, therefore, to see how widely metaphors have been used in interaction
design to conceptualize abstract, hard-to-imagine, and difficult-to-articulate computer-based
concepts and interactions in more concrete and familiar terms and as graphical visualizations
at the interface level. Metaphors and analogies are used in these three main ways:
• As a way of conceptualizing what we are doing (for instance, surfing the web)
• As a conceptual model instantiated at the interface level (for example, the card metaphor)
• As a way of visualizing an operation (such as an icon of a shopping cart into which items
are placed that users want to purchase on an online shopping site)
3.5 Interaction Types
Another way of conceptualizing the design space is in terms of the interaction types that will
underlie the user experience. Essentially, these are the ways a person interacts with a product
or application. Originally, we identified four main types: instructing, conversing, manipulating,
and exploring (Preece et al., 2002). A fifth type, which Christopher Lueg et al. (2019) call
responding, has since been proposed, and we have added it to ours. It refers to proactive
systems that initiate a request in situations to which a user can respond; for example, Netflix
pausing a person’s viewing to ask whether they would like to continue watching.
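A responding interaction of this kind can be sketched as a simple inactivity check. This is a hypothetical illustration only; the threshold, prompt wording, and function names are all invented, and it is not how any real streaming service is implemented.

```python
# Sketch of a proactive system that initiates a request the user can respond to.
IDLE_THRESHOLD_SECONDS = 30 * 60  # invented 30-minute threshold

def should_prompt(seconds_since_last_input: int) -> bool:
    """The system, not the user, decides when to initiate the exchange."""
    return seconds_since_last_input >= IDLE_THRESHOLD_SECONDS

def handle_idle(seconds_since_last_input: int, respond) -> str:
    """respond is a callback that asks the user a question and returns True/False."""
    if should_prompt(seconds_since_last_input):
        # Playback pauses until the user responds to the system's request.
        if respond("Are you still watching?"):
            return "resume"
        return "stop"
    return "keep_playing"
```

The key point the sketch captures is the reversal of initiative: `should_prompt` is evaluated by the system, so the user's role is reduced to responding.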
Deciding upon which of the interaction types to use, and why, can help designers formu-
late a conceptual model before committing to a particular interface in which to implement
them, such as speech-based, gesture-based, touch-based, menu-based, and so on. Note that
we are distinguishing here between interaction types (which we discuss in this section) and
interface types (which are discussed in Chapter 7, “Interfaces”). While cost and other product
constraints will often dictate which interface style can be used for a given application, consid-
ering the interaction type that will best support a user experience can highlight the potential
trade-offs, dilemmas, and pros and cons.
Here, we describe in more detail each of the five types of interaction. It should be noted
that they are not meant to be mutually exclusive (for example, someone can interact with a
system based on different kinds of activities); nor are they meant to be definitive. Also, the
label used for each type refers to the user’s action even though the system may be the active
partner in initiating the interaction.
• Instructing: Where users issue instructions to a system. This can be done in a number of
ways, including typing in commands, selecting options from menus in a windows environ-
ment or on a multitouch screen, speaking aloud commands, gesturing, pressing buttons, or
using a combination of function keys.
• Conversing: Where users have a dialog with a system. Users can speak via an interface or
type in questions to which the system replies via text or speech output.
• Manipulating: Where users interact with objects in a virtual or physical space by manipu-
lating them (for instance, opening, holding, closing, and placing). Users can hone their
familiar knowledge of how to interact with objects.
• Exploring: Where users move through a virtual environment or a physical space. Virtual
environments include 3D worlds and augmented and virtual reality systems. They enable
users to hone their familiar knowledge by physically moving around. Physical spaces that
use sensor-based technologies include smart rooms and ambient environments, also ena-
bling people to capitalize on familiarity.
• Responding: Where the system initiates the interaction and the user chooses whether to
respond. For example, proactive mobile location-based technology can alert people to
points of interest. They can choose to look at the information popping up on their phone
or ignore it. An example is the Google Now Card, shown in Figure 3.5, which pops up a
restaurant recommendation for the user to contemplate when they are walking nearby.
Besides these core activities of instructing, conversing, manipulating, exploring, and
responding, it is possible to describe the specific domain and context-based activities in which
users engage, such as learning, working, socializing, playing, browsing, writing, problem-solving, decision-making, and searching, to name but a few. Malcolm McCullough (2004)
suggests describing them as situated activities, organized by work (for example, presenting to
groups), home (such as resting), in town (for instance, eating), and on the road (for example,
walking). The rationale for classifying activities in this way is to help designers be more sys-
tematic when thinking about the usability of technology-modified places in the environment.
In the following sections we illustrate in more detail the five core interaction types and how
to design applications for them.
3.5.1 Instructing
This type of interaction describes how users carry out their tasks by telling the system what
to do. Examples include giving instructions to a system to perform operations such as tell the
time, print a file, and remind the user of an appointment. A diverse range of products has been
designed based on this model, including home entertainment systems, consumer electronics,
and computers. The way in which the user issues instructions can vary from pressing buttons
to typing in strings of characters. Many activities are readily supported by giving instructions.
In Windows and other graphical user interfaces (GUIs), control keys or the selection of
menu options via a mouse, touch pad, or touch screen are used. Typically, a wide range of
functions are provided from which users have to select when they want to do something to
the object on which they are working. For example, a user writing a report using a word
processor will want to format the document, count the number of words typed, and check
the spelling. The user instructs the system to do these operations by issuing appropriate
commands. Typically, commands are carried out in a sequence, with the system responding
appropriately (or not) as instructed.
One of the main benefits of designing an interaction based on issuing instructions is that
the interaction is quick and efficient. It is particularly fitting where there is a frequent need
to repeat actions performed on multiple objects. Examples include the repetitive actions of
saving, deleting, and organizing files.
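The command-issuing style can be sketched as a small dispatcher: the user issues a named command and the system looks it up and executes it. This is a hypothetical Python sketch; the command names and handlers are invented for illustration and are not from the text.

```python
# Minimal sketch of instructing-type interaction: the user issues a
# command, and the system carries it out. Command names are hypothetical.
from datetime import datetime

def tell_time():
    return datetime.now().strftime("%H:%M")

def count_words(text):
    return len(text.split())

COMMANDS = {
    "time": lambda args: tell_time(),
    "count": lambda args: str(count_words(args)),
}

def issue(command_line):
    """Parse a command string and dispatch it, much as selecting a
    menu option or pressing a command key would on the user's behalf."""
    name, _, args = command_line.partition(" ")
    handler = COMMANDS.get(name)
    if handler is None:
        return f"Unknown command: {name}"
    return handler(args)

print(issue("count the quick brown fox"))  # prints: 4
```

The point of the sketch is the one-way structure: every exchange is an order from user to system, which is what makes repetitive operations like saving or deleting so quick under this interaction type.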
ACTIVITY 3.3
There are many different kinds of vending machines in the world. Each offers a range of
goods, requiring users to part with some of their money. Figure 3.6 shows photos of two dif-
ferent types of vending machines: one that provides soft drinks and the other that delivers a
range of snacks. Both machines use an instructional mode of interaction. However, the way
they do so is quite different.
What instructions must be issued to obtain a soda from the first machine and a bar of
chocolate from the second? Why has it been necessary to design a more complex mode of inter-
action for the second vending machine? What problems can arise with this mode of interaction?
Comment
The first vending machine has been designed using simple instructions. There is a small num-
ber of drinks from which to choose, and each is represented by a large button displaying the
label of each drink. The user simply has to press one button, and this will have the effect of
delivering the selected drink. The second machine is more complex, offering a wider range
of snacks. The trade-off for providing more options, however, is that the user can no longer
instruct the machine using a simple one-press action but is required to follow a more complex process involving (1) reading off the code (for example, C12) under the item chosen, then (2) keying this into the number pad adjacent to the displayed items, and finally (3) checking the price of the selected option and ensuring that the amount of money inserted is the same or greater (depending on whether the machine provides change). Problems that can arise from this type of interaction are the customer misreading the code and/or incorrectly keying the code, resulting in the machine not issuing the snack or providing the wrong item.

A better way of designing an interface for a large number of options of variable cost might be to continue to use direct mapping but use buttons that show miniature versions of the snacks placed in a large matrix (rather than showing actual versions). This would use the available space at the front of the vending machine more economically. The customer would need only to press the button of the object chosen and put in the correct amount of money. There is a lower chance of error resulting from pressing the wrong code or keys. The trade-off for the vending company, however, is that the machine is less flexible in terms of which snacks it can sell. If a new product line comes out, they will also need to replace part of the physical interface to the machine, which would be costly.

Figure 3.6 Two different types of vending machine

3.5.2 Conversing
This form of interaction is based on the idea of a person having a conversation with a system, where the system acts as a dialogue partner. In particular, the system is designed to respond in a way that another human being might when having a conversation. It differs from the activity of instructing insofar as it encompasses a two-way communication process, with the system acting like a partner rather than a machine that obeys orders. It has been most commonly used for applications where the user needs to find out specific kinds of information or wants to discuss issues. Examples include advisory systems, help facilities, chatbots, and robots.
The kinds of conversation that are currently supported range from simple voice-recognition, menu-driven systems to more complex natural language–based systems that
involve the system parsing and responding to queries typed in or spoken by the user. Examples
of the former include banking, ticket booking, and train-time inquiries, where the user talks to
the system in single-word phrases and numbers, that is, yes, no, three, and so on, in response
to prompts from the system. Examples of the latter include help systems, where the user
types in a specific query, such as “How do I change the margin widths?” to which the system
responds by giving various answers. Advances in AI during the last few years have resulted
in a significant improvement in speech recognition to the extent that many companies now
routinely employ speech-based and chatbot-based interaction for their customer queries.
A main benefit of developing a conceptual model that uses a conversational style of inter-
action is that it allows people to interact with a system in a way that is familiar to them. For
example, Apple’s speech system, Siri, lets you talk to it as if it were another person. You can
ask it to do tasks for you, such as make a phone call, schedule a meeting, or send a message.
You can also ask it indirect questions that it knows how to answer, such as “Do I need an
umbrella today?” It will look up the weather for where you are and then answer with some-
thing like, “I don’t believe it’s raining” while also providing a weather forecast (see Figure 3.7).
Figure 3.7 Siri’s response to the question “Do I need an umbrella today?”
A problem that can arise from using a conversation-based interaction type is that certain
kinds of tasks are transformed into cumbersome and one-sided interactions. This is especially
true for automated phone-based systems that use auditory menus to advance the interaction.
Users have to listen to a voice providing several options, then make a selection, and repeat
through further layers of menus before accomplishing their goal, for example, reaching a real
human or paying a bill. Here is the beginning of a dialogue between a user who wants to find
out about car insurance and an insurance company’s phone reception system:
“Welcome to St. Paul’s Insurance Company. Press 1 if you are a new customer; 2 if you
are an existing customer.”
“Thank you for calling St. Paul’s Insurance Company. If you require house insurance,
say 1; car insurance, say 2; travel insurance, say 3; health insurance, say 4; other, say 5.”
“You have reached the car insurance division. If you require information about fully
comprehensive insurance, say 1; third-party insurance, say 2. …”
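The layered auditory menu in this dialogue can be modeled as a tree that the caller descends one response at a time, which makes clear why reaching a goal takes so many steps. A hypothetical Python sketch follows; the menu wording is abridged from the dialogue above and the structure is invented for illustration.

```python
# Sketch of a layered phone-menu dialogue: each node plays a prompt,
# and every caller response descends one level of the tree.
MENU = {
    "prompt": "Press 1 if you are a new customer; 2 if you are an existing customer.",
    "options": {
        "2": {
            "prompt": "House insurance, say 1; car insurance, say 2; travel insurance, say 3.",
            "options": {
                "2": {
                    "prompt": "Fully comprehensive, say 1; third-party, say 2.",
                    "options": {},
                },
            },
        },
    },
}

def navigate(menu, responses):
    """Follow a sequence of caller responses down the menu tree and
    return every prompt the caller has to listen to along the way."""
    heard = [menu["prompt"]]
    node = menu
    for choice in responses:
        node = node["options"][choice]
        heard.append(node["prompt"])
    return heard

prompts = navigate(MENU, ["2", "2"])
# The caller sits through three full prompts before even reaching
# the fully comprehensive/third-party choice.
```

Each added layer multiplies the listening time, which is exactly the one-sidedness the text describes: the system talks at length, and the user can only answer with single digits.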
3.5.3 Manipulating
This form of interaction involves manipulating objects, and it capitalizes on users’ knowledge
of how they do so in the physical world. For example, digital objects can be manipulated by
moving, selecting, opening, and closing. Extensions to these actions include zooming in and
out, stretching, and shrinking—actions that are not possible with objects in the real world.
Human actions can be imitated through the use of physical controllers (for example, the
Wii) or gestures made in the air, such as the gesture control technology now used in some
cars. Physical toys and robots have also been embedded with technology that enables them to
act and react in ways depending on whether they are squeezed, touched, or moved. Tagged
physical objects (such as balls, bricks, or blocks) that are manipulated in a physical world
(for example, placed on a surface) can result in other physical and digital events occurring,
such as a lever moving or a sound or animation being played.
A framework that has been highly influential (originating from the early days of HCI) in
guiding the design of GUI applications is direct manipulation (Shneiderman, 1983). It pro-
poses that digital objects be designed at the interface level so that they can be interacted with
in ways that are analogous to how physical objects in the physical world are manipulated.
In so doing, direct manipulation interfaces are assumed to enable users to feel that they are
directly controlling the digital objects represented by the computer. The three core principles
are as follows:
• Continuous representation of the objects and actions of interest
• Rapid reversible incremental actions with immediate feedback about the object of interest
• Physical actions and button pressing instead of issuing commands with complex syntax
According to these principles, an object on the screen remains visible while a user per-
forms physical actions on it, and any actions performed on it are immediately visible. For
example, a user can move a file by dragging an icon that represents it from one part of the
desktop to another. The benefits of direct manipulation include the following:
• Helping beginners learn basic functionality rapidly
• Enabling experienced users to work rapidly on a wide range of tasks
• Allowing infrequent users to remember how to carry out operations over time
• Preventing the need for error messages, except rarely
• Showing users immediately how their actions are furthering their goals
• Reducing users’ experiences of anxiety
• Helping users gain confidence and mastery and feel in control
Many apps have been developed based on some form of direct manipulation, includ-
ing word processors, video games, learning tools, and image editing tools. However, while
direct manipulation interfaces provide a versatile mode of interaction, they do have their
drawbacks. In particular, not all tasks can be described by objects, and not all actions can
be undertaken directly. Some tasks are also better achieved through issuing commands. For
example, consider how you edit a report using a word processor. Suppose that you had ref-
erenced work by Ben Shneiderman but had spelled his name as Schneiderman throughout.
How would you correct this error using a direct manipulation interface? You would need to
read the report and manually select the c in every Schneiderman, highlight it, and then delete
it. This would be tedious, and it would be easy to miss one or two. By contrast, this opera-
tion is relatively effortless and also likely to be more accurate when using a command-based
interaction. All you need to do is instruct the word processor to find every Schneiderman
and replace it with Shneiderman. This can be done by selecting a menu option or using a
combination of command keys and then typing the changes required into the dialog box
that pops up.
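The Schneiderman example shows why a command can beat direct manipulation for repetitive edits: one instruction covers every occurrence. A hypothetical Python sketch of what the word processor's find-and-replace command amounts to (the sample text is invented):

```python
# One command-style operation fixes every misspelling at once, where
# direct manipulation would require selecting and editing each one by hand.
def find_and_replace(text, target, replacement):
    return text.replace(target, replacement)

report = "As Schneiderman (1983) argued... see also Schneiderman's principles."
fixed = find_and_replace(report, "Schneiderman", "Shneiderman")
# Every occurrence is corrected in a single operation, with no risk
# of missing one buried in a long report.
```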
3.5.4 Exploring
This mode of interaction involves users moving through virtual or physical environments.
For example, users can explore aspects of a virtual 3D environment, such as the interior of a
building. Physical environments can also be embedded with sensing technologies that, when
they detect the presence of someone or certain body movements, respond by triggering cer-
tain digital or physical events. The basic idea is to enable people to explore and interact with
an environment, be it physical or digital, by exploiting their knowledge of how they move
and navigate through existing spaces.
Many 3D virtual environments have been built that comprise digital worlds designed for
people to move between various spaces to learn (for example, virtual campuses) and fantasy
worlds where people wander around different places to socialize (for instance, virtual parties)
or play video games (such as Fortnite). Many virtual landscapes depicting cities, parks, build-
ings, rooms, and datasets have also been built, both realistic and abstract, that enable users
to fly over them and zoom in and out of different parts. Other virtual environments that have
been built include worlds that are larger than life, enabling people to move around them,
experiencing things that are normally impossible or invisible to the eye (see Figure 3.8a);
highly realistic representations of architectural designs, allowing clients and customers to
imagine how they will use and move through planned buildings and public spaces; and visu-
alizations of complex datasets that scientists can virtually climb inside and experience (see
Figure 3.8b).

Figure 3.8 (a) A CAVE that enables the user to stand near a huge insect, for example, a beetle, be swallowed, and end up in its abdomen; and (b) NCSA's CAVE being used by a scientist to move through 3D visualizations of the datasets
Source: (a) Used courtesy of Alexei Sharov; (b) Used courtesy of Kalev Leetaru, National Center for Supercomputing Applications, University of Illinois.

3.5.5 Responding
This mode of interaction involves the system taking the initiative to alert, describe, or show the user something that it "thinks" is of interest or relevance to the context the user is presently in. It can do this through detecting the location and/or presence of someone in a vicinity (for instance, a nearby coffee bar where friends are meeting) and notifying them about it on their phone or watch. Smartphones and wearable devices are becoming increasingly proactive in initiating user interaction in this way, rather than waiting for the user to ask, command, explore, or manipulate. An example is a fitness tracker that notifies the user of a milestone they have reached for a given activity, for example, having walked 10,000 steps in a day. The fitness tracker does this automatically without any requests made by the user; the user responds by looking at the notification on their screen or listening to an audio announcement that is made. Another example is when the system automatically provides some funny or useful information for the user, based on what it has learned from their repeated behaviors when carrying out particular actions in a given context. For example, after taking a photo of a friend's cute dog in the park, Google Lens will automatically pop up information that identifies the breed of the dog (see Figure 3.9).
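Responding-type interaction can be sketched as a device that raises a notification when a condition it monitors is met, leaving the user free to react or to ignore it. A hypothetical Python sketch of the fitness-tracker milestone example; the threshold and message wording are invented for illustration.

```python
# Sketch of responding-type interaction: the system, not the user,
# initiates the exchange when a condition it monitors is crossed.
# The milestone value and message wording are hypothetical.
MILESTONE = 10_000

def check_steps(steps_today, already_notified):
    """Return a notification the first time the milestone is crossed;
    the user is free to look at it or ignore it."""
    if steps_today >= MILESTONE and not already_notified:
        return f"Milestone reached: {steps_today} steps today!"
    return None

note = check_steps(10_250, already_notified=False)   # system initiates
quiet = check_steps(9_500, already_notified=False)   # no initiative yet (None)
```

Note the inversion compared with the other four types: the user issues no request at all, and the only user action in the loop is the optional response to the notification.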
For some people, this kind of system-initiated interaction—where additional informa-
tion is provided which has not been requested—might get a bit tiresome or frustrating, espe-
cially if the system gets it wrong. The challenge is knowing when the user will find it useful
and interesting and how much and what kind of contextual information to provide without
overwhelming or annoying them. Also, it needs to know what to do when it gets it wrong.
For example, if it thinks the dog is a teddy bear, will it apologize? Will the user be able to
correct it and tell it what the photo actually is? Or will the system be given a second chance?
3.6 Paradigms, Visions, Theories, Models, and Frameworks
Other sources of conceptual inspiration and knowledge that are used to inform design and
guide research are paradigms, visions, theories, models, and frameworks (Carroll, 2003). These
vary in terms of their scale and specificity to a particular problem space. A paradigm refers
to a general approach that has been adopted by a community of researchers and designers
for carrying out their work in terms of shared assumptions, concepts, values, and practices.
A vision is a future scenario that frames research and development in interaction design—
often depicted in the form of a film or a narrative. A theory is a well-substantiated explana-
tion of some aspect of a phenomenon; for example, the theory of information processing that
explains how the mind, or some aspect of it, is assumed to work. A model is a simplification
of some aspect of human-computer interaction intended to make it easier for designers to
predict and evaluate alternative designs. A framework is a set of interrelated concepts and/or a set of specific questions that are intended to inform a particular domain area (for example, collaborative learning), or an analytic method (for instance, ethnographic studies).

Figure 3.9 Google Lens in action, providing pop-up information about a Pembroke Welsh Corgi after recognizing the image as one
Source: https://lens.google.com
3.6.1 Paradigms
Following a particular paradigm means adopting a set of practices upon which a community
has agreed. These include the following:
• The questions to be asked and how they should be framed
• The phenomena to be observed
• The way in which findings from studies are to be analyzed and interpreted (Kuhn, 1972)
In the 1980s, the prevailing paradigm in human-computer interaction was how to design
user-centered applications for the desktop computer. Questions about what and how to
design were framed in terms of specifying the requirements for a single user interacting with
a screen-based interface. Task analytic and usability methods were developed based on an
individual user’s cognitive capabilities. Windows, Icons, Menus, and Pointers (WIMP) was
used as a way of characterizing the core features of an interface for a single user. This was
later superseded by the graphical user interface (GUI). Now many interfaces have touch
screens that users tap, press and hold, pinch, swipe, slide, and stretch.
A big influence on the paradigm shift that took place in HCI in the 1990s was Mark
Weiser’s (1991) vision of ubiquitous technology. He proposed that computers would become
part of the environment, embedded in a variety of everyday objects, devices, and displays.
He envisioned a world of serenity, comfort, and awareness, where people were kept perpetu-
ally informed of what was happening around them, what was going to happen, and what
had just happened. Ubiquitous computing devices would enter a person’s center of attention
when needed and move to the periphery of their attention when not, enabling the person to
switch calmly and effortlessly between activities without having to figure out how to use a
computer when performing their tasks. In essence, the technology would be unobtrusive and
largely disappear into the background. People would be able to get on with their everyday
and working lives, interacting with information and communicating and collaborating with
others without being distracted or becoming frustrated with technology.
This vision was successful in influencing the computing community's thinking, especially
inspiring researchers regarding which technologies to develop and which problems to research (Abowd,
2012). Many HCI researchers began to think beyond the desktop and design mobile and perva-
sive technologies. An array of technologies was developed that could extend what people could
do in their everyday and working lives, such as smart glasses, tablets, and smartphones.
The next big paradigm shift that took place in the 2000s was the emergence of Big Data
and the Internet of Things (IoT). New and affordable sensor technologies enabled masses of
data to be collected about people’s health, well-being, and real-time changes happening in the
environment (for example, air quality, traffic congestion, and busyness). Smart buildings were
also built, where an assortment of sensors were embedded and experimented with in homes,
hospitals, and other public buildings. Data science and machine-learning algorithms were
developed to analyze the amassed data to draw new inferences about what actions to take
on behalf of people to optimize and improve their lives. This included introducing variable
speed limits on highways, notifying people via apps of dangerous pollution levels, crowds at
an airport, and so on. In addition, it became the norm for sensed data to be used to automate
mundane operations and actions—such as turning lights or faucets on and off or flushing
toilets automatically—replacing conventional knobs, buttons, and other physical controls.
3.6.2 Visions
Visions of the future, like Mark Weiser’s vision of ubiquitous technology, provide a powerful
driving force that can lead to a paradigm shift in terms of what research and development is
carried out in companies and universities. A number of tech companies have produced videos
about the future of technology and society, inviting audiences to imagine what life will be
like in 10, 15, or 20 years’ time. One of the earliest was Apple’s 1987 Knowledge Navigator,
which presented a scenario of a professor using a touchscreen tablet with a speech-based
intelligent assistant reminding him of what he needed to do that day while answering the
phone and helping him prepare his lectures. It was 25 years ahead of its time—set in 2011—
the actual year that Apple launched its speech system, Siri. It was much viewed and discussed,
inspiring widespread research into and development of future interfaces.
A current vision that has become pervasive is AI. Both utopian and dystopian visions
are being bandied about on how AI will make our lives easier on the one hand and how
it will take our jobs away on the other. This time, it is not just computer scientists who
are extolling the benefits or dangers of AI advances for society but also journalists, social
commentators, policy-makers, and bloggers. AI is now replacing the user interface for an
increasing number of applications where the user had to make choices, for example, smart-
phones learning your music preferences and home heating systems deciding when to turn
the heating on and off and what temperature you prefer. One objective is to reduce the
stress of people having to make decisions; another is to improve upon what they would
choose. For example, in the future instead of having to agonize over which clothes to buy,
or vacation to select, a personal assistant will be able to choose on your behalf. Another
example depicts what a driverless car will be like in a few years, where the focus is not so
much on current concerns with safety and convenience but more on improving comfort
and life quality in terms of the ultimate personalized passenger experience (for example, see
VW’s video). More and more everyday tasks will be transformed through AI learning what
choices are best in a given situation.
You can watch a video about the Apple Knowledge Navigator here: http://youtu.be/hGYFEI6uLy0.
VW’s vision of its future car can be seen in this video: https://youtu.be/AyihacflLto.
Video: IBM's Internet of Things: http://youtu.be/sfEbMV295kk.
While there are many benefits of letting machines make decisions for us, we may feel a
loss of control. Moreover, we may not understand why the AI system chose to drive the car
along a particular route or why our voice-assisted home robot keeps ordering too much milk.
There are increasing expectations that AI researchers find ways of explaining the rationale
behind the decisions that AI systems make on the user’s behalf. This need is often referred
to as transparency and accountability—which we discuss further in Chapter 10. It is an area
that is of central concern to interaction design researchers, who have started conducting user
studies on transparency and developing explanations that are meaningful and reassuring to
the user (e.g., Rader et al., 2018).
Another challenge is to develop new kinds of interfaces and conceptual models that can
support the synergy of humans and AI systems, which will amplify and extend what they
can do currently. This could include novel ways of enhancing group collaboration, creative
problem-solving, forward planning, policy-making, and other areas that can become intrac-
table, complex, and messy, such as divorce settlements.
Science fiction has also become a source of inspiration in interaction design. By this, we
mean in movies, writing, plays, and games that envision what role technology may play in
the future. Dan Russell and Svetlana Yarosh (2018) discuss the pros and cons of using dif-
ferent kinds of science fiction for inspiration in HCI design, arguing that they can provide a
good grounding for debate but are often not a great source of accurate predictions of future
technologies. They point out how, although the visions can be impressively futuristic, their
embellishments and what they actually look like are often limited by the author’s ability to
extend and build upon the ideas and the cultural expectations associated with the current
era. For example, the holodeck portrayed in the Star Trek TV series had 3D-bubble indicator
lights and push-button designs on its bridge with the sound of a teletype in the background.
This is the case to such an extent that Russell and Yarosh even argue that the priorities and
concerns of the author’s time and their cultural upbringing can bias the science fiction toward
telling narratives from the perspective of the present, rather than providing new insights and
paving the way to future designs.
The different kinds of future visions provide concrete scenarios of how society can use
the next generation of imagined technologies to make their lives more comfortable, safe,
informative, and efficient. Furthermore, they also raise many questions concerning privacy,
trust, and what we want as a society. They provide much food for thought for research-
ers, policy-makers, and developers, challenging them to consider both positive and negative
implications.
Many new challenges, themes, and questions have been articulated through such visions
(see, for example, Rogers, 2006; Harper et al., 2008; Abowd, 2012), including the following:
• How to enable people to access and interact with information in their work, social, and
everyday lives using an assortment of technologies
• How to design user experiences for people using interfaces that are part of the environ-
ment but where there are no obvious controlling devices
• How and in what form to provide contextually relevant information to people at appropri-
ate times and places to support them while on the move
• How to ensure that information that is passed around via interconnected displays, devices,
and objects is secure and trustworthy
3.6.3 Theories
Over the past 30 years, numerous theories have been imported into human-computer interac-
tion, providing a means of analyzing and predicting the performance of users carrying out
tasks for specific types of computer interfaces and systems (Rogers, 2012). These have been
primarily cognitive, social, affective, and organizational in origin. For example, cognitive
theories about human memory were used in the 1980s to determine the best ways of repre-
senting operations, given people’s memory limitations. One of the main benefits of applying
such theories in interaction design is to help identify factors (cognitive, social, and affective)
relevant to the design and evaluation of interactive products. Some of the most influential
theories in HCI, including distributed cognition, will be covered in the next chapter.
3.6.4 Models
We discussed earlier why a conceptual model is important and how to generate one when
designing a new product. The term model has also been used more generally in interaction
design to describe, in a simplified way, some aspect of human behavior or human-computer
interaction. Typically, it depicts how the core features and processes underlying a phenom-
enon are structured and related to one another. It is usually abstracted from a theory coming
from a contributing discipline, like psychology. For example, Don Norman (1988) developed
a number of models of user interaction based on theories of cognitive processing, arising out
of cognitive science, which were intended to explain the way users interacted with interactive
technologies. These include the seven stages of action model, which describes how users move from forming a plan, to executing the physical actions needed to carry it out, to evaluating the outcome of those actions with respect to their goals. More recent models
developed in interaction design are user models, which predict what information users want
in their interactions and models that characterize core components of the user experience,
such as Marc Hassenzahl’s (2010) model of experience design.
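The seven stages of action lend themselves to a compact sketch. The stage names below follow Norman's model; representing them as a Python list and tracing an everyday goal through them is our own illustration, not something from the text:

```python
# A minimal sketch of Norman's seven stages of action as a cycle.
# The stage names come from the model; walking a concrete goal
# through them is an illustrative assumption of ours.
SEVEN_STAGES = [
    "form the goal",
    "plan the action",
    "specify the action sequence",
    "perform the action sequence",
    "perceive the state of the world",
    "interpret the perception",
    "compare the outcome with the goal",
]

def walk_through(goal: str) -> list[str]:
    """Return a trace of the seven stages applied to one concrete goal."""
    return [f"{stage}: {goal}" for stage in SEVEN_STAGES]

for step in walk_through("make the room warmer"):
    print(step)
```

The loop makes the cycle explicit: execution runs from goal to action, and evaluation runs from perception back to the goal.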
3.6.5 Frameworks
Numerous frameworks have been introduced in interaction design to help designers con-
strain and scope the user experience for which they are designing. In contrast to a model,
a framework offers advice to designers as to what to design or look for. This can come in
a variety of forms, including steps, questions, concepts, challenges, principles, tactics, and
dimensions. Frameworks, like models, have traditionally been based on theories of human
behavior, but they are increasingly being developed from the experiences of actual design
practice and the findings arising from user studies.
Many frameworks have been published in the HCI/interaction design literature, cover-
ing different aspects of the user experience and a diversity of application areas. For example,
there are frameworks for helping designers think about how to conceptualize learning, work-
ing, socializing, fun, emotion, and so on, and others that focus on how to design particular
kinds of technologies to evoke certain responses, for example, persuasive technologies (see
Chapter 6, “Emotional Interaction”). There are others that have been specifically developed
to help researchers analyze the qualitative data they collect in a user study, such as Dis-
tributed Cognition (Rogers, 2012). One framework, called DiCoT (Furniss and Blandford,
2006), was developed to analyze qualitative data at the system level, allowing researchers
to understand how technologies are used by teams of people in work or home settings.
(Chapter 9, “Data Analysis,” describes DiCoT in more detail.)
3.6 PARADIGMS, VISIONS, THEORIES, MODELS, AND FRAMEWORKS
A classic example of a conceptual framework that has been highly influential in HCI is
Don Norman’s (1988) explanation of the relationship between the design of a conceptual
model and a user’s understanding of it. The framework comprises three interacting compo-
nents: the designer, the user, and the system. Behind each of these are the following:
Designer’s Model The model the designer has of how the system should work
System Image How the system actually works, which is portrayed to the user through the
interface, manuals, help facilities, and so on
User’s Model How the user understands how the system works
The framework makes explicit the relationship between how a system should function,
how it is presented to users, and how it is understood by them. In an ideal world, users should
be able to carry out activities in the way intended by the designer by interacting with the sys-
tem image that makes it obvious what to do. If the system image does not make the designer’s
model clear to the users, it is likely that they will end up with an incorrect understanding of
the system, which in turn will increase the likelihood of their using the system ineffectively
and making errors. This has been found to happen often in the real world. By drawing atten-
tion to this potential discrepancy, designers can be made aware of the importance of trying
to bridge the gap more effectively.
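The gap that the framework highlights can be made concrete with a toy sketch. Norman's three components are from the text; the heating-control example, the dictionaries, and the `gap` function are entirely our own hypothetical illustration:

```python
# A toy rendering of Norman's three components (our own construction):
# the designer intends a control to mean one thing, the system image is
# what the interface actually shows, and the user's model is what the
# user infers from that image.
designer_model = {"flame_icon": "boost heating for one hour"}
system_image = {"flame_icon": "flame symbol, no caption"}
users_model = {"flame_icon": "heating is currently on"}  # a plausible misreading

def gap(designer: dict, user: dict) -> list[str]:
    """Report controls where the user's understanding diverges from the design intent."""
    return [k for k in designer if designer[k] != user.get(k)]

print(gap(designer_model, users_model))  # ['flame_icon']
```

When the system image is too thin (a bare symbol with no caption), the user's model drifts from the designer's model, which is exactly the discrepancy the framework asks designers to watch for.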
To summarize, paradigms, visions, theories, models, and frameworks are not mutually
exclusive, but rather they overlap in their way of conceptualizing the problem and design
space, varying in their level of rigor, abstraction, and purpose. Paradigms are overarching
approaches that comprise a set of accepted practices and framing of questions and phe-
nomena to observe; visions are scenarios of the future that set up challenges, inspirations,
and questions for interaction design research and technology development; theories tend to
be comprehensive, explaining human-computer interactions; models are an abstraction that
simplify some aspect of human-computer interaction, providing a basis for designing and
evaluating systems; and frameworks provide a set of core concepts, questions, or principles
to consider when designing for a user experience or analyzing data from a user study.
DILEMMA
Who Is in Control?
A recurrent theme in interaction design, especially in the current era of AI-based systems, is
who should be in control at the interface level. The different interaction types vary in terms
of how much control a user has and how much the computer has. While users are primarily
in control when instructing direct manipulation interfaces, they are less so with responding-type interfaces, such as sensor-based and context-aware environments where the system takes the
initiative to act. User-controlled interaction is based on the premise that people enjoy mastery
and being in control. It assumes that people like to know what is going on, be involved in the
action, and have a sense of power over the computer.
In contrast, autonomous and context-aware control assumes that having the environ-
ment monitor, recognize, and detect deviations in a person’s behavior can enable timely, help-
ful, and even critical information to be provided when considered appropriate (Abowd and
Mynatt, 2000). For example, elderly people’s movements can be detected in the home and
emergency or care services alerted if something untoward happens to them that might other-
wise go unnoticed, for instance, if they fall over and are unable to sound the alarm. But what
happens if a person chooses to take a rest in an unexpected area (on the carpet), which the
system detects as a fall? Will the emergency services be called out unnecessarily and cause
care givers undue worry? Will the person who triggered the alarm be mortified at triggering
a false alarm? And how will it affect their sense of privacy, knowing that their every move is
constantly being monitored?
Another concern is what happens when the locus of control switches between user and
system. For example, consider who is in control when using a GPS for vehicle navigation. At
the beginning, the driver is very much in control, issuing instructions to the system as to where
to go and what to include, such as highways, gas stations, and traffic alerts. However, once on
the road, the system takes over and is in control. People often find themselves slavishly follow-
ing what the GPS tells them to do, even though common sense suggests otherwise.
To what extent do you need to be in control in your everyday and working life? Are you
happy to let technology monitor and decide what you need or might be interested in know-
ing, or do you prefer to tell it what you want to do? What do you think of autonomous cars
that drive for you? While it might be safer and more fuel-efficient, will it take the pleasure
out of driving?
A tongue-in-cheek video made by Superflux, called Uninvited Guests, shows who is very
much in control when a man is given lots of smart gadgets by his children for his birthday to
help him live more healthily: https://vimeo.com/128873380.
In-depth Activity
The aim of this in-depth activity is for you to think about the appropriateness of differ-
ent kinds of conceptual models that have been designed for similar physical and digital
information artifacts.
Compare the following:
• A paperback book and an ebook
• A paper-based map and a smartphone map app
What are the main concepts and metaphors that have been used for each? (Think about the
way time is conceptualized for each of them.) How do they differ? What aspects of the paper-
based artifact have informed the digital app? What is the new functionality? Are any aspects
of the conceptual model confusing? What are the pros and cons?
Summary
This chapter explained the importance of conceptualizing the problem and design spaces before
trying to build anything. It stressed throughout the need to be explicit about the claims and
assumptions behind design decisions that are suggested. It described an approach to formu-
lating a conceptual model and explained the evolution of interface metaphors that have been
designed as part of the conceptual model. Finally, it considered other ways of conceptualizing
interaction in terms of interaction types, paradigms, visions, theories, models, and frameworks.
Key Points
• A fundamental aspect of interaction design is to develop a conceptual model.
• A conceptual model is a high-level description of a product in terms of what users can do
with it and the concepts they need to understand how to interact with it.
• Conceptualizing the problem space in this way helps designers specify what it is they are
doing, why, and how it will support users in the way intended.
• Decisions about conceptual design should be made before commencing physical design
(such as choosing menus, icons, dialog boxes).
• Interface metaphors are commonly used as part of a conceptual model.
• Interaction types (for example, conversing or instructing) provide a way of thinking about
how best to support the activities users will be doing when using a product or service.
• Paradigms, visions, theories, models, and frameworks provide different ways of framing
and informing design and research.
Further Reading
Here we recommend a few seminal readings on interaction design and the user experience
(in alphabetical order).
DOURISH, P. (2001) Where the Action Is. MIT Press. This book presents an approach for
thinking about the design of user interfaces and user experiences based on the notion of
embodied interaction. The idea of embodied interaction reflects a number of trends that have
emerged in HCI, offering new sorts of metaphors.
JOHNSON, J. and HENDERSON, A. (2012) Conceptual Models: Core to Good Design.
Morgan and Claypool Publishers. This short book, in the form of a lecture, provides a com-
prehensive overview of what is a conceptual model using detailed examples. It outlines how
to construct one and why it is necessary to do so. It is cogently argued and shows how and
where this design activity can be integrated into interaction design.
JU, W. (2015) The Design of Implicit Interactions. Morgan and Claypool Publishers. This
short book, in the form of a lecture, provides a new theoretical framework to help design
smart, automatic, and interactive devices by examining the small interactions that we engage
in our everyday lives, often without any explicit communication. It puts forward the idea of
implicit interaction as a central concept for designing future interfaces.
INTERVIEW with Albrecht Schmidt
Albrecht Schmidt is professor of human-
centered ubiquitous media in the com-
puter science department of the Ludwig-
Maximilians-Universität München in Ger-
many. He studied computer science in Ulm
and Manchester and received a PhD from
Lancaster University, United Kingdom,
in 2003. He held several prior academic
positions at different universities, includ-
ing Stuttgart, Cambridge, Duisburg-Essen,
and Bonn. He also worked as a researcher
at the Fraunhofer Institute for Intelligent
Analysis and Information Systems (IAIS)
and at Microsoft Research in Cambridge.
In his research, he investigates the inherent
complexity of human-computer interaction
in ubiquitous computing environments,
particularly in view of increasing computer
intelligence and system autonomy. Albrecht
has actively contributed to the scientific
discourse in human-computer interaction
through the development, deployment, and
study of functional prototypes of interac-
tive systems and interface technologies in
different real world domains. Most recently,
he focuses on how information technol-
ogy can provide cognitive and perceptual
support to amplify the human mind.
How do you think future visions inspire
research in interaction design? Can you
give an example from your own work?
Envisioning the future is key to research in
human-computer interaction. In contrast
to traditional fields that discover phenom-
ena (such as physics or sociology), research
in interaction design is constructive and
creates new things that potentially change
our world. Research in interaction design
also analyzes the world and aims to under-
stand phenomena but mainly as a means
to inspire and guide innovations. A major
aspect of research is then to create concrete
designs, build concepts and prototypes,
and evaluate them.
Future visions are an excellent way to
describe the big picture of a future where
we still have to invent and implement the
details. A vision enables us to communicate
the overall goal for which we are aiming.
In formulating the vision, we have to con-
textualize our ideas, link them to practices
in our lives, and describe the anticipated
impact on individuals and society. A pre-
requisite for formulating a coherent future
vision is a good understanding of the prob-
lems that we want to address. If formulated
well, the vision shows a clear direction,
but it leaves room for researchers in the
community to make their own interpreta-
tion. A well-formulated future vision also
leaves room for individuals to align their
research efforts with the goal or to criticize
it fundamentally through their research.
We have proposed the vision of ampli-
fying human perception and cognition
through digital technologies (see Schmidt,
2017a; 2017b). This vision emerged from
our various concrete research prototypes
over the last 10 years. We realized that
many of the prototypes and technologies
we developed were pointed in a similar
direction: enabling superhuman abilities
through devices and applications. At the
same time, we demonstrate why amplify-
ing human abilities is a timely endeavor,
in particular given the current advances in
artificial intelligence, in sensing technol-
ogies, as well as in personal display devices.
For our group and for colleagues with
whom we work, this vision has become a
means for inspiring new ideas, for investi-
gating relevant areas for potential innova-
tion systematically, and for assessing ideas
early on.
Why do metaphors persist in HCI?
Good metaphors allow people to transfer
their understanding and skills from another
domain in the real world to interaction
with computers and data. Good metaphors
are abstract enough to persist over time,
but concrete enough to simplify the use
of computers. Early metaphors included
computers as an advanced typewriter
and computers as intelligent assistants. Such
metaphors help in the design process to
create understandable interaction concepts
and user interfaces. A designer can take
their idea for a user interface or an inter-
action concept and evaluate it in the light
of the metaphor. They can assess if the
interaction is understandable for people
familiar with the concept or technologies
on which the metaphor is based. Meta-
phors often suggest interaction styles and
hence can help to create interfaces that are
more consistent and interaction designs that can be used intuitively, without explanation (where intuition, in this case, is the implicit understanding of the metaphor).
Metaphors for which the underlying
concept has disappeared from everyday
usage may still persist. In many cases, users
will not know the original concept from
their own experience (for example, a type-
writer), but have grown up with technol-
ogies using the metaphor. For a metaphor
to persist, it must remain conducive and
helpful over time for new users as well as
for experienced ones.
What do you think of the rise of AI and
automation? Do you think there is a role
for HCI, and if so, what?
Advancements in AI and in automation
are exciting. They have the potential to
empower humans to do things, to think
things, and to experience things we cannot
even imagine right now. However, the key
to unlocking this potential is to create effi-
cient ways for interacting with artificial
intelligence. Meaningful automation and
intelligent systems always have boundaries
and intersections with human action. For
example, an autonomous car will transport
a human, a drone will deliver a parcel to a
person, an automated kitchen will prepare
a meal for a family, and large-scale data
analytics in companies will lead to better
services for their customers. With intelli-
gent systems and smart services taking a
more active role through artificial intelli-
gence, the way in which interaction and
interfaces are designed becomes even more
crucial. Creating a positive user experience
in the presence of artificial intelligence is
a challenge where new visions and meta-
phors are required.
One concept that we suggested is the
notion of intervention user interfaces
(Schmidt and Herrmann, 2017). The basic
expectation is that in the future many
intelligent systems in our environment will
work just fine, without any human inter-
action. However, to stay in control and to
tailor the system to current and unforesee-
able needs, as well as to customize the user
experience, human interventions should be
easily possible. Designing interaction con-
cepts for interventions and user interfaces
that empower humans to make the most
of a system driven by artificial intelligence
is a huge challenge and includes many ba-
sic research questions. Getting the interac-
tion with AI right, basically finding ways
for humans to harness the power of AI for
what they want to do, is as important as
developing the underlying algorithms. One
without the other is of very limited value.
What do you see are the challenges ahead
for HCI at scale?
There are many challenges ahead in the
context of autonomous systems and
artificial intelligence as outlined earlier.
Closely related to this is human data inter-
action beyond interactive visualization.
How can we empower humans to work
with big and unstructured data? Here is a
concrete example: I had a discussion with
medical professionals this morning. For a
specific cancer type, there are several thou-
sand publications readily available. Many
of them may have similar results, and
others may have conflicting ones. Reading
all of the publications in their current form
is an intractable problem for a human
reader, as it would take too long and
would overload a person’s working mem-
ory. The simple question that resulted from
the discussion is: what would a system and
interface look like that uses AI to prepro-
cess 10,000 papers, allows interactive pre-
sentation of relevant content, and enables
humans to make sense of the state of the
art and come up with their own hypotheses?
Preferably, the interface would support the
person to do this in a few hours, rather
than in their entire lifetime.
Another challenge at the societal scale is
to understand the long-term impact of interactive systems that we create. So far, this has been very much trial and error. Providing unlimited and easy-to-use mass communication
to individuals without journalistic training
has changed how we read news. Personal
communication devices and instant mes-
saging have altered communication pat-
terns in families and classrooms. Working
in the office using a computer in order to
create texts is reducing our physical move-
ments. The way that we design interactive
systems, the things we make easy or hard
to use, and the modalities that we choose
in our interaction design have inevitably
resulted in long-term impacts on people.
With the current methods and tools in
HCI, we are well equipped to do a great job
in developing easy-to-use systems with an
amazing short-term user experience for the
individual. However, looking at upcoming
major innovations in mobility and health-
care technologies, the interfaces we design
may have many more consequences. One
major challenge at scale is to design for a
longer-term user experience (months to
years) on a societal scale. Here, we still have to research and invent the necessary methods and tools.
Chapter 4
COGNITIVE ASPECTS
Objectives
The main goals of the chapter are to accomplish the following:
• Explain what cognition is and why it is important for interaction design.
• Discuss what attention is and its effects on our ability to multitask.
• Describe how memory can be enhanced through technology aids.
• Show the difference between various cognitive frameworks that have been applied
to HCI.
• Explain what mental models are.
• Enable you to elicit a mental model and understand what it means.
4.1 Introduction
Imagine that it is getting late in the evening and you are sitting in front of your laptop. You
have a report to complete by tomorrow morning, but you are not getting very far with it.
You begin to panic and start biting your nails. You see two text messages flash up on your
smartphone. You instantly abandon your report and cradle your smartphone to read them.
One is from your mother, and the other is from your friend asking if you want to go out for a
drink. You reply immediately to both of them. Before you know it, you’re back on Facebook
to see whether any of your friends have posted anything about the party that you wanted to go to but had to decline. Your phone rings, and you see that it's your dad calling. You answer it, and
he asks if you have been watching the football game. You say that you are too busy working
toward a deadline, and he tells you that your team has just scored. You chat with him for a
while and then say you have to get back to work. You realize 30 minutes have passed, and
you return your attention to your report. But before you realize it, you click your favorite
sports site to check the latest score of the football game and discover that your team has just
scored again. Your phone starts buzzing. Two new WhatsApp messages are waiting for you.
And on it goes. You glance at the time on your laptop. It is midnight. You really are in a panic
now and finally close everything down except your word processor.
4.1 Introduction
4.2 What Is Cognition?
4.3 Cognitive Frameworks
In the past 10–15 years, it has become increasingly common for people to be switching
their attention constantly among multiple tasks. The study of human cognition can help us
understand the impact of multitasking on human behavior. It can also provide insights into
other types of digital behaviors, such as decision-making, searching, and designing when
using computer technologies by examining human abilities and limitations.
This chapter covers these aspects by examining the cognitive aspects of interaction design.
It considers what humans are good and bad at, and it shows how this knowledge can inform
the design of technologies that both extend human capabilities and compensate for human
weaknesses. Finally, relevant cognitive theories, which have been applied in HCI to inform
technology design, are described. (Other ways of conceptualizing human behavior that focus
on the social and emotional aspects of interaction are presented in the following two chapters.)
4.2 What Is Cognition?
There are many different kinds of cognition, such as thinking, remembering, learning, day-
dreaming, decision-making, seeing, reading, writing, and talking. A well-known way of dis-
tinguishing between different modes of cognition is in terms of whether it is experiential or
reflective (Norman, 1993). Experiential cognition is a state of mind where people perceive,
act, and react to events around them intuitively and effortlessly. It requires reaching a certain
level of expertise and engagement. Examples include driving a car, reading a book, hav-
ing a conversation, and watching a video. In contrast, reflective cognition involves mental
effort, attention, judgment, and decision-making, which can lead to new ideas and creativity.
Examples include designing, learning, and writing a report. Both modes are essential for everyday life. Another popular way of describing cognition is in terms of fast and slow thinking (Kahneman, 2011). Fast thinking is similar to Don Norman's experiential mode insofar as it
is instinctive, reflexive, and effortless, and it has no sense of voluntary control. Slow thinking,
as the name suggests, takes more time and is considered to be more logical and demanding,
and it requires greater concentration. The difference between the two modes is easy to see
when asking someone to give answers to the following two arithmetic equations:
2 + 2 = ?
21 × 19 = ?
The former can be done by most adults in a split second without thinking, while the lat-
ter requires much mental effort; many people need to externalize the task to be able to com-
plete it by writing it down on paper and using the long multiplication method. Nowadays,
many people simply resort to fast thinking by typing the numbers to be added or multiplied
into a calculator app on a smartphone or computer.
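The difference is easy to verify with a little externalized arithmetic. The two sums are from the text; the decomposition in the comment is one common mental shortcut, offered here as our own illustration:

```python
# Fast thinking: 2 + 2 is retrieved, not computed.
assert 2 + 2 == 4

# Slow thinking: 21 x 19 usually needs externalizing, for example via
# the identity 21 * 19 = 21 * 20 - 21, the kind of step a person would
# write down on paper or delegate to a calculator app.
product = 21 * 20 - 21
assert product == 21 * 19
print(product)  # 399
```

The same answer that takes a person pen and paper is, of course, instantaneous for the calculator app, which is why so many people now offload the slow-thinking step entirely.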
Other ways of describing cognition are in terms of the context in which it takes place,
the tools that are employed, the artifacts and interfaces that are used, and the people involved
(Rogers, 2012). Depending on when, where, and how it happens, cognition can be distrib-
uted, situated, extended, and embodied. Cognition has also been described in terms of spe-
cific kinds of processes (Eysenck and Brysbaert, 2018). These include the following:
• Attention
• Perception
• Memory
• Learning
• Reading, speaking, and listening
• Problem-solving, planning, reasoning, and decision-making
It is important to note that many of these cognitive processes are interdependent: several
may be involved for a given activity. It is rare for one to occur in isolation. For example, when
reading a book one has to attend to the text, perceive and recognize the letters and words,
and try to make sense of the sentences that have been written.
In the following sections we describe the main kinds of cognitive processes in more
detail, followed by a summary box highlighting the core design implications for each. The
most relevant for interaction design are attention and memory, which we describe in the
greatest detail.
4.2.1 Attention
Attention is central to everyday life. It enables us to cross the road without being hit by a
car or bicycle, notice when someone is calling our name, and be able to text while at the
same time watching TV. It involves selecting things on which to concentrate, at a point in
time, from the range of possibilities available, allowing us to focus on information that is
relevant to what we are doing. The extent to which this process is easy or difficult depends
on (1) whether someone has clear goals and (2) whether the information they need is salient
in the environment.
4.2.1.1 Clear Goals
If someone knows exactly what they want to find out, they try to match this with the infor-
mation that is available. For example, when someone has just landed at an airport after a
long flight, which did not have Wi-Fi onboard, and they want to find out who won the World
Cup, they might scan the headlines on their smartphone or look at breaking news on a public
TV display inside the airport. When someone is not sure exactly what they are looking for,
they may browse through information, allowing it to guide their attention to interesting or
salient items. For example, when going to a restaurant, someone may have the general goal of
eating a meal but only a vague idea of what they want to eat. They peruse the menu to find
things that whet their appetite, letting their attention be drawn to the imaginative descriptions of various dishes. After scanning through the possibilities and imagining what each dish might be like, as well as considering other factors (such as cost, who they are with, what the specials are, what the waiter recommends, whether they want a two- or three-course meal, and so on), they then decide.
4.2.1.2 Information Presentation
The way information is displayed can also greatly influence how easy or difficult it is to
comprehend appropriate pieces of information. Look at Figure 4.1, and try the activity
(based on Tullis, 1997). Here, the information-searching tasks are precise, requiring spe-
cific answers.
South Carolina

City        Motel/Hotel        Area code  Single  Double  Phone
Charleston  Best Western       803        $126    $130    747-0961
Charleston  Days Inn           803        $118    $124    881-1000
Charleston  Holiday Inn N      803        $136    $146    744-1621
Charleston  Holiday Inn SW     803        $133    $147    556-7100
Charleston  Howard Johnsons    803        $131    $136    524-4148
Charleston  Ramada Inn         803        $133    $140    774-8281
Charleston  Sheraton Inn       803        $134    $142    744-2401
Columbia    Best Western       803        $129    $134    796-9400
Columbia    Carolina Inn       803        $142    $148    799-8200
Columbia    Days Inn           803        $123    $127    736-0000
Columbia    Holiday Inn NW     803        $132    $139    794-9440
Columbia    Howard Johnsons    803        $125    $127    772-7200
Columbia    Quality Inn        803        $134    $141    772-0270
Columbia    Ramada Inn         803        $136    $144    796-2700
Columbia    Vagabond Inn       803        $127    $130    796-6240
(a)

Pennsylvania

Bedford Motel/Hotel: Crinaline Courts (814) 623-9511 S: $118 D: $120
Bedford Motel/Hotel: Holiday Inn (814) 623-9006 S: $129 D: $136
Bedford Motel/Hotel: Midway (814) 623-8107 S: $121 D: $126
Bedford Motel/Hotel: Penn Manor (814) 623-8177 S: $119 D: $125
Bedford Motel/Hotel: Quality Inn (814) 623-5189 S: $123 D: $128
Bedford Motel/Hotel: Terrace (814) 623-5111 S: $122 D: $124
Bradley Motel/Hotel: De Soto (814) 362-3567 S: $120 D: $124
Bradley Motel/Hotel: Holiday House (814) 362-4511 S: $122 D: $125
Bradley Motel/Hotel: Holiday Inn (814) 362-4501 S: $132 D: $140
Breezewood Motel/Hotel: Best Western Plaza (814) 735-4352 S: $120 D: $127
Breezewood Motel/Hotel: Motel 70 (814) 735-4385 S: $116 D: $118
(b)
Figure 4.1 Two different ways of structuring the same information at the interface level. One makes
it much easier to find information than the other.
Source: Used courtesy of Dr. Tom Tullis
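The contrast Tullis measured is about grouping, not density, and it can be sketched in a few lines. The sample records are taken from Figure 4.1; the two formatting functions are our own illustration of the two screen layouts:

```python
# Two renderings of the same hotel records, echoing the two screens in
# Figure 4.1. The data rows come from the figure; the formatting
# functions are our own sketch of the layouts.
records = [
    ("Charleston", "Best Western", "803", "$126", "$130", "747-0961"),
    ("Charleston", "Days Inn",     "803", "$118", "$124", "881-1000"),
    ("Columbia",   "Quality Inn",  "803", "$134", "$141", "772-0270"),
]

def columnar(rows):
    """Vertically aligned columns: easy to scan (the faster screen)."""
    widths = [max(len(r[i]) for r in rows) for i in range(len(rows[0]))]
    return "\n".join(
        "  ".join(field.ljust(w) for field, w in zip(row, widths))
        for row in rows
    )

def run_on(rows):
    """Inline, one record per line: harder to scan (the slower screen)."""
    return "\n".join(
        f"{city} Motel/Hotel: {name} ({code}) S: {s} D: {d} {phone}"
        for city, name, code, s, d, phone in rows
    )

print(columnar(records))
print()
print(run_on(records))
```

Both renderings carry identical information; only the columnar one lets the eye drop straight down a category, which is why it supports much faster search.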
4.2.1.3 Multitasking and Attention
As mentioned in the introduction to this chapter, many people now multitask, frequently
switching their attention among different tasks. For example, a study of teenage multitasking found that the majority of teenagers multitask most or some of the time while listening to music, watching TV, using a computer, or reading (Rideout et al., 2010). The proportion is probably even higher now, considering their use of smartphones while walking,
talking, and studying. While attending a presentation at a conference, we witnessed some-
one deftly switch between four ongoing instant message chats (one at the conference, one at
school, one with friends, and one at her part-time job), read, answer, delete, and place all new
messages in various folders of her two email accounts, and check and scan her Facebook and
her Twitter feeds, all while appearing to listen to the talk, take some notes, conduct a search
on the speaker’s background, and open up their publications. When she had a spare moment,
she played the game Patience. It was exhausting just watching her for a few minutes. It was
as if she were capable of living in multiple worlds simultaneously while not letting a moment
go to waste. But how much did she really take in of the presentation?
Is it possible to perform multiple tasks without one or more of them being detrimentally
affected? There has been much research on the effects of multitasking on memory and attention
(Burgess, 2015). The general finding is that it depends on the nature of the tasks and how much
attention each demands. For example, listening to gentle music while working can help people
tune out background noise, such as traffic or other people talking, and help them concentrate
on what they are doing. However, if the music is loud, like heavy metal, it can be distracting.
Individual differences have also been found. For example, the results of a series of
experiments comparing heavy with light multitaskers showed that heavy media multitaskers
(such as the person described above) were more prone to being distracted by the multiple
streams of media they were viewing than those who multitask infrequently. The latter were
found to be better at allocating their attention when faced with competing distractions
ACTIVITY 4.1
Look at the top screen of Figure 4.1 and (1) find the price for a double room at the Quality
Inn in Columbia, South Carolina, and (2) find the phone number of the Days Inn in Charles-
ton, South Carolina. Then look at the bottom screen in Figure 4.1 and (1) find the price of a
double room at the Holiday Inn in Bradley, Pennsylvania, and (2) find the phone number of
the Quality Inn in Bedford, Pennsylvania. Which took longer to do?
In an early study, Tullis found that the two screens produced quite different results: It
took an average of 3.2 seconds to search the top screen, while it took an average of 5.5 sec-
onds to find the same kind of information in the bottom screen. Why is this so, considering
that both displays have the same density of information relative to the background?
Comment
The primary reason for the disparity is the way that the characters are grouped in the display.
In the top screen, they are grouped into vertical categories of information (that is, place, type
of accommodation, phone number, and rates), and this screen has space in between the col-
umns of information. In the bottom screen, the information is bunched together, making it
much more difficult to search.
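The grouping effect Tullis measured can be mimicked in plain text. A minimal Python sketch, using three of the figure's Pennsylvania listings (the field widths are illustrative, not taken from the figure):

```python
# Three listings from Figure 4.1: (city, hotel, phone, single rate, double rate).
records = [
    ("Bedford", "Quality Inn", "(814) 623-5189", 123, 128),
    ("Bradley", "Holiday Inn", "(814) 362-4501", 132, 140),
    ("Breezewood", "Motel 70", "(814) 735-4385", 116, 118),
]

def columns(rows):
    """Group fields into aligned columns with space between them (the top screen)."""
    return "\n".join(
        f"{city:<12}{hotel:<14}{phone:<16}S: ${s}  D: ${d}"
        for city, hotel, phone, s, d in rows
    )

def run_on(rows):
    """Bunch the same information together with no visual grouping (the bottom screen)."""
    return "\n".join(
        f"{city} Motel/Hotel: {hotel} {phone} S: ${s} D: ${d}"
        for city, hotel, phone, s, d in rows
    )

print(columns(records))
print(run_on(records))
```

Scanning the first layout, the eye can run straight down one column to find a rate or a phone number; in the second, every field has to be read in sequence.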
4 Cognitive Aspects
(Ophir et al., 2009). This suggests that people who are heavy multitaskers are likely to be those
who are easily distracted and find it difficult to filter out irrelevant information. However, a
more recent study by Danielle Lottridge et al. (2015) found that the picture may be more complex: while heavy multitaskers are easily distracted, they can also put this to good use if the distracting sources are relevant to the task at hand. Lottridge et al. conducted a study in which participants wrote an essay while being fed either relevant or irrelevant information. Relevant information sources did not hurt the essay writing, whereas irrelevant ones negatively affected task performance. In summary, they found that multitasking can be both good and bad; it depends on what you are distracted by and how relevant it is to the task at hand.
Multitasking is thought to be detrimental to human performance because it overloads people's capacity to focus. Having switched attention away from what they are working on, people need additional effort to get back into the original task and to remember where they were in the ongoing activity. Thus, the time to complete a task can be
significantly increased. A study of completion rates of coursework found that students who were
involved in instant messaging took up to 50 percent longer to read a passage from a textbook com-
pared with those who did not instant message while reading (Bowman et al., 2010). Multitasking
can also result in people losing their train of thought, making errors, and needing to start over.
Nevertheless, many people are expected to multitask in the workplace nowadays, such as
in hospitals, as a result of the introduction of ever more technology (for example, multiple
screens in an operating room). The technology is often introduced to provide new kinds of
real-time and changing information. However, this usually requires the constant attention of
clinicians to check whether any of the data is unusual or unexpected. Managing the ever-
increasing information load requires professionals, like clinicians, to develop new attention
and scanning strategies, looking out for anomalies in data visualizations and listening for
audio alarms alerting them to potential dangers. Interaction designers have tried to make this
easier by including the use of ambient displays that come on when something needs atten-
tion—flashing arrows to direct attention to a particular type of data or history logs of recent
actions that can be quickly examined to refresh one’s memory of what has just happened on a
given screen. However, how well clinicians manage to switch and divide their attention among
different tasks in tech-rich environments has barely been researched (Douglas et al., 2017).
Source: Chris Wildt / Cartoon Stock
DILEMMA
Is It OK to Use a Phone While Driving?
There has been considerable debate about whether drivers should be able to talk or text on
their phones at the same time as driving (see Figure 4.2). People talk on their phones while
walking, so why not be able to do the same thing when driving? The main reasons are that
driving is more demanding, drivers are more prone to being distracted, and there is a greater
chance of causing accidents (however, it is also the case that some people, when using their
phones, walk out into a road without looking to see whether any cars are coming).
A meta-review of research that has investigated mobile phone use in cars has found that
drivers’ reaction times are longer to external events when engaged in phone conversations
(Caird et al., 2018). Drivers who use phones have also been found to be much poorer at stay-
ing in their lane and maintaining the correct speed (Stavrinos et al., 2013). The reason for this
is that drivers on a phone rely more on their expectations about what is likely to happen next
and, as a result, respond much more slowly to unexpected events, such as the car in front of
them stopping (Briggs et al., 2018). Moreover, phone conversations cause the driver to visualize what is being talked about. The driver may also imagine the facial expression of the
person to whom they are speaking. The visual imagery involved competes for the processing
resources also needed to enable the driver to notice and react to what is in front of them on the
road. The idea that using a hands-free device is safer than actually holding the phone to carry
out a conversation is false, as the same type of cognitive processing takes place both ways.
(Continued)
Figure 4.2 How distracting is it to text on the phone while driving?
Source: Tetra Images / Alamy Stock Photo
In several contexts, therefore, multitasking can be detrimental to performance, such as texting or speaking on the phone while driving. The cost of switching attention varies from person to person and with the information resources being switched between. When developing new
technology to provide more information for people in their work settings, it is important to
consider how best to support them so that they can easily switch their attention back and forth
Design Implications
Attention
• Consider context. Make information salient when it requires attention at a given stage
of a task.
• Use techniques to achieve this when designing visual interfaces, such as animated graphics,
color, underlining, ordering of items, sequencing of different information, and spac-
ing of items.
• Avoid cluttering visual interfaces with too much information. This applies especially to the
use of color and graphics: It is tempting to use lots of these attributes, which results in a
mishmash of media that is distracting and annoying rather than helping the user attend to
relevant information.
• Consider designing different ways of supporting effective switching and returning to a
particular interface. This could be done subtly, such as the use of pulsing lights gradually
getting brighter, or abruptly, such as the use of alerting sounds or voice. How much com-
peting visual information or ambient sound is present also needs to be considered.
It has also been found that drivers who engage in conversation with their passengers experience similar negative effects. However, there is a difference between having a conversation
with a passenger sitting next to the driver and one with a person located remotely. The driver
and front-seat passenger can observe jointly what is happening in front of them on the road and
will moderate or cease their conversation in order to switch their full attention to a potential or
actual hazard. Someone on the other end of a phone, however, is not privy to what the driver
is seeing and will carry on the conversation. They might have just asked “Where did you leave
the spare set of keys?” and caused the driver mentally to search for them in their home, making
it more difficult for them to switch their full attention back to what is happening on the road.
Because of these hazardous problems, many countries have banned the use of phones
while driving. To help drivers resist the temptation to answer a phone that rings or glance at an
incoming notification that pings, smartphone device manufacturers have been asked by some
governments to introduce a driver mode akin to the airplane mode that could automatically
lock down a smartphone, preventing access to apps, while disabling the phone’s keyboard
when it detects that a person is driving. For example, the iPhone has now implemented this option.
among the multiple displays or devices and be able to return readily to what they were doing after
an interruption (for instance, the phone ringing or people entering their space to ask questions).
4.2.2 Perception
Perception refers to how information is acquired from the environment via the five senses (vision, hearing, taste, smell, and touch) and transformed into experiences of objects, events, sounds, and tastes (Roth, 1986). We also have the sense of kinesthesia, which is the awareness of the position and movement of parts of the body, signaled by internal sensory organs (known as proprioceptors) located in the muscles and joints.
Perception is complex, involving other cognitive processes such as memory, attention, and
language. Vision is the most dominant sense for sighted individuals, followed by hearing and
touch. With respect to interaction design, it is important to present information in a way that
can be readily perceived in the manner it was intended.
As was demonstrated in Activity 4.1, grouping items together and leaving spaces between
them can aid attention because it breaks up the information. Chunks of information are easier to scan than one long run of text that all looks the same. In addition, many designers recommend using blank space (more commonly known as white space)
when grouping objects, as it helps users to perceive and locate items more easily and quickly
(Malamed, 2009). In a study comparing web pages displaying the same amount of informa-
tion but structured using different graphical methods (see Figure 4.3), it was found that
people took less time to locate items from information that was grouped using a border than
Design Implications
Perception
Representations of information need to be designed to be perceptible and recognizable across
different media.
• Design icons and other graphical representations so that users can readily distinguish
between them.
• Obvious separators and white space are effective visual methods for grouping information
that make it easier to perceive and locate items.
• Design audio sounds to be readily distinguishable from one another so that users can per-
ceive how they differ and remember what each one represents.
• Research proper color contrast techniques when designing an interface, especially when
choosing a color for text so that it stands out from the background. For example, it is okay
to use yellow text on a black or blue background, but not on a white or green background.
• Haptic feedback should be used judiciously. The kinds of haptics used should be easily
distinguishable so that, for example, the sensation of squeezing is represented in a tactile
form that is different from the sensation of pushing. Overuse of haptics can cause confu-
sion. Apple iOS suggests providing haptic feedback in response to user-initiated actions,
such as when the action of unlocking a vehicle using a smartwatch has been completed.
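The guidance on color contrast can be checked quantitatively using the contrast-ratio formula from the WCAG accessibility guidelines (the formula comes from WCAG, not from this chapter; WCAG 2.x recommends a ratio of at least 4.5:1 for normal text). A minimal sketch:

```python
def channel(c8):
    """Linearize one sRGB channel (0-255) per the WCAG 2.x formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (R, G, B) color."""
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors; >= 4.5 passes for normal text."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

yellow, black, white = (255, 255, 0), (0, 0, 0), (255, 255, 255)
print(round(contrast_ratio(yellow, black), 2))  # high ratio: yellow on black reads well
print(round(contrast_ratio(yellow, white), 2))  # very low ratio: yellow on white does not
```

The numbers agree with the rule of thumb in the list above: yellow text on black passes comfortably, while yellow on white fails badly.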
Figure 4.3 Two ways of structuring information on a web page
Source: Weller (2004)
when using color contrast (Weller, 2004). The findings suggest that using contrasting colors
in this manner may not be a good way to group information on a screen, but that using bor-
ders is more effective (Galitz, 1997).
4.2.3 Memory
Memory involves recalling various kinds of knowledge that allow people to act appropri-
ately. For example, it allows them to recognize someone’s face, remember someone’s name,
recall when they last met them, and know what they said to them last.
It is not possible for us to remember everything that we see, hear, taste, smell, or touch,
nor would we want to, as our brains would get overloaded. A filtering process is used to
decide what information gets further processed and memorized. This filtering process, how-
ever, is not without its problems. Often, we forget things that we would like to remember
and conversely remember things that we would like to forget. For example, we may find it
difficult to remember everyday things, like people’s names, or scientific knowledge such as
mathematical formulae. On the other hand, we may effortlessly remember trivia or tunes that
cycle endlessly through our heads.
How does this filtering process work? Initially, encoding takes place, determining
which information is paid attention to in the environment and how it is interpreted. The
extent to which it takes place affects people’s ability to recall that information later.
The more attention that is paid to something and the more it is processed in terms of
thinking about it and comparing it with other knowledge, the more likely it is to be
remembered. For example, when learning about a topic, it is much better to reflect on it,
carry out exercises, have discussions with others about it, and write notes rather than pas-
sively reading a book or watching a video about it. Thus, how information is interpreted
when it is encountered greatly affects how it is represented in memory and how easy it is
to retrieve subsequently.
Another factor that affects the extent to which information can be subsequently retrieved
is the context in which it is encoded. One outcome is that sometimes it can be difficult for
people to recall information that was encoded in a different context from the one in which
they are at present. Consider the following scenario:
You are on a train and someone comes up to you and says hello. You don’t recognize this
person for a few moments, but then you realize it is one of your neighbors. You are only
used to seeing them in the hallway of your apartment building and seeing them out of
context makes this person initially difficult to recognize.
Another well-known memory phenomenon is that people are much better at recognizing
things than recalling them. Furthermore, certain kinds of information are easier to recognize
than others. In particular, people are good at recognizing thousands of pictures even if they
have only seen them briefly before. In contrast, people are not as good at remembering details
about the things they photograph when visiting places, such as museums. It seems that they
remember less about objects when they have photographed them than when they observe
them with the naked eye (Henkel, 2014). The reason for this is that the study participants
appeared to be focusing more on framing the photo and less on the details of the object being
photographed. Consequently, people don’t process as much information about an object
when taking photos of it compared with when they are actually looking at it; hence, they are
unable to remember as much about it later.
Increasingly, people rely on the Internet and their smartphones to act as cognitive pros-
theses. Smartphones with Internet access have become an indispensable extension of the
mind. Sparrow et al. (2011) showed how expecting to have readily available Internet access
reduces the need and hence the extent to which people attempt to remember the information
itself, while enhancing their memory for knowing where to find it online. Many people will
whip out a smartphone to find out who acted in a movie, the name of a book, or what year
a pop song was first released, and so on. Besides search engines, there are a number of other
cognitive prosthetic apps that instantly help people find out or remember something, such as
Shazam.com, the popular music recognition app.
4.2.3.1 Personal Information Management
The number of documents written, images created, music files recorded, video clips downloaded, emails with attachments saved, URLs bookmarked, and so on, increases every day. A
common practice is for people to store these files on a phone, on a computer, or in the cloud
with a view to accessing them later. This is known as personal information management
(PIM). The design challenge here is deciding which is the best way of helping users organize
their content so that it can be easily searched, for example, via folders, albums, or lists. The
solution should help users readily access specific items at a later date, for example, a par-
ticular image, video, or document. This can be difficult, however, especially when there are
thousands or hundreds of thousands of pieces of information available. How does someone
find that photo they took of their dog spectacularly jumping into the sea to chase a seagull,
which they believe was taken two or three years ago? It can take them ages to wade through
the hundreds of folders they have catalogued by date, name, or tag. Do they start by homing
in on folders for a given year, looking for events, places, or faces, or typing in a search term
to find the specific photo?
ACTIVITY 4.2
Try to remember the birthdays of all the members of your family and closest friends. How
many can you remember? Then try to describe the image/graphic of the latest app you
downloaded.
Comment
It is likely that you remembered the image, the colors, and the name of the app you down-
loaded much better than the birthdays of your family and friends—most people now rely on
Facebook or other online apps to remind them about such special dates. People are good at
remembering visual cues about things, for example, the color of items, the location of objects
(for example, a book being on the top shelf), and marks on an object (like a scratch on a
watch, a chip on a cup, and so on). In contrast, people find other kinds of information per-
sistently difficult to learn and remember, especially arbitrary material like phone numbers.
It can become frustrating if an item is not easy to locate, especially when users have to
spend lots of time opening numerous folders when searching for a particular image or an old
document, simply because they can’t remember what they called it or where they stored it.
How can we improve upon this cognitive process of remembering?
Naming is the most common means of encoding content, but trying to remember a name
someone created some time back can be difficult, especially if they have tens of thousands
of named files, images, videos, emails, and so forth. How might such a process be facilitated,
considering individual memory abilities? Ofer Bergman and Steve Whittaker (2016) have
proposed a model for helping people manage their “digital stuff” based on curation. The
model involves three interdependent processes: how to decide what personal information
to keep, how to organize that information when storing it, and which strategies to use to
retrieve it later. The first stage can be assisted by the system they use. For example, email,
texts, music, and photos are stored by default on many devices. Users have to decide whether
to place these in folders or delete them. In contrast, when browsing the web, they have to
make a conscious decision as to whether a site they are visiting is worth bookmarking as one
they might want to revisit later.
A number of ways of adding metadata to documents have been developed, includ-
ing time stamping, categorizing, tagging, and attribution (for example color, text, icon,
sound, or image). Surprisingly, however, the majority of people still prefer the old-
fashioned way of using folders for holding their files and other digital content. One
reason is that folders provide a powerful metaphor (see Chapter 3, “Conceptualizing
Interaction”) that people can readily understand—placing things that have something in
common into a container.
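The folder-versus-metadata trade-off can be sketched with a toy model (not any particular system's API): a file lives in exactly one folder, but it can carry any number of tags, so tag-based retrieval cuts across the folder hierarchy.

```python
# Toy personal-information store: each file sits in one folder but may have many tags.
files = {
    "dog_beach.jpg": {"folder": "Photos/2022", "tags": {"dog", "beach", "holiday"}},
    "dog_vet.pdf":   {"folder": "Documents",   "tags": {"dog", "receipts"}},
    "sunset.jpg":    {"folder": "Photos/2023", "tags": {"beach"}},
}

def in_folder(folder):
    """Folder lookup: finds only what was filed in that one container."""
    return sorted(name for name, meta in files.items() if meta["folder"] == folder)

def with_tag(tag):
    """Tag lookup: gathers everything labeled with the tag, regardless of folder."""
    return sorted(name for name, meta in files.items() if tag in meta["tags"])

print(in_folder("Photos/2022"))  # only the one photo filed there
print(with_tag("dog"))           # both dog files, across different folders
```

The folder lookup mirrors the container metaphor people find easy to understand; the tag lookup shows what metadata buys: one item can be reachable along several routes.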
A folder that is often seen on many users' desktops is one simply labeled "stuff." This
is where documents, images, and so forth, that don’t have an obvious place to go are often
placed but that people still want to keep somewhere. It has also been found that there is
a strong preference for scanning across and within folders when looking for something
rather than simply typing a term into a search engine (Bergman and Whittaker, 2016). Part of
the problem with using search engines is that it can be difficult to recall the name of the file
someone is seeking. This process requires more cognitive effort than navigating through a
set of folders.
To help users with searching, a number of search-and-find tools, such as Apple's Spotlight, enable them to type a partial name, or even just the first letter, of a file, which is then searched for throughout the entire system, including the content inside documents, emails, contacts, images, calendars, and apps. Figure 4.4 shows a partial list of
files that Spotlight matched to the word cognition, categorized in terms of documents, mail
and text messages, PDF documents, and so on.
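The matching step behind such a tool can be sketched as a case-insensitive substring test over an index of file names and extracted content (a toy index, not Apple's implementation):

```python
# Toy search index: file name -> text content extracted from the file.
index = {
    "chapter4_notes.txt": "Notes on cognition, attention and memory",
    "todo.txt":           "buy milk; email Sam",
    "Cognition.pdf":      "What is cognition? Perception, memory, learning",
}

def search(query):
    """Return files whose name or content contains the query, ignoring case."""
    q = query.lower()
    return sorted(
        name for name, text in index.items()
        if q in name.lower() or q in text.lower()
    )

print(search("cognition"))  # matches one file by name and one by content
```

The point for memory load is that the user need only supply a recognizable fragment; the system does the exhaustive matching that people cannot.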
4.2.3.2 Memory Load and Passwords
Phone, online, and mobile banking allow customers to carry out financial transactions, such
as paying bills and checking the balance of their accounts, at their convenience. One of the
problems confronting banks that provide these capabilities, however, is how to manage secu-
rity concerns, especially preventing fraudulent transactions.
Figure 4.4 Apple’s Spotlight search tool
BOX 4.1
The Problem with the Magical Number Seven, Plus or Minus Two
Perhaps the best-known finding in psychology (certainly the one that nearly all students
remember many years after they have finished their studies) is George Miller’s (1956) theory
that seven, plus or minus two, chunks of information can be held in short-term memory at any
one time. However, it is also one that has been misapplied in interaction design because several
designers assume that it means they should design user interfaces only to have seven, plus or
minus two, widgets on a screen, such as menus. In fact, however, this is a misapplication of the
phenomenon, as explained here.
By short-term memory, Miller meant a memory store in which information was assumed to
be processed when first perceived. By chunks of information, Miller meant a range of items such
as numbers, letters, or words. According to Miller’s theory, therefore, people’s immediate mem-
ory capacity is very limited. They are able to remember only a few words or numbers that they
have heard or seen. If you are not familiar with this phenomenon, try the following exercise:
Read the first set of numbers here (or get someone to read them to you), cover it up, and
then try to recall as many of the items as possible. Repeat this for the other sets.
• 3, 12, 6, 20, 9, 4, 0, 1, 19, 8, 97, 13, 84
• cat, house, paper, laugh, people, red, yes, number, shadow, broom, rain, plant, lamp,
chocolate, radio, one, coin, jet
• t, k, s, y, r, q, x, p, a, z, l, b, m, e
How many did you correctly remember for each set? Between five and nine, as suggested
by Miller’s theory?
Chunks of information can also be combined items that are meaningful. For example, it is
possible to remember the same number of two-word phrases like hot chocolate, banana split,
cream cracker, rock music, cheddar cheese, leather belt, laser printer, tree fern, fluffy duckling,
or cold rain. When these are all jumbled up (that is, split belt, fern crackers, banana laser,
printer cream, cheddar tree, rain duckling, or hot rock), however, it is much harder to remem-
ber as many chunks. This is mainly because the first set contains all meaningful two-word
phrases that have been heard before and that require less time to be processed in short-term
memory, whereas the second set is made up of completely novel phrases that don’t exist in the
real world. You need to spend time linking the two parts of the phrase together while trying
to memorize them. This takes more time and effort to achieve. Of course, it is possible to do if
you have time to spend rehearsing them, but if you are asked to do it having heard them only
once in quick succession, it is most likely that you will remember only a few.
So, how might people’s ability to remember only 7 ± 2 chunks of information that they
have just read or heard be usefully applied to interaction design? According to a survey by
Bob Bailey (2000), several designers have been led to believe the following guidelines and have
created interfaces based on them:
• Have only seven options on a menu.
• Display only seven icons on a menu bar.
• Never have more than seven bullets in a list.
• Place only seven tabs at the top of a website page.
• Place only seven items on a pull-down menu.
He points out that this is not how the principle should be applied. The reason is that these are
all items that can be scanned and rescanned visually and hence do not have to be recalled from
short-term memory. They don’t just flash up on the screen and disappear, requiring the user to
remember them before deciding which one to select. If you were asked to find an item of food
most people crave in the set of single words listed earlier, would you have any problem? No,
you would just scan the list until you recognized the one (chocolate) that matched the task
and then select it—just as people do when interacting with menus, lists, and tabs, regardless of
whether they consist of three or 30 items. What users are required to do here is not remember
as many items as possible, having only heard or seen them once in a sequence, but instead scan
through a set of items until they recognize the one they want. This is a quite different task.
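Chunking is also why interfaces display long identifiers, such as card or phone numbers, in short groups rather than as one unbroken string. A sketch (the group size of four is illustrative):

```python
def chunk(digits, size=4, sep=" "):
    """Break a long digit string into short groups that are easier to hold in memory."""
    return sep.join(digits[i:i + size] for i in range(0, len(digits), size))

print(chunk("4929123456781234"))  # "4929 1234 5678 1234" - four chunks, not 16 digits
```

Reading back four meaningful-sized groups stays comfortably within Miller's limit; sixteen separate digits do not.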
One solution has been to develop rigorous security measures whereby customers must
provide multiple pieces of information before gaining access to their accounts. This is called
multifactor authentication (MFA). The method requires a user to provide two or more pieces
of evidence that only they know, such as the following:
• Their ZIP code or postal code
• Their mother’s maiden name
• Their birthplace
• The last school they attended
• The first school they attended
• A password of between five and ten letters
• A memorable address (not their home)
• A memorable date (not their birthday)
Many of these are relatively easy to remember and recall since they are familiar to the
specific user. But consider the last two. How easy is it for someone to come up with such
memorable information and then be able to recall it readily? Perhaps the customer can give
the address and birthday of another member of their family as a memorable address and
date. But what about the request for a password? Suppose a customer selects the word inter-
action as a password—fairly easy to remember, yes? The problem is that banks do not ask
for the full password because of the danger that someone in the vicinity might overhear or observe it. Instead, they ask the customer to provide specific letters or numbers from it, like the
seventh followed by the fifth. Certainly, such information does not spring readily to mind.
Instead, it requires mentally counting each alphanumeric character of the password until
the desired one is reached. How long does it take you to determine the seventh letter of the
password interaction? How did you do it?
To make things harder, banks also randomize the questions they ask. Again, this is to pre-
vent someone else who is nearby from memorizing the sequence of information. However, it
also means that the customers themselves cannot learn the sequence of information required,
meaning that they have to generate different information each time.
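The counting that this challenge demands is trivial for software and hard for people. A sketch of how such a randomized position challenge might be generated (illustrative only; not any bank's actual scheme):

```python
import random

def make_challenge(password, k=2):
    """Pick k random 1-based character positions to ask the customer for."""
    positions = sorted(random.sample(range(1, len(password) + 1), k))
    answers = [password[p - 1] for p in positions]  # indexing does the 'counting' instantly
    return positions, answers

# The seventh and fifth letters of "interaction", as in the text:
pw = "interaction"
print(pw[7 - 1], pw[5 - 1])  # prints: c r
```

For the machine, retrieving the seventh letter is a single index operation; for the customer, it means mentally stepping through i-n-t-e-r-a-c, which is exactly the memory load the text describes.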
This requirement to remember and recall such information puts a big memory load on cus-
tomers. Some people find such a procedure quite nerve-racking and are prone to forget certain
pieces of information. As a coping strategy, they write down their details on a sheet of paper.
Having such an external representation at hand makes it much easier for them to read off the
necessary information rather than having to recall it from memory. However, it also makes them
vulnerable to the fraud the banks are trying to prevent should anyone else get ahold of that piece
of paper! Software companies have also developed password managers to help reduce memory
load. An example is LastPass (https://www.lastpass.com/), which is designed to remember all of
your passwords, meaning that you only have to remember one master password.
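The partial-credential scheme described above can be sketched in a few lines of Python. The function names and the choice of asking for two positions are purely illustrative, not any real bank's protocol:

```python
import secrets

def make_challenge(password: str, k: int = 2) -> list[int]:
    """Pick k distinct 1-indexed positions of the password to ask for."""
    positions = list(range(1, len(password) + 1))
    chosen = []
    for _ in range(k):
        # secrets rather than random, so the challenge is unpredictable
        pos = secrets.choice(positions)
        positions.remove(pos)
        chosen.append(pos)
    return chosen

def verify(password: str, challenge: list[int], answers: list[str]) -> bool:
    """Check the customer's answers against the stored password."""
    return all(password[pos - 1] == ans for pos, ans in zip(challenge, answers))

# The text's example: the seventh then the fifth letter of "interaction"
assert verify("interaction", [7, 5], ["c", "r"])
```

Note that working out that the seventh letter of *interaction* is "c" takes the computer no effort at all, while a human has to count through the word character by character, which is exactly the memory and attention load the text describes.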
ACTIVITY 4.3
How can banks overcome the problem of providing a secure system while making the mem-
ory load easier for people wanting to use online and mobile phone banking?
Comment
Advances in computer vision and biometrics technology mean that it is now possible to
replace the need for passwords to be typed in each time. For example, facial and touch ID
can be configured on newer smartphones to enable password-free mobile banking. Once
these are set up, a user simply needs to put their face in front of their phone's camera or their
finger on the fingerprint sensor. These alternative approaches put the onus on the phone to
recognize and authenticate the person rather than the person having to learn and remember
a password.

BOX 4.2
Digital Forgetting
Much of the research on memory and interaction design has focused on developing cognitive
aids that help people to remember, for example, reminders, to-do lists, and digital photo
collections. However, there are times when we want to forget a memory. For example, when
someone breaks up with their partner, it can be emotionally painful to be reminded of them
through shared digital images, videos, and Facebook friends. How can technology be designed
to help people forget such memories? How could social media, such as Facebook, be designed
to support this process?
Corina Sas and Steve Whittaker (2013) suggest designing new ways of harvesting digital
materials connected to a broken relationship through using various automatic methods, such
as facial recognition, which dispose of them without the person needing to go through them
personally and be confronted with painful memories. They also suggest that during a separation,
people could create a collage of their digital content connected to the ex, so as to transform
them into something more abstract, thereby providing a means for closure and helping
with the process of moving on.

Much research has been conducted into how to design technology to help people suffering
from memory loss (for instance, those with Alzheimer's disease). An early example was the
SenseCam, which was originally developed by Microsoft Research Labs in Cambridge (UK)
to enable people to remember everyday events. The device they developed was a wearable
camera that intermittently took photos, without any user intervention, while it was worn (see
Figure 4.5). The camera could be set to take pictures at particular times, for example, every
30 seconds, or based on what it sensed (for example, acceleration). The camera employed
a fish-eye lens, enabling nearly everything in front of the wearer to be captured. The digital
images for each day were stored, providing a record of the events that a person experienced.
Several studies were conducted on patients with various forms of memory loss using the
device. For example, Steve Hodges et al. (2006) describe how a patient, Mrs. B, who had amnesia,
was given a SenseCam to wear. The images that were collected were uploaded to a computer
at the end of each day. For the next two weeks, Mrs. B and her husband looked through
these and talked about them. During this period, Mrs. B's recall of an event nearly tripled,
to a point where she could remember nearly everything about that event. Prior to using the
SenseCam, Mrs. B would have typically forgotten the little that she could initially remember
about an event within a few days.
Since this seminal research, a number of digital memory apps have been developed for
people with dementia. For example, RemArc has been designed to trigger long-term memories
in people with dementia using BBC Archive material such as old photos, videos, and
sound clips.

Figure 4.5 The SenseCam device and a digital image taken with it
Source: Used courtesy of Microsoft Research Cambridge
Design Implications
Memory
• Reduce cognitive load by avoiding long and complicated procedures for carrying out tasks.
• Design interfaces that promote recognition rather than recall by using familiar interaction
patterns, menus, icons, and consistently placed objects.
• Provide users with a variety of ways of labeling digital information (for example files,
emails, and images) to help them easily identify it again through the use of folders, cate-
gories, color, tagging, time stamping, and icons.
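The labeling implication above can be illustrated with a toy tag index; the data and function names are hypothetical. Attaching several labels to one file lets users *recognize* it later under whichever cue they happen to remember, rather than recalling its exact name:

```python
from collections import defaultdict

# Map each label (tag, folder, year, color...) to the files carrying it
index = defaultdict(set)

def label(filename: str, *tags: str) -> None:
    """Attach one or more user-chosen labels to a file."""
    for tag in tags:
        index[tag].add(filename)

def find(tag: str) -> set:
    """Recognition over recall: browse everything under a familiar label."""
    return index.get(tag, set())

label("beach.jpg", "vacation", "2023", "photos")
label("budget.xlsx", "finance", "2023")

assert find("2023") == {"beach.jpg", "budget.xlsx"}
assert find("vacation") == {"beach.jpg"}
```

Because each file is reachable through several redundant labels, the user does not need to remember any single one of them.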
4.2.4 Learning
Learning is closely connected with memory. It involves the accumulation of skills and knowl-
edge that would be impossible to achieve without memory. Likewise, people would not be
able to remember things unless they had learned them. Within cognitive psychology, learning
is thought to be either incidental or intentional. Incidental learning occurs without any inten-
tion to learn. Examples include learning about the world such as recognizing faces, streets,
and objects, and what you did today. In contrast, intentional learning is goal-directed, with
the aim of being able to remember what is learned. Examples include studying for an exam,
learning a foreign language, and learning to cook. This is much harder to achieve. Software
developers, therefore, cannot assume that users will simply be able to learn how to use an app or a
product. It often requires much conscious effort.
Moreover, it is well known that people find it hard to learn by reading a set of
instructions in a manual. Instead, they much prefer to learn through doing. GUIs and
direct manipulation interfaces are good environments for supporting this kind of active
learning by supporting exploratory interaction and, importantly, allowing users to undo
their actions, that is, return to a previous state if they make a mistake by clicking the
wrong option.
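The undo support that makes exploratory learning safe is commonly implemented with a stack of previous states. A minimal, illustrative sketch:

```python
class UndoableEditor:
    """Toy editor: every action snapshots the state so it can be undone."""

    def __init__(self):
        self.text = ""
        self._history = []               # stack of previous states

    def type(self, s: str) -> None:
        self._history.append(self.text)  # save state before changing it
        self.text += s

    def undo(self) -> None:
        if self._history:                # nothing to undo on an empty stack
            self.text = self._history.pop()

ed = UndoableEditor()
ed.type("hello")
ed.type(" wrold")   # a mistake...
ed.undo()           # ...safely returns to the previous state
assert ed.text == "hello"
```

Because every action is reversible, the user can click around and explore without fear that a wrong option will cause permanent damage.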
There have been numerous attempts to harness the capabilities of different technologies
to support intentional learning. Examples include online learning, multimedia, and virtual
reality. They are assumed to provide alternative ways of learning through interacting with
information that is not possible with traditional technologies, for example, books. In so
doing, they have the potential of offering learners the ability to explore ideas and concepts in
different ways. For example, multimedia simulations, wearables, and augmented reality (see
Chapter 7, “Interfaces”) have been designed to help teach abstract concepts (such as mathe-
matical formulae, notations, laws of physics, biological processes) that students find difficult
to grasp. Different representations of the same process (for instance, a graph, formula, sound,
or simulation) are displayed and interacted with in ways that make their relationship with
each other clearer to the learner.
People often learn effectively when collaborating together. Novel technologies have also
been designed to support sharing, turn-taking, and working on the same documents. How
these can enhance learning is covered in the next chapter.
Design Implications
Learning
• Design interfaces that encourage exploration.
• Design interfaces that constrain and guide users to select appropriate actions when initially
learning.
4.2.5 Reading, Speaking, and Listening
Reading, speaking, and listening are three forms of language processing that have some
similar and some different properties. One similarity is that the meaning of sentences or
phrases is the same regardless of the mode in which it is conveyed. For example, the sen-
tence “Computers are a wonderful invention” essentially has the same meaning whether
one reads it, speaks it, or hears it. However, the ease with which people can read, listen,
or speak differs depending on the person, task, and context. For example, many people
find listening easier than reading. Specific differences between the three modes include the
following:
• Written language is permanent while listening is transient. It is possible to re-read informa-
tion if not understood the first time around. This is not possible with spoken information
that is being broadcast unless it is recorded.
• Reading can be quicker than speaking or listening, as written text can be rapidly scanned
in ways not possible when listening to serially presented spoken words.
• Listening requires less cognitive effort than reading or speaking. Children often prefer to
listen to narratives provided in multimedia or web-based learning material rather than to
read the equivalent text online. The popularity of audiobooks suggests adults also enjoy
listening to novels, and so forth.
• Written language tends to be grammatical, while spoken language is often ungrammati-
cal. For example, people often start talking and stop in midsentence, letting someone else
start speaking.
• Dyslexics have difficulties understanding and recognizing written words, making it hard
for them to write grammatical sentences and spell correctly.
Many applications have been developed either to capitalize on people’s reading, writing,
and listening skills, or to support or replace them where they lack or have difficulty with
them. These include the following:
• Interactive books and apps that help people to read or learn foreign languages.
• Speech-recognition systems that allow people to interact with them by using spoken com-
mands (for example, Dragon Home, Google Voice Search, and home devices, such as Ama-
zon Echo, Google Home, and Home Aware that respond to vocalized requests).
• Speech-output systems that use artificially generated speech (for instance, written text-to-
speech systems for the blind).
• Natural-language interfaces that enable people to type in questions and get written
responses (for example, chatbots).
• Interactive apps that are designed to help people who find it difficult to read, write, or speak.
• Customized input and output devices that allow people with various disabilities to
have access to the web and use word processors and other software packages.
• Tactile interfaces that allow people who are visually impaired to read graphs (for example,
Designboom’s braille maps for the iPhone).
Design Implications
Reading, Speaking, and Listening
• Keep the length of speech-based menus and instructions to a minimum. Research has
shown that people find it hard to follow spoken menus with more than three or four
options. Likewise, they are bad at remembering sets of instructions and directions that
have more than a few parts.
• Accentuate the intonation of artificially generated speech voices, as they are harder to
understand than human voices.
• Provide opportunities for making text large on a screen, without affecting the formatting,
for people who find it hard to read small text.

4.2.6 Problem-Solving, Planning, Reasoning, and Decision-Making
Problem-solving, planning, reasoning, and decision-making are processes involving reflective
cognition. They include thinking about what to do, what the available options are, and what
the consequences might be of carrying out a given action. They often involve conscious processes
(being aware of what one is thinking about), discussion with others (or oneself), and
the use of various kinds of artifacts (for example, maps, books, pens, and paper). Reasoning
involves working through different scenarios and deciding which is the best option or solution
to a given problem. For example, when deciding on where to go on a vacation, people
may weigh the pros and cons of different locations, including cost, weather at the location,
availability and type of accommodation, time of flights, proximity to a beach, the size of the
local town, whether there is nightlife, and so forth. When weighing all of the options, they
reason through the advantages and disadvantages of each before deciding on the best one.
There has been a growing interest in how people make decisions when confronted with
information overload, such as when shopping on the web or at a store (Todd et al., 2011).
How easy is it to decide when confronted with an overwhelming choice? Classical rational
theories of decision-making (for instance, von Neumann and Morgenstern, 1944) posit that
making a choice involves weighing up the costs and benefits of different courses of action.
This is assumed to involve exhaustively processing the information and making trade-offs
between features. Such strategies are very costly in computational and informational terms—
not least because they require the decision-maker to find a way of comparing the different
options. In contrast, research in cognitive psychology has shown how people tend to use
simple heuristics when making decisions (Gigerenzer et al., 1999). A theoretical explanation
is that human minds have evolved to act quickly, making just good enough decisions by using
fast and frugal heuristics. We typically ignore most of the available information and rely
only on a few important cues. For example, in the supermarket, shoppers make snap judgments
based on a paucity of information, such as buying brands that they recognize, that are
low-priced, or that offer attractive packaging—seldom reading other package information.
This suggests that an effective design strategy is to make key information about a product
highly salient. However, what exactly is salient will vary from person to person. It may
depend on the user’s preferences, allergies, or interests. For example, one person might have a
nut allergy and be interested in food miles, while another may be more concerned about the
farming methods used (such as organic, FairTrade, and so on) and a product’s sugar content.
Thus, instead of providing ever more information to enable people to compare products
when making a choice, a better strategy is to design technological interventions that provide
just enough information, and in the right form, to facilitate good choices. One solution is to
exploit new forms of augmented reality and wearable technology that enable information-
frugal decision-making and that have glanceable displays that can represent key information
in an easy-to-digest form (Rogers et al., 2010b). The interface for an AR or wearable app
could be designed to provide certain “food” or other information filters, which could be
switched on or off by the user to match their preferences.
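The contrast between exhaustive weighing and fast-and-frugal decision-making can be sketched in code. The products, cue weights, and cue ordering below are invented for illustration, and the heuristic is only in the spirit of Gigerenzer et al.'s work, not any specific published model:

```python
# Hypothetical supermarket shelf: each product carries a few cues
products = [
    {"name": "BrandA", "recognized": True,  "price": 3.49, "rating": 4.1},
    {"name": "BrandB", "recognized": False, "price": 2.99, "rating": 4.6},
    {"name": "BrandC", "recognized": True,  "price": 2.79, "rating": 3.8},
]

def rational_choice(items):
    """Classical model: exhaustively weigh every cue (weights made up)."""
    def utility(p):
        return 2.0 * p["rating"] - 1.0 * p["price"] + 0.5 * p["recognized"]
    return max(items, key=utility)

def frugal_choice(items):
    """Fast-and-frugal sketch: one cue at a time, stop once it decides."""
    recognized = [p for p in items if p["recognized"]]
    if len(recognized) == 1:        # recognition alone settles the choice
        return recognized[0]
    pool = recognized or items
    return min(pool, key=lambda p: p["price"])  # next cue: cheapest wins

choice_r = rational_choice(products)["name"]  # weighs everything
choice_f = frugal_choice(products)["name"]    # recognition, then price
```

With these made-up numbers, the exhaustive model picks BrandB while the frugal shopper picks BrandC: the heuristic ignores the rating entirely, yet still reaches a defensible choice with far less processing, which is exactly the point of the fast-and-frugal account.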
DILEMMA
Can You Make Up Your Mind Without an App?
In their book The App Generation (Yale University Press, 2014), Howard Gardner and Katie
Davis note how some young people find it hard to make their own decisions because they are
becoming more and more risk averse. The reason for this is that they now rely on using an
increasing number of mobile apps to help them in their decision-making, removing the risk of
having to decide for themselves. Often, they will first read what others have said on social media
sites, blogs, and recommender apps before choosing where to eat or go, what to do or listen
to, and so on. However, relying on a multitude of apps means that young people are becom-
ing increasingly unable to make decisions by themselves. For many, their first big decision is
choosing which college or university to attend. This has become an agonizing and prolonged
experience where both parents and apps play a central role in helping them out. They will read
countless reviews, go on numerous visits to colleges and universities with their parents over
several months, study university rankings that apply different measures, read up on what others
say on social networking sites, and so on. In the end, however, they may finally choose the insti-
tution where their friends attend or the one they liked the look of in the first place.
Design Implications
Problem-Solving, Planning, Reasoning, and Decision-Making
• Provide information and help pages that are easy to access for people who want to under-
stand more about how to carry out an activity more effectively (for example, web searching).
• Use simple and memorable functions to support rapid decision-making and planning.
Enable users to set or save their own criteria or preferences.
4.3 Cognitive Frameworks
A number of conceptual frameworks have been developed to explain and predict user behav-
ior based on theories of cognition. In this section, we outline three that focus primarily on
mental processes and three others that explain how humans interact and use technologies in
the context in which they occur. These are mental models, gulfs of execution and evaluation,
information processing, distributed cognition, external cognition, and embodied interaction.
4.3.1 Mental Models
Mental models are used by people when needing to reason about a technology, in particular,
to try to fathom what to do when something unexpected happens with it or when encounter-
ing unfamiliar products for the first time. The more someone learns about a product and how
it functions, the more their mental model develops. For example, broadband engineers have a
deep mental model of how Wi-Fi networks work that allows them to work out how to set them
up and fix them. In contrast, an average citizen is likely to have a reasonably good mental model
of how to use the Wi-Fi network in their home but a shallow mental model of how it works.
Within cognitive psychology, mental models have been postulated as internal construc-
tions of some aspect of the external world that are manipulated, enabling predictions and
inferences to be made (Craik, 1943). This process is thought to involve the fleshing out
and the running of a mental model (Johnson-Laird, 1983). This can involve both unconscious and
conscious mental processes, where images and analogies are activated.
ACTIVITY 4.4
To illustrate how we use mental models in our everyday reasoning, imagine the following two
scenarios:
• You arrive home from a vacation on a cold winter’s night to a cold house. You have a small
baby, and you need to get the house warm as quickly as possible. Your house is centrally
heated, but it does not have a smart thermostat that can be controlled remotely. Do you set
the thermostat as high as possible or turn it to the desired temperature (for instance, 70°F)?
• You arrive home after being out all night and you’re starving hungry. You look in the freezer
and find all that is left is a frozen pizza. The instructions on the package say heat the oven
to 375°F and then place the pizza in the oven for 20 minutes. Your oven is electric. How do
you heat it up? Do you turn it to the specified temperature or higher?
Comment
Most people when asked the first question imagine the scenario in terms of what they would do
in their own house and choose the first option. A typical explanation is that setting the temperature
to be as high as possible increases the rate at which the room warms up. While many people
may believe this, it is incorrect. Thermostats work by switching on the heat and keeping it going
at a constant rate until the desired set temperature is reached, at which point it cuts out. They
cannot control the rate at which heat is given out from a heating system. Left at a given setting,
thermostats will turn the heat on and off as necessary to maintain the desired temperature.
When asked the second question, most people say they would turn the oven to the specified
temperature and put the pizza in when they think it is at the right temperature. Some
people answer that they would turn the oven to a higher temperature in order to warm it up
more quickly. Electric ovens work on the same principle as central heating, so turning the heat
up higher will not warm it up any quicker. There is also the problem of the pizza burning if
the oven is too hot!

Why do people use erroneous mental models? It seems that in the previous two scenarios,
they are using a mental model based on a general valve theory of the way something works
(Kempton, 1986). This assumes the underlying principle of more is more: the more you turn or
push something, the more it causes the desired effect. This principle holds for a range of physical
devices, such as faucets, where the more you turn them, the more water that comes out.
However, it does not hold for thermostats, which instead function based on the principle of
an on-off switch. What seems to happen is that in everyday life, people develop a core set
of abstractions about how things work and apply these to a range of devices, irrespective of
whether they are appropriate.
Using incorrect mental models to guide behavior is surprisingly common. Just watch
people at a pedestrian crossing or waiting for an elevator. How many times do they press the
button? A lot of people will press it at least twice. When asked why, a common reason is that
they think it will make the lights change faster or ensure the elevator arrives.
Many people's understanding of how technologies and services work is poor, for instance,
the Internet, wireless networking, broadband, search engines, computer viruses, the cloud, or
AI. Their mental models are often incomplete, easily confusable, and based on inappropriate
analogies and superstition (Norman, 1983). As a consequence, they find it difficult to
identify, describe, or solve a problem, and they lack the words or concepts to explain what
is happening.
How can user experience (UX) designers help people to develop better mental models?
A major obstacle is that people are resistant to spending much time learning about how
things work, especially if it involves reading manuals or other documentation. An alternative
approach is to design technologies to be more transparent, which makes them easier to
understand in terms of how they work and what to do when they don't. This includes providing
the following:
• Clear and easy-to-follow instructions
• Appropriate online help, tutorials, and context-sensitive guidance for users in the form of
online videos and chatbot windows, where users can ask how to do something
• Background information that can be accessed to let people know how something works
and how to make the most of the functionality provided
• Affordances of what actions an interface allows (for example, swiping, clicking, or
selecting).
The concept of transparency has been used to refer to making interfaces intuitive to use
so that people can simply get on with their tasks, such as taking photos, sending messages,
or talking to someone remotely, without having to worry about long sequences of buttons to
press or options to select. An ideal form of transparency is where the interface simply disappears
from the focus of someone's attention. Imagine if every time you had to give a presentation,
all you had to do was say, "Upload and start my slides for the talk I prepared today,"
and they would simply appear on the screen for all to see. That would be bliss! Instead, many
AV projector systems persist in being far from transparent, requiring many counterintuitive
steps for someone to get their slides to show. This can include trying to find the right dongle,
setting up the system, typing in a password, setting up audio controls, and so forth, all of
which seems to take forever, especially when there is an audience waiting.
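The thermostat point from Activity 4.4 can be checked with a toy bang-bang (on-off) simulation. The heating rate and temperatures are arbitrary; the point is only that the setpoint decides *when* the heater switches off, not how fast it heats:

```python
def heat(start_temp, setpoint, minutes, heating_rate=0.5):
    """Bang-bang thermostat sketch: the heater is either fully on or off.

    Returns the temperature at the end of each minute.
    """
    temp = start_temp
    history = []
    for _ in range(minutes):
        if temp < setpoint:
            temp += heating_rate  # heater on: fixed output, whatever the dial says
        history.append(temp)
    return history

# Setting the dial to 90°F does not reach 70°F any sooner than setting it to 70°F
low = heat(50, 70, 60)
high = heat(50, 90, 60)
reach = lambda h: next(i for i, t in enumerate(h) if t >= 70)
assert reach(low) == reach(high)
```

Both runs cross 70°F at exactly the same minute; the only difference is that the overcranked run keeps heating afterward, just as the pizza keeps burning in an oven set too hot.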
4.3.2 Gulfs of Execution and Evaluation
The gulf of execution and the gulf of evaluation describe the gaps that exist between the user
and the interface (Norman, 1986; Hutchins et al., 1986). The gulfs are intended to show how
the interface can be designed to help users bridge them. The first one, the gulf of execution,
describes the distance from the user to the physical system while the second one, the gulf of eval-
uation, is the distance from the physical system to the user (see Figure 4.6). Don Norman and
his colleagues suggest that designers and users need to concern themselves with how to bridge
the gulfs to reduce the cognitive effort required to perform a task. This can be achieved, on the
one hand, by designing usable interfaces that match the psychological characteristics of the user
(for example, taking into account their memory limitations) and, on the other hand, by the user
learning to create goals, plans, and action sequences that fit with how the interface works.
The conceptual framework of the gulfs is still considered useful today, as it can help
designers consider whether their proposed interface design is increasing or decreasing cog-
nitive load and whether it makes it obvious as to which steps to take for a given task. For
example, Kathryn Whitenton (2018), who is a digital strategy manager, describes how the
gulfs prevented her from understanding why she could not get her Bluetooth headset to
connect with her computer despite following the steps in the manual. She wasted a whole
hour repeating the steps, getting more and more frustrated while making no progress.
Eventually, she discovered that the system she thought was toggled “on” was actually show-
ing her that it was “off” (see Figure 4.7). She found this out by searching the web to see
whether someone else could help her. She found a site that showed a screenshot of what the
settings switch looks like when turned on. There was an inconsistency between the labels
of two similar-looking switches, one showing the current status of the interaction (off) and
the other showing what would happen if the interaction were engaged (Add Bluetooth Or
Other Device).

Figure 4.6 Bridging the gulfs of execution and evaluation: between the user and the world, the
gulf of execution asks "How do I use this system?" and the gulf of evaluation asks "What's the
current system state?"
Source: https://www.nngroup.com/articles/two-ux-gulfs-evaluation-execution. Used courtesy of the Nielsen Norman
Group
This inconsistency of similar functions illustrated how the gulfs of execution and evalu-
ation were poorly bridged, making it confusing and difficult for the user to know what the
problem was or why they could not get their headset to connect with their computer despite
many attempts. In the article, she explains how the gulfs could be easily bridged by designing
all sliders to give the same information as to what happens when they are moved from one
side to the other. For more details about this situation, see https://www.nngroup.com/articles/
two-ux-gulfs-evaluation-execution/.
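The fix Whitenton suggests, labels that consistently report state, can be contrasted with action labels in a few lines; both functions are hypothetical, written only to show the two conventions side by side:

```python
# Two labeling conventions for the same switch. Only the first tells the
# user the current system state; the second describes the action a press
# would perform, which is what made the settings in the example above
# so confusing when the two styles were mixed.
def state_label(is_on: bool) -> str:
    return "On" if is_on else "Off"            # narrows the gulf of evaluation

def action_label(is_on: bool) -> str:
    return "Turn off" if is_on else "Turn on"  # describes the next action

# For a switch that is currently off, the two conventions read very differently:
assert state_label(False) == "Off"
assert action_label(False) == "Turn on"
```

Either convention can work on its own; the gulf of evaluation opens up when neighboring controls mix the two, so the user cannot tell whether a label reports what *is* or what *would happen*.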
Figure 4.7 An example where the gulfs helped explain how a seemingly trivial design decision led
to much user frustration
Source: https://www.nngroup.com/articles/two-ux-gulfs-evaluation-execution. Used courtesy of the Nielsen Norman
Group

4.3.3 Information Processing
Another approach to conceptualizing how the mind works has been to use metaphors and
analogies to describe cognitive processes. Numerous comparisons have been made, including
conceptualizing the mind as a reservoir, a telephone network, a digital computer, and a deep
learning network. One prevalent metaphor from cognitive psychology is the idea that the
mind is an information processor. Information is thought to enter and exit the mind through
a series of ordered processing stages (see Figure 4.8). Within these stages, various processes
are assumed to act upon mental representations. Processes include comparing and matching.
Mental representations are assumed to comprise images, mental models, rules, and other
forms of knowledge.
The information processing model provides a basis from which to make predictions
about human performance. Hypotheses can be made about how long someone will take
to perceive and respond to a stimulus (also known as reaction time) and what bottlenecks
occur if a person is overloaded with too much information. One of the first HCI models to
be derived from the information processing theory was the human processor model, which
modeled the cognitive processes of a user interacting with a computer (Card et al., 1983).
Cognition was conceptualized as a series of processing stages, where perceptual, cognitive,
and motor processors are organized in relation to one another. The model predicts which
cognitive processes are involved when a user interacts with a computer, enabling calcula-
tions to be made of how long a user will take to carry out various tasks. In the 1980s, it was
found to be a useful tool for comparing different word processors for a range of editing tasks.
Even though it is not often used today to inform interaction design, it is considered to be an
HCI classic.
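A well-known descendant of the human processor model is the keystroke-level model, which predicts task times by summing per-operator estimates. The operator values below are the commonly cited averages from Card, Moran, and Newell's work; real times vary considerably by user and device, so treat this as a sketch of the technique rather than a calibrated tool:

```python
# Keystroke-level model sketch with commonly cited average operator times.
KLM_TIMES = {
    "K": 0.2,   # press a key (average skilled typist)
    "P": 1.1,   # point at a target with a mouse
    "H": 0.4,   # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def predict_seconds(ops: str) -> float:
    """Predict task time for a sequence of operators, e.g. 'MPKPK'."""
    return round(sum(KLM_TIMES[op] for op in ops), 2)

# A menu-driven task: think (M), point at the menu (P), click (K),
# point at the item (P), click (K)
assert predict_seconds("MPKPK") == 3.95
```

Comparing two such operator sequences for the same editing task is essentially how the model was used in the 1980s to compare word processors.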
The information processing approach was based on modeling mental activities that hap-
pen exclusively inside the head. Nowadays, it is more common to understand cognitive activ-
ities in the context in which they occur, analyzing cognition as it happens in the wild (Rogers,
2012). A central goal has been to look at how structures in the environment can both aid
human cognition and reduce cognitive load. The three external approaches we consider next
are distributed cognition, external cognition, and embodied cognition.
4.3.4 Distributed Cognition
Most cognitive activities involve people interacting with external kinds of representations,
such as books, documents, and computers and also with each other. For example, when
someone goes home from wherever they have been, they do not need to remember the details
of the route because they rely on cues in the environment (for instance, they know to turn
left at the red house, right when the road comes to a T-junction, and so on). Similarly, when
they are at home, they do not have to remember where everything is because information is
available as needed. They decide what to eat and drink by scanning the items in the fridge,
look out the window to see whether it is raining or not, and so on. Likewise, they are always
creating external representations for a number of reasons, not only to help reduce memory
load and the cognitive cost of computational tasks, but also, importantly, to extend what they
can do and allow people to think more powerfully (Kirsh, 2010).
The distributed cognition approach was developed to study the nature of cognitive phe-
nomena across individuals, artifacts, and internal and external representations (Hutchins,
1995). Typically, it involves describing a cognitive system, which entails interactions among
people, the artifacts they use, and the environment in which they are working. An example
of a cognitive system is an airline cockpit, where the top-level goal is to fly the plane (see
Figure 4.9). This involves all of the following:
• The pilot, captain, and air traffic controller interacting with one another
• The pilot and captain interacting with the instruments in the cockpit
• The pilot and captain interacting with the environment in which the plane is flying (that
is, the sky, runway, and so on)
Figure 4.8 Human information processing model
Source: P. Barber (1998). Applied Cognitive Psychology. London: Methuen. Used courtesy of Taylor & Francis
A primary objective of the distributed cognition approach is to describe these interactions
in terms of how information is propagated through different media. By this we mean how
information is represented and re-represented as it moves across individuals and through the
array of artifacts that are used (for example, maps, instrument readings, scribbles, and spo-
ken word) during activities. These transformations of information are referred to as changes
in representational state.
This way of describing and analyzing a cognitive activity contrasts with other cognitive
approaches, such as the information processing model, in that it focuses not on what is hap-
pening inside the head of an individual but on what is happening across a system of individu-
als and artifacts. For example, in the cognitive system of the cockpit, a number of people and
artifacts are involved in the activity of flying at a higher altitude. The air traffic controller
initially tells the pilot when it is safe to ascend to a higher altitude. The pilot then alerts the
captain, who is flying the plane, by moving a knob on the instrument panel in front of them,
confirming that it is now safe to fly.
Hence, the information concerning this activity is transformed through different media
(over the radio, through the pilot, and via a change in the position of an instrument). This
kind of analysis can be used to derive design recommendations, suggesting how to change or redesign an aspect of the cognitive system, such as a display or a socially mediated practice.
Figure 4.9 A cognitive system in which information is propagated through different media
In the previous example, distributed cognition could draw attention to the importance of any
new design needing to keep shared awareness and redundancy in the system so that both the
pilot and the captain can be kept aware and also know that the other is aware of the changes
in altitude that are occurring. It is also the basis for the DiCOT analytic framework that has
been developed specifically for understanding healthcare settings and has also been used for
software team interactions (see Chapter 9, “Data Analysis”).
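The notion of changes in representational state can be sketched as data: the same piece of information ("safe to climb") is re-represented as it crosses each medium. A minimal illustrative encoding of the cockpit example (the media labels and wordings below are our own simplification, not part of any formal distributed cognition notation):

```python
# The altitude-change example as a chain of representational states: one
# piece of information, re-represented as it moves across different media.
propagation = [
    ("radio",            "ATC clears the flight to climb to a higher altitude"),
    ("spoken word",      "pilot relays the clearance to the captain"),
    ("instrument panel", "altitude knob moved to the new setting"),
]

def describe(chain):
    """Render the chain of media as a readable propagation trace."""
    return " -> ".join(medium for medium, _ in chain)

print(describe(propagation))  # radio -> spoken word -> instrument panel
```

Analyzing the trace rather than any one individual's head is the shift in focus that distributed cognition makes.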
4.3.5 External Cognition
People interact with or create information by using a variety of external representations,
including books, multimedia, newspapers, web pages, maps, diagrams, notes, drawings, and
so on. Furthermore, an impressive range of tools has been developed throughout history to
aid cognition, including pens, calculators, spreadsheets, and software workflows. The com-
bination of external representations and physical tools has greatly extended and supported
people’s ability to carry out cognitive activities (Norman, 2013). Indeed, they are such an
integral part of our cognitive activities that it is difficult to imagine how we would go about
much of our everyday life without them.
External cognition is concerned with explaining the cognitive processes involved when
we interact with different external representations such as graphical images, multimedia, and
virtual reality (Scaife and Rogers, 1996). A main goal is to explain the cognitive benefits of
using different representations for different cognitive activities and the processes involved.
The main ones include the following:
• Externalizing to reduce memory load
• Computational offloading
• Annotating and cognitive tracing
4.3.5.1 Externalizing to Reduce Memory Load
Numerous strategies have been developed for transforming knowledge into external rep-
resentations to reduce memory load. One such strategy is externalizing things that we find
difficult to remember, such as birthdays, appointments, and addresses. Diaries, personal
reminders, and calendars are examples of cognitive artifacts that are commonly used for this
purpose, acting as external reminders of what we need to do at a given time, such as buy a
card for a relative’s birthday.
Other kinds of external representations that people frequently employ are notes, such
as sticky notes, shopping lists, and to-do lists. Where these are placed in the environment
can also be crucial. For example, people often place notes in prominent positions, such as
on walls, on the side of computer screens, by the front door, and sometimes even on their hands, in a deliberate attempt to ensure that the notes remind them of what needs to be done or
remembered. People also place things in piles in their offices and by the front door, indicating
what needs to be done urgently versus what can wait for a while.
Externalizing, therefore, can empower people to trust that they will be reminded without
having to remember themselves, thereby reducing their memory burden in the following ways:
• Reminding them to do something (for example, get something for mother’s birthday)
• Reminding them of what to do (such as buy a card)
• Reminding them of when to do something (for instance, send it by a certain date)
This is an obvious area where technology can be designed to help remind. Indeed, many
apps have been developed to reduce the burden on people to remember things, including
to-do and alarm-based lists. These can also be used to help improve people’s time manage-
ment and work-life balance.
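The three kinds of reminding correspond to the three pieces of information that a minimal reminder app needs to store: what to do, why, and when. A small illustrative sketch (the `Reminder` structure and its fields are hypothetical, not any particular app's design):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reminder:
    task: str        # what to do (for example, "buy a card")
    occasion: str    # why it matters (for example, "mother's birthday")
    due: datetime    # when it must be done by

def due_now(reminders, now):
    """Return reminders whose deadline has arrived, so that the app can
    alert the person instead of relying on their memory."""
    return [r for r in reminders if r.due <= now]

todo = [
    Reminder("buy a card", "mother's birthday", datetime(2024, 5, 1)),
    Reminder("post the card", "mother's birthday", datetime(2024, 5, 3)),
]
print([r.task for r in due_now(todo, datetime(2024, 5, 2))])  # ['buy a card']
```

The externalization does the remembering; the person only has to attend to the alert.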
4.3.5.2 Computational Offloading
Computational offloading occurs when we use a tool or device in conjunction with an exter-
nal representation to help us carry out a computation. An example is using pen and paper to
solve a math problem as mentioned in the introduction of the chapter where you were asked
to multiply 21 × 19 in your head versus using a pen and paper. Now try doing the sum again
but using Roman numerals: XXI × XVIIII. It is much harder unless you are an expert in using Roman numerals—even though the problem is equivalent under both conditions. The reason
for this is that the two different representations transform the task into one that is easy and
one that is more difficult, respectively. The kind of tool used also can change the nature of
the task to being easier or more difficult.
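The effect of the representation can even be demonstrated in code. In this sketch (our own illustration, using the purely additive Roman notation of the example, in which 19 is written XVIIII), the product is identical under both representations, but reading the answer off the Roman form takes noticeably more effort:

```python
# Map each Roman symbol to its value; additive notation (e.g., XVIIII
# for 19) simply sums the symbols from left to right.
VALUES = {"M": 1000, "D": 500, "C": 100, "L": 50, "X": 10, "V": 5, "I": 1}

def roman_to_int(numeral):
    """Parse an additive Roman numeral by summing its symbols."""
    return sum(VALUES[ch] for ch in numeral)

def int_to_roman(n):
    """Render n in additive Roman notation, largest symbols first."""
    out = []
    for sym, val in VALUES.items():  # dicts preserve insertion order (3.7+)
        count, n = divmod(n, val)
        out.append(sym * count)
    return "".join(out)

product = roman_to_int("XXI") * roman_to_int("XVIIII")
print(product)                # 399
print(int_to_roman(product))  # CCCLXXXXVIIII
```

Either way the computation yields 399; what changes is how cheaply a person can perceive and manipulate the result, which is precisely the point of computational offloading.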
4.3.5.3 Annotating and Cognitive Tracing
Another way in which we externalize our cognition is by modifying representations to reflect
changes that are taking place that we want to mark. For example, people often cross things
off a to-do list to indicate tasks that have been completed. They may also reorder objects in
the environment by creating different piles as the nature of the work to be done changes.
These two types of modification are called annotating and cognitive tracing.
• Annotating involves modifying external representations, such as crossing off or under-
lining items.
• Cognitive tracing involves externally manipulating items into different orders or structures.
Annotating is often used when people go shopping. People usually begin their shop-
ping by planning what they are going to buy. This often involves looking in their cupboards
and fridge to see what needs stocking up. However, many people are aware that they won’t
remember all this in their heads, so they often externalize it as a written shopping list. The
act of writing may also remind them of other items that they need to buy, which they may
not have noticed when looking through the cupboards. When they actually go shopping at
the store, they may cross off items on the shopping list as they are placed in the shopping
basket or cart. This provides them with an annotated externalization, allowing them to see at
a glance what items are still left on the list that need to be bought.
There are a number of digital annotation tools that allow people to use pens, styluses, or
their fingers to annotate documents, such as circling data or writing notes. The annotations
can be stored with the document, enabling users to revisit their own or others' externalizations at a later date.
Cognitive tracing is useful in conditions where the current situation is in a state of flux
and the person is trying to optimize their position. This typically happens when playing
games, such as the following:
• In a card game, when the continuous rearrangement of a hand of cards into suits, in
ascending order, or collecting same numbers together helps to determine what cards to
keep and which to play as the game progresses and tactics change
• In Scrabble, where shuffling letters around in the tray helps a person work out the best
word given the set of letters (Maglio et al., 1999)
Cognitive tracing has also been used as an interactive function, for example, letting stu-
dents know what they have studied in an online learning package. An interactive diagram can
be used to highlight all of the nodes visited, exercises completed, and units still to be studied.
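Such an interactive trace amounts to storing a little state alongside the content. A minimal illustrative sketch (the unit names and status labels are hypothetical):

```python
# A minimal progress trace for an online learning package: each unit is
# classified so that an interactive diagram can highlight what has been
# visited, what has been completed, and what is still to be studied.
UNITS = ["Intro", "Memory", "Attention", "Frameworks"]

def progress_report(visited, completed):
    """Classify every unit so that the interface can color-code it."""
    report = {}
    for unit in UNITS:
        if unit in completed:
            report[unit] = "completed"
        elif unit in visited:
            report[unit] = "visited"
        else:
            report[unit] = "to study"
    return report

print(progress_report(visited={"Intro", "Memory"}, completed={"Intro"}))
# {'Intro': 'completed', 'Memory': 'visited', 'Attention': 'to study', 'Frameworks': 'to study'}
```

The learner reads their own cognitive trace off the display rather than reconstructing it from memory.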
A general cognitive principle for interaction design based on the external cognition
approach is to provide external representations at an interface that reduce memory load,
support creativity, and facilitate computational offloading. Different kinds of information
visualizations can be developed that reduce the amount of effort required to make inferences
about a given topic (for example, financial forecasting or identifying programming bugs). In
so doing, they can extend or amplify cognition, allowing people to perceive and do activi-
ties that they couldn’t do otherwise. For example, information visualizations (discussed in
Chapter 10) are used to represent big data in a visual form that can make it easier to make
cross-comparisons across dimensions and see patterns and anomalies. Workflow and contex-
tual dialog boxes can also pop up at appropriate times to guide users through their interac-
tions, especially where there are potentially hundreds and sometimes thousands of options
available. This reduces memory load significantly and frees up more cognitive capacity for
enabling people to complete desired tasks.
4.3.6 Embodied Interaction
Another way of describing our interactions with technology and the world is to conceive of it
as embodied. By this we mean the practical engagement with the social and physical environ-
ment (Dourish, 2001). This involves creating, manipulating, and making meaning through
our engaged interaction with physical things, including mundane objects such as cups and
spoons, and technological devices, such as smartphones and robots. Artifacts and technolo-
gies that indicate how they are coupled to the world make it clear how they should be used.
For example, a physical artifact, like a book when left opened on someone’s desk, can remind
them to complete an unfinished task the next day (Marshall and Hornecker, 2013).
Eva Hornecker et al. (2017) further explain embodied interaction in terms of how
our bodies and active experiences shape how we perceive, feel, and think. They describe
how our ability to think abstractly is thought to be a result of our sensorimotor experiences
with the world. This enables us to learn how to think and talk using abstract concepts, such
as inside-outside, up-down, on top of, and behind. Our numerous experiences of moving
through and manipulating the world since we were born (for example, climbing, walking,
crawling, stepping into, holding, or placing) are what enable us to develop a sense of the world at both a concrete and an abstract level.
Within HCI, the concept of embodied interaction has been used to describe how the body
mediates our various interactions with technology (Klemmer et al., 2006) and also our emotional interactions (Höök, 2018). Theorizing about embodied interaction in these ways has helped researchers uncover problems that can arise in the use of existing technologies, while also informing the design of new technologies in the contexts in which they will be used.
David Kirsh (2013) suggests that a theory of embodiment can provide HCI practition-
ers and theorists with new ideas about interaction and new principles for better designs. He
explains how interacting with tools changes the way people think and perceive of their envi-
ronments. He also argues that a lot of times we think with our bodies and not just with our
brains. He studied choreographers and dancers and observed that they often partially model
a dance (known as marking) through using abbreviated moves and small gestures rather than
doing a full workout or mentally simulating the dance in their heads. This kind of marking
was found to be a better method of practice than the other two. The reason is not that it saves energy or prevents dancers from getting exhausted emotionally, but that it enables them to review and explore particular aspects of a phrase or movement without the mental complexity involved in a full workout. The implication of
how people use embodiment in their lives is that learning new procedures and skills might be
better taught by a process like marking, where learners create little models of things or use
their own bodies to act out. For example, rather than developing fully fledged virtual real-
ity simulations for learning golf, tennis, skiing, and so on, it might be better to teach sets of
abbreviated actions, using augmented reality, as a form of embodied marking.
In-Depth Activity
The aim of this in-depth activity is for you to try to elicit mental models from people. In par-
ticular, the goal is for you to understand the nature of people’s knowledge about an interactive
product in terms of how to use it and how it works.
1. First, elicit your own mental model. Write down how you think contactless cards (see
Figure 4.10) work—where customers place their debit or credit card over a card reader. If
you are not familiar with contactless cards, do the same for a smartphone app like Apple
Pay or Google Pay. Then answer the following questions:
Figure 4.10 A contactless debit card indicated by symbol
• What information is sent between the card/smartphone and the card reader when it is placed in front of it?
• What is the maximum amount you can pay for something using a contactless card or Apple/Google Pay?
• Why is there an upper limit?
• How many times can you use a contactless card or Apple/Google Pay in a day?
• What happens if you have two contactless cards in the same wallet/purse?
• What happens when your contactless card is stolen and you report it to the bank?
Next, ask two other people the same set of questions.
2. Now analyze your answers. Do you get the same or different explanations? What do the findings indicate? How accurate are people’s mental models about the way contactless cards and smartphone Apple/Google Pay work?
Further Reading
BERGMAN, O. and WHITTAKER, S. (2016) The Science of Managing Our Digital Stuff. MIT Press. This very readable book provides a fascinating account of how we manage all of our digital stuff, which increases by the bucketload each day. It explains why we persist with seemingly old-fashioned methods when alternative, seemingly better approaches have been designed by software companies.
Summary
This chapter explained the importance of understanding the cognitive aspects of interaction. It
described relevant findings and theories about how people carry out their everyday activities
and how to learn from these to help in designing interactive products. It provided illustrations
of what happens when you design systems with the user in mind and what happens when you
don’t. It also presented a number of conceptual frameworks that allow ideas about cognition
to be generalized across different situations.
Key points
• Cognition comprises many processes, including thinking, attention, memory, perception,
learning, decision-making, planning, reading, speaking, and listening.
• The way in which an interface is designed can greatly affect how well people can perceive,
attend, learn, and remember how to carry out their tasks.
• The main benefits of conceptual frameworks based on theories of cognition are that they
can explain user interaction, inform design, and predict user performance.
Further Reading
ERICKSON, T. D. and MCDONALD, D. W. (2008) HCI Remixed: Reflections on Works
That Have Influenced the HCI Community. MIT Press. This collection of essays from more
than 50 leading HCI researchers describes the accessible prose papers, books, and software
that influenced their approach to HCI and shaped its history. They include some of the classic
papers on cognitive theories, including the psychology of HCI and the power of external
representations.
EYSENCK, M. and BRYSBAERT, M. (2018) Fundamentals of Cognition (3rd ed.). Rout-
ledge. This introductory textbook about cognition provides a comprehensive overview of
the fundamentals of cognition. In particular, it describes the processes that allow us to make
sense of the world around us and to enable us to make decisions about how to manage our
everyday lives. It also covers how technology can provide new insights into how the mind
works, for example, revealing how CAPTCHAs tell us more about perception.
GIGERENZER, G. (2008) Gut Feelings. Penguin. This provocative paperback is written by
a psychologist and behavioral expert in decision-making. When confronted with choice in a
variety of contexts, he explains how often “less is more.” He explains why this is so in terms
of how people rely on fast and frugal heuristics when making decisions, which are often
unconscious rather than rational. These revelations have huge implications for interaction
design that are only just beginning to be explored.
JACKO, J. (ed.) (2012) The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications (3rd ed.). CRC Press. Part 1 is about human
aspects of HCI and includes in-depth chapters on information processing, mental models,
decision-making, and perceptual motor interaction.
KAHNEMAN, D. (2011) Thinking, Fast and Slow. Penguin. This bestseller presents an over-
view of how the mind works, drawing on aspects of cognitive and social psychology. The
focus is on how we make judgments and choices. It proposes that we use two ways of think-
ing: one that is quick and based on intuition and one that is slow and more deliberate and
challenging. The book explores the many facets of life and how and when we use each.
Chapter 5
S O C I A L I N T E R A C T I O N
5.1 Introduction
5.2 Being Social
5.3 Face-to-Face Conversations
5.4 Remote Conversations
5.5 Co-presence
5.6 Social Engagement
Objectives
The main goals of the chapter are to accomplish the following:
• Explain what is meant by social interaction.
• Describe the social mechanisms that people use to communicate and collaborate.
• Explain what social presence means.
• Give an overview of new technologies intended to facilitate collaboration and group
participation.
• Discuss how social media has changed how we keep in touch, make contacts, and man-
age our social and working lives.
• Outline examples of new social phenomena that are a result of being able to connect
online.
5.1 Introduction
People are inherently social: we live together, work together, learn together, play together,
interact and talk with each other, and socialize. A number of technologies have been developed
specifically to enable us to persist in being social when physically apart from one another,
many of which have now become part of the fabric of society. These include the widespread
use of smartphones, video chat, social media, gaming, messaging, and telepresence. Each of
these affords different ways of supporting how people connect.
There are many ways to study what it means to be social. In this chapter, we focus on
how people communicate and collaborate face-to-face and remotely in their social, work,
and everyday lives—with the goal of providing models, insights, and guidelines to inform
the design of “social” technologies that can better support and extend them. We also examine a diversity of communication technologies that have changed the way people live—how they keep in touch, make friends, and coordinate their social and work networks. The conversation mechanisms that have conventionally been used in face-to-face interactions are
described and discussed in relation to how they have been adapted for the various kinds of
computer-based conversations that now take place at a distance. Examples of social phenom-
ena that have emerged as a result of social engagement at scale are also presented.
5.2 Being Social
A fundamental aspect of everyday life is being social, and that entails interacting with each
other. People continually update each other about news, changes, and developments on a
given project, activity, person, or event. For example, friends and families keep each other
posted on what’s happening at work, at school, at a restaurant or club, next door, in reality
shows, and in the news. Similarly, people who work together keep each other informed about
their social lives and everyday events, as well as what is happening at work, for instance
when a project is about to be completed, plans for a new project, problems with meeting
deadlines, rumors about closures, and so on.
While face-to-face conversations remain central to many social interactions, the use of
social media has dramatically increased. People now spend several hours a day communicat-
ing with others online—texting, emailing, tweeting, Facebooking, Skyping, instant messaging,
and so on. It is also common practice for people at work to keep in touch with each other via
WhatsApp groups and other workplace communication tools, such as Slack, Yammer, or Teams.
The almost universal adoption of social media in mainstream life has resulted in most
people now being connected in multiple ways over time and space—in ways that were
unimaginable 25 or even 10 years ago. For example, adults average about 338 Facebook
friends, while it is increasingly common for people to have more than 1,000 connections
on LinkedIn—many more than those made through face-to-face networking. The way that
people make contact, how they stay in touch, who they connect to, and how they maintain
their social networks and family ties have irrevocably changed. During the last 20 or so years,
social media, teleconferencing, and other social-based technologies (often referred to as social
computing) have also transformed how people collaborate and work together globally—
including the rise of flexible and remote working, the widespread use of shared calendars and
collaboration tools (for example Slack, Webex, Trello, and Google Docs), and professional
networking platforms (such as LinkedIn, Twitter, and WhatsApp).
A key question that the universal adoption of social media and other social computing
tools in society raises is how it has affected people’s ability to connect, work, and interact
with one another. Have the conventions, norms, and rules established in face-to-face interac-
tions to maintain social order been adopted in social media interactions, or have new norms
emerged? In particular, are the established conversational rules and etiquette, whose function
it is to let people know how they should behave in social groups, also applicable to online
social behavior? Or, have new conversational mechanisms evolved for the various kinds
of social media? For example, do people greet each other in the same way, depending on
whether they are chatting online, Skyping, or at a party? Do people take turns when online
chatting in the way they do when talking with each other face to face? How do they choose
which technology or app to use from the variety available today for their various work and
social activities, such as SnapChat, text messaging, Skype, or phone calls? Answering these
questions can help us understand how existing tools support communication and collabora-
tive work while helping to inform the design of new ones.
When planning and coordinating social activities, groups often switch from one mode
to another. Most people send texts in preference to calling someone up, but they may switch
to calling or mobile group messaging (such as WhatsApp, GroupMe) at different stages of
planning to go out (Schuler et al., 2014). However, there can be a cost as conversations about
what to do, where to meet, and who to invite multiply across people. Some people might get
left off or others might not reply, and much time can be spent to-ing and fro-ing across the
different apps and threads. Also, some people may not look at their notifications in a timely manner, missing further developments as the group’s plans evolve. This is compounded
by the fact that often people don’t want to commit until close to the time of the event, in case
an invitation to do something from another friend appears that is more interesting to them.
Teenagers, especially, often leave it until the last minute to micro-coordinate their arrange-
ments with their friends before deciding on what to do. They will wait and see if a better offer
comes their way rather than deciding for themselves a week in advance, say, to see a movie
with a friend and sticking to it. This can make it frustrating for those who initiate the plan-
ning and are waiting to book tickets before they sell out.
A growing concern that is being raised within society is how much time people spend
looking at their phones—whether interacting with others, playing games, tweeting, and so
forth—and its consequences on people’s well-being (see Ali et al., 2018). A report on the
impact of the “decade of a smartphone” notes that on average a person in the United King-
dom spends more than a day a week online (Ofcom, 2018). Often, it is the first thing they do
upon waking and the last thing they do before going to bed. Moreover, lots of people cannot
go for long without checking their phone. Even when sitting together, they resort to being in
their own digital bubbles (see Figure 5.1). Sherry Turkle (2015) bemoans the negative impact
that this growing trend is having on modern life, especially how it is affecting everyday con-
versation. She points out that many people will admit to preferring texting to talking to oth-
ers, as it is easier, requires less effort, and is more convenient. Furthermore, her research has
shown that when children hear adults talking less, they likewise talk less. This in turn reduces
opportunities to learn how to empathize. She argues that while online communication has its
place in society, it is time to reclaim conversation, where people put down their phones more
often and (re)learn the art and joy of spontaneously talking to each other.
On the other hand, it should be stressed that several technologies have been designed
to encourage social interaction to good effect. For example, voice assistants that come with
smart speakers, such as Amazon’s Echo devices, provide a large number of “skills” intended
to support multiple users taking part at the same time, offering the potential for families to
play together. An example skill is “Open the Magic Door,” which allows group members
(such as families) to choose their path in a story by selecting different options through the
narrative. Social interaction may be further encouraged by the affordance of a smart speaker
when placed on a surface in the home, such as a kitchen counter or mantelpiece. In particular,
its physical presence in this shared location affords joint ownership and use—similar to other
domestic devices, such as the radio or TV. This differs from other virtual voice assistants that
are found on phones or laptops that support individual use.
Figure 5.1 A family sits together, but they are all in their own digital bubbles—including the dog!
Source: Helen Sharp
ACTIVITY 5.1
Think of a time where you enjoyed meeting up with friends to catch up in a cafe. Compare
this social occasion with the experience that you have when texting with them on your smart-
phone. How are the two kinds of conversations different?
Comment
The nature of the conversations is likely to be very different with pros and cons for each.
Face-to-face conversations ebb and flow unpredictably and spontaneously from one topic to
the next. There can be much laughing, gesturing, and merriment among those taking part in the
conversation. Those present pay attention to the person speaking, and then when someone
else starts talking, all eyes move to them. There can be much intimacy through eye contact,
facial expressions, and body language, in contrast to when texters send intermittent messages
back and forth in bursts of time. Texting is also more premeditated; people decide what to say
and can review what they have written. They can edit their message or decide even not to send
it, although sometimes people press the Send button without much thought about its impact on the interlocutor, which can lead to regrets afterward.
Emoticons are commonly used as a form of expressivity to compensate for nonverbal
communication. While they can enrich a message by adding humor, affection, or a personal
touch, they are nothing like a real smile or a wink shared at a key moment in a conversation.
Another difference is that people say things and ask each other things in conversations that
they would never do via text. On the one hand, such confiding and directness may be more
engaging and enjoyable, but on the other hand, it can sometimes be embarrassing. It depends
on the context as to whether conversing face-to-face versus texting is preferable.
5.3 Face-to-Face Conversations
Talking is something that is effortless and comes naturally to most people. And yet holding a
conversation is a highly skilled collaborative achievement, having many of the qualities of a
musical ensemble. In this section we examine what makes up a conversation. Understanding
how conversations start, progress, and finish is useful when designing dialogues that take place
with chatbots, voice assistants, and other communication tools. In particular, it helps research-
ers and developers understand how natural it is, how comfortable people are when conversing
with digital agents, and the extent to which it is important to follow conversation mechanisms
that are found in human conversations. We begin by examining what happens at the beginning.
A: Hi there.
B: Hi!
C: Hi.
A: All right?
C: Good. How’s it going?
A: Fine, how are you?
C: Good.
B: OK. How’s life treating you?
Such mutual greetings are typical. A dialogue may then ensue in which the participants
take turns asking questions, giving replies, and making statements. Then, when one or more of
the participants wants to draw the conversation to a close, they do so by using either implicit or
explicit cues. An example of an implicit cue is when a participant looks at their watch, signal-
ing indirectly to the other participants that they want the conversation to draw to a close. The
other participants may choose to acknowledge this cue or carry on and ignore it. Either way,
the first participant may then offer an explicit signal, by saying, “Well, I have to go now. I got
a lot of work to do” or, “Oh dear, look at the time. I gotta run. I have to meet someone.” Fol-
lowing the acknowledgment by the other participants of such implicit and explicit signals, the
conversation draws to a close, with a farewell ritual. The different participants take turns say-
ing, “Goodbye,” “Bye,” “See you,” repeating themselves several times until they finally separate.
ACTIVITY 5.2
How do you start and end a conversation when (1) talking on the phone and (2) chatting online?
Do you use the same conversational mechanisms that are used in face-to-face conversations?
Comment
The person answering the call will initiate the conversation by saying “hello” or, more formally,
the name of their company/department. Most phones (landline and smartphones) have the facil-
ity to display the name of the caller (Caller ID) so the receiver can be more personal when answer-
ing, for example “Hello, John. How are you doing?” Phone conversations usually start with a
mutual greeting and end with a mutual farewell. In contrast, conversations that take place when
chatting online have evolved new conventions. The use of opening and ending greetings when
joining and leaving is rare; instead, most people simply start their message with what they want to
talk about and then stop when they have gotten an answer, as if in the middle of a conversation.
5 SOCIAL INTERACTION 140
Many people are now overwhelmed by the number of emails they receive each day and
find it difficult to reply to them all. This has raised the question of which conversational
techniques to use to improve the chances of getting someone to reply. For example, can the
way people compose their emails, especially the choice of opening and ending a conversation,
increase the likelihood that the recipient will respond to it? A study by Boomerang (Brendan
G, 2017) of 300,000 emails taken from mailing list archives of more than 20 different online
communities examined whether the opening or closing phrase that was used affected the
reply rate. Emails opening with "hey" (64 percent), "hello" (63 percent), or "hi" (62 percent)
got the highest rates of reply, in the region of 62–64 percent. This was higher than for emails
that opened with more formal phrases, like "Dear" (57 percent) or "Greetings" (56 percent).
Among sign-offs, "thanks" (66 percent) got the most replies, followed by "regards" (63 percent)
and "cheers" (58 percent), with "best" faring worse (51 percent). Again, emails
whose closings used a form of "thank you" got the highest rate of responses.
Hence, which conversational mechanism someone uses to address the recipient can deter-
mine whether they will reply to it.
Conversational mechanisms enable people to coordinate their talk with one another,
allowing them to know how to start and stop. Throughout a conversation, further turn-
taking rules are followed that enable people to know when to listen, when it is their cue to
speak, and when it is time for them to stop again to allow the others to speak. Sacks et al.
(1978), famous for their work on conversation analysis, describe these in terms of three
basic rules.
Rule 1 The current speaker chooses the next speaker by asking a question, inviting an
opinion, or making a request.
Rule 2 Another person decides to start speaking.
Rule 3 The current speaker continues talking.
The rules are assumed to be applied in this order, so that whenever there is an opportunity
for a change of speaker to occur, for instance when someone comes to the end of a sentence,
rule 1 is applied. If the listener to whom the question or request is addressed does not accept
the offer to take the floor, rule 2 is applied, and someone else taking part in the conversation
may take up the opportunity and offer a view on the matter. If this does not happen, then
rule 3 is applied, and the current speaker continues talking. The rules are cycled through
recursively until someone speaks again.
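For designers of chatbot or voice-assistant dialogues, the three turn-allocation rules can be sketched as a small decision function. This is an illustrative Python sketch only (the function and parameter names are invented for the example), not an implementation from the book:

```python
def next_speaker(current, addressee, volunteers):
    """Apply Sacks et al.'s three turn-allocation rules in order.

    current    -- the current speaker
    addressee  -- the participant the current speaker selected (by question,
                  invitation, or request) who accepts the floor, or None
    volunteers -- participants who self-select to speak next (Rule 2)
    """
    # Rule 1: the current speaker chooses the next speaker.
    if addressee is not None:
        return addressee
    # Rule 2: another participant decides to start speaking.
    if volunteers:
        return volunteers[0]
    # Rule 3: the current speaker continues talking.
    return current
```

Calling the function at each possible change of speaker mirrors the recursive cycling described above: rule 1 is tried first, then rule 2, and rule 3 is the fallback.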
To facilitate rule following, people use various ways of indicating how long they are
going to talk and on what topic. For example, a speaker might say right at the beginning
of his turn in the conversation that he has three things to say. A speaker may also explic-
itly request a change in speaker by saying to the listeners, “OK, that’s all I want to say
on that matter. So, what do you think?” More subtle cues to let others know that their
turn in the conversation is coming to an end include the lowering or raising of the voice
to indicate the end of a question or the use of phrases like “You know what I mean?” or
simply “OK?” Back channeling (uh-huh, mmm), body orientation (such as moving away
from or closer to someone), gaze (staring straight at someone or glancing away), and
gesturing (for example, raising of arms) are also used in different combinations when
talking in order to signal to others when someone wants to hand over or take up a turn
in the conversation.
5.3 FACE-TO-FACE CONVERSATIONS 141
Another way in which conversations are coordinated and given coherence is through the use
of adjacency pairs (Schegloff and Sacks, 1973). Utterances are assumed to come in pairs in which
the first part sets up an expectation of what is to come next and directs the way in which what
does come next is heard. For example, A may ask a question to which B responds appropriately.
A: So, shall we meet at 8:00?
B: Um, can we make it a bit later, say 8:30?
Sometimes adjacency pairs get embedded in each other, so it may take some time for a
person to get a reply to their initial request or statement.
A: So, shall we meet at 8:00?
B: Wow, look at them.
A: Yes, what a funny hairdo!
B: Um, can we make it a bit later, say 8:30?
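Because adjacency pairs can embed in this way, a dialogue system that tracks which first parts are still awaiting their second parts can model them with a stack, where the most recently opened pair is resolved first. A hypothetical sketch (the class and method names are invented for illustration; classifying real utterances as first or second parts is far harder than this assumes):

```python
class AdjacencyPairTracker:
    """Track open adjacency pairs: first parts awaiting their second part."""

    def __init__(self):
        self._open = []  # stack: the most recently opened pair sits on top

    def open_pair(self, first_part):
        """A question or request sets up an expectation of what comes next."""
        self._open.append(first_part)

    def close_pair(self, second_part):
        """A reply closes the most recently opened (innermost) pair."""
        if not self._open:
            return None
        return (self._open.pop(), second_part)


tracker = AdjacencyPairTracker()
tracker.open_pair("So, shall we meet at 8:00?")          # A opens the outer pair
tracker.open_pair("Wow, look at them.")                   # B embeds an inner pair
inner = tracker.close_pair("Yes, what a funny hairdo!")   # inner pair resolved first
outer = tracker.close_pair("Um, can we make it a bit later, say 8:30?")
```

In the embedded example above, the inner pair is closed before the outer one, which is why the initial request only receives its reply after a delay.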
For the most part, people are not aware of following conversational mechanisms and
would be hard-pressed to articulate how they can carry on a conversation. Furthermore,
people don’t necessarily abide by the rules all the time. They may interrupt each other or talk
over each other, even when the current speaker has clearly indicated a desire to hold the floor
for the next two minutes to finish an argument. Alternatively, a listener may not take up a
cue from a speaker to answer a question or take over the conversation but instead continue
to say nothing even though the speaker may be making it glaringly obvious that it is the
listener’s turn to say something. Oftentimes, a teacher will try to hand over the conversation
to a student in a seminar by staring at them and asking a specific question, only to see the
student look at the floor and say nothing. The outcome is an embarrassing silence, followed
by either the teacher or another student picking up the conversation again.
Other kinds of breakdowns in conversation arise when someone says something that is
ambiguous, and the interlocutor misinterprets it to mean something else. In such situations,
the participants will collaborate to overcome the misunderstanding by using repair mecha-
nisms. Consider the following snippet of conversation between two people:
A: Can you tell me the way to get to the Multiplex Ranger cinema?
B: Yes, you go down here for two blocks and then take a right (pointing to the right), proceed
until you get to the light, and then it’s on the left.
A: Oh, so I go along here for a couple of blocks and then take a right, and the cinema is at the
light (pointing ahead of him)?
B: No, you go down this street for a couple of blocks (gesturing more vigorously than before to
the street to the right of him while emphasizing the word this).
A: Ahhhh! I thought you meant that one: so it’s this one (pointing in the same direction as the
other person).
B: Uh-hum, yes, that’s right: this one.
Detecting breakdowns in conversation requires that the speaker and listener both pay
attention to what the other says (or does not say). Once they have understood the nature
of the failure, they can then go about repairing it. As shown in the previous example, when
the listener misunderstands what has been communicated, the speaker repeats what they
said earlier, using a stronger voice intonation and more exaggerated gestures. This allows
the speaker to repair the mistake and be more explicit, helping the listener to understand
and follow what is being said. Listeners may also signal when they don't
understand something or want further clarification by using various tokens, like “Huh?” or
“What?” (Schegloff, 1981), together with giving a puzzled look (usually frowning). This is
especially the case when the speaker says something that is vague. For example, they might
say “I want it” to their partner, without saying what it is they want. The partner may reply
using a token or, alternatively, explicitly ask, “What do you mean by it?” Nonverbal com-
munication also plays an important role in augmenting face-to-face conversation, involving
the use of facial expressions, back channeling, voice intonation, gesturing, and other kinds
of body language.
Taking turns also provides opportunities for the listener to initiate repair or request
clarification or for the speaker to detect that there is a problem and initiate repair. The lis-
tener will usually wait for the next turn in the conversation before interrupting the speaker
in order to give the speaker the chance to clarify what is being said by completing the
utterance.
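In a chatbot, the repair tokens described above can be detected with a simple check that triggers a clarification turn instead of moving the dialogue on. A minimal, hypothetical sketch (the token list and function names are invented for the example):

```python
# Tokens that listeners use to signal a breakdown and request repair
REPAIR_TOKENS = {"huh?", "what?", "sorry?", "come again?"}

def needs_repair(utterance: str) -> bool:
    """Return True if the user's utterance signals a misunderstanding."""
    return utterance.strip().lower() in REPAIR_TOKENS

def respond(utterance: str, last_reply: str) -> str:
    """On a repair token, restate the last reply rather than moving on."""
    if needs_repair(utterance):
        # A real system would rephrase or elaborate; here we simply restate.
        return "To be clearer: " + last_reply
    return "OK."  # placeholder for normal dialogue handling
```

This mirrors the human strategy of repeating an earlier utterance more explicitly once the failure has been detected.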
ACTIVITY 5.3
How do people repair breakdowns when conversing via email? Do they do the same when texting?
Comment
As people usually cannot see each other when communicating by email or text, they have to
rely on other means of repairing the conversation when things are left unsaid or are unclear.
For example, when someone proposes an ambiguous meeting time, where the date and day
given don’t match up for the month, the person receiving the message may begin their reply
by asking politely, “Did you mean this month or June?” rather than baldly stating the other
person’s error, for example, “the 13th May is not a Wednesday!”
When someone does not reply to an email or text when the sender is expecting them to
do so, it can put the sender in a quandary as to what to do next. If someone does not reply to an
email within a few days, then the sender might send them a gentle nudge message that dimin-
ishes any blame, for example, “I am not sure if you got my last email as I was using a different
account” rather than explicitly asking them why they have not answered the email they sent.
When texting, it depends on whether it is a dating, family, or business-related text that has
been sent. When starting to date, some people will deliberately wait a while before replying
to a text as a form of “playing games” and trying not to appear to be overly keen. If they
don’t reply at all, it is a generally accepted notion that they are not interested, and no further
texts should be sent. In contrast, in other contexts, double-texting has become an acceptable
social norm as a way of reminding someone, without sounding too rude, to reply. It implicitly
signals that the sender understands that the recipient overlooked the first text because
they were too busy or doing something else at the time, thereby saving face.
Emails and texts can also appear ambiguous, especially when things are left unsaid. For
example, the use of an ellipsis (…) at the end of a sentence can make it difficult to work out
what the sender intended when using it. Was it to indicate that something was best left unsaid,
that the sender was agreeing to something but their heart was not in it, or simply that the
sender did not know what to say? This email or text convention puts the onus on the receiver
to decide what is meant by the ellipsis, rather than on the sender to explain what they meant.
5.4 Remote Conversations
The telephone was invented in the nineteenth century by Alexander Graham Bell, enabling two
people to talk to one another at a distance. Since then, a number of other technologies have
been developed that support synchronous remote conversations, including videophones that were
developed in the 1960s–1970s (see Figure 5.2). In the late 1980s and 1990s, a range of “media
spaces” were the subjects of experimentation—audio, video, and computer systems were com-
bined to extend the world of desks, chairs, walls, and ceilings (Harrison, 2009). The goal was
to see whether people, distributed over space and across different time zones, could
communicate and interact with one another as if they were actually physically present.
An example of an early media space was the VideoWindow (Bellcore, 1989) that was
developed to enable people in different locations to carry on a conversation as they would do
if they were drinking coffee together in the same room (see Figure 5.3). Two lounge areas that
were 50 miles apart were connected via a 3-foot by 5-foot picture window onto which video
images of each location were projected. The large size enabled viewers to see a room of peo-
ple roughly the same size as themselves. A study of its use showed that many of the conver-
sations that took place between the remote conversants were indeed indistinguishable from
similar face-to-face interactions, with the difference being that they spoke a bit louder and
constantly talked about the video system (Kraut et al., 1990). Other research on how people
interact when using videoconferencing has shown that they tend to project themselves more,
take longer conversational turns, and interrupt each other less (O’Connaill et al., 1993).
Since this early research, videoconferencing has come of age. The availability of cheap
webcams and cameras now embedded as a default in tablets, laptops, and phones has
Figure 5.2 One of British Telecom’s early videophones
Source: British Telecommunications Plc
greatly helped make videoconferencing mainstream. There are now numerous platforms
available from which to choose, both free and commercial. Many videoconferencing apps
(for example, Zoom or Meeting Owl) also allow multiple people at different sites to connect
synchronously. To indicate who has the floor, screen effects are often used, such as enlarging
the person who is talking to take up most of the screen or highlighting their portal when they
take the floor. The quality of the video has also improved, making it possible for people to
appear more life-like in most setups. This is most noticeable in high-end telepresence rooms
that use multiple high-definition cameras with eye-tracking features and directional micro-
phones (see Figure 5.4). The effect can be to make remote people appear more present by
projecting their body movements, actions, voice, and facial expressions to the other location.
Another way of describing this development is in terms of the degree of telepresence.
By this we mean the perception of being there when physically remote. Robots, for example,
have been built with telepresence in mind to enable people to attend events and communicate
with others by controlling them remotely. Instead of sitting in front of a screen from their
Figure 5.3 Diagram of VideoWindow system in use
Figure 5.4 A telepresence room
Source: Cisco Systems, Inc.
location and seeing the remote place solely through a fixed camera there, they can look
around the remote place by controlling the "camera's" eyes, which are mounted on the robot,
and by physically moving the robot around. For example, telepresence robots have been developed
to enable children who are in a hospital to attend school by controlling their assigned
robots to roam around the classroom (Rae et al., 2015).
Telepresence robots are also being investigated to determine whether they can help peo-
ple who have developmental difficulties visit places remotely, such as museums. Currently,
several of the activities that are involved in going on such a visit, such as buying a ticket
or using public transport, are cognitively challenging, preventing them from going on such
trips. Natalie Friedman and Alex Cabral (2018) conducted a study with six participants with
developmental difficulties to see whether providing them each with a telepresence robot
would increase their physical and social self-efficacy and well-being. The participants were
taken on a remote tour of two museum exhibits and then asked to rate their experience after-
ward. Their responses were positive, suggesting that this kind of telepresence can open doors
to social experiences that were previously denied to those with disabilities.
BOX 5.1
Facebook Spaces: How Natural Is It to Socialize in a 3D World?
Facebook's vision of social networking is to immerse people in 3D, where they interact with
their friends in virtual worlds. Figure 5.5 shows what it might look like: two avatars (Jack
and Diane) are talking at a virtual table beside a lake, with mountains in the background.
Users experience this by wearing virtual reality (VR) headsets. The goal is to provide
users with a magical feeling of presence, one where they can feel as if they are together, even
though they are apart in the physical world. To make the experience appear more life-like,
users can move their avatar's arms through controls provided by the Oculus Touch.
Figure 5.5 Facebook's vision of socializing in a 3D world
Source: Facebook
While big strides have been made toward improving social presence, there is still a way
to go before the look and feel of socializing with virtual avatars becomes more like the real
thing. For a start, the facial expressions and skin tone of virtual avatars still appear to be
cartoon-like.
Similar to the term telepresence, social presence refers to the feeling of being there with
a real person in virtual reality. Specifically, it refers to the degree of awareness, feeling, and
reaction to other people who are virtually present in an online environment. The term differs
from telepresence, which refers to one party being virtually present with another party who is
present in a physical space, such as a meeting room (note that it is possible for more than one
telepresence robot to be in the same physical space). Imagine if avatars become more convincing
in their appearance to users. How many people would switch from their current use of 2D
media to catch up and chat with friends in this kind of immersive 3D Facebook page? Do you
think it would enhance the experience of how they would interact and communicate with others
remotely?
How many people would don a VR headset 10 times a day or more to teleport to meet
their friends virtually? (The average person now looks at Facebook on their phone 14 times
each day.) There is also the perennial problem of motion sickness, which 25–40 percent of
people say they have experienced in VR (Mason, 2017).

Telepresence robots have also become a regular feature at conferences, including the
ACM CHI conference, enabling people to attend who cannot travel. They are typically about
5 feet tall, with a display at the top that shows the remote person's head and a base at the
bottom with wheels that allow the robot to move forward, move backward, or turn around.
A commercial example is the Beam+ (https://suitabletech.com/). To help the robot navigate
its surroundings, two cameras are embedded in the display: one facing outward to provide
the remote person with a view of what is in front of them, and the other facing downward
to provide a view of the floor. The robots also contain a microphone and speakers to enable
the remote person to be heard and to hear what is being said locally. Remote users connect
via Wi-Fi to the remote site and steer their Beam+ robot using a web interface.
A PhD student from University College London (UCL) attended her first CHI conference
remotely, during which time she gave a demo of her research every day by talking to
the attendees using the Beam+ robot (see Figure 5.6). Aside from a time difference of eight
hours (meaning that she had to stay up through the night to attend), it was an enriching
experience for her. She met lots of new people who not only were interested in her demo but
who also learned how she felt about attending the conference remotely. Her colleagues at the
conference also dressed up her robot to make it look more like her, giving it a set of foam-cutout
arms with waving hands and a university T-shirt. However, she could not see how she
appeared to others at the conference, so local attendees took photos of her Beam+ robot to
show her how she looked. She also commented that she could not gauge the volume of her
voice, and on one occasion she accidentally set the volume control to its highest setting.
When speaking to someone, she did not realize how loud she was until another person across
the room told her that she was yelling. (The person she was talking to was too polite to tell
her to lower her voice.)
Another navigation problem that can occur is when the remote person wants to move
from one floor to another in a building. They don’t have a way of pressing the elevator but-
ton to achieve this. Instead, they have to wait patiently beside the elevator for someone to
come along to help them. They also lack awareness of others who are around them. For
example, when moving into a room to get a good spot to see a presentation, they may not
realize that they have obscured the view of people sitting behind them. It can also be a bit sur-
real when their image starts breaking up on the robot “face” as the Wi-Fi signal deteriorates.
For example, Figure 5.7 shows Johannes Schöning breaking up into a series of pixels that
makes him look a bit like David Bowie!
Despite these usability problems, a study of remote users trying a telepresence robot
for the first time at a conference found the experience to be positive (Neustaedter et al.,
2016). Many felt that it provided them with a real sense of being at the conference—quite
different from the experience of watching or listening to talks online—as happens when
connecting via a livestream or a webinar. Being able to move around the venue also ena-
bled them to see familiar faces and to bump into people during coffee breaks. For the
Figure 5.6 Susan Lechelt’s Beam+ robot given a human touch with cut-out foam arms and a uni-
versity logo T-shirt
Source: Used courtesy of Susan Lechelt
conference attendees, the response was also largely positive, enabling them to chat with
those who could not make the conference. However, sometimes the robot's physical presence
obstructed their view of a speaker in a room, which could be frustrating. It is difficult
to know how to tell a telepresence robot discreetly to move out of the way while a talk is in
progress, and equally difficult for the remote person to know where to move to once they
have been asked.
Figure 5.7 The image of Johannes Schöning breaking up on the Beam+ robot video display when
the Wi-Fi signal deteriorated
Source: Yvonne Rogers
ACTIVITY 5.4
Watch these two videos about Beam and Cisco’s telepresence. How does the experi-
ence of being at a meeting using a telepresence robot compare with using a telepres-
ence videoconferencing system?
Videos
BeamPro overview of how robotic telepresence works—https://youtu.be/SQCigphfSvc
Cisco TelePresence Room EndPoints MX700 and MX800—https://youtu.be/52lgl0kh0FI
Comment
The BeamPro allows the remote person to move around a workplace as well as sit in on meet-
ings. They can also have one-on-one conversations with someone at their desk. When moving
around, the remote individual can even bump into other remote workers, in the corridor, for
example, who are also using a BeamPro. Hence, it supports a range of informal and formal
social interactions. Using a BeamPro also allows someone to feel as if they are at work while
still being at home.
In contrast, the Cisco telepresence room has been designed specifically to support meet-
ings between small remote groups to make them feel more natural. When someone is speak-
ing, the camera zooms in on them to have them fill the screen. From the video, it appears
effortless and allows the remote groups to focus on their meeting rather than worry about
the technology. However, it offers limited flexibility for other kinds of interaction, such as
conducting one-on-one meetings.
BOX 5.2
Simulating Human Mirroring Through Artificial Smiles
A common phenomenon that occurs during face-to-face conversations is mirroring, where
people mimic each other’s facial expressions, gestures, or body movements. Have you ever
noticed that when you put your hands behind your head, yawn, or rub your face during a
conversation with someone that they follow suit? These kinds of mimicking behaviors are
assumed to induce empathy and closeness between those conversing (Stel and Vonk, 2010).
The more people engage in mimicry, the more they view each other as being similar, which
in turn increases the rapport between them (Valdesolo and DeSteno, 2011). Mimicry doesn't
always occur during a conversation, however; sometimes it requires a conscious effort, and
sometimes it simply fails to happen. Might the use of technology increase its occurrence in
conversations?
One way would be to use special video effects. Suppose that an artificial smile could be
superimposed on the video face of someone to make them appear to smile. What might hap-
pen? Would they both begin to smile and in doing so feel closer to each other? To investigate
this possibility of simulating smiling mimicry, Keita Suzuki et al. (2017) developed a technique
called FaceShare. The system was developed so that it could deform the image of someone’s
face to make it appear to smile—even though they were not—whenever their partner’s face
began smiling. The mimicry method used 3D modeling of key feature points of the face,
including the contours, eyes, nose, and mouth to detect where to place the smile. The smile
was created by raising the lower eyelids and both ends of the mouth in conjunction with the
cheeks. The findings from this research showed that FaceShare was effective at making con-
versations appear smoother and that the pseudo smiles appearing on someone’s video face
were judged to be natural.
5.5 Co-presence
Together with telepresence, there has been much interest in enhancing co-presence, that is,
supporting people in their activities when interacting in the same physical space. A number
of technologies have been developed to enable more than one person to use them at the same
time. The motivation is to enable co-located groups to collaborate more effectively when
working, learning, and socializing. Examples of commercial products that support this kind
of parallel interaction are Smartboards and Surfaces, which use multitouch, and Kinect, which
uses gesture and object recognition. To understand how effective they are, it is important to
consider the coordination and awareness mechanisms already in use by people in face-to-face
interactions and then to see how these have been adapted or replaced by the technology.
5.5.1 Physical Coordination
When people are working closely together, they talk to each other, issuing commands and
letting others know how they are progressing. For example, when two or more people are
collaborating, as when moving a piano, they shout instructions to each other, like “Down
a bit, left a touch, now go straight forward,” to coordinate their actions. A lot of nonverbal
communication is also used, including nods, shakes, winks, glances, and hand-raising in com-
bination with such coordination talk in order to emphasize and sometimes replace it.
For time-critical and routinized collaborative activities, especially where it is difficult
to hear others because of the physical conditions, people frequently use gestures (although
radio-controlled communication systems may also be used). Various types of hand signals
have evolved, with their own set of standardized syntax and semantics. For example, the arm
and baton movements of a conductor coordinate the different players in an orchestra, while
the arm and orange baton movements of ground personnel at an airport signal to a pilot how
to bring the plane into its assigned gate. Universal gestures, such as beckoning, waving, and
halting hand movement, are also used by people in their everyday settings.
The use of physical objects, such as wands and batons, can also facilitate coordination.
Group members can use them as external thinking props to explain a principle, an idea, or a
plan to the others (Brereton and McGarry, 2000). In particular, the act of waving or holding
up a physical object in front of others is very effective at commanding attention. The per-
sistence and ability to manipulate physical artifacts may also result in more options being
explored in a group setting (Fernaeus and Tholander, 2006). They can help collaborators
gain a better overview of the group activity and increase awareness of others’ activities.
5.5.2 Awareness
Awareness involves knowing who is around, what is happening, and who is talking with whom
(Dourish and Bly, 1992). For example, when attending a party, people move around the physi-
cal space, observing what is going on and who is talking to whom, eavesdropping on oth-
ers’ conversations, and passing on gossip to others. A specific kind of awareness is peripheral
awareness. This refers to a person’s ability to maintain and constantly update a sense of what
is going on in the physical and social context, by keeping an eye on what is happening in the
periphery of their vision. This might include noticing whether people are in a good or bad
mood by the way they are talking, how fast the drink and food is being consumed, who has
entered or left the room, how long someone has been absent, and whether the lonely person in
the corner is finally talking to someone—all while they are having a conversation with someone
else. The combination of direct observations and peripheral monitoring keeps people informed
and updated on what is happening in the world.
Another form of awareness that has been studied is situational awareness. This refers
to being aware of what is happening around you in order to understand how information,
events, and your own actions will affect ongoing and future events. Having good situational
awareness is critical in technology-rich work domains, such as air traffic control or an oper-
ating theater, where it is necessary to keep abreast of complex and continuously changing
information.
People who work closely together also develop various strategies for coordinating their
work, based on an up-to-date awareness of what the others are doing. This is especially so for
interdependent tasks, where the outcome of one person’s activity is needed for others to be
able to carry out their tasks. For example, when putting on a show, the performers constantly
monitor one another in order to coordinate their performance efficiently.
The metaphorical expression close-knit teams exemplifies this way of collaborating. People
become highly skilled in reading and tracking what others are doing and the information to
which they are paying attention.
A classic study of this phenomenon is of two controllers working together in a control
room in the London Underground subway system (Heath and Luff, 1992). An overriding
observation was that the actions of one controller were tied closely to what the other was
doing. One of the controllers (controller A) was responsible for the movement of trains on
the line, while the other (controller B) was responsible for providing information to passen-
gers about the current service. In many instances, it was found that controller B overheard
what controller A was saying and doing and acted accordingly, even though controller A had
not said anything explicitly to him. For example, on overhearing controller A discussing a
problem with a train driver over the in-cab intercom system, controller B inferred from the
conversation that there was going to be a disruption in the service and so started to announce
this to the passengers on the platform before controller A had even finished talking with the
train driver. At other times, the two controllers kept a lookout for each other, monitoring
the environment for actions and events that they might not have noticed but that could have
been important for them to know about so that they could act appropriately.
ACTIVITY 5.5
What do you think happens when one person in a close-knit team does not see or hear some-
thing, or misunderstands what has been said, while the others in the group assume that person
has seen, heard, or understood what has been said?
Comment
The person who has noticed that someone has not acted in the manner expected may use one of a number of subtle repair mechanisms, say, coughing or glancing at something that needs to be attended to. If this doesn't work, they may then resort to stating explicitly aloud what had previously been signaled implicitly. Conversely, the unaware person may wonder why the event hasn't happened and, likewise, look over at the other team members, cough to get their attention, or explicitly ask them a question. The kind of repair mechanism employed at a given moment will depend on a number of factors, including the relationship among the participants (for instance, whether one is more senior than the others, which determines who can ask what), the perceived fault or responsibility for the breakdown, and the severity of the outcome of not acting there and then on the new information.
5 SOCIAL INTERACTION
5.5.3 Shareable Interfaces
A number of technologies have been designed to capitalize on existing forms of coordination and awareness mechanisms. These include whiteboards, large touch screens, and multitouch tables that enable groups of people to collaborate while interacting at the same time with content on the surfaces. Several studies have investigated whether different arrangements of shared technologies can help co-located people work better together (for example, see Müller-Tomfelde, 2010). An assumption is that shareable interfaces provide more opportunities for flexible kinds of collaboration compared with single-user interfaces, through enabling co-located users to interact simultaneously with digital content. The use of fingers or pens as input on a public display is observable by others, increasing opportunities for building situational and peripheral awareness. The shareable surfaces are also considered to be more natural than other technologies, enticing people to touch them without feeling intimidated or embarrassed by the consequences of their actions. For example, small groups found it more comfortable working together around a tabletop compared with sitting in front of a PC or standing in a line in front of a vertical display (Rogers and Lindley, 2004).
BOX 5.3
Playing Together in the Same Place
Augmented reality (AR) sandboxes have been developed for museum visitors to interact with a landscape consisting of mountains, valleys, and rivers. The sand is real, while the landscape is virtual. Visitors can sculpt the sand into different-shaped contours that change their appearance to look like a river or land, depending on the height of the sand piles. Figure 5.8 shows an AR sandbox that was installed at the V&A museum in London. On observing two young children playing at the sandbox, this author overheard one say to the other while flattening a pile of sand, "Let's turn this land into sea." The other replied, "OK, but let's make an island on that." They continued to talk about how and why they should change their landscape. It was a pleasure to watch this dovetailing of explaining and doing.
The physical properties of the sand, together with the real-time changing superimposed landscape, provided a space for children (and adults) to collaborate in creative ways.
5.5 CO-PRESENCE
Often in meetings, some people dominate while others say very little. While this is OK
in certain settings, in others it is considered more desirable for everyone to have a say. Is it
possible to design shareable technologies so that people can participate around them more
equally? Much research has been conducted to investigate whether this is possible. Of primary
importance is whether the interface invites people to select, add, manipulate, or remove digi-
tal content from the displays and devices. A user study showed that a tabletop that allowed
group members to add digital content by using physical tokens resulted in more equitable
participation than if only digital input was allowed via touching icons and menus at the
tabletop (Rogers et al., 2009). This suggests that it was easier for people who are normally
shy in groups to contribute to the task. Moreover, people who spoke the least were found to
make the largest contribution to the design task at the tabletop, in terms of selecting, adding,
moving, and removing options. This reveals how changing the way people can interact with a
surface can affect group participation. It shows that it is possible for more reticent members
to contribute without feeling under pressure to speak more.
Figure 5.8 Visitors creating together using an Augmented Reality Sandbox at the V&A
Museum in London
Source: Helen Sharp
Experimentation with real-time feedback presented via ambient displays has also been
shown to provide a new form of awareness for co-located groups. LEDs glowing in tabletops
and abstract visualizations on handheld and wall displays have been designed to represent
how different group members are performing, such as turn-taking. The assumption is that
this kind of real-time feedback can promote self and group regulation and in so doing modify
group members’ contributions to make them more equitable. For example, the Reflect Table
was designed based on this assumption (Bachour et al., 2008). The table monitors and analyzes
ongoing conversations using embedded microphones in front of each person and represents
this in the form of increasing numbers of colored LEDs (see Figure 5.9). A study investigated
whether students became more aware of how much they were speaking during a group meet-
ing when their relative levels of talk were displayed in this manner and, if so, whether they
regulated their levels of participation more effectively. In other words, would the girl in the
bottom right reduce her contributions (as she clearly has been talking the most) while the boy
in the bottom left increase his (as he has been talking the least)? The findings were mixed:
Some participants changed their level to match the levels of others, while others became frus-
trated and chose simply to ignore the LEDs. Specifically, those who spoke the most changed
their behavior the most (that is, reduced their level) while those who spoke the least changed theirs
the least (in other words, did not increase their level). Another finding was that participants
who believed that it was beneficial to contribute equally to the conversation took more
notice of the LEDs and regulated their conversation level accordingly. For example, one
participant said that she “refrained from talking to avoid having a lot more lights than the
others” (Bachour et al., 2010). Conversely, participants who thought it was not important
took less notice. How do you think you would react?
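The feedback loop underlying a Reflect-style table can be sketched in a few lines: accumulate each person's talk time from their microphone and map their share of the conversation onto a number of lit LEDs. The sketch below is illustrative only; the names, the ten-LED strip, and the proportional mapping are assumptions rather than details from Bachour et al.

```python
# Sketch of a Reflect-style ambient feedback loop (illustrative only).
# Each participant's cumulative talk time, measured via their microphone,
# is mapped to a number of lit LEDs on their strip of the tabletop.

MAX_LEDS = 10  # assumed strip length per person

def leds_to_light(talk_seconds: dict[str, float]) -> dict[str, int]:
    """Map each speaker's share of total talk time to an LED count."""
    total = sum(talk_seconds.values())
    if total == 0:
        return {name: 0 for name in talk_seconds}
    return {
        name: round(MAX_LEDS * seconds / total)
        for name, seconds in talk_seconds.items()
    }

# Example: one person has dominated the conversation so far.
talk = {"ana": 300.0, "ben": 60.0, "chloe": 40.0}
print(leds_to_light(talk))  # ana's strip lights up far more than the others'
```

Whether such a proportional mapping nudges people toward equitable turn-taking is, as the study showed, another matter entirely.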
An implication from the various user studies on co-located collaboration around tab-
letops is that designing shareable interfaces to encourage more equitable participation isn’t
straightforward. Providing explicit real-time feedback on how much someone is speaking
in a group may be a good way of showing everyone who is talking too much, but it may be
intimidating for those who are talking too little. Allowing discreet and accessible ways for
adding and manipulating content to an ongoing collaborative task at a shareable surface may be more effective at encouraging greater participation from people who normally find it difficult, or who are simply unable, to contribute verbally in group settings (for example, those on the autism spectrum, those who stutter, or those who are shy or non-native speakers).
Figure 5.9 The Reflect Table
Source: Used courtesy of Pierre Dillenbourg
How best to represent the activity of online social networks in terms of who is taking
part has also been the subject of much research. A design principle that has been influential is
social translucence (Erickson and Kellogg, 2000). This refers to the importance of designing
communication systems to enable participants and their activities to be visible to one another.
This idea was very much behind the early communication tool, Babble, developed at IBM
by David Smith (Erickson et al., 1999), which provided a dynamic visualization of the par-
ticipants in an ongoing chat room. A large 2D circle was depicted using colored marbles on
each user’s monitor. Marbles inside the circle conveyed those individuals active in the current
conversation. Marbles outside the circle showed users involved in other conversations. The
more active a participant was in the conversation, the more the corresponding marble moved
toward the center of the circle. Conversely, the less engaged a person was in the ongoing con-
versation, the more the marble moved toward the periphery of the circle.
Since this early work on visualizing social interactions, there have been a number of
virtual spaces developed that provide awareness about what people are doing, where they
are, and their availability, with the intention of helping them feel more connected. Work-
ing in remote teams can be isolating, especially if they rarely get to see their colleagues face
to face. When teams are not co-located, they also miss out on in-person collaboration and
valuable informal conversations that build team alignment. This is where the concept of
the “online office” comes in. For example, Sococo (https://www.sococo.com/) is an online
office platform that is bridging the gap between remote and co-located work. It uses the
spatial metaphor of a floor plan of an office to show where people are situated, who is in
a meeting, and who is chatting with whom. The Sococo map (see Figure 5.10) provides a bird's-eye view of a team's online office, giving everyone at-a-glance insight into teammates' availability and what's happening organizationally. Sococo also provides the sense of presence and virtual "movement" that you get in a physical office: anyone can pop into a room, turn on their microphone and camera, and meet with another member of their team face to face. Teams can work through projects, get feedback from management, and collaborate ad hoc in their online office regardless of physical location. This allows organizations to take advantage of the benefits of the distributed future of work while still providing a central, online office for their teams.
Figure 5.10 Sococo floor plan of a virtual office, showing who is where and who is meeting with whom. Callout labels highlight its features: searching for colleagues across the workspace to see their status or chat instantly; seeing a team in a meeting, sharing screens and viewing documents in a room; naming a room to reflect the topic of a meeting in progress; knocking on a door to join a meeting or just popping in; sending a link for a guest to join you in your Sococo office; blinking avatars showing colleagues collaborating; sharing documents or links on a desk for immediate access by anyone in the room; instantly "getting" colleagues to collaborate spontaneously; and scaling instantly with unlimited floors in your Sococo space.
Source: Used courtesy of Leeann Brumby
BOX 5.4
Can Technologies Be Designed to Help People Break the Ice and
Socialize?
Have you ever found yourself at a party, wedding, conference, or other social gathering, stand-
ing awkwardly by yourself, not knowing who to talk to or what to talk about? Social embar-
rassment and self-consciousness affect most of us at such moments, and such feelings are most
acute when one is a newcomer and by oneself, such as a first-time attendee at a conference.
How can conversation initiation be made easier and less awkward for people who do not
know each other?
A number of mechanisms have been employed by organizers of social events, such
as asking old-timers to act as mentors and the holding of various kinds of ice-breaking
activities. Badge-wearing, the plying of drink and food, and introductions by others are also
common ploys. While many of these methods can help, engaging in ice-breaking activities
requires people to act in a way that is different from the way they normally socialize and
which they may find equally uncomfortable or painful to do. They often require people to
agree to join in a collaborative game, which they may find embarrassing. This can be exac-
erbated by the fact that once people have agreed to take part, it is difficult for them to drop
out because of the perceived consequences that it will have for the others and themselves
(such as being seen by the others as a spoilsport or party-pooper). Having had one such
embarrassing experience, most people will shy away from any further kinds of ice-breaking
activities.
An alternative approach is to design a physical space where people can enter and exit a
conversation with a stranger in subtler ways, that is, one where people do not feel threatened
or embarrassed and that does not require a high level of commitment. The classic Opinionizer
system was designed along these lines, with the goal of encouraging people in an informal
gathering to share their opinions visually and anonymously (Brignull and Rogers, 2003). The
collective creation of opinions via a public display was intended to provide a talking point
for the people standing beside it. Users submitted their opinions by typing them in at a public
keyboard. To add color and personality to their opinions, a selection of small cartoon avatars
and speech bubbles were available. The screen was also divided into four labeled quadrants
representing different backgrounds, such as techie, softie, designer, or student, to provide a
factor on which people could comment (see Figure 5.11).
When the Opinionizer was placed in various social gatherings, a honey-pot effect was observed: an increase in the number of people moving into the area around the Opinionizer created a sociable buzz in its immediate vicinity. Furthermore, by standing in this space and showing an interest, for example by visibly facing the screen or reading the text, people gave off a tacit signal to others that they were open to discussion and interested in meeting new people.
A range of other ambient-based displays have been developed and placed in physical work settings with the purpose of encouraging people to socialize and talk more with each other. For example, the Break-Time Barometer was designed to persuade people to come out of their offices for a break to meet others they might not talk with otherwise (Kirkham et al., 2013). An ambient display, based on a clock metaphor, shows how many people are currently in the common room; if there are people present, it also sends an alert that it would be a good time to join them for a break. While the system nudged some people to go for a break in the staff room, it also had the opposite effect on others, who used it to determine when breaks weren't happening so that they could take a break without their colleagues being around for company.
There are now a number of commercial ice-breaking phone apps available that use arti-
ficial intelligence (AI) matchmaking algorithms to determine which preferences and interests
shared among people make them suitable conversational partners. Wearable technology is
also being developed as a new form of digital ice-breaker. Limbic Media (https://limbicmedia
.ca/social-wearables/), for example, has developed a novel pendant device colored with LED
lights for this purpose. When two people touch their pendants together, both pendants vibrate. This coming-together action can break the ice in a fun and playful way.
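A minimal form of such interest-based matchmaking can be sketched with a set-overlap measure. The Jaccard score used below is a common similarity choice, but the function names, data, and scoring are illustrative assumptions, not how any particular app works.

```python
# Hedged sketch of interest-based ice-breaker matchmaking: rank candidate
# conversation partners by the Jaccard similarity of declared interests.
# Names and interest sets are made up for illustration.

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap of two interest sets, from 0.0 (nothing shared) to 1.0."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def best_partner(person: set[str], others: dict[str, set[str]]) -> str:
    """Pick the attendee whose interests overlap most with `person`."""
    return max(others, key=lambda name: jaccard(person, others[name]))

me = {"hiking", "hci", "jazz"}
attendees = {
    "sam": {"jazz", "cooking"},
    "ira": {"hci", "hiking", "chess"},
}
print(best_partner(me, attendees))  # ira shares two interests; sam only one
```

Commercial apps presumably use far richer signals than declared interests, but the underlying idea of surfacing common ground as a conversation opener is the same.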
Figure 5.11 (a) The Opinionizer interface and (b) a photo of it being used at a book launch party
Source: Helen Sharp
This video features Limbic Media’s novel type of social wearable being used at
the 2017 BCT Tech Summit: https://vimeo.com/216045804.
5.6 Social Engagement
Social engagement refers to participation in the activities of a social group (Anderson and
Binstock, 2012). Often it involves some form of social exchange where people give or receive
something from others. Another defining aspect is that it is voluntary and unpaid. Increas-
ingly, different forms of social engagement are mediated by the Internet. For example, there
are many websites now that support pro-social behavior by offering activities intended to
help others. One of the first websites of this ilk was GoodGym (www.goodgym.org/), which
connects runners with isolated older people. While out running, the runners stop for a chat
with an older person who has signed up to the service, and the runner helps them with
their chores. The motivation is to help others in need while getting fit. There is no obligation, and anyone is welcome to join. Another such website is The Conservation Volunteers (https://www.tcv.org.uk/), which brings together people who want to help out with existing conservation activities. By bringing different people together, such sites also promote social cohesion.
Not only has the Internet enabled local people who would otherwise never have met to do so, it has also proven to be a powerful way of connecting millions of people with a common interest in ways unimaginable before. An example is retweeting a photo that resonates with a large
crowd who finds it amusing and wants to pass it on further. For example, in 2014, the most
retweeted selfie was one taken by Ellen DeGeneres (an American comedian and television
host) at the Oscar Academy Awards of her in front of a star-studded, smiling group of actors
and friends. It was retweeted more than 2 million times (more than three-quarters of a mil-
lion in the first half hour of being tweeted)—far exceeding the one taken by Barack Obama
at Nelson Mandela’s funeral the previous year.
There has even been an “epic Twitter battle.” A teenager from Nevada, Carter Wilker-
son, asked Wendy’s fast-food restaurant how many retweets were needed for him to receive
a whole year’s supply of free chicken nuggets. The restaurant replied “18 million” (see
Figure 5.12). From that moment on, his quest went viral, with his tweet being retweeted
more than 2 million times. Ellen’s record was suddenly put in jeopardy, and she intervened,
putting out a series of requests on her show for people to continue to retweet her tweet so
her record would be upheld. Carter, however, surpassed her record at the 3.5 million mark.
During the Twitter battle, he used his newly found fame to create a website that sold T-shirts
promoting his chicken nugget challenge. He then donated all of the proceeds from the sales
toward a charity that was close to his heart. The restaurant also gave him a year’s supply of
free chicken nuggets—even though he didn’t reach the target of 18 million. Not only that, it
also donated $100,000 to the same charity in honor of Carter achieving a new record. It was
a win-win situation (except maybe for Ellen).
Another way that Twitter connects people rapidly and at scale is when unexpected events
and disasters happen. Those who have witnessed something unusual may upload an image
that they have taken of it or retweet what others have posted to inform others about it. Those
who like to reach out in this way are sometimes called digital volunteers. For example, while
writing this chapter, there was a massive thunderstorm overhead that was very dramatic.
I checked out the Twitter hashtag #hove (I was in the United Kingdom) and found that
hundreds of people had uploaded photos of the hailstones, flooding, and minute-by-minute
updates of how public transport and traffic were being affected. It was easy to get a sense
of the scale of the storm before it was picked up by the official media channels, which then
used some of the photos and quotes from Twitter in their coverage (see Figure 5.13). Relying
on Twitter for breaking news has increasingly become the norm. When word came of a huge
explosion in San Bruno, California, the chief of the Federal Emergency Management Agency
in the United States logged on to Twitter and searched for the word explosion. Based on the
tweets coming from that area, he was able to discern that the gas explosion and ensuing fire
was a localized event that would not spread to other communities. He noted how he got bet-
ter situational awareness more quickly from reading Twitter than by hearing about it from
official sources.
Clearly, the immediacy and global reach of Twitter provides an effective form of com-
munication, providing first responders and those living in the affected areas with up-to-the-
minute information about how a wildfire, storm, or gas plume is spreading. However, the
reliability of the tweeted information can sometimes be a problem. For example, some people
end up obsessively checking and posting, sometimes without realizing that this can start or
fuel rumors by adding news that is old or incorrect. Regulars can go into a frenzy, constantly
adding new tweets about an event, as witnessed when an impending flood was announced
(Starbird et al., 2010). While such citizen-led dissemination and retweeting of information
from disparate sources is well intentioned, it can also flood the Twitter streams, making it
difficult to know what is outdated, what is accurate, and what is hearsay.
Figure 5.12 Carter Wilkerson’s tweet that went viral
Figure 5.13 A weather warning photo tweeted and retweeted about a severe storm in Hove,
United Kingdom
BOX 5.5
Leveraging Citizen Science and Engagement Through Technology
The growth and success of citizen science and citizen engagement have been made possible by the Internet and mobile technology, galvanizing and coordinating the efforts of millions
of people throughout the world. Websites, smartphone apps, and social media have been
instrumental in leveraging the reach and impact of a diversity of citizen science projects
across time and geographical zones (Preece et al., 2018). Citizen science involves local peo-
ple helping scientists carry out a scientific project at scale. Currently, thousands of such
projects have been set up all over the world, whereby volunteers help out in a number
of research areas, including biodiversity, air quality, astronomy, and environmental issues.
They do so by engaging in scientific activities such as monitoring plants and wildlife, col-
lecting air and water samples, categorizing galaxies, and analyzing DNA sequences. Citizen
engagement involves people helping governments, rather than scientists, to improve pub-
lic services and policies in their communities. Examples include setting up and oversee-
ing a website that offers local services for community disasters and creating an emergency
response team when a disaster occurs.
Why would anyone want to volunteer their time for the benefit of science or government?
Many people want to learn more about a domain, while others want to be recognized for their
contributions (Rotman et al., 2014). Some citizen science apps have developed online mecha-
nisms to support this. For example, iNaturalist (https://www.inaturalist.org/) enables volun-
teers to comment on and help classify others’ contributions.
DILEMMA
Is It OK to Talk with a Dead Person Using a Chatbot?
Eugenia Kuyda, an AI researcher, lost a close friend in a car accident. He was only in his 20s.
She did not want to lose his memory, so she gathered all of the texts he had sent over the
course of his life and made a chatbot from them. The chatbot is programmed to respond
automatically to text messages so that Eugenia can talk to her friend as if he were still alive.
It responds to her questions using his own words.
Do you think this kind of interaction is creepy or comforting to someone who is grieving?
Is it disrespectful of the dead, especially if the dead person has not given their consent? What
if the friend had agreed to having their texts mashed up in this way in a “pre-death digital
agreement”? Would that be more socially acceptable?
In-Depth Activity
The goal of this activity is to analyze how collaboration, coordination, and communication
are supported in online video games involving multiple players.
The video game Fortnite arrived in 2017 to much acclaim. It is an action game designed
to encourage teamwork, cooperation, and communication. Download the game from an
app store (it is free) and try it. You can also watch an introductory video about it at
https://youtu.be/_U2JbFhUPX8.
Answer the following questions.
1. Social issues
(a) What is the goal of the game?
(b) What kinds of conversations are supported?
(c) How is awareness of the others in the game supported?
(d) What kinds of social protocols and conventions are used?
(e) What types of awareness information are provided?
(f) Does the mode of communication and interaction seem natural or awkward?
(g) How do players coordinate their actions in the game?
2. Interaction design issues
(a) What form of interaction and communication is supported, for instance, text, audio,
and/or video?
(b) What other visualizations are included? What information do they convey?
(c) How do users switch between different modes of interaction, for example, exploring
and chatting? Is the switch seamless?
(d) Are there any social phenomena that occur specific to the context of the game that
wouldn’t happen in face-to-face settings?
3. Design issues
• What other features might you include in the game to improve communication, coordi-
nation, and collaboration?
Summary
Human beings are inherently social. People will always need to collaborate, coordinate, and communicate with one another, and the diverse range of applications, web-based services, and technologies that have emerged enable them to do so in more extensive and diverse ways. In this chapter, we looked at some core aspects of sociality, namely, communication and collaboration. We examined the main social mechanisms that people use in different conversational settings when interacting face to face and at a distance. A number of collaborative and telepresence technologies designed to support and extend these mechanisms were discussed, highlighting core interaction design concerns.
Key Points
• Social interaction is central to our everyday lives.
• Social mechanisms have evolved in face-to-face and remote contexts to facilitate conversation, coordination, and awareness.
• Talk and the way it is managed are integral to coordinating social interaction.
• Many kinds of technologies have been developed to enable people to communicate remotely with one another.
• Keeping aware of what others are doing and letting others know what you are doing are important aspects of collaboration and socializing.
• Social media has brought about significant changes in the way people keep in touch and manage their social lives.
Further Reading
boyd, d. (2014) It's Complicated: The Social Lives of Networked Teens. Yale. Based on a series of in-depth interviews with a number of teenagers, danah boyd offers new insights into how teenagers across the United States, who have only ever grown up in a world of apps and media, navigate, use, and appropriate them to grow up and develop their identities. A number of topics are covered that are central to what it means to grow up in a networked world, including bullying, addiction, expressiveness, privacy, and inequality. It is insightful and covers much ground.
CRUMLISH, C. and MALONE, E. (2009) Designing Social Interfaces. O'Reilly. This is a collection of design patterns, principles, and advice for designing social websites, such as online communities.
GARDNER, H. and DAVIS, K. (2013) The App Generation: How Today's Youth Navigate Identity, Intimacy, and Imagination in a Digital World. Yale. This book explores the impact of apps on the young generation, examining how they affect their identity, intimacy, and imagination. It focuses on what it means to be app-dependent versus app-empowered.
ROBINSON, S., MARSDEN, G. and JONES, M. (2015) There's Not an App for That: Mobile User Experience Design for Life. Elsevier. This book offers a fresh approach for designers, students, and researchers to dare to think differently by moving away from the default framing of technological design in terms of yet another "looking down" app. It asks the reader instead to look up and around them, to be inspired by how we actually live our lives when "out there" app-less. The authors also explore what it means to design technologies to be more mindful.
TURKLE, S. (2016) Reclaiming Conversation: The Power of Talk in a Digital Age. Penguin. Sherry Turkle has written extensively about the positive and negative effects of digital technology on everyday lives: at work, at home, at school, and in relationships. This book is a very persuasive warning about the negative impacts of perpetual use of smartphones. Her main premise is that as people, both adults and children, become increasingly glued to their phones instead of talking to one another, they lose the skill of empathy. She argues that we need to reclaim conversation to relearn empathy, friendship, and creativity.
Chapter 6
E M O T I O N A L I N T E R A C T I O N
6.1 Introduction
6.2 Emotions and the User Experience
6.3 Expressive Interfaces and Emotional Design
6.4 Annoying Interfaces
6.5 Affective Computing and Emotional AI
6.6 Persuasive Technologies and Behavioral Change
6.7 Anthropomorphism
Objectives
The main goals of this chapter are to accomplish the following:
• Explain how our emotions relate to behavior and the user experience.
• Explain what expressive and annoying interfaces are and the effects they can have on people.
• Introduce the area of emotion recognition and how it is used.
• Describe how technologies can be designed to change people’s behavior.
• Provide an overview of how anthropomorphism has been applied in interaction design.
6.1 Introduction
When you receive some bad news, how does it affect you? Do you feel upset, sad, angry, or
annoyed—or all of these? Does it put you in a bad mood for the rest of the day? How might
technology help? Imagine a wearable technology that could detect how you were feeling and
provide a certain kind of information and suggestions geared toward helping to improve
your mood, especially if it detected that you were having a real downer of a day. Would you
find such a device helpful, or would you find it unnerving that a machine was trying to cheer
you up? Designing technology to detect and recognize someone’s emotions automatically
from sensing aspects of their facial expressions, body movements, gestures, and so forth,
is a growing area of research often called emotional AI or affective computing. There are
many potential applications for using automatic emotion sensing, other than those intended
to cheer someone up, including health, retail, driving, and education. These can be used to
determine if someone is happy, angry, bored, frustrated, and so on, in order to trigger an
appropriate technology intervention, such as making a suggestion to them to stop and reflect
or recommending a particular activity for them to do.
In addition, emotional design is a growing area relating to the design of technology
that can engender desired emotional states, for example, apps that enable people to reflect
on their emotions, moods, and feelings. The focus is on how to design interactive prod-
ucts to evoke certain kinds of emotional responses in people. It also examines why people
become emotionally attached to certain products (for instance, virtual pets), how social
robots might help reduce loneliness, and how to change human behavior through the use of
emotive feedback.
In this chapter, we use the broader term emotional interaction to cover both emotional
design and affective computing. We begin by explaining what emotions
are and how they shape behavior and everyday experiences. We then consider how and
whether an interface’s appearance affects usability and the user experience. In particu-
lar, we look at how expressive and persuasive interfaces can change people’s emotions or
behaviors. How technology can detect human emotions using voice and facial recognition
is then covered. Finally, the way anthropomorphism has been used in interaction design is
discussed.
6.2 Emotions and the User Experience
Consider the different emotions one experiences throughout a common everyday activity—
shopping online for a product, such as a new laptop, a sofa, or a vacation. First, there is the
realization of needing or wanting one and then the desire and anticipation of purchasing it.
This is followed by the joy or frustration of finding out more about what products are avail-
able and deciding which to choose from potentially hundreds or even thousands of them by
visiting numerous websites, such as comparison sites, reviews, recommendations, and social
media sites. This entails matching what is available with what you like or need and whether
you can afford it. The thrill of deciding on a purchase may be quickly followed by the shock
of how much it costs and the disappointment that it is too expensive. The process of having
to revise your decision may be accompanied by annoyance if you discover that nothing is as
good as the first choice. It can become frustrating to keep looking and revisiting sites. Finally,
when you make your decision, a sense of relief is often experienced. Then there is the process
of clicking through the various options (such as color, size, warranty, and so forth) until the
online payment form pops up. This can be tedious, and the requirement to fill in the many
details raises the possibility of making a mistake. Finally, when the order is complete, you
can let out a big sigh. However, doubts can start to creep in—maybe the other one was better
after all… .
This rollercoaster set of emotions is what many of us experience when shopping online,
especially for big-ticket items where there is a myriad of options from which to choose and
where you want to be sure that you make the right choice.
Emotional interaction is concerned with what makes people feel happy, sad, annoyed,
anxious, frustrated, motivated, delirious, and so on, and then using this knowledge to inform
the design of different aspects of the user experience. However, it is not straightforward.
Should an interface be designed to try to keep a person happy when it detects that they are
smiling, or should it try to change them from being in a negative mood to a positive one
when it detects that they are scowling? Having detected an emotional state, a decision has to
be made as to what or how to present information to the user. Should it try to “smile” back
through using various interface elements, such as emojis, feedback, and icons? How expressive
should it be? It depends on whether a given emotional state is viewed as desirable for the
user experience or the task at hand. A happy state of mind might be considered optimal for
when someone goes to shop online if it is assumed that this will make them more willing to
make a purchase.
ACTIVITY 6.1
Have you seen one of the terminals shown in Figure 6.1 at an airport after you have gone
through security? Were you drawn toward it, and did you respond? If so, which smiley button
did you press?
Comment
The act of pressing one of the buttons can be very satisfying—providing a moment for you to
reflect upon your experience. It can even be pleasurable to express how you feel in this physical
manner. Happyornot designed the feedback terminals that are now used in many airports
throughout the world. The affordances of the large, colorful, slightly raised buttons laid out in
a semicircle, with distinct smileys, make it easy to know what is being asked of the passerby,
enabling them to select among feeling happy, angry, or something in between.
The data collected from the button presses provides statistics for an airport as to when
and where people are happiest and angriest after going through security. Happyornot has
found that the terminals also make travelers feel valued. The happiest times to travel, from the
data they have collected at various airports, are at 8 a.m. and 9 a.m. The unhappiest times
recorded are in the early hours of the morning, presumably because people are tired and grumpier.
Figure 6.1 A Happyornot terminal located after security at Heathrow Airport
Source: https://www.rsrresearch.com/research/why-metrics-matter. Used courtesy of Retail Systems
Research
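The hour-by-hour statistics that Happyornot derives from button presses, as described in Activity 6.1, amount to a simple aggregation. The following Python sketch illustrates the idea; the data format and the 1 (angry) to 4 (happy) rating scale are assumptions for illustration, not the company's actual pipeline.

```python
from collections import defaultdict

# Hypothetical sample of (hour_of_day, rating) button presses,
# where rating runs from 1 (angry) to 4 (happy).
presses = [(8, 4), (8, 4), (9, 4), (9, 3), (2, 1), (2, 2), (3, 1), (8, 3)]

def happiness_by_hour(presses):
    """Average the smiley ratings for each hour of the day."""
    by_hour = defaultdict(list)
    for hour, rating in presses:
        by_hour[hour].append(rating)
    return {hour: sum(r) / len(r) for hour, r in by_hour.items()}

def happiest_hour(presses):
    """Return the hour with the highest average rating."""
    averages = happiness_by_hour(presses)
    return max(averages, key=averages.get)
```

With the sample above, `happiest_hour(presses)` picks 8 a.m., mirroring the finding that mornings after security are the happiest times.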
Advertising agencies have developed a number of techniques to influence people’s emo-
tions. Examples include showing a picture of a cute animal or a child with hungry, big eyes
on a website that “pulls at the heartstrings.” The goal is to make people feel sad or upset at
what they observe and make them want to do something to help, such as by making a dona-
tion. Figure 6.2, for example, shows a web page that has been designed to trigger a strong
emotional response in the viewer.
Our moods and feelings are also continuously changing, making it more difficult to
predict how we feel at different times. Sometimes, an emotion can descend upon us but
disappear shortly afterward. For example, we can become startled by a sudden, unexpected
loud noise. At other times, an emotion can stay with us for a long time; for example, we can
remain annoyed for hours when staying in a hotel room that has a noisy air conditioning
unit. An emotion like jealousy can keep simmering for a long period of time, manifesting
itself on seeing or hearing something about the person or thing that triggered it.
Figure 6.2 A webpage from Crisis (a UK homelessness charity)
Source: https://www.crisis.org.uk
In a series of short videos, Kia Höök talks about affective computing, explaining
how emotion is formed and why it is important to consider when designing user
experiences with technology. See www.interaction-design.org/encyclopedia/
affective_computing.html.
A good place to start understanding how emotions affect behavior and how behavior
affects emotions is to examine how people express themselves and read each other’s expres-
sions. This includes understanding the relationship between facial expressions, body lan-
guage, gestures, and tone of voice. For example, when people are happy, they typically smile,
laugh, and relax their body posture. When they are angry, they might shout, gesticulate, tense
their hands, and screw up their face. A person’s expressions can trigger emotional responses
in others. When someone smiles, it can cause others to feel good and smile back.
Emotional skills, especially the ability to express and recognize emotions, are central to
human communication. Most people are highly skilled at detecting when someone is angry,
happy, sad, or bored by recognizing their facial expressions, way of speaking, and other body
signals. They also usually know what emotions to express in a given situation. For example,
when someone has just heard they have failed an exam, it is not a good time to smile and be
happy for them. Instead, people try to empathize and show that they feel sad, too.
There is an ongoing debate about whether and how emotion causes certain behaviors.
For example, does being angry make us concentrate better? Or does being happy make us
take more risks, such as spending too much money? Or is it the other way around, or neither?
It could be that we simply feel happy, sad, or angry, and that this does not affect our behavior. Roy
Baumeister et al. (2007) argue that the role of emotion is more complicated than a simple
cause-and-effect model.
Many theorists, however, argue that emotions cause behavior, for example, that fear
brings about flight and that anger initiates the fight response. A widely accepted explanation,
derived from evolutionary psychology, is that when something makes someone
frightened or angry, their emotional response is to focus on the problem at hand and try to
overcome or resolve the perceived danger. The physiological responses that accompany this
state usually include a rush of adrenalin through the body and the tensing of muscles. While
the physiological changes prepare people to fight or flee, they also give rise to unpleasant
experiences, such as sweating, butterflies in the stomach, quick breathing, heart pounding,
and even feelings of nausea.
Nervousness is a state of being that is often accompanied by several emotions, includ-
ing apprehension and fear. For example, many people get worried and some feel terrified
before speaking at a public event or a live performance. There is even a name for this kind
of nervousness—stage fright. Andreas Komninos (2017) suggests that it is the autonomic
nervous system “telling” people to avoid these kinds of potentially humiliating or embarrassing
experiences. But performers or professors can’t simply run away. They have to cope with the
negative emotions associated with having to be in front of an audience. Some are able to
turn their nervous state to their advantage, using the increase in adrenalin to help them focus
on their performance. Others are only too glad when the performance is over and they can
relax again.
As mentioned earlier, emotions can be simple and short-lived or complex and long-lasting.
To distinguish between the two types of emotion, researchers have described them in terms
of being either automatic or conscious. Automatic emotions (also known as affect) happen
rapidly, typically within a fraction of a second and, likewise, may dissipate just as quickly.
Conscious emotions, on the other hand, tend to be slow to develop and equally slow to dis-
sipate, and they are often the result of a conscious cognitive behavior, such as weighing the
odds, reflection, or contemplation.
Understanding how emotions work provides a way of considering how to design for user
experiences that can trigger affect or reflection. For example, Don Norman (2005) suggests
that being in a positive state of mind can enable people to be more creative as they are less
focused. When someone is in a good mood, it is thought to help them make decisions more
quickly. He also suggests that when people are happy, they are more likely to overlook and
cope with minor problems that they are experiencing with a device or interface. In contrast,
when someone is anxious or angry, they are likely to be less tolerant. He also suggests
that designers pay special attention to the information required to do the task at hand,
especially when designing apps or devices for serious tasks, such as monitoring
a process control plant or driving a car. The interface needs to be clearly visible with
unambiguous feedback. The bottom line is “things intended to be used under stressful
situations require a lot more care, with much more attention to detail” (Norman, 2005, p. 26).
BOX 6.1
How Does Emotion Affect Driving Behavior?
Research investigating the influence of emotions on driving behavior has been extensively
reviewed (Pêcher et al., 2011; Zhang and Chan, 2016). One major finding is that when drivers
are angry, their driving becomes more aggressive, they take more risks such as dangerous
overtaking, and they are prone to making more errors. Driving performance has also been
found to be negatively affected when drivers are anxious. People who are depressed are also
more prone to accidents.
What are the effects of listening to music while driving? A study by Christelle Pêcher et al.
(2009) found that people slowed down while driving in a car simulator when they listened to
either happy or sad music, as compared to neutral music. This effect is thought to be due to
the drivers focusing their attention on the emotions and lyrics of the music. Listening to happy
music was found not only to slow drivers down but also to distract them more by reducing
their ability to stay in their lane. This did not happen with the sad music.
Source: Jonny Hawkins / Cartoon Stock
Don Norman and his colleagues (Ortony et al., 2005) have also developed a model of
emotion and behavior. It is couched in terms of different “levels” of the brain. At the lowest
level are parts of the brain that are prewired to respond automatically to events happening
in the physical world. This is called the visceral level. At the next level are the brain processes
that control everyday behavior. This is called the behavioral level. At the highest level are
brain processes involved in contemplating. This is called the reflective level (see Figure 6.3).
The visceral level responds rapidly, making judgments about what is good or bad, safe or
dangerous, pleasurable or abhorrent. It also triggers the emotional responses to stimuli (for
instance fear, joy, anger, and sadness) that are expressed through a combination of physi-
ological and behavioral responses. For example, many people will experience fear on seeing
a very large hairy spider running across the floor of the bathroom, causing them to scream
and run away. The behavioral level is where most human activities occur. Examples include
well-learned routine operations such as talking, typing, and swimming. The reflective level
entails conscious thought where people generalize across events or step back from their daily
routines. An example is switching between thinking about the narrative structure and spe-
cial effects used in a horror movie and becoming scared at the visceral level when watching
the movie.
One way of using the model is to think about how to design products in terms of the
three levels. Visceral design refers to making products look, feel, and sound good. Behavio-
ral design is about use and equates to the traditional values of usability. Reflective design
is about considering the meaning and personal value of a product in a particular culture.
For example, the design of a Swatch watch (see Figure 6.4) can be viewed in terms of the
three levels. The use of cultural images and graphical elements is designed to appeal to users
Figure 6.3 Anthony Ortony et al.’s (2005) model of emotional design showing three levels: visceral,
behavioral, and reflective
Source: Adapted from Norman (2005), Figure 1.1
at the reflective level; its affordances of use appeal at the behavioral level; and its brilliant
colors, wild designs, and art attract users’ attention at the visceral level. These elements combine to
create the distinctive Swatch trademark, and they are what draw people to buy and wear
their watches.
6.3 Expressive Interfaces and Emotional Design
Designers use a number of features to make an interface expressive. Emojis, sounds, colors,
shapes, icons, and virtual agents are used to (1) create an emotional connection or feel-
ing with the user (for instance, warmth or sadness) and/or (2) elicit certain kinds of emo-
tional responses in users, such as feeling at ease, comfort, and happiness. In the early days,
emotional icons were used to indicate the current state of a computer or a phone, notably
when it was waking up or being rebooted. A classic from the 1980s was the happy Mac
icon that appeared on the screen of the Apple computer whenever the machine was booted
(see Figure 6.5a). The smiling icon conveyed a sense of friendliness, inviting the user to feel
at ease and even smile back. The appearance of the icon on the screen was also meant to be
Figure 6.4 A Swatch watch called Dip in Color
Source: http://store.swatch.com/suop103-dip-in-color.html
reassuring, indicating that the computer was working. After being in use for nearly 20 years,
the happy and sad Mac icons were laid to rest. Apple now uses more impersonal but aestheti-
cally pleasing forms of feedback to indicate a process for which the user needs to wait, such
as “starting up,” “busy,” “not working,” or “downloading.” These include a spinning colorful
beach ball (see Figure 6.5b) and a moving clock indicator. Similarly, Android uses a spinning
circle to show when a process is loading.
Other ways of conveying expressivity include the following:
• Animated icons (for example, a recycle bin expanding when a file is placed in it and paper
disappearing in a puff of smoke when emptied)
• Sonifications indicating actions and events (such as whoosh for a window closing,
“schlook” for a file being dragged, or ding for a new email arriving)
• Vibrotactile feedback, such as distinct smartphone buzzes that represent specific messages
from friends or family
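Expressive cues like these are often wired up through a simple event-to-feedback mapping. A minimal Python sketch follows; the event names, file names, and animation labels are all illustrative assumptions, not a real toolkit's API.

```python
# Map interface events to the expressive cues that should accompany them.
EXPRESSIVE_FEEDBACK = {
    "window_close":  {"sound": "whoosh.wav",  "animation": None},
    "file_drag":     {"sound": "schlook.wav", "animation": None},
    "email_arrived": {"sound": "ding.wav",    "animation": "envelope_bounce"},
    "trash_emptied": {"sound": "puff.wav",    "animation": "smoke_puff"},
}

def feedback_for(event):
    """Look up which expressive cues to play for an interface event."""
    return EXPRESSIVE_FEEDBACK.get(event, {"sound": None, "animation": None})
```

Centralizing the mapping this way makes it easy for designers to audit and tune the expressive vocabulary of an interface in one place.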
The style or brand conveyed by an interface, in terms of the shapes, fonts, colors, and
graphical elements used and the way they are combined, also influences its emotional impact.
Use of imagery at the interface can result in more engaging and enjoyable experiences (Mullet
and Sano, 1995). A designer can also use a number of aesthetic techniques such as clean lines,
balance, simplicity, white space, and texture.
The benefits of having aesthetically pleasing interfaces have become more acknowl-
edged in interaction design. Noam Tractinsky (2013) has repeatedly shown how the aesthet-
ics of an interface can have a positive effect on people’s perception of the system’s usability.
When the look and feel of an interface is pleasing and pleasurable—for example through
beautiful graphics or a nice feel or the way that the elements have been put together—peo-
ple are likely to be more tolerant and prepared to wait a few more seconds for a website to
download. Furthermore, good-looking interfaces are generally more satisfying and pleasur-
able to use.
Figure 6.5 (a) Smiling and sad Apple icons depicted on the classic Mac and (b) the spinning beach
ball shown when an app freezes
Source: (b) https://www.macobserver.com/tmo/article/frozen-how-to-force-quit-an-os-x-app-showing-a-
spinningbeachball-of-death
6.4 Annoying Interfaces
In many situations, interfaces may inadvertently elicit negative emotional responses, such as
anger. This typically happens when something that should be simple to use or set turns out to
be complex. The most common examples are remote controls, printers, digital alarm clocks,
and digital TV systems. Getting a printer to work with a new digital camera, trying to switch
from watching a DVD to a TV channel, and changing the time on a digital alarm clock in
a hotel can be very trying. Also, complex actions such as attaching the ends of cables between
smartphones and laptops, or inserting a SIM card into a smartphone, can be irksome,
especially if it is not easy to see which way to insert them.
BOX 6.2
The Design of the Nest Thermostat Interface
The popular Nest thermostat provides an automatic way of controlling home heating that is
personalized to the habits and needs of the occupants. Where possible, it also works out how
to save money by reducing energy consumption when heating is not needed. The wall-mounted
device does this by learning what temperature the occupants prefer in each room and, from
their routines, when to turn the heating on and off.
The Nest thermostat is more than just a smart meter, however. It was also designed to have
a minimalist and aesthetically pleasing interface (see Figure 6.6a). Its round face elegantly shows
the current temperature and the temperature to which it has been set. This is very different
from earlier generations of automatic thermostats, which were utilitarian box-shaped designs
with lots of complicated buttons and a dull screen that provided feedback about the setting and
temperature (see Figure 6.6b). It is little wonder that the Nest thermostat has been a success.
Figure 6.6 (a) The Nest thermostat and (b) a traditional thermostat
Source: Nest
For more information about the design of other Nest products, see
https://www.wired.com/story/inside-the-second-coming-of-nest/.
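The routine learning that Box 6.2 attributes to the Nest thermostat can be illustrated with a toy model: remember the occupants' manual setpoint adjustments and average them per hour of day. Nest's actual algorithm is proprietary; this Python sketch only conveys the idea, and every name in it is invented.

```python
from collections import defaultdict

class RoutineThermostat:
    """Toy thermostat that learns a schedule from manual adjustments."""

    def __init__(self, default_temp=18.0):
        self.default_temp = default_temp
        self.history = defaultdict(list)  # hour of day -> list of setpoints

    def record_adjustment(self, hour, temp):
        """Remember a manual temperature adjustment made at a given hour."""
        self.history[hour].append(temp)

    def scheduled_temp(self, hour):
        """Predict the preferred temperature for an hour from past adjustments."""
        past = self.history.get(hour)
        if not past:
            return self.default_temp  # no routine learned yet
        return sum(past) / len(past)
```

After a few days of adjustments, `scheduled_temp` returns the learned preference for each hour and falls back to a default where no routine exists, which is the essence of "learning the occupants' routines."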
This does not mean that developers are unaware of such usability problems. Several
methods have been devised to help the novice user get set up and become familiarized with
a technology. These methods include pop-up help boxes and contextual videos. Another
approach to helping users has been to make an interface appear friendlier as a way of reas-
suring users—especially those who were new to computers or online banking. One technique
that was first popularized in the 1990s was the use of cartoon-like companions. The assump-
tion was that novices would feel more at ease with a “friend” appearing on the screen and
would be encouraged to try things out after listening, watching, following, and interacting
with it. For example, Microsoft pioneered a class of agent-based software, Bob, aimed at
new computer users (many of whom were viewed as computer-phobic). The agents were pre-
sented as friendly characters, including a pet dog and a cute bunny. An interface metaphor of
a warm, cozy living room, replete with fire and furniture, was also provided (see Figure 6.7),
again intended to convey a comfortable feeling. However, Bob never became a commercial
product. Why do you think that was?
Contrary to the designers’ expectations, many people did not like the idea of Bob, finding
the interface too cute and childish. However, Microsoft did not give up on the idea of making its
interfaces friendlier and developed other kinds of agents, including the infamous Clippy (a paper
clip that had human-like qualities), as part of their Windows 98 operating environment. Clippy
typically appeared at the bottom of a user’s screen whenever the system thought the user needed
help carrying out a particular task (see Figure 6.8a). It, too, was depicted as a cartoon character,
with a warm personality. This time, Clippy was released as a commercial product, but it was not
a success. Many Microsoft users found it too intrusive, distracting them from their work.
Figure 6.7 “At home with Bob” software developed for Windows 95
Source: Microsoft Corporation
A number of online stores and travel agencies also began including automated virtual
agents in the form of cartoon characters who acted as sales agents on their websites. The agents
appeared above or next to a textbox where the user could type in their query. To make them
appear as if they were listening to the user, they were animated in a semi human-like way. An
example of this was Anna from IKEA (see Figure 6.8b) who occasionally nodded, blinked
her eyes, and opened her mouth. These virtual agents, however, have now largely disappeared
from our screens, replaced by virtual assistants that converse in speech bubbles and have
no embodied appearance, or by static images of real agents whom the user can talk to via LiveChat.
Poorly designed interfaces can sometimes make people feel insulted, stupid, or threatened.
The effect can be to annoy them to the point of losing their temper. There are many
situations that cause such negative emotional responses. These include the following:
• When an application doesn’t work properly or crashes
• When a system doesn’t do what the user wants it to do
• When a user’s expectations are not met
• When a system does not provide sufficient information to let the user know what to do
• When error messages pop up that are vague or obtuse
• When the appearance of an interface is too noisy, garish, gimmicky, or patronizing
Figure 6.8 Defunct virtual agents: (a) Microsoft’s Clippy and (b) IKEA’s Anna
Source: Microsoft Corporation
• When a system requires users to carry out too many steps to perform a task, only to dis-
cover a mistake was made somewhere along the line and they need to start all over again
• Websites that are overloaded with text and graphics, making it difficult to locate desired
information and resulting in sluggish performance
• Flashing animations, especially flashing banner ads and pop-up ads that cover the user
view and which require them to click in order to close them
• The overuse or automatic playing of sound effects and music, especially when selecting
options, carrying out actions, running tutorials, or watching website demos
• Featuritis—an excessive number of operations, such as an array of buttons on remote controls
• Poorly laid-out keyboards, touchpads, control panels, and other input devices that cause
users to press the wrong keys or buttons persistently
ACTIVITY 6.2
Most people are familiar with the “404 error” message that pops up now and again when a
web page fails to load for the link they have clicked or when they have typed or pasted an
incorrect URL into a browser. What does it mean and why the number 404? Is there a better
way of letting users know when a link to a website is not working? Might it be better for the
web browser to say that it was sorry rather than presenting an error message?
Comment
The number 404 comes from the HTTP protocol. The first 4 indicates a client error. The
server is telling the user that they have done something wrong, such as misspelling the URL or
requesting a page that no longer exists. The middle 0 refers to a general syntax error, such as
a spelling mistake. The last 4 indicates the specific nature of the error. For the user, however, it
is an arbitrary number. It might even suggest that there are 403 other errors they could make!
Early research by Byron Reeves and Clifford Nass (1996) suggested that computers
should be courteous to users in the same way that people are to one another. They found that
people are more forgiving and understanding when a computer says that it’s sorry after mak-
ing a mistake. A number of companies now provide alternative and more humorous “error”
landing pages that are intended to make light of the embarrassing situation and to take the
blame away from the user (see Figure 6.9).
Figure 6.9 An alternative 404 error message
Source: https://www.creativebloq.com/web-design/best-404-pages-812505
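Reeves and Nass's finding that people forgive a computer that apologizes suggests how such a page might be generated. Here is a minimal, framework-agnostic Python sketch; the function name and wording are invented, and any real web stack would simply return this body with a 404 status.

```python
def friendly_404(requested_path):
    """Build a 404 page that apologizes and helps rather than blaming the user."""
    body = (
        "<h1>Sorry! We couldn't find that page.</h1>"
        f"<p>Nothing lives at <code>{requested_path}</code>. The link may be "
        "out of date. Try the search box, or head back to the home page.</p>"
    )
    return 404, body  # (HTTP status code, HTML body)
```

The point is the tone: the response still carries the correct status code for browsers and crawlers, while the wording takes the blame away from the user.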
DILEMMA
Should Voice Assistants Teach Kids Good Manners?
Many families now own a smart speaker, such as an Amazon Echo, with a voice assistant like
Alexa running on it. One observation is that young children will often talk to Alexa as if she
was their friend, asking her all sorts of personal questions, such as “Are you my friend?” and
“What is your favorite music?” and “What is your middle name?” They also learn that is not
necessary to say “please” when asking their questions or “thank you” on receiving a response,
similar to how they talk to other display-based voice assistants, such as Siri or Cortana. Some
parents, however, are worried that this lack of etiquette could develop into a new social norm
that could transfer over to how they talk to real human beings. Imagine the scenario where
Aunt Emma and Uncle Liam come over to visit their young niece for her 5th birthday, and
the first thing that they hear is, “Aunty Emma, get me my drink” or “Uncle Liam, where is
my birthday present?” with nary a “please” uttered. How would you feel if you were treated
like that?
One would hope that parents would continue to teach their children good manners and
the difference between a real human and a voice assistant. However, it is also possible to
configure Alexa and other voice assistants to reward children when they are polite to them,
for example, by saying, “By the way, thanks for asking so nicely.” Voice assistants could also
be programmed to be much more forceful in how they teach good manners, for example,
saying, “I won’t answer you unless you say ‘please’ each time you ask me a question.” Would
this be taking the role of parenting too far? Mike Elgan (2018) cogently argues why voice
assistants should not do this. He questions whether, by extending human social norms to voice
assistants, we are teaching children that technology can have sensibilities and hence should
be thought about in the same way that we consider human feelings. In particular, he wonders
whether, by being polite to a voice assistant, children might begin to think that voice assistants
are capable of feeling appreciated or unappreciated and that they have rights just like humans.
Do you agree with him, or do you think that there is no harm in developing voice assistants
to teach children good manners? Or do you believe that children will instinctively know that
voice assistants don’t have rights or feelings?
6.5 Affective Computing and Emotional AI
Affective computing is concerned with how to use computers to recognize and express
emotions in the same way as humans do (Picard, 1998). It involves designing ways for
people to communicate their emotional states, through using novel, wearable sensors, and
creating new techniques to evaluate frustration, stress, and moods by analyzing people’s
expressions and conversations. It also explores how affect influences personal health
(Jacques et al., 2017). More recently, emotional AI has emerged as a research area that
seeks to automate the measurement of feelings and behaviors by using AI technologies
that can analyze facial expressions and voice in order to infer emotions. A number of sensing
technologies can be used to achieve this and, from the data collected, to predict aspects
of a user’s behavior, for example, forecasting what someone is most likely to buy online
when feeling sad, bored, or happy. The main techniques and technologies that have been
used to do this are as follows:
• Cameras for measuring facial expressions
• Biosensors placed on fingers or palms to measure galvanic skin response (which is used to
infer how anxious or nervous someone is, as indicated by an increase in their sweat)
• Affective expression in speech (voice quality, intonation, pitch, loudness, and rhythm)
• Body movement and gestures, as detected by motion capture systems or accelerometer
sensors placed on various parts of the body
The use of automated facial coding is gaining popularity in commercial settings, especially
in marketing and e-commerce. For example, Affdex emotion analytics software from
Affectiva (www.affectiva.com) employs advanced computer vision and machine learning
algorithms to catalog a user’s emotional reactions to digital content, as captured through
a webcam, in order to analyze how engaged the user is with content such as movies,
online shopping sites, and advertisements.
Six fundamental emotions are classified based on the facial expressions that Affdex collects:
• Anger
• Contempt
• Disgust
• Fear
• Joy
• Sadness
These emotions are indicated as a percentage of what was detected beside the emotion
labels above the person’s face appearing on a display. For example, Figure 6.10 shows a label
of 100 percent happiness and 0 percent for all the other categories above the woman’s head
on the smartphone display. The white dots overlaying her face are the markers used by the
app when modeling a face. They provide the data that determines the type of facial expres-
sion being shown, in terms of detecting the presence or absence of the following:
• Smiling
• Eye widening
• Brow raising
• Brow furrowing
• Raising a cheek
• Mouth opening
• Upper-lip raising
• Wrinkling of the nose
If a user screws up their face when an ad pops up, this suggests that they feel disgust,
whereas if they start smiling, it suggests that they are feeling happy. The website can then
adapt its ad, movie storyline, or content to what it perceives the person needs at that point
in their emotional state.
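As an illustration of how detected facial actions might map to emotion percentages, here is a toy rule-based sketch. The cue-to-emotion rules and the normalization are invented for this example; they are not Affectiva's actual (proprietary) model:

```python
# Toy facial coder: map detected facial actions (booleans) to emotion
# percentages. The cue-to-emotion rules are invented for illustration.

CUES = {
    "joy":      ["smiling", "cheek_raise"],
    "disgust":  ["nose_wrinkle", "upper_lip_raise"],
    "fear":     ["eye_widen", "brow_raise"],
    "anger":    ["brow_furrow"],
    "sadness":  ["brow_furrow", "mouth_open"],
    "contempt": ["upper_lip_raise"],
}

def classify_emotions(features):
    """features: dict mapping facial-action name -> bool (detected)."""
    raw = {emotion: sum(features.get(cue, False) for cue in cues) / len(cues)
           for emotion, cues in CUES.items()}
    total = sum(raw.values()) or 1.0        # avoid division by zero
    return {emotion: round(100 * score / total)
            for emotion, score in raw.items()}
```

For a broad smile with raised cheeks, this sketch reports 100 percent joy and 0 percent for everything else, mirroring the kind of labels shown in Figure 6.10.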
Figure 6.10 Facial coding using Affdex software
Source: Affectiva, Inc.
Affectiva has also started to analyze drivers’ facial expressions when on the road with
the goal of improving driver safety. The emotional AI software perceives if a driver is angry
and then suggests an intervention. For example, a virtual agent in the car might suggest to
the driver to take a deep breath and play soothing music to help relax them. In addition to
identifying particular emotions through facial expressions (for example, joy, anger, and sur-
prise), Affectiva uses particular markers to detect drowsiness. These are eye closure, yawning,
and blinking rate. Again, upon detecting when a threshold has been reached for these facial
expressions, the software might trigger an action, such as getting a virtual agent to suggest to
the driver that they pull over where it is safe to do so.
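The threshold-and-trigger logic described for drowsiness detection can be sketched as follows. The numeric thresholds are invented for illustration; Affectiva's actual settings are not public:

```python
# Toy drowsiness monitor: suggest an intervention when drowsiness markers
# (eye closure, yawning, blink rate) cross a combined threshold.
# All numeric thresholds here are invented for illustration.

def drowsiness_intervention(eye_closure_ratio, yawns_per_min, blinks_per_min):
    score = 0
    if eye_closure_ratio > 0.3:    # eyes closed more than 30% of the time
        score += 2
    if yawns_per_min >= 2:
        score += 1
    if blinks_per_min > 25:        # unusually frequent blinking
        score += 1
    if score >= 2:
        return "Suggest to the driver that they pull over where it is safe."
    return None
```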
Other indirect methods that are used to reveal the emotional state of someone include
eye-tracking, finger pulse, speech, and the words/phrases they use when tweeting, chatting
online, or posting to Facebook (van den Broek, 2013). The level of affect expressed by users,
the language they use, and the frequency with which they express themselves when using
social media can all indicate their mental state, well-being, and aspects of their personality
(for instance, whether they are an extrovert or introvert, neurotic or calm, and so on). Some
companies may try to use a combination of these measures, such as facial expressions and
the language that people use when online, while others may focus on just one aspect, such as the
tone of their voice when answering questions over the phone. This type of indirect emotion
detection is beginning to be used to help infer or predict someone’s behavior, for example,
determining their suitability for a job or how they will vote in an election.
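A minimal lexicon-based sketch shows the flavor of inferring affect from the words someone posts. The word lists are invented toy examples; real systems, such as those surveyed by van den Broek (2013), use far richer models:

```python
# Toy lexicon-based affect scorer for short posts. The word lists are
# invented examples; real affect-detection models are far more sophisticated.

POSITIVE = {"great", "love", "happy", "awesome", "thanks", "joy"}
NEGATIVE = {"sad", "hate", "angry", "awful", "tired", "bored"}

def affect_of_post(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```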
Biometric data is also being used in live game streaming, where spectators watch players, known as streamers, play video games. The most popular site is
Twitch; millions of viewers visit it each day to watch others compete in games, such as Fort-
nite. The biggest streamers have become a new breed of celebrity, like YouTubers. Some even
have millions of dedicated fans. Various tools have been developed to enhance the viewers’
experience. One is called All the Feels, which provides an overlay of biometric and webcam-
derived data of a streamer onto the screen interface (Robinson et al., 2017). A dashboard
provides a visualization of the streamer’s heart rate, skin conductance, and emotions. This
additional layer of data has been found to enhance the spectator experience and improve the
connection between the streamer and spectators. Figure 6.11 shows the emotional state of a
streamer using the All the Feels interface.
Figure 6.11 All the Feels app showing the biometric data of a streamer playing a videogame
Source: Used courtesy of Katherine Isbister
6.6 Persuasive Technologies and Behavioral Change
A diversity of techniques has been used at the interface level to draw people’s attention
to certain kinds of information in an attempt to change what they do or think. Pop-up
ads, warning messages, reminders, prompts, personalized messages, and recommendations
are some of the methods that are being deployed on a computer or smartphone interface.
Examples include Amazon’s one-click mechanism that makes it easy to buy something on its
online store and recommender systems that suggest specific books, hotels, restaurants, and so
forth, that a reader might want to try based on their previous purchases, choices, and taste.
The various techniques that have been developed have been referred to as persuasive design
(Fogg, 2009). They include enticing, cajoling, or nudging someone into doing something
through the use of persuasive technology.
Technology interventions have also been developed to change people’s behaviors in other
domains besides commerce, including safety, preventative healthcare, fitness, personal rela-
tionships, energy consumption, and learning. Here the emphasis is on changing someone’s
habits or doing something that will improve an individual’s well-being through monitoring
their behavior. An early example was Nintendo’s Pokémon Pikachu device (see Figure 6.12)
that was designed to motivate children into being more physically active on a consistent
basis. The owner of the digital pet that lives in the device was required to walk, run, or jump
each day to keep it alive. The wearer received credits for each step taken—the currency being
watts that could be used to buy Pikachu presents. Twenty steps on the pedometer rewarded
the player with 1 watt. If the owner did not exercise for a week, the virtual pet became angry
and refused to play anymore. This use of positive rewarding and sulking can be a powerful
means of persuasion, given that children often become emotionally attached to their virtual
pets, especially when they start to care for them.
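The Pokémon Pikachu reward scheme described above (20 pedometer steps per watt; a week of inactivity makes the pet sulk) reduces to a couple of simple rules. A sketch:

```python
# Sketch of the Pokémon Pikachu reward scheme: 20 steps earn 1 watt,
# and a week without exercise makes the virtual pet angry.

STEPS_PER_WATT = 20

def watts_earned(steps):
    return steps // STEPS_PER_WATT

def pet_mood(days_since_last_exercise):
    return "angry" if days_since_last_exercise >= 7 else "playful"
```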
BOX 6.3
Is It OK for Technology to Work Out How You Are Feeling?
Do you think it is ethical that technology is trying to read your emotions from your facial
expressions or from what you write in your tweets and, based on its analysis, filter the online
content that you are browsing, such as ads, news, or a movie to match your mood? Might
some people think it is an invasion of their privacy?
Human beings will suggest things to each other, often based on what they think the other
is feeling. For example, they might suggest a walk in the park to cheer them up. They might
also suggest a book to read or a movie to watch. However, some people may not like the idea
that an app can do the same, for example, suggesting what you should eat, watch, or do based
on how it analyzes your facial expressions.
HAPIfork is a device that was developed to help someone monitor and track their eating
habits (see Figure 6.13). If it detects that they are eating too quickly, it will vibrate (similar
to the way a smartphone does when on silent mode), and an ambient light will appear at the
end of the fork, providing the eater with real-time feedback intended to slow them down.
The assumption is that eating too fast results in poor digestion and poor weight control and
that making people aware that they are gobbling their food down can help them think about
Figure 6.12 Nintendo’s Pokémon Pikachu device
Source: http://nintendo.wikia.com/wiki/File:Pok%C3%A9mon_Pikachu_2_GS_(Device)
ACTIVITY 6.3
Watch these two videos:
The Piano Staircase: http://youtu.be/2lXh2n0aPyw
The Outdoor Bin: http://youtu.be/cbEKAwCoCKw
Do you think that such playful methods are effective at changing people’s behavior?
Comment
Volkswagen sponsored an open competition, called the fun theory, asking people to transform
mundane artifacts into novel enjoyable user experiences in an attempt to change people’s
behavior for the better. The idea was to encourage a desired behavior by making it more
fun. The Piano Staircase and the Outdoor Bin are the most well-known examples; the stairs
sounded like piano keys being played as they were climbed, while the bin sounded like a well
echoing when something was thrown into it. Research has shown that using these kinds of
playful methods is very engaging, and they can help people overcome their social inhibition
of taking part in an activity in a public place (Rogers et al., 2010a).
how to eat more slowly at a conscious level. Other data is collected about how long it took
them to finish their meal, the number of fork servings per minute, and the time between them.
These are turned into a dashboard of graphs and statistics so that the user can see each week
whether their fork behavior is improving.
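The feedback loop just described (vibrate when servings come too quickly, then summarize the meal afterward) can be sketched like this. The 10-second minimum gap is an invented example threshold, not the device's actual setting:

```python
# Toy eating-rate monitor in the spirit of HAPIfork: flag fork servings
# that arrive too quickly and compute per-meal statistics.
# The 10-second minimum gap is an invented example threshold.

MIN_GAP_SECONDS = 10

def analyze_meal(serving_times):
    """serving_times: sorted timestamps (in seconds) of fork servings."""
    gaps = [b - a for a, b in zip(serving_times, serving_times[1:])]
    too_fast = sum(g < MIN_GAP_SECONDS for g in gaps)
    duration_min = (serving_times[-1] - serving_times[0]) / 60
    rate = len(serving_times) / duration_min if duration_min else 0.0
    return {"servings": len(serving_times),
            "too_fast_alerts": too_fast,
            "servings_per_minute": round(rate, 1)}
```

Each "too fast" gap is the point at which the fork would vibrate; the returned dictionary stands in for the weekly dashboard of graphs and statistics.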
Nowadays, there are many kinds of mobile apps and personal tracking devices available
that are intended to help people monitor various behaviors and change them based on the
data collected and displayed back to them. These devices include fitness trackers, for exam-
ple, Fitbit, and weight trackers, such as smart scales. Similar to HAPIfork, these devices are
designed to encourage people to change their behavior by displaying dashboards of graphs
showing how much exercise they have done or weight they have lost over a day, week, or
longer period, compared with what they have done in the previous day, week, or month.
These results can also be compared, through online leaderboards and charts, with how well
they have done versus their peers and friends. Other techniques employed to encourage peo-
ple to exercise more or to move when sedentary include goal setting, reminders, and rewards
for good behavior. A survey of how people use such devices in their everyday lives revealed
that people often bought them simply to try them or were given one as a present, rather than
specifically trying to change a particular behavior (Rooksby et al., 2014). How, what, and
when they tracked depended on their interests and lifestyles; some used them as a way of
showing how fast they could run during a marathon or cycle on a course or how they could
change their lifestyle to sleep or eat better.
An alternative approach to collecting quantified data about a behavior automatically is
to ask people to write down manually how they are feeling now or to rate their mood and for
them to reflect upon how they felt about themselves in the past. A mobile app called Echo,
for example, asked people to write a subject line, rate their happiness at that moment, and
add a description, photos, and/or videos if they wanted to (Isaacs et al., 2013). Sporadically,
the app then asked them to reflect on previous entries. An assumption was that this type
Figure 6.13 Someone using the HAPIfork in a restaurant
Source: Helen Sharp
of technology-mediated reflection could increase well-being and happiness. Each reflection
was shown as a stacked card with the time and a smiley happiness rating. People who used
the Echo app reported on the many positive effects of doing so, including reliving positive
experiences and overcoming negative experiences by writing them down. The double act of
recording and reflecting enabled them to generalize from the positive experiences and draw
positive lessons from them.
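The Echo-style record-then-reflect cycle can be sketched as a small journal class. The field names follow the description above, while the reflection-sampling policy and its probability are invented for illustration:

```python
# Toy Echo-style mood journal: record entries, sporadically resurface an
# old one for reflection. The sampling policy is invented for illustration.
import random

class MoodJournal:
    def __init__(self, rng=None):
        self.entries = []
        self.rng = rng or random.Random()

    def record(self, subject, happiness, description=""):
        # happiness: numeric self-rating (the scale here is arbitrary)
        self.entries.append({"subject": subject,
                             "happiness": happiness,
                             "description": description})

    def maybe_reflect(self, probability=0.3):
        """Sporadically return a past entry to reflect on, else None."""
        if self.entries and self.rng.random() < probability:
            return self.rng.choice(self.entries)
        return None
```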
The global concern about climate change has also led a number of HCI researchers to
design and evaluate various energy-sensing devices that display real-time feedback. One goal
is to find ways of helping people reduce their energy consumption, and it is part of a larger
research agenda called sustainable HCI (see Mankoff et al., 2008; DiSalvo et al., 2010; Hazas
et al., 2012). The focus is to persuade people to change their everyday habits with respect to
environmental concerns, such as reducing their own carbon footprint, their community’s
footprint (for example, a school or workplace), or an even larger organization’s carbon foot-
print (such as a street, town, or country).
Extensive research has shown that domestic energy use can be reduced by providing
households with feedback on their consumption (Froehlich et al., 2010). The frequency of
feedback is considered important; continuous or daily feedback on energy consumption has
been found to yield higher savings results than monthly feedback. The type of graphical
representation also has an effect. If the image used is too obvious and explicit (for instance,
a finger pointing at the user), it may be perceived as too personal, blunt, or “in your face,”
resulting in people objecting to it. In contrast, simple images (for example, an infographic
or emoticon) that are more anonymous but striking and whose function is to get people’s
attention may be more effective. They may encourage people to reflect more on their energy
use and even promote public debate about what is represented and how it affects them.
However, if the image used is too abstract and implicit, other meanings may be attributed to
it, such as simply being an art piece (such as an abstract painting with colored stripes that
change in response to the amount of energy used), resulting in people ignoring it. The ideal
may be somewhere in between. Peer pressure can also be effective, where peers, parents, or
children chide or encourage one another to turn lights off, take a shower instead of a bath,
and so on.
Another influencing factor is social norms. In a classic study by P. Wesley Schultz et al. (2007), households were shown how their energy consumption compared with their neighborhood average. Households above the average tended to decrease their consumption, but
those using less electricity than average tended to increase their consumption. The study
found that this “boomerang” effect could be counteracted by providing households with an
emoticon along with the numerical information about their energy usage: households using
less energy than average continued to do so if they received a smiley icon; households using more
than average decreased their consumption even more if they were given a sad icon.
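The corrective logic from the Schultz et al. study (pair the numeric comparison with an approving or disapproving emoticon so that below-average households are not nudged back up) can be sketched as:

```python
# Sketch of norm-based energy feedback following the logic of Schultz
# et al. (2007): a descriptive comparison plus an injunctive emoticon
# that counters the "boomerang" effect for below-average households.

def energy_feedback(household_kwh, neighborhood_avg_kwh):
    if household_kwh <= neighborhood_avg_kwh:
        icon = ":)"   # approval: keep up the low consumption
    else:
        icon = ":("   # disapproval: nudge consumption down
    return (f"You used {household_kwh} kWh; the neighborhood average "
            f"is {neighborhood_avg_kwh} kWh {icon}")
```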
In contrast to the Schultz study, where each household’s energy consumption was kept
private, the Tidy Street project (Bird and Rogers, 2010) that was run in Brighton in the
United Kingdom created a large-scale visualization of the street’s electricity usage by spray-
ing a stenciled display on the road surface using chalk (see Figure 6.14). The public display
was updated each day to represent how the average electricity usage of the street compared
to the city of Brighton’s average. The goal was to provide real-time feedback that all of the
homeowners and the general public could see change each day over a period of three weeks.
The street graph also proved to be very effective in getting people who lived on Tidy Street
to talk to each other about their electricity consumption and habits. It also encouraged them to
talk with the many passersby who walked up and down the street. The outcome was to reduce
electricity consumption in the street by 15 percent, which was considerably more than other
projects in this area have been able to achieve.
BOX 6.4
The Darker Side: Deceptive Technology
Technology is increasingly being used to deceive people into parting with their personal
details, which allows Internet fraudsters to access their bank accounts and draw money from
them. Authentic-looking letters, appearing to be sent from eBay, PayPal, and various leading
banks, are spammed across the world, ending up in people’s email in-boxes with messages
such as “During our regular verification of accounts, we couldn’t confirm your information.
Please click here to update and verify your information.” Given that many people have an
account with one of these corporations, there is a good chance that they will be misled and
unwittingly believe what is being asked of them, only to discover a few days later that they are
several thousand dollars worse off. Similarly, letters from supposedly super-rich individuals in
far-away countries, offering a share of their assets if the email recipient provides them with
their bank details, have persistently been spammed worldwide. While many people are becom-
ing increasingly wary of what are known as phishing scams, there are still many vulnerable
individuals who are gullible to such tactics.
The term phishing is a play on the term fishing, which refers to the sophisticated way of luring users into giving away their financial information and passwords. Internet fraudsters are becoming smarter and are constantly changing their tactics.
While the art of deception is centuries old, the increasing, pervasive, and
often ingenious use of the web to trick people into divulging personal infor-
mation can have catastrophic effects on society as a whole.
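As a toy illustration of the surface cues such messages share, here is a naive phrase-matching check. The phrase list is invented, and real spam filters use far more robust techniques:

```python
# Naive phishing heuristic: flag messages containing stock phishing
# phrases. The phrase list is an invented toy example.

SUSPICIOUS_PHRASES = (
    "verify your information",
    "confirm your account",
    "click here to update",
    "provide your bank details",
)

def looks_like_phishing(message):
    text = message.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)
```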
Figure 6.14 Aerial view of the Tidy Street public electricity graph
Source: Helen Sharp
6.7 Anthropomorphism
Anthropomorphism is the propensity people have to attribute human qualities to animals
and objects. For example, people sometimes talk to their computers as if they were humans,
treat their robot cleaners as if they were their pets, and give all manner of cute names to their
mobile devices, routers, and so on. Advertisers are well aware of this phenomenon and often
create human-like and animal-like characters out of inanimate objects to promote their prod-
ucts. For example, breakfast cereals, butter, and fruit drinks have all been transmogrified into
characters with human qualities (they move, talk, have personalities, and show emotions),
enticing the viewer to buy them. Children are especially susceptible to this kind of magic, as
witnessed by their love of cartoons where all manner of inanimate objects are brought to life
with human-like qualities.
The finding that people, especially children, have a propensity to accept and enjoy objects
that have been given human-like qualities has led many designers to capitalize on it, most
notably in the design of virtual agents and interactive dolls, robots, and cuddly toys. Early
commercial products like ActiMates were designed to encourage children to learn by playing
with them. One of the first—Barney (a dinosaur)—attempted to motivate play in children
by using human-based speech and movement (Strommen, 1998). The toys were programmed
to react to the child and make comments while watching TV or working together on a
computer-based task. In particular, Barney was programmed to congratulate the child when-
ever they produced a right answer and also to react to the content on-screen with appropriate
emotions, for instance, cheering at good news and expressing concern at bad news. Interac-
tive dolls have also been designed to talk, sense, and understand the world around them,
using sensor-based technologies, speech recognition, and various mechanical servos embed-
ded in their bodies. For example, the interactive doll Luvabella exhibits facial expressions,
such as blinking, smiling, and making baby cooing noises in response to how her owner plays
and looks after her. The more a child plays with her, the more the doll learns to speak, transforming
her babble into words and phrases.
Furnishing technologies with personalities and other human-like attributes can make
them more enjoyable and fun to interact with. They can also motivate people to carry out
various activities, such as learning. Being addressed in the first person (for instance, “Hello,
Noah! Nice to see you again. Welcome back. Now what were we doing last time? Oh yes,
Exercise 5. Let’s start again.”) is more appealing than being addressed in the impersonal third
person (“User 24, commence Exercise 5.”), especially for children. It can make them feel
more at ease and reduce their anxiety. Similarly, interacting with screen characters like tutors
and wizards can be more engaging than interacting with a dialog box.
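The contrast between first-person and impersonal third-person addressing can be made concrete with a trivial sketch (the wording follows the examples above):

```python
# Contrast a personal, first-person prompt with an impersonal,
# third-person one, as in the examples in the text.

def exercise_prompt(user, exercise, personal=True):
    if personal:
        return (f"Hello, {user}! Nice to see you again. Welcome back. "
                f"Now what were we doing last time? Oh yes, Exercise "
                f"{exercise}. Let's start again.")
    return f"User {user}, commence Exercise {exercise}."
```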
A YouTube video (https://youtu.be/au2Vg9xRZZ0) shows Luvabella in action
and asks viewers to decide whether the interactive doll is creepy or cool. What do
you think?
ACTIVITY 6.4
A Robot or a Cuddly Pet?
Early robot pets, such as Sony’s AIBO, were made of hard materials that made them look shiny
and clunky. In contrast, a more recent trend has been to make them look and feel more like real
pets by covering them up in fur and making them behave in more cute, pet-like ways. Two con-
trasting examples are presented in Figure 6.15a and 6.15b. Which do you prefer and why?
Comment
Most people like stroking pets, so they may prefer a soft pet robot that they can also stroke,
such as the one shown in Figure 6.15b. A motivation for making robot pets cuddly is to
enhance the emotional experience people receive through using their sense of touch. For
example, the Haptic Creature on the right is a robot that mimics a pet that might sit in
your lap, such as a cat or a rabbit (Yohanan and MacLean, 2008). It is made up of a body,
head, and two ears, as well as mechanisms that simulate breathing, a vibrating purr, and the
warmth of a living creature. The robot “detects” the way it is touched by means of an array
of (roughly 60) touch sensors laid out across its entire body and an accelerometer. When the
Haptic Creature is stroked, it responds accordingly, using the ears, breathing, and purring to
communicate its emotional state through touch. On the other hand, the sensors are also used
by the robot to detect the human’s emotional state through touch. Note how the robot has
no eyes, nose, or mouth. Facial expressions are the most common way humans communicate
emotional states. Since the Haptic Creature communicates and senses emotional states solely
through touch, the face was deliberately left off to prevent people from trying to “read” emo-
tion from it.
Figure 6.15 Robot pets: (a) Aibo and (b) The Haptic Creature
Source: (a) Jennifer Preece, (b) Used courtesy of Steve Yohanan. Photo by Martin Dee
A number of commercial physical robots have been developed specifically to support
caregiving for the elderly. Early ones were designed to be about 2 feet tall and were made
from white plastic with colored parts that represented clothing or hair. An example was Zora
(see Figure 6.16), developed in Belgium, that was marketed as a social robot for healthcare.
One was bought by a nursing home in France. Many of the patients developed an emotional
attachment to their Zora robot, holding it, cooing, and even giving it kisses on the head.
However, some people found this kind of robot care a little demeaning. Certainly, it can never
match the human touch and warmth that patients need, but there is no harm in it playing an
entertaining and motivating role alongside human caregivers.
This video demonstrates how the Zora robot was used to entertain seniors and to
help them get some exercise: https://youtu.be/jcMNY5EnQNQ.
Figure 6.16 The Zora robot
Source: http://zorarobotics.be/
In-depth Activity
This in-depth activity requires you to try one of the emotion recognition apps available and
to see how well it fares in recognizing different people’s facial expressions. Download the
AffdexMe app or Age Emotion Detector for Apple or Android. Take a photo of yourself
looking natural and see what emotion it suggests.
1. How many emotions does it recognize?
2. Try to make a face for each of the following: sadness, anger, joy, fear, disgust, and surprise.
After making a face for each, see how well the app detects the emotion you were expressing.
3. Ask a couple of other people to try it. See whether you can find someone with a beard and
ask them to try, too. Does facial hair make it more difficult for the app to recognize
an emotion?
4. What other application areas do you think these kinds of apps could be used for besides
advertising?
5. What ethical issues does facial recognition raise? Has the app provided sufficient informa-
tion as to what it does with the photos taken of people’s faces?
6. How well would the recognition software work when used in a more natural setting where
the user is not making a face for the camera?
summary
This chapter described the different ways that interactive products can be designed (both delib-
erately and inadvertently) to make people respond in certain ways. The extent to which users
will learn, buy a product online, quit a bad habit, or chat with others depends on the believ-
ability of the interface, how comfortable they feel when using a product, and/or how much
they can trust it. If the interactive product is frustrating to use, annoying, or patronizing, users
will easily become angry and despondent and often they stop using it. If, on the other hand,
the product is pleasurable, is enjoyable to use, and makes people feel comfortable and at ease,
then they will continue to use it, make a purchase, return to the website, or continue to learn.
This chapter also described various interaction mechanisms that can be used to elicit
positive emotional responses in users and ways of avoiding negative ones. Further, it described
how new technology has been developed to detect emotional states.
Key Points
• Emotional aspects of interaction design are concerned with how to facilitate certain states
(for example, pleasure) or avoid certain reactions (such as frustration) in user experiences.
• Well-designed interfaces can elicit good feelings in people.
• Aesthetically pleasing interfaces can be a pleasure to use.
• Expressive interfaces can provide reassuring feedback to users as well as be informative
and fun.
• Badly designed interfaces often make people frustrated, annoyed, or angry.
• Emotional AI and affective computing use AI and sensor technology for detecting people’s
emotions by analyzing their facial expressions and conversations.
• Emotional technologies can be designed to persuade people to change their behaviors
or attitudes.
• Anthropomorphism is the attribution of human qualities to objects.
• Robots are being used in a variety of settings, including households and assisted-living homes.
Further Reading
CALVO, R. A. and PETERS, D. (2014) Positive Computing. MIT. This book discusses how
to design technology for well-being to make a happier and healthier world. As the title sug-
gests, it is positive in its outlook. It covers the psychology of well-being, including empathy,
mindfulness, joy, compassion, and altruism. It also describes the opportunities and chal-
lenges facing interaction designers who want to develop technology that can improve peo-
ple’s well-being.
HÖÖK, K. (2018) Designing with the Body. MIT. This book proposes that interaction design
should consider the experiential, felt, and aesthetic stance that encompasses the design and
use cycle. The approach suggested by the author is called soma design, where body and
movements are viewed as very much part of the design process, and where a slow, thoughtful
process is promoted that considers fundamental human values. It is argued that adopting this
stance can yield better products and create healthier, more sustainable companies.
LEDOUX, J. E. (1998) The Emotional Brain: The Mysterious Underpinnings of Emotional
Life. Simon & Schuster. This book explains what causes us to feel fear, love, hate, anger, and
joy, and it explores whether we control our emotions versus them controlling us. The book
also covers the origins of human emotions and explains that many evolved to enable us
to survive.
McDUFF, D. & CZERWINSKI, M. (2018) Designing Emotionally Sentient Agents. Com-
munications of the ACM, Vol. 61 No. 12, pages 74–83. This article provides an accessible
overview of the burgeoning area of emotional agents. It presents the challenges, opportuni-
ties, dilemmas, concerns, and current applications that are now being developed, including
bots, robots, and agents.
NORMAN, D. (2005) Emotional Design: Why We Love (or Hate) Everyday Things. Basic
Books. This book is an easy read while at the same time being thought-provoking. We get to
see inside Dan Norman’s kitchen and learn about the design aesthetics of his collection of
teapots. The book also includes essays on the emotional aspects of robots, computer games,
and a host of other pleasurable interfaces.
WALTER, A. (2011) Designing for Emotion. A Book Apart. This short
book is targeted at web designers who want to understand how to design websites that users
will enjoy and want to return to. It covers the classic literature on emotions, and it proposes
practical approaches to emotional web design.
Chapter 7
I N T E R F A C E S
7.1 Introduction
7.2 Interface Types
7.3 Natural User Interfaces and Beyond
7.4 Which Interface?
Objectives
The main goals of the chapter are to accomplish the following:
• Provide an overview of the many different kinds of interfaces.
• Highlight the main design and research considerations for each of the interfaces.
• Discuss what is meant by a natural user interface (NUI).
• Consider which interface is best for a given application or activity.
7.1 Introduction
When considering how to solve a user problem, the default solution that many developers
choose to design is an app that can run on a smartphone. Making this easier still are many
easy-to-use app developer tools that can be freely downloaded. It is hardly surprising, there-
fore, to see just how many apps there are in the world. In December 2018, Apple, for exam-
ple, had a staggering 2 million apps in its store, many of which were games.
Despite the ubiquity of the smartphone app industry, the web continues to proliferate
in offering services, content, resources, and information. A central concern is how to design
them to be interoperable across different devices and browsers, which takes into account the
varying form factors, size, and shape of smart watches, smartphones, laptops, smart TVs, and
computer screens. Besides the app and the web, many other kinds of interfaces have been
developed, including voice interfaces, touch interfaces, gesture interfaces, and multimodal
interfaces.
The proliferation of technological developments has encouraged different ways of think-
ing about interaction design and UX. For example, input can be via mice, touchpads, pens,
remote controllers, joysticks, RFID readers, gestures, and even brain-computer interaction.
Output is equally diverse, appearing in the form of graphical interfaces, speech, mixed reali-
ties, augmented realities, tangible interfaces, wearable computing, and more.
The goal of this chapter is to give you an overview of the diversity of interfaces that can
be developed for different environments, people, places, and activities. We present a catalog
of 20 interface types, starting with command-based and ending with smart ones. For each
interface, we present an overview and outline the key research and design considerations.
Some are only briefly touched upon, while others, which are more established in interaction
design, are described in greater depth.
7.2 Interface Types
Numerous adjectives have been used to describe the different types of interfaces that have
been developed, including graphical, command, speech, multimodal, invisible, ambient, affec-
tive, mobile, intelligent, adaptive, smart, tangible, touchless, and natural. Some of the inter-
face types are primarily concerned with a function (for example, to be intelligent, to be
adaptive, to be ambient, or to be smart), while others focus on the interaction style used
(such as command, graphical, or multimedia), the input/output device used (for instance,
pen-based, speech-based, or gesture-based), or the platform being designed for (for example,
tablet, mobile, PC, or wearable). Rather than cover every possible type that has been devel-
oped or described, we have chosen to select the main types of interfaces that have emerged
over the past 40 years. The interface types are loosely ordered in terms of when they were
developed. They are numbered to make it easier to find a particular one. (See the following
list for the complete set.) It should be noted, however, that this classification is for con-
venience of reference. The interface entries are not mutually exclusive since some products
can appear in two or more categories. For example, a smartphone can be considered to be
mobile, touch, or wearable.
The types of interfaces covered in this chapter include the following:
1. Command
2. Graphical
3. Multimedia
4. Virtual reality
5. Web
6. Mobile
7. Appliance
8. Voice
9. Pen
10. Touch
11. Gesture
12. Haptic
13. Multimodal
14. Shareable
15. Tangible
16. Augmented reality
17. Wearables
18. Robots and drones
19. Brain-computer interaction
20. Smart
NOTE
This chapter is not meant to be read from beginning to end; rather, it should be dipped into as needed to find out about a particular type of interface.
7.2.1 Command-Line Interfaces
Early interfaces required the user to type in commands that were typically abbreviations (for
example, ls) at the prompt symbol appearing on the computer display, to which the system
responded (for example, by listing current files). Another way of issuing commands is by
pressing certain combinations of keys (such as Shift+Alt+Ctrl). Some commands are also a fixed part of the keyboard, such as Delete, Enter, and Undo, while other function keys can be programmed by the user to issue specific commands (for instance, mapping F11 to a print action).
Command-line interfaces were largely superseded by graphical interfaces that incorporated commands in the form of menus, icons, keyboard shortcuts, and pop-up/predictive text as part of an application. Where command-line interfaces continue to have an
advantage is when users find them easier and faster to use than equivalent menu-based
systems (Raskin, 2000). Users also prefer command-line interfaces for performing certain
operations as part of a complex software package, such as for CAD environments (such as
Rhino3D and AutoCAD), to allow expert designers to interact rapidly and precisely with the
software. They also provide scripting for batch operations, and they are being increasingly
used on the web, where the search bar acts as a general-purpose command-line facility, for
example, www.yubnub.org.
System administrators, programmers, and power users often find that it is much more
efficient and quicker to use command languages such as Microsoft’s PowerShell. For exam-
ple, it is much easier to delete 10,000 files in one go by using one command rather than
scrolling through that number of files and highlighting those that need to be deleted. Com-
mand languages have also been developed for visually impaired people to allow them to
interact in virtual worlds, such as Second Life (see Box 7.1).
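The batch-operation advantage described above can be made concrete with a short script. This is an illustrative sketch, not from the text: the directory and the "*.log" pattern are hypothetical, and a command language such as PowerShell would achieve the same in a single line (for example, `Remove-Item *.log`).

```python
# Illustrative sketch (not from the text): deleting thousands of matching
# files with one scripted command instead of selecting them by hand in a GUI.
# The directory path and the "*.log" pattern are hypothetical examples.
from pathlib import Path

def delete_matching(directory: str, pattern: str) -> int:
    """Delete every file under `directory` whose name matches `pattern`.

    Returns the number of files deleted.
    """
    deleted = 0
    for path in Path(directory).glob(pattern):
        if path.is_file():
            path.unlink()  # removes the file; no scrolling or highlighting needed
            deleted += 1
    return deleted
```

Run against a folder of 10,000 log files, `delete_matching(folder, "*.log")` does in one call what would take many minutes of selection in a file manager, which is exactly why power users keep command languages close at hand.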
Here is a selection of classic HCI videos on the Internet that demonstrate pioneering interfaces:
• The Sketchpad: Ivan Sutherland (1963) describes the first interactive graphical interface: https://youtu.be/6orsmFndx_o
• The Mother of All Demos: Douglas Engelbart (1968) describes the first WIMP: http://youtu.be/yJDv-zdhzMy
• Put That There (1979): MIT demonstrates the first speech and gesture interface: https://youtu.be/RyBEUyEtxQo
• Unveiling the Genius of Multitouch Interface Design: Jeff Han gives a TED talk (2007): http://youtu.be/ac0E6deG4AU
• Intel's Future Technology Vision (2012): http://youtu.be/g_cauM3kccI
Watch a video demonstration of TextSL at http://youtu.be/0Ba_w7u44MM.
Research and Design Considerations
In the 1980s, much research investigated ways of optimizing command interfaces. The form of
the commands (including use of abbreviations, full names, and familiar names), syntax (such
as how best to combine different commands), and organization (for instance, how to structure
options) are examples of some of the main areas that have been investigated (Shneiderman, 1998).
A further concern was which command names would be the easiest to remember. A number of
variables were tested, including how familiar users were with the chosen names. Findings from
a number of studies, however, were inconclusive; some found specific names were better remem-
bered than general ones (Barnard et al., 1982), others showed that names selected by users them-
selves were preferable (see Ledgard et al., 1981; Scapin, 1981), while yet others demonstrated that
high-frequency words were better remembered than low-frequency ones (Gunther et al., 1986).
The most relevant design principle is consistency (see Chapter 1, “What Is Interaction
Design?”). Therefore, the method used for labeling/naming the commands should be chosen
to be as consistent as possible; for example, always use the first letters of the operation when
using abbreviations.
BOX 7.1
Command Interfaces for Virtual Worlds
Virtual worlds, such as Second Life, have become popular places for learning and social-
izing. Unfortunately, people who are visually impaired cannot interact in a visual capacity.
A command-based interface, called TextSL, was developed to enable them to participate using
a screen reader (Folmer et al., 2009). Commands can be issued to enable the user to move
their avatar around, interact with others, and find out about the environment in which they
are located. Figure 7.1 shows that the user has issued the command for their avatar to smile
and say hello to other avatars who are sitting by a log fire.
Figure 7.1 Second Life command-based interface for visually impaired users
Source: Used courtesy of Eelke Folmer
7.2.2 Graphical User Interfaces
The Xerox Star interface (described in Chapter 3, “Conceptualizing Interaction”) led to the
birth of the graphical user interface (GUI), opening up new possibilities for users to inter-
act with a system and for information to be presented and represented within a graphical
interface. Specifically, new ways of visually designing the interface became possible, which
included the use of color, typography, and imagery (Mullet and Sano, 1995). The original
GUI was called a WIMP (windows, icons, menus, pointer) and consisted of the following:
• Windows: Sections of the screen that can be scrolled, stretched, overlapped, opened, closed,
and moved using a mouse
• Icons: Pictograms that represent applications, objects, commands, and tools that are
opened or activated when clicked on
• Menus: Lists of options that can be scrolled through and selected in the way a menu is used
in a restaurant
• Pointing device: A mouse controlling the cursor as a point of entry to the windows, menus,
and icons on the screen
The first generation of WIMP interfaces was primarily boxy in design; user interaction
took place through a combination of windows, scroll bars, checkboxes, panels, palettes,
and dialog boxes that appeared on the screen in various forms (see Figure 7.2). Develop-
ers were largely constrained by the set of widgets available to them, of which the dialog
box was most prominent. (A widget is a standardized display representation of a control,
like a button or scroll bar, that can be manipulated by the user.) Nowadays, GUIs have
been adapted for mobile and touchscreens. Instead of using a mouse and keyboard as
input, the default action for most users is to swipe and touch using a single finger when
browsing and interacting with digital content. (For more on this subject, see the sections on
touch and mobile interfaces.)
Figure 7.2 The boxy look of the first generation of GUIs
The basic building blocks of the WIMP are still part of the modern GUI used as part of
a display, but they have evolved into a number of different forms and types. For example,
there are now many different types of icons and menus, including audio icons and audio
menus, 3D animated icons, and even tiny icon-based menus that can fit onto a smartwatch
screen (see Figure 7.3). Windows have also greatly expanded in terms of how they are used
and what they are used for; for example, a variety of dialog boxes, interactive forms, and
feedback/error message boxes have become pervasive. In addition, a number of graphical
elements that were not part of the WIMP interface have been incorporated into the GUI.
These include toolbars and docks (a row or column of available applications and icons of
other objects such as open files) and rollovers (where text labels appear next to an icon or
part of the screen as the cursor is rolled over it). Here, we give an overview of the design
considerations concerning the basic building blocks of the WIMP/GUI: windows, menus,
and icons.
Window Design
Windows were invented to overcome the physical constraints of a computer display, ena-
bling more information to be viewed and tasks to be performed on the same screen. Multi-
ple windows can be opened at any one time, for example, web browsers, word processing
documents, photos, and slideshows, enabling the user to switch between them when needing
to look at or work on different documents, files, and apps. They can also enable multiple
instances of one app to be opened, such as when using a web browser.
Scrolling bars within windows also enable more information to be viewed than is possi-
ble on one screen. Scroll bars can be placed vertically and horizontally in windows to enable
upward, downward, and sideways movements through a document and can be controlled
using a touchpad, mouse, or arrow keys. Touch interfaces enable users to scroll content sim-
ply by swiping the screen to the left or right or up or down.
Figure 7.3 Simple smartwatch menus with one, two, or three options
Source: https://developer.apple.com/design/human-interface-guidelines/watchos/interface-elements/menus/
One of the problems of having multiple windows open is that it can be difficult to find
specific ones. Various techniques have been developed to help users locate a particular win-
dow, a common one being to provide a list as part of an app menu. macOS also provides a
function that shrinks all windows that are open for a given application so that they can be
seen side by side on one screen. The user needs only to press one function key and then move
the cursor over each one to see what they are called in addition to a visual preview. This tech-
nique enables users to see at a glance what they have in their workspace, and it also allows
them easily to select one to bring forward. Another option is to display all of the windows
open for a particular application, for example, Microsoft Word. Web browsers, like Firefox,
also show thumbnails of the top sites visited and a selection of sites that you have saved or
visited, which are called highlights (see Figure 7.4).
A particular kind of window that is commonly used is the dialog box. Confirmations,
error messages, checklists, and forms are presented through dialog boxes. Information in the
dialog boxes is often designed to guide user interaction, with the user following the sequence
of options provided. Examples include a sequenced series of forms (such as Wizards) present-
ing the necessary and optional choices that need to be filled in when choosing a PowerPoint
presentation or an Excel spreadsheet. The downside of this style of interaction is that there
is a tendency to cram too much information or data entry fields into one box, making the
interface confusing, crowded, and difficult to read (Mullet and Sano, 1995).
Figure 7.4 Part of the home page for the Firefox browser showing thumbnails of top sites visited
and suggested highlight pages (bottom rows)
BOX 7.2
The Joys of Filling In Forms on the Web
For many of us, shopping on the Internet is generally an enjoyable experience. For exam-
ple, choosing a book on Amazon or flowers from Interflora can be done at our leisure and
convenience. The part that we don’t enjoy, however, is filling in the online form to give the
company the necessary details to pay for the selected items. This can often be a frustrating and
time-consuming experience, especially as there is much variability between sites. Sometimes,
it requires users to create an account and a new password. At other times, guest checkout is
enabled. However, if the site has a record of your email address in its database, it won’t allow
you to use the guest option. If you have forgotten your password, you need to reset it, and this
requires switching from the form to your email account. Once past this hurdle, different kinds
of interactive forms pop up for you to enter your mailing address and credit card details. The
form may provide the option of finding your address by allowing you to enter a postal or ZIP
code. It may also have asterisks that denote fields that must be filled in.
Having so much inconsistency can frustrate the user, as they are unable to use the same
mental model for filling in checkout forms. It is easy to overlook or miss a box that needs to
be filled in, and after submitting the page, an error message may come back from the system
saying it is incomplete. This may require the user to have to enter sensitive information again,
as it will have been removed in the data processing stage (for example, the user’s credit card
number and the three or four-digit security code on the back or front of the card, respectively).
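The resubmission cycle just described is partly a design choice: a form can validate all required fields in one pass and report every gap together, instead of bouncing the user back one error at a time. The sketch below is illustrative only; the field names are invented, not taken from any real checkout form.

```python
# Illustrative sketch: validate a checkout form in one pass so that the user
# sees every missing required field at once. Field names are hypothetical.
REQUIRED_FIELDS = ("name", "address_line_1", "city", "postal_code", "card_number")

def missing_fields(form: dict) -> list:
    """Return the required fields that are absent or contain only whitespace."""
    return [field for field in REQUIRED_FIELDS
            if not form.get(field, "").strip()]
```

A form that reports the full `missing_fields(...)` list in a single error message spares the user the cycle of submit, fail, and re-enter, and pairing it with retained (non-sensitive) field values addresses the complaint above about data being wiped between attempts.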
To add to the frustration, many online forms accept only fixed data formats, meaning that people whose information does not fit within those constraints are unable to complete the form. For example, one kind of form will accept only a certain type of mailing address format. Boxes are provided for address line 1 and address line 2, with no extra lines for addresses that run to more than two lines; a line for the town/city; and a line for the ZIP code (if the site is based in the United States) or other postal code (if based in another country). The format for these codes differs by country, making it difficult for non-U.S. residents (and for U.S. residents on other countries' sites) to fill in this part.
Another gripe about online registration forms is the country of residence box that opens
up as a never-ending menu, listing all of the countries in the world in alphabetical order.
Instead of typing in the country in which they reside, users are required to select the one
they are from, which is fine if you happen to live in Australia or Austria but not if you live in
Venezuela or Zambia (see Figure 7.5).
This is an example of where the design principle of recognition over recall (see Chapter 4,
“Cognitive Aspects”) does not apply and where the converse is true. A better design is to have
a predictive text option, where users need only to type in the first one or two letters of their
country to cause a narrowed-down list of choices to appear from which they can select within
the interface. Or, one smart option is for the form to preselect the user's country of origin by using information shared from the user's computer or stored in the cloud. Automating the filling in of online forms, by providing prestored information about a user (for example, their address and credit card details), can obviously help reduce usability problems, provided users are comfortable with their data being stored in this way.
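The predictive-text design just described amounts to simple prefix filtering. The sketch below is a minimal illustration (the six-country list is a sample, not a complete dataset); a real form would filter the full country list as the user types.

```python
# Illustrative sketch of predictive narrowing: typing one or two letters
# filters a long country menu down to a handful of selectable choices.
# The country list is a small sample for demonstration only.
SAMPLE_COUNTRIES = [
    "Australia", "Austria", "Venezuela", "Vietnam", "Zambia", "Zimbabwe",
]

def narrow_choices(prefix: str, options=SAMPLE_COUNTRIES) -> list:
    """Return the options whose names begin with `prefix` (case-insensitive)."""
    p = prefix.casefold()
    return [name for name in options if name.casefold().startswith(p)]
```

With this approach, a user in Zambia types "z" and chooses between two entries rather than scrolling to the end of an alphabetical list, which is the compromise between recognition and recall that the text recommends.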
Figure 7.5 A scrolling menu of country names
Source: https://www.jollyflorist.com
ACTIVITY 7.1
Go to the Interflora site in the United Kingdom, click the international delivery option, and
then click “select a country.” How are the countries ordered? Is it an improvement over the
scrolling pop-up menu?
Comment
Earlier versions of the full list of countries to which flowers could be sent by interflora.co.uk
listed eight countries at the top, starting with the United Kingdom and then the United States,
France, Germany, Italy, Switzerland, Austria, and Spain. This was followed by the remaining
set of countries listed in alphabetical order. The reason for having this particular ordering is
likely to have been because the top eight are the countries that have most customers, with the
U.K. residents using the service the most. The website has changed now to show top countries
by national flag followed by a table format, grouping all of the countries in alphabetical order
using four columns across the page (see Figure 7.6). Do you think this is an improvement over
the use of a single scrolling list of country names shown in Figure 7.5? The use of letter head-
ings and shading makes searching quicker.
Research and Design Considerations
A key research concern is window management: finding ways of enabling users to move fluidly between different windows (and displays) and to be able to switch their attention rapidly between windows to find the information they need or to work on the document/task within each window without getting distracted. Studies of how people use windows and multiple displays have shown that window activation time (that is, the time during which a window is open and the user is interacting with it) is relatively short, an average of 20 seconds, suggesting that people switch frequently between different documents and applications (Hutchings et al., 2004). Widgets like the taskbar are often used for switching between windows.
Another technique is the use of tabs that appear at the top of the web browser showing the name and logo of the web pages that have been visited. This mechanism enables users to rapidly scan and switch among the web pages they have visited. However, the tabs can quickly multiply if a user visits a number of sites. To accommodate new ones, the web browser reduces the size of the tabs by shortening the information that appears on each. The downside of doing this, however, is that it can make it more difficult to read and recognize web pages when looking at the smaller tabs. It is possible to reverse this shrinking by removing unwanted tabs by clicking the delete icon for each one. This has the effect of making more space available for the remaining tabs.
There are multiple ways that an online form can be designed to obtain details from someone. It is not surprising, therefore, that there are so many different types in use. Design guidelines are available to help decide which format and widgets are best to use. For example, see https://www.smashingmagazine.com/printed-books/form-design-patterns/. Another option is to automate form completion by asking the user to store their personal details on their machine or in a company's database, requiring them only to enter security information. However, many people are becoming leery of storing their personal data in this way, fearful because of the number of data breaches that are often reported in the news.
Figure 7.6 An excerpt of the listing of countries in alphabetical order from interflora.co.uk
Source: https://www.interflora.co.uk
Menu Design
Interface menus are typically ordered across the top row or down the side of a screen using category headers as part of a menu bar. The contents of the menus are for the large part invisible, only dropping down when the header is selected or rolled over with a mouse.
The various options under each menu are typically ordered from top to bottom in terms of
most frequently used options and grouped in terms of their similarity with one another; for
example, all formatting commands are placed together.
There are numerous menu interface styles, including flat lists, drop-down, pop-up, contex-
tual, collapsible, mega, and expanding ones, such as cascading menus. Flat menus are good at
displaying a small number of options at the same time or where the size of the display is small,
for example, on smartphones, cameras, and smartwatches. However, they often have to nest lists of options within one another, requiring a user to take several steps to reach the list with the desired option. Once deep down in a nested menu, the user then has to take the same number of steps to get back to the top of the menu. Moving back through previous screens can be tedious.
Expanding menus enable more options to be shown on a single screen than is possible with
a single flat menu list. This makes navigation more flexible, allowing for the selection of options
to be done in the same window. An example is the cascading menu, which provides secondary
and even tertiary menus to appear alongside the primary active drop-down menu, enabling
further related options to be selected, such as when selecting track changes from the tools menu
leads to a secondary menu of three options by which to track changes in a Word document. The
downside of using expanding menus, however, is that they require precise control. Users can
often end up making errors, namely, overshooting or selecting the wrong options. In particular,
cascading menus require users to move their cursor over the menu item, while holding the
mouse or touchpad down, and then to move their cursor over to the next menu list when
the cascading menu appears and select the next desired option. This can result in the user under
or overshooting a menu option, or sometimes accidentally closing the entire menu. Another
example of an expandable menu is a mega menu, in which many options can be displayed
using a 2D drop-down layout (see Figure 7.7). This type of menu is popular with online shop-
ping sites, where lots of items can be viewed at a glance on the same screen without the need to
scroll. Hovering, tapping, or clicking is used to reveal more details for a selected item.
Figure 7.7 A mega menu
Source: https://www.johnlewis.com
Collapsible menus provide an alternative approach to expanding menus in that they
allow further options to be made visible by selecting a header. The headings appear adjacent
to each other, providing the user with an overview of the content available (see Figure 7.8).
This reduces the amount of scrolling needed. Contextual menus provide access to often-used
commands associated with a particular item, for example, an icon. They provide appropri-
ate commands that make sense in the context of a current task. They appear when the user
presses the Control key while clicking an interface element. For example, clicking a photo
on a website together with holding down the Ctrl key results in a small set of relevant menu
options appearing in an overlapping window, such as open it in a new window, save it, or
copy it. The advantage of contextual menus is that they provide a limited number of options
associated with an interface element, overcoming some of the navigation problems associated
with cascading and expanding menus.
Figure 7.8 A template for a collapsible menu
Source: https://inclusive-components.design/collapsible-sections/. Reproduced with permission of Smashing
Magazine
ACTIVITY 7.2
Open an application that you use frequently (for instance, a word processor, email client,
or web browser) on a PC/laptop or tablet and look at the menu header names (but do not
open them just yet). For each menu header—File, Edit, Tools, and so on—write down what
options you think are listed under each. Then look at the contents under each header. How
many options were you able to remember, and how many did you put in the wrong category?
Now try to select the correct menu header for the following options (assuming that they
are included in the application): Replace, Save, Spelling, and Sort. Did you select the correct
header each time, or did you have to browse through a number of them?
Comment
Popular everyday applications, like word processors, have grown enormously in terms of the
functions they offer. The current version (2019) of Microsoft Word, for example, has 8 menu
headers and numerous toolbars. Under each menu header there are on average 15 options,
some of which are hidden under subheadings and appear only when they are rolled over with the mouse. Likewise, for each toolbar, there is a set of tools available, be it for Drawing, Formatting, Web, Table, or Borders. Remembering the location of frequently used commands like Spelling and Replace is often achieved by remembering their spatial location. For infrequently used commands, like sorting a list of references into alphabetical order, users can spend time flicking through the menus to find the command Sort. It is difficult to remember that the command Sort should be under the Table heading, since what it is doing is not a table operation, but a tool to organize a section of a document. It would be more intuitive if the command were under the Tools header along with similar tools like Spelling. What this example illustrates is just how difficult it can be to group menu options into clearly defined and obvious categories. Some fit into several categories, while it can be difficult to group others. The placement of options in menus can also change between different versions of an application as more functions are added.
Research and Design Considerations
An important design consideration is to decide which terms to use for menu options. Short phrases like "bring all to front" can be more informative than single words like "front." However, the space for listing menu items is often restricted, such that menu names need to be short. They also need to be distinguishable, that is, not easily confused with one another so that the user does not choose the wrong one by mistake. Operations such as Quit and Save should also be clearly separated to avoid the accidental loss of work.
The choice of which type of menu to use will often be determined by the application and the type of device for which it is being designed. Which is best will also depend on the number of menu options and the size of the display available in which to present them. Flat menus are best for displaying a small number of options at one time, while expanding and collapsible menus are good for showing a large number of options, such as those available in file and document creation/editing applications. Usability testing comparing drop-down menus with mega menus has shown the latter to be more effective and easier to navigate. The main reason is that mega menus enable users to readily scan many items at a glance on the same page and, in doing so, find what they are looking for (Nielsen and Li, 2017).
Icon Design
The appearance of icons in an interface came about following the Xerox Star project. They were used to represent objects as part of the desktop metaphor, namely, folders, documents, trashcans, inboxes, and outboxes. The assumption behind using icons instead of text labels is that they are easier to learn and remember, especially for non-expert computer users. They can also be designed to be compact and variably positioned on a screen.
Icons have become a pervasive feature of the interface. They now populate every app
and operating system and are used for all manner of functions besides representing desktop
objects. These include depicting tools (for example, Paint 3D), status (such as, Wi-Fi strength),
categories of apps (for instance, health or personal finance), and a diversity of abstract opera-
tions (including cut, paste, next, accept, and change). They have also gone through many
changes in their look and feel—black and white, color, shadowing, photorealistic images, 3D
rendering, and animation have all been used.
Whereas early icon designers were constrained by the graphical display technology of the
day, current interface developers have much more flexibility. For example, the use of
anti-aliasing techniques enables curves and non-rectilinear lines to be drawn, enabling more
photo-illustrative styles to be developed (anti-aliasing means adding pixels around a jagged
border of an object to smooth its outline visually). App icons are often designed to be both
visually attractive and informative. The goal is to make them inviting, emotionally appealing,
memorable, and distinctive.
Different graphical genres have been used to group and identify different categories of icons. Figure 7.9 shows how colorful photorealistic images, each slanting slightly to the left, were used in the original Apple Aqua set for the category of user applications (such as email), whereas simple, straight-on monochrome images were used for the class of utility applications (for instance, printer setup). The former have a fun feel to them, whereas the latter have a more serious look about them. While a number of other styles have since been developed, the use of slanting versus straight-facing icons to signify different icon categories is still in use.
Icons can be designed to represent objects and operations in the interface using concrete
objects and/or abstract symbols. The mapping between the icon and underlying object or opera-
tion to which it refers can be similar (such as a picture of a file to represent the object file), ana-
logical (for instance, a picture of a pair of scissors to represent cut), or arbitrary (for example, the
use of an X to represent delete). The most effective icons are generally those that are isomorphic
since they have a direct mapping between what is being represented and how it is represented.
Many operations in an interface, however, are of actions to be performed on objects, making it
more difficult to represent them using direct mapping. Instead, an effective technique is to use
a combination of objects and symbols that capture the salient part of an action by using anal-
ogy, association, or convention (Rogers, 1989). For example, using a picture of a pair of scissors
to represent cut in a word-processing application provides a sufficient clue as long as the user
understands the convention of cut for deleting text.
Figure 7.9 Two styles of Apple icons used to represent different kinds of functions
Another approach that many smartphone designers use is flat 2D icons. These are simple and use strong colors and pictograms or symbols. The effect is to make them easily recognizable and distinctive. Examples shown in Figure 7.10a include the white ghost on a yellow background (Snapchat), a white line bubble with a solid white phone handset in a speech bubble on a lime-green background (WhatsApp), and the sun next to a cloud (weather).
Icons that appear on toolbars or palettes as part of an application, or that are presented on small device displays (such as digital cameras or smartwatches), have much less screen real estate available. Because of this, they have been designed to be simple, emphasizing the outline form of an object or symbol and using only grayscale or one or two colors (see Figure 7.10b). They tend to convey the status, tool, or action using a concrete object (for example, the airplane symbol signaling whether airplane mode is on or off) or an abstract symbol (such as three waves that light up from none to all to convey the strength of the area’s Wi-Fi).
Figure 7.10 2D icons designed for (a) a smartphone and (b) a smartwatch
Source: (a) Helen Sharp (b) https://support.apple.com/en-ca/HT205550
ACTIVITY 7.3
Sketch simple icons to represent the following operations to appear on a digital camera screen:
• Turn the image 90 degrees sideways.
• Crop the image.
• Auto-enhance the image.
• More options.
Show them to someone else, tell them that they are icons for a new digital camera intended to be really simple to use, and see whether they can understand what each one represents.
Comment
Figure 7.11 shows the basic Edit Photo icons on an iPhone that appear at the bottom of the screen when a user selects the edit function. The box with extended lines and two arrows is the icon for cropping an image; the three overlapping translucent circles represent “different lenses” that can be used; the wand in the top-right corner means “auto-enhance”; and the circle with three dots in it means more functions.
Figure 7.11 The basic Edit Photo icons that appear at the top and bottom of an iPhone display
Research and Design Considerations
There are many icon libraries available that developers can download for free (for instance, https://thenounproject.com/ or https://fontawesome.com/). Various online tutorials and books on how to design icons are also available (see Hicks, 2012), together with sets of proprietary guidelines and style guides. For example, Apple provides its developers with style guides, explaining why certain designs are preferable to others and how to design icon sets. Style guides are also covered in more depth in Chapter 13, “Interaction Design in Practice.” On its developers’ website (developer.apple.com), advice is given on how and why certain graphical elements should be used when developing different types of icon. Among the various guidelines, it suggests that different categories of application (for example, Business, Utilities, Entertainment, and so on) should be represented by a different genre, and it recommends displaying a tool to communicate the nature of a task, such as a magnifying glass for searching or a camera for a photo-editing tool. Android and Microsoft also provide extensive guidance and step-by-step procedures on how to design icons for applications on their websites.
To help disambiguate the meaning of icons, text labels can be used under, above, or to the side of them. This method is effective for toolbars that have small icon sets, such as those appearing as part of a web browser, but it is not as good for applications that have large icon sets, for example, photo editing or word processing, since the screen can get cluttered, making it harder and slower to find an icon. To prevent text/icon clutter on the interface, a hover function can be used, where a text label appears adjacent to or above an icon once the user holds the cursor over it for a second, remaining for as long as the cursor stays on it. This method allows identifying information to be displayed temporarily, when needed.
7.2.3 Multimedia
Multimedia, as the name implies, combines different media within a single interface, namely, graphics, text, video, sound, and animation, and links them together with various forms of interactivity. Users can click links in an image or text that trigger another medium, such as an animation or a video. From there they can return to where they were previously or jump to another media source. The assumption is that a combination of media and interactivity can provide better ways of presenting information than a single medium alone, for example, just text or just video. The added value of multimedia is that it can be easier for learning, better for understanding, more engaging, and more pleasant (Scaife and Rogers, 1996).
Another distinctive feature of multimedia is its ability to facilitate rapid access to multiple representations of information. Many multimedia encyclopedias and digital libraries have been designed based on this multiplicity principle, providing an assortment of audio and visual materials on a given topic. For example, when looking for information about the heart, a typical multimedia-based encyclopedia will provide the following:
• One or two video clips of a real live heart pumping and possibly a heart transplant operation
• Audio recordings of the heart beating and perhaps an eminent physician talking about the cause of heart disease
• Static diagrams and animations of the circulatory system, sometimes with narration
• Several columns of hypertext, describing the structure and function of the heart
Hands-on interactive simulations have also been incorporated as part of multimedia learning environments. An early example was the Cardiac Tutor, developed to teach students about cardiac resuscitation. It required students to save patients by selecting the correct set of procedures in the correct order from various options displayed on the computer screen (Eliot and Woolf, 1994). Other kinds of multimedia narratives and games have also been developed to support discovery learning by encouraging children to explore different parts of the display by noticing a hotspot or other kind of link. For example, https://KidsDiscover.com/apps/ has many tablet apps that use a combination of animations, photos, interactive 3D models, and audio to teach kids about science and social studies topics. Using swiping and touching, kids can reveal, scroll through, select audio narration, and watch video tours. Figure 7.12, for example, has a “slide” mechanism as part of a tablet interface that enables the child to do a side-by-side comparison of what Roman ruins look like now and in ancient Roman times.
Another example of a learning app with an interesting UI can be seen at https://www.abcmouse.com/apps.
Multimedia has largely been developed for training, educational, and entertainment purposes. But to what extent is the assumption true that learning (such as reading and scientific inquiry skills) and playing can be enhanced through interacting with engaging multimedia interfaces? What actually happens when users are given unlimited, easy access to multiple media and simulations? Do they systematically switch between the various media and “read” all of the multiple representations on a particular subject, or are they more selective in what they look at and listen to?
Figure 7.12 An example of a multimedia learning app designed for tablets
Source: KidsDiscover app “Roman Empire for iPad”
ACTIVITY 7.4
Watch this video of Don Norman appearing in his first multimedia CD-ROM book (1994),
where he pops up every now and again in boxes or at the side of the page to illustrate the
points being discussed on that page: http://vimeo.com/18687931.
How do you think students used this kind of interactive e-textbook?
Comment
Anyone who has interacted with educational multimedia knows just how tempting it is to play the video clips and animations while skimming through the accompanying text or static diagrams. The former are dynamic, easy, and enjoyable to watch, while the latter are viewed as static and difficult to read from the screen. In an evaluation of the original Voyager’s “First Person: Donald Norman, Defending Human Attributes in the Age of the Machine,” students consistently admitted to ignoring the text on the interface in search of clickable icons of the author, which when selected would present an animated video of him explaining some aspect of design (Rogers and Aldrich, 1996). Given the choice to explore multimedia material in numerous ways, ironically, users tend to be highly selective as to what they actually pay attention to, adopting a channel-hopping mode of interaction. While enabling users to select for themselves the information they want to view or the features to explore, there is the danger that multimedia environments may in fact promote fragmented interactions where only part of the media is ever viewed. In a review of research comparing reading from screens versus paper, Lauren Singer and Patricia Alexander (2017) found that despite students saying they preferred reading from screens, their actual performance was worse than when using paper-based textbooks.
Hence, online multimedia material may be good for supporting certain kinds of activities, such as browsing, but less optimal for others, for instance, reading at length about a topic. One way to encourage more systematic and extensive interactions (when it is considered important for the activity at hand) is to require certain activities that entail reading the accompanying text to be completed before the user is allowed to move on to the next level or task.
Research and Design Considerations
A core research question is how to encourage users to interact with all aspects of a multime-
dia app, especially given the tendency to select videos to watch rather than text to read. One
technique is to provide a diversity of hands-on interactivities and simulations that require the
user to complete a task, solve a problem, or explore different aspects of a topic that involves
reading some accompanying text. Specific examples include electronic notebooks that are
integrated as part of the interface, where users can type in their own material; multiple-choice
quizzes that provide feedback about how well they have done; interactive puzzles where they have
to select and position different pieces in the right combination; and simulation-type games
where they have to follow a set of procedures to achieve some goal for a given scenario. Another approach is to employ dynalinking, where information depicted in one window explicitly changes in relation to what happens in another. This can help users keep track of multiple representations and see the relationship between them (Scaife and Rogers, 1996).
Specific guidelines are available that recommend how best to combine multiple media for different kinds of task, for example, when to use audio with graphics, sound with animations, and so on, for different learning tasks. As a rule of thumb, audio is good for stimulating the imagination, movies for depicting action, text for conveying details, and diagrams for conveying ideas. From such generalizations, it is possible to devise a presentation strategy for online learning. This could be along the lines of the following:
1. Stimulate the imagination through playing an audio clip.
2. Present an idea in diagrammatic form.
3. Display further details about the concept through hypertext.
7.2.4 Virtual Reality
Virtual reality (VR) has been around since the 1970s, when researchers first began developing computer-generated graphical simulations to create “the illusion of participation in a synthetic environment rather than external observation of such an environment” (Gigante, 1993, p. 3). The goal was to create user experiences that feel virtually real when interacting with an artificial environment. Images are displayed stereoscopically to the users—most commonly through VR headsets—and objects within the field of vision can be interacted with via an input device like a joystick.
The 3D graphics can be projected onto Cave Automatic Virtual Environment (CAVE) floor and wall surfaces, desktops, 3D TVs, headsets, or large shared displays, for instance, IMAX screens. One of the main attractions of VR is that it can provide opportunities for new kinds of immersive experiences, enabling users to interact with objects and navigate in 3D space in ways not possible in the physical world or a 2D graphical interface. Besides looking at and navigating through a 360-degree visual landscape, auditory and haptic feedback can be added to make the experience feel even more like the real world. The resulting user experience can be highly engaging; it can feel as if one really is flying around a virtual world. People can become completely absorbed by the experience. The sense of presence can make the virtual setting seem convincing. Presence, in this case, means “a state of consciousness, the (psychological) sense of being in the virtual environment” (Slater and Wilbur, 1997, p. 605), where someone behaves in a similar way to how they would at an equivalent real event.
VR simulations of the world can be constructed to have a higher level of fidelity with the objects they represent compared to other forms of graphical interface, for example, multimedia. The illusion afforded by the technology can make virtual objects appear to be very lifelike and behave according to the laws of physics. For example, landing and take-off terrains developed for flight simulators can appear to be very realistic. Moreover, it is assumed that learning and training applications can be improved through having a greater fidelity to the represented world.
Another distinguishing feature of VR is the different viewpoints it can offer. Players can have a first-person perspective, where their view of the game or environment is through their own eyes, or a third-person perspective, where they see the world through an avatar visually represented on the screen. An example of a first-person perspective is that experienced in first-person shooter games such as DOOM, where the player moves through the environment without seeing a representation of themselves. It requires the user to imagine what they might look like and decide how best to move around. An example of a third-person perspective is that experienced in Tomb Raider, where the player sees the virtual world above and behind the avatar of Lara Croft. The user controls Lara’s interactions with the environment by controlling her movements, for example, making her jump, run, or crouch. Avatars can be represented from behind or from the front, depending on how the user controls their movements. First-person perspectives are typically used for flying/driving simulations and games, for instance, car racing, where it is important to have direct and immediate control to steer the virtual vehicle. Third-person perspectives are more commonly used in games, learning environments, and simulations, where it is important to see a representation of oneself with respect to the environment and others in it. In some virtual environments, it is possible to switch between the two perspectives, enabling the user to experience different viewpoints on the same game or training environment.
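The difference between the two viewpoints can be thought of as a choice of camera placement relative to the avatar. The following sketch illustrates that idea only; the names, axes, and offset values are assumptions for the example, not taken from any particular game engine.

```typescript
// Illustrative sketch: switching between first- and third-person cameras.
// All names and constants here are hypothetical.

type Vec3 = { x: number; y: number; z: number };
type Perspective = "first-person" | "third-person";

// In first-person mode the camera sits at the avatar's eye position;
// in third-person mode it trails above and behind the avatar.
function cameraPosition(
  avatar: Vec3,
  mode: Perspective,
  eyeHeight = 1.7,      // metres above the avatar's origin
  trailDistance = 3.0,  // metres behind the avatar (third person only)
  trailHeight = 1.0     // metres above eye level (third person only)
): Vec3 {
  if (mode === "first-person") {
    return { x: avatar.x, y: avatar.y + eyeHeight, z: avatar.z };
  }
  // Third person: assume the avatar faces along +z, so "behind" is -z.
  return {
    x: avatar.x,
    y: avatar.y + eyeHeight + trailHeight,
    z: avatar.z - trailDistance,
  };
}

const avatar: Vec3 = { x: 0, y: 0, z: 10 };
console.log(cameraPosition(avatar, "first-person")); // camera at the eyes
console.log(cameraPosition(avatar, "third-person")); // camera above and behind
```

Switching perspective at runtime then amounts to re-rendering the scene from the other camera position, which is why some environments can offer both views of the same game.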
In the beginning, head-mounted displays were used to present VR experiences. However, the visuals were often clunky, the headsets were uncomfortable to wear, and the immersive experience sometimes resulted in motion sickness and disorientation. Since then, VR technology has come of age and improved greatly. There are now many off-the-shelf VR headsets (for example, Oculus Go, HTC Vive, and Samsung Gear VR) that are affordable and comfortable. They also have more accurate head tracking, which allows developers to create more compelling games, movies, and virtual environments.
“Out of Home Entertainment” and VR arcades have also become popular worldwide, providing a range of social VR experiences targeted at the general public. For example, Hyper-Reality has developed a number of spooky games for 1–4 players, such as Japanese Adventures, Escape the Lost Pyramid, and the Void. Each game lasts about 40 minutes, during which players have to carry out a set of tasks, such as finding a lost friend in a realm. The immersive entertainment is full of surprises at every turn: one moment a player might be on solid ground and the next in complete darkness. The pleasure is often in not knowing what is going to happen next and in being able to recount the experiences afterward with friends and family.
Another application area is using VR to enrich the experience of reporting and witnessing current affairs and news, especially in evoking feelings of empathy and compassion for real-life experiences (Aronson-Rath et al., 2016). For example, the BBC, together with Aardman Interactive and University College London researchers, developed a VR experience called “We Wait,” which puts the viewer in a place that few foreign reporters have been, namely, on a boat with a group of refugees crossing the Mediterranean Sea (Steed et al., 2018). The goal was to let news reporters and other participants experience how it felt to be there on the boat with the refugees. A particular artistic polygon style, rather than realism, was used to create the characters sitting on the boat (see Figure 7.13). The characters had expressive eyes intended to convey human emotion in response to gaze interaction. The avatars were found to generate an empathic response from participants.
VR is also starting to be used by airlines and travel companies to enrich people’s planning of their travel destinations. For example, the airline KLM has developed a platform called iFly VR (https://360.iflymagazine.com/) that provides an immersive experience intended to inspire people to discover more about the world. A potential danger of this approach, however, is that if the VR experience is too lifelike, it might make people feel they have ‘been there, done that’ and hence don’t need to visit the actual place. KLM’s rationale is quite the opposite: if you make the virtual experience compelling enough, people will want to go there even more. Their first foray into this adventure follows the famous “Fearless Chef” Kiran Jethwa into a jungle in Thailand to look for the world’s most remarkable coffee beans.
MagicLeap has pushed the envelope even further into new realms of virtual reality, combining cameras, sensors, and speakers in a headset that provides quite a different experience—one where users can create their own worlds using various virtual tools, for example, painting a forest or building a castle—that then come alive in the actual physical space in which they reside. In this sense, it is not strictly VR, as it allows wearers to see the virtual world and the virtual objects they have created, or curated, blend with the physical objects in their living room or whatever other space they are in. It is as if, by magic, the two are in the same world. In some ways it is a form of augmented reality (AR), described in Section 7.2.16.
Figure 7.13 Snapshot of polygon graphics used to represent avatars for the “We Wait” VR experience
Source: Steed, Pan, Watson and Slater, https://www.frontiersin.org/articles/10.3389/frobt.2018.00112/full.
Licensed Under CC-BY 4.0
Watch this video of MagicLeap’s Create World where the virtual world meets the
physical world in magical ways: https://youtu.be/K5246156rcQ.
Research and Design Considerations
VR has been developed to support learning and training for numerous skills. Researchers have
designed apps to help people learn to drive a vehicle, fly a plane, and perform delicate surgical
operations—where it is very expensive and potentially dangerous to start learning with the
real thing. Others have investigated whether people can learn to find their way around a real
building/place before visiting it by first navigating a virtual representation of it, see Gabrielli
et al. (2000).
An early example of VR was the Virtual Zoo project. Allison et al. (1997) found that
people were highly engaged and very much enjoyed the experience of adopting the role of a
gorilla, navigating the environment, and watching other gorillas respond to their movements
and presence.
Virtual environments (VE) have also been designed to help people practice social and
speaking skills and confront their social phobias, see Cobb et al. (2002) and Slater et al.
(1999). An underlying assumption is that the environment can be designed as a safe place
to help people gently overcome their fears (for example, spiders, talking in public, and so
forth) by confronting them through different levels of closeness and unpleasantness (such as
by seeing a small virtual spider move far away, seeing a medium one sitting nearby, and then
finally touching a large one). Studies have shown that people can readily suspend their disbelief, imagining a virtual spider to be a real one or a virtual audience to be a real audience. For
example, Slater et al. (1999) found that people rated themselves as being less anxious after
speaking to a virtual audience that was programmed to respond to them in a positive fashion
than after speaking to virtual audiences programmed to respond to them negatively.
Core design considerations include the importance of having a virtual self-body as part of a VR experience to enhance the feeling of presence; how to prevent users from experiencing simulator sickness, through experimenting with galvanic stimulation; determining the most effective ways of enabling users to navigate, for instance, first person versus third person; how to control their interactions and movements, for example, using head and body movements; how best to enable users to interact with information in VR, for example, using keypads, pointing, or joystick buttons; and how to enable users to collaborate and communicate with others in the virtual environment.
A central concern is the level of realism to target. Is it necessary to design avatars and the environments that they inhabit to be life-like, using rich graphics, or can simpler and more abstract forms be used that are nonetheless equally capable of engendering a sense of presence? Do you need to provide a visual representation of the arms and hands for holding objects for a self-avatar, or is it enough to have continuous movement of the object? Research has shown that it is possible for objects to appear to be moving with invisible hands, as if they were present. This has been coined “tomato presence,” that is, where presence is maintained using a stand-in object in VR (for instance, a tomato). (See https://owlchemylabs.com/tomatopresence/.)
3D software toolkits are also available, making it much easier for developers and researchers to create virtual environments. The most popular is Unity. 3D worlds can be created using its APIs, toolkits, and physics engines to run on multiple platforms, for example, mobile, desktop, console, TV, VR, AR, and the web.
Peter Rubin’s (2018) guide to VR published in Wired magazine provides a summary and speculation about its future: https://www.wired.com/story/wired-guide-to-virtual-reality/.
7.2.5 Website Design
Early websites were largely text-based, providing hyperlinks to different places or pages of text. Much of the design effort was concerned with the information architecture, that is, how best to structure information at the interface level to enable users to navigate and access it easily and quickly. For example, Jakob Nielsen (2000) adapted his and Rolf Molich’s usability guidelines (Nielsen and Molich, 1990) to make them applicable to website design, focusing on simplicity, feedback, speed, legibility, and ease of use. He also stressed how critical download time was to the success of a website. Simply put, users who have to wait too long for a page to appear are likely to move on somewhere else.
Since then, the goal of web design has been to develop sites that are not only usable but also aesthetically pleasing. Getting the graphical design right, therefore, is critical. The use of graphical elements (such as background images, color, bold text, and icons) can make a website look distinctive, striking, and pleasurable for users when they first view it, and can also make it readily recognizable on their return. However, there is the danger that designers can get carried away with the appearance at the expense of usability, making it difficult for users to find content and navigate the site.
Steve Krug (2014) discusses this usability versus attractiveness dilemma in terms of the difference between how designers create websites and how users actually view them. He argues that many web designers create sites as if the user were going to pore over each page, reading the finely crafted text word for word; looking at the use of images, color, icons, and so forth; examining how the various items have been organized on the site; and then contemplating their options before they finally select a link. Users, however, often behave quite differently. They will glance at a new page, scan part of it, and click the first link that catches their interest or looks like it might lead them to what they want.
Much of the content on a web page is not read. In Krug’s words, web designers are “thinking great literature” (or at least “product brochure”), while the user’s reality is much closer to a “billboard going by at 60 miles an hour” (Krug, 2014, p. 21). While somewhat of a caricature of web designers and users, his depiction highlights the discrepancy between the
meticulous ways that designers create their websites and the rapid and less than systematic
approach that users take to view them. To help web developers navigate the many choices they have to make, Jason Beaird and James George (2014) have come up with a number of guidelines intended to help them achieve a balance between color, layout and composition, texture, typography, and imagery. They also cover mobile and responsive web design. Other website guidelines are mentioned in Chapter 16.
Web designers now have a number of languages available for building websites, such as Ruby and Python, alongside HTML5 and front-end technologies such as JavaScript and CSS. Libraries, such as React, and open source toolkits, such as Bootstrap, enable developers to get started quickly when prototyping their ideas for a website. WordPress also provides users with an easy-to-use interface and hundreds of free templates to use as a basis when creating their own websites. In addition, built-in optimization and responsive, mobile-ready themes are available. Customized web pages are available for smartphone browsers that provide scrolling lists of articles, games, tunes, and so on, rather than hyperlinked pages.
Another interface element that has become an integral part of many websites is breadcrumb navigation. Breadcrumbs are category labels that appear on a web page and enable users to peruse other pages without losing track of where they have come from (see Figure 7.14). The term comes from the way-finding technique that Hansel used in the Brothers Grimm fairy tale Hansel and Gretel; the metaphor conjures up the idea of leaving a path to follow back. Breadcrumbs are also used by search engine optimization tools, which match a user’s search terms with relevant web pages using the breadcrumbs. Breadcrumbs aid usability in a number of ways, including helping users know where they are relative to the rest of the website, enabling one-click access to higher site levels, and encouraging first-time visitors to continue browsing a website after having viewed the landing page (Mifsud, 2011). Using them is therefore good practice for other web applications besides websites.
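A breadcrumb trail is usually derived from a page’s position in the site hierarchy, often reflected in its URL path. The sketch below shows one simple way this might be done; the function name, the “Home” root crumb, and the slug-to-label rule are assumptions for illustration, not how any particular site (such as BestBuy) actually generates its trail.

```typescript
// Hypothetical sketch: deriving a breadcrumb trail from a URL path.

interface Crumb {
  label: string; // text shown to the user
  href: string;  // link back to that level of the site
}

function breadcrumbsFromPath(path: string): Crumb[] {
  const segments = path.split("/").filter(Boolean);
  const crumbs: Crumb[] = [{ label: "Home", href: "/" }];
  let href = "";
  for (const segment of segments) {
    href += "/" + segment;
    // Turn a URL slug such as "smart-lights" into a readable label.
    const label = segment
      .split("-")
      .map((w) => w.charAt(0).toUpperCase() + w.slice(1))
      .join(" ");
    crumbs.push({ label, href });
  }
  return crumbs;
}

// e.g. "/smart-home/smart-lighting/smart-lights" produces the trail
// Home > Smart Home > Smart Lighting > Smart Lights
const trail = breadcrumbsFromPath("/smart-home/smart-lighting/smart-lights");
console.log(trail.map((c) => c.label).join(" > "));
```

Because each crumb carries an `href` back to its level, the trail gives the one-click access to higher site levels described above.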
With the arrival of tablets and smartphones, web designers needed to rethink how to design web browsers and websites for them, as they realized that the touchscreen affords a different interaction style than PCs/laptops. The standard desktop interface was found not to work as well on a tablet or smartphone. In particular, the typical fonts, buttons, and menu tabs were too small and awkward to select when using a finger. Instead of double-clicking interface elements, as users do with a mouse or trackpad, tablet and smartphone screens enable finger tapping. The main methods of navigation are by swiping and pinching. A new style
of website emerged that mapped better to this kind of interaction style but also one that the user could interact with easily when using a mouse and trackpad. Responsive websites were developed that change their layout, graphic design, font, and appearance depending on the screen size (smartphone, tablet, or PC) on which they are being displayed.
Figure 7.14 A breadcrumb trail on the BestBuy website showing three choices made by the user to get to Smart Lights
Source: https://www.bestbuy.ca
If you look at the design of many websites, you will see that the front page presents a banner at the top, a short promotional video about the company/product/service, arrows to the left or right to indicate where to flick to move through pages, and further details appearing beneath the home page that the user can scroll through. Navigation is largely done by swiping pages horizontally or scrolling up and down.
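The layout switching that responsive websites perform is typically driven by breakpoints on the viewport width. The following sketch illustrates the idea; the breakpoint values are common conventions assumed for the example, not values from any particular framework.

```typescript
// Hypothetical sketch of responsive breakpoint logic: the layout is
// chosen from the viewport width reported by the device.

type Layout = "phone" | "tablet" | "desktop";

function layoutFor(viewportWidthPx: number): Layout {
  if (viewportWidthPx < 600) return "phone";   // single column, large touch targets
  if (viewportWidthPx < 1024) return "tablet"; // fewer columns, touch-friendly
  return "desktop";                            // multi-column, hover affordances
}

console.log(layoutFor(375));  // "phone"  (typical smartphone width)
console.log(layoutFor(768));  // "tablet"
console.log(layoutFor(1440)); // "desktop"
```

In practice the same idea is usually expressed declaratively with CSS media queries, but the underlying decision is this width-based one.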
Tips on designing websites for tablets versus mobile phones can be found here:
BOX 7.3
In-Your-Face Web Ads
Web advertising has become pervasive and invasive. Advertisers realized how effective flashing and animated ads were for promoting their products, taking inspiration from the animated neon light advertisements used in city centers, such as London’s Piccadilly Circus. But since banner ads emerged in the 1990s, advertisers have become even more cunning in their tactics. In addition to designing even flashier banner ads, more intrusive kinds of web ads have begun to appear on our screens. Short movies and garish cartoon animations, often with audio, now pop up in floating windows that zoom into view or are tagged onto the front end of an online newspaper or video clip. Moreover, this new breed of in-your-face, often personalized web ads frequently requires the user either to wait until they end or to find a check box to close the window down. Sites that provide free services, such as Facebook, YouTube, and Gmail, are also populated with web ads. The problem for users is that advertisers pay significant revenues to online companies to have their advertisements placed on their websites, entitling them to say where, what, and how they should appear. One way users can avoid them is to set up ad blockers when browsing the web.
Research and Design Considerations
There are numerous classic books on web design and usability (for example, Krug, 2014; Cooper et al., 2014). In addition, there are many good online sites offering guidelines and tips. For example, the BBC provides online guidance specifically for how to design responsive websites, covering topics such as context, accessibility, and modular design; see https://www.bbc.co.uk/gel/guidelines/how-to-design-for-the-web. Key design considerations for all websites are captured well by three core questions proposed by Keith Instone (quoted in Veen, 2001): Where am I? What’s here? Where can I go?
7.2.6 Mobile Devices
Mobile devices have become pervasive, with people increasingly using them in all aspects of
their everyday and working lives—including phones, fitness trackers, and watches. Custom-
ized mobile devices are also used by people in a diversity of work settings where they need
access to real-time data or information while walking around. For example, they are now
commonly used in restaurants to take orders, at car rental agencies to check in car returns, in
supermarkets for checking stock, and on the streets for multiplayer gaming.
Larger-sized tablets are also used in mobile settings. For example, many airlines provide
their flight attendants with one so that they can use their customized flight apps while air-
borne and at airports; sales and marketing professionals also use them to demonstrate their
products or to collect public opinions. Tablets and smartphones are also commonly used in
classrooms, where they can be stored in special “tabcabbies” provided by schools for safekeeping
and recharging.
Smartphones and smartwatches have an assortment of sensors embedded in them, such
as an accelerometer to detect movement, a thermometer to measure temperature, and a galvanic
skin response sensor to measure changes in sweat level on the skin. Many apps draw on
these sensors for fitness and health tracking; others are designed simply for fun. An example
of an early app, developed by magician Steve Sheraton purely for a moment of pleasure, is
iBeer (see Figure 7.15). Part of its success was due to the ingenious use of the accelerometer
inside the phone. It detects the tilting of the iPhone and uses
this information to mimic a glass of beer being consumed. The graphics and sounds are also
very enticing; the color of the beer together with frothy bubbles and accompanying sound
effects gives the illusion of virtual beer being swished around a virtual glass. The beer can be
drained if the phone is tilted enough, followed by a belch sound when it has been finished.
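The accelerometer-to-graphics mapping behind such an app can be sketched in a few lines of Python. This is purely illustrative: the tilt threshold and drain rate are invented values, not taken from iBeer itself.

```python
import math

def beer_level(level: float, tilt_deg: float, dt: float, drain_rate: float = 0.2) -> float:
    """Return the new fill level (0.0-1.0) after dt seconds at a given tilt.

    The virtual liquid only 'pours out' once the tilt passes a threshold,
    and it drains faster the steeper the tilt -- enough to mimic drinking
    from a glass as the phone is raised.
    """
    if tilt_deg <= 30:          # below this, the virtual beer stays put
        return level
    # Scale drain speed by how far past the threshold the phone is tilted.
    speed = drain_rate * math.sin(math.radians(min(tilt_deg, 90)))
    return max(0.0, level - speed * dt)
```

In a real app, the tilt angle would come from the device's accelerometer readings on each frame, and the returned level would drive the beer graphics and sound effects.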
ACTIVITY 7.5
Look at a fashion brand’s website, such as Nike, and describe the kind of interface used. How
does it contravene the design principles outlined by Jeffrey Veen? Does it matter? What
type of user experience is it providing? What was your experience of engaging with it?
Comment
Fashion companies’ sites, like Nike, are often designed to be more like a cinematic experience
and use rich multimedia elements, including videos, sounds, music, animations, and interactiv-
ity. Branding is central. In this sense, it contravenes what are considered core usability guide-
lines. Specifically, the site has been designed to entice the visitor to enter the virtual store and
watch high-quality and innovative movies that show cool dudes wearing their products.
Often, multimedia interactivities are embedded in the sites to help the viewer move to other
parts of the site, for example, by clicking on parts of an image or on a video as it plays. Screen
widgets are also provided, such as menus and skip and next buttons. It is easy to become immersed
in the experience and forget that it is a commercial store. It is also easy to get lost and not to
know—Where am I? What’s here? Where can I go? But this is precisely what companies such
as Nike want their visitors to do and to enjoy: the experience.
7 Interfaces
Smartphones can also be used to download contextual information by scanning barcodes
in the physical world. Consumers can instantly download product information, including
allergen details (such as nuts, gluten, and dairy), by scanning barcodes with their iPhone
while walking around a supermarket. For example, the GoodGuide app enables shoppers to
scan products in a store by taking a photo of the barcode to see how they rate for healthiness
and impact on the environment. Other uses of scanning include concert tickets and location-based notifications.
Another method that provides quick access to relevant information is the quick
response (QR) code, which stores a URL and looks like a black-and-white checkered square (see
Figure 7.16). QR codes work by the user taking a picture of the code with their camera phone,
which then takes them to a particular website. However, despite their universal appeal to companies as a way
of providing additional information or special offers, not many people actually use them in
practice. One reason is that they can be slow, tricky, and cumbersome to use in situ:
people have to download a QR reader app first, open it, and then hold it over the QR
code to take a photo, and the linked webpage can then take time to open.
Figure 7.15 The iBeer smartphone app
Source: Hottrix
Figure 7.16 QR code appearing on a magazine page
ACTIVITY 7.6
Smartwatches, such as those made by Google, Apple, and Samsung, provide a multitude of
functions, including fitness tracking, streaming music, texts, email, and the latest tweets. They are
also context and location aware. For example, on detecting the user’s presence, promotional
offers may be pinged to them from nearby stores, tempting them in to buy. How do you feel
about this? Do you think it is the same as, or worse than, the way advertisements appear
on a user's smartphone? Is this kind of context-based advertising ethical?
Comment
Smartwatches are similar to smartphones in that they, too, get pinged with promotions and
ads for nearby restaurants and stores. However, the main difference is that when worn on a
wrist, smartwatches are ever-present; the user only needs to glance down to notice a new
notification, whereas a phone has to be taken out of a pocket or purse to see what new item
has been pinged (although some people hold their smartphone permanently in their hands).
This means that the watch is always able to command the wearer's attention, which could
make them more susceptible to responding to notifications and spending more money. While some
people might like to get 10 percent off on coffee if they walk into the cafe that has just sent
them a digital voucher, for others such notifications may be seen as very annoying as they are
constantly bombarded with promotions. Worse still, such promotions could tempt children and vulnerable
people who are wearing such a watch to spend money when perhaps they shouldn't, or to nag
their parents or caretakers to buy the promoted item for them. However, smartwatch companies are aware of
this potential problem, and they provide settings that the user can change in terms of the level
and type of notifications they want to receive.
Research and Design Considerations
Mobile interfaces typically have a small screen and limited control space. Designers have to
think carefully about what type of dedicated hardware controls to include, where to place
them on the device, and then how to map them to the software. Apps designed for mobile
interfaces also need to take into account that navigating through content is constrained
by the small display, whether using touch, pen, or keypad input. The use of vertical
and horizontal scrolling provides a rapid way of scanning through images, menus, and lists. A
number of mobile browsers have also been developed that allow users to view and navigate
the Internet, magazines, or other media in a more streamlined way. For example, Microsoft’s
Edge browser was one of the first mobile browsers that was designed to make it easier to find,
view, and manage content on the go. It provides a customized reading view that enables the
user to re-organize the content of a web page to make it easier for them to focus on what they
want to read. The trade-off, however, is that it makes it less obvious how to perform other
functions that are no longer visible on the screen.
7.2.7 Appliances
Appliances include machines for everyday use in the home (for example, washing machines,
microwave ovens, refrigerators, toasters, bread makers, and smoothie makers) as well as
everyday public machines (such as ticket machines and photocopiers). What they
have in common is that most people using them will be trying to get something specific done
in a short period of time, such as starting a wash, watching a program, buying a ticket, or
making a drink. They are unlikely to be interested in spending time exploring the interface
or looking through a manual to see how to use the appliance. Many of them now have LED
displays that provide multiple functions and feedback about a process (such as temperature,
minutes remaining, and so on). Some have begun to be connected to the Internet with com-
panion devices, enabling them to be controlled by remote apps. An example is a coffee maker
that can be controlled to come on at a certain time from an app running on a smartphone or
controlled by voice.
Research and Design Considerations (Continued)
Another key concern for mobile display design is the size of the area on the display that
the user touches to make something happen, such as a key, icon, button, or app. The space
needs to be big enough for fingers of all sizes to press accurately. If the space is too small, the
user may accidentally press the wrong key, which can be annoying. The average fingertip is
between one and two centimeters wide, so target areas should be at least 7 mm to 10 mm across so
that they can be accurately tapped with a fingertip. Fitts’ law (see Chapter 16) is often used to
help with evaluating hit area. In their developer design guidelines, Apple also suggests provid-
ing ample touch targets for interactive elements, with a minimum tappable area of 44 pts. ×
44 pts. for all controls.
A number of other guidelines exist providing advice on how to design interfaces for
mobile devices (for instance, see Babich, 2018). An example is avoiding clutter by prioritizing
one primary action per screen.
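The target-size guidance above can be turned into a quick design check. The sketch below uses the Shannon formulation of Fitts' law and the 7 mm minimum mentioned earlier; treat it as a back-of-the-envelope aid rather than a substitute for user testing.

```python
import math

# Guideline minimum from this chapter: fingertip targets of roughly 7-10 mm.
MIN_TARGET_MM = 7.0

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' law: ID = log2(D/W + 1), in bits.

    D is the distance to the target's center, W its width along the axis of
    movement; a higher ID predicts a longer time to acquire the target.
    """
    return math.log2(distance / width + 1)

def big_enough(width_mm: float, height_mm: float) -> bool:
    """Check a tappable area against the minimum fingertip target size."""
    return min(width_mm, height_mm) >= MIN_TARGET_MM

# A 5 mm icon 80 mm away is harder to hit than a 10 mm icon at the same distance.
assert index_of_difficulty(80, 5) > index_of_difficulty(80, 10)
```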
Research and Design Considerations
Alan Cooper et al. (2014) suggest that appliance interfaces require the designer to view them
as transient interfaces, where the interaction is short. All too often, however, designers pro-
vide full-screen control panels or an unnecessary array of physical buttons that serve to frus-
trate and confuse the user where only a few, presented in a structured way, would be much
better. Here the two fundamental design principles of simplicity and visibility are paramount.
Status information, such as what the photocopier is doing, what the ticket machine is doing,
and how much longer the wash is going to take should be provided in a simple form and
at a prominent place on the interface. A key design question is: as soft displays increasingly
become part of an appliance interface, for example, LCD and touchscreens, what are the
trade-offs with replacing the traditional physical controls, such as dials, buttons, and knobs,
with these soft display controls?
ACTIVITY 7.7
Look at the controls on your toaster (or the one in Figure 7.17 if you don’t have one nearby)
and describe what each does. Consider how these might be replaced with an LCD screen.
What would be gained and lost from changing the interface in this way?
Comment
Standard toasters have two main controls, the lever to press down to start the toasting and
a knob to set the amount of time for the toasting. Many come with a small eject button that
can be pressed if the toast starts to burn. Some also come with a range of settings for different
ways of toasting (such as one side, frozen, and so forth), selected by moving a dial or press-
ing buttons.
Designing the controls to appear on an LCD screen would enable more information and
options to be provided, for example, only toast one slice, keep the toast warm, or automati-
cally pop up when the toast is burning. It would also allow precise timing of the toasting in
minutes and seconds. However, it is likely to increase the complexity of what previously was
a set of logical and very simple actions. This has happened in the evolution of microwaves,
washing machines, and kettles that have digital interfaces: they offer many more options
for warming food, washing clothes, or setting the water temperature. The downside of
increasing the number of choices, especially when the interface is not designed well to
support them, is that it can make for a more difficult user experience when carrying out mundane tasks.
Figure 7.17 A typical toaster with basic physical controls
Source: https://uk.russellhobbs.com/product/brushed-stainless-steel-toaster-2-slice
7.2.8 Voice User Interfaces
A voice user interface (VUI) involves a person talking with a spoken language app, such as
a search engine, a train timetable, a travel planner, or a phone service. It is commonly used
for inquiring about specific information (for instance, flight times or the weather) or issuing
a command to a machine (such as asking a smart TV to select an Action movie or asking a
smart speaker to play some upbeat music). Hence, VUIs use an interaction type of command
or conversation (see Chapter 3), where users speak and listen to an interface rather than click
on, touch, or point to it. Sometimes, the system is proactive and initiates the conversation,
with the user responding, for example, when it asks the user whether they would like to stop
watching a movie or listen to the latest breaking news.
The first generation of speech systems earned a reputation for mishearing all too often
what a person said (see cartoon). However, they are now much more sophisticated and have
higher levels of recognition accuracy. Machine learning algorithms have been developed that
are continuing to improve their ability to recognize what someone is saying. For speech
output, actors are often used to record answers, messages, and prompts, which are much
friendlier, more convincing, and more pleasant than the artificial-sounding synthesized
speech that was typically used in the early systems.
VUIs have become popular for a range of apps. Speech-to-text systems, such as Dragon,
enable people to dictate rather than have to type, whether it is entering data into a spread-
sheet, using a search engine, or writing a document. The words spoken appear on the screen.
For some people, this mode of interaction is more efficient, especially when they are on the
move. Dragon claims on their website that it is three times faster than typing and it is 99
percent accurate. Speech technology is also used by people with visual impairments; applications
include speech-recognition word processors, page scanners, web readers, and VUIs for operating
home control systems that manage lights, TV, stereo, and other home appliances.
One of the most popular applications of speech technology is call routing, where com-
panies use an automated speech system to enable users to reach one of their services during a
phone call. Callers voice their needs in their own words, for example, “I’m having problems
with my Wi-Fi router,” and in response are automatically forwarded to the appropriate service
(Cohen et al., 2004). This is useful for companies, as it can reduce operating costs. It can also
increase revenue by reducing the number of lost calls. The callers may be happier, as their call
can be routed to an available agent (real or virtual) rather than being lost or sent to voicemail.
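A much-simplified sketch of call routing: real systems use statistical language understanding, but keyword spotting is enough to convey the idea (the routing table below is hypothetical).

```python
# Hypothetical routing table -- production systems learn these mappings from
# data rather than hand-listing keywords.
ROUTES = {
    "billing": ("bill", "invoice", "charge", "payment"),
    "tech":    ("wi-fi", "router", "internet", "connection"),
    "sales":   ("upgrade", "new plan", "offer"),
}

def route_call(utterance: str, default: str = "agent") -> str:
    """Forward a caller to a service based on what they say in their own words."""
    text = utterance.lower()
    for service, keywords in ROUTES.items():
        if any(word in text for word in keywords):
            return service
    return default  # no match: fall back to a human agent
```

For example, "I'm having problems with my Wi-Fi router" would be routed to the technical-support service, while an unmatched utterance falls through to an agent rather than being lost.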
Source: Reproduced with permission of King
Features Syndicate
In human conversations, people often interrupt each other, especially if they know what
they want, rather than waiting for someone to go through a series of options. For example, they
may stop a server at a restaurant in midflow when describing the specials if they already know
what they want, rather than letting them go through the entire list. Similarly, speech technology has
been designed with a feature called barge-in that allows callers to interrupt a system message
and provide their request or response before the message has finished playing. This can be
useful if the system has numerous options from which the caller may choose, and the caller
already knows what they want.
There are several ways that a VUI dialog can be structured. The most common is a
directed dialogue where the system is in control of the conversation, asking specific questions
and requiring specific responses, similar to filling in a form (Cohen et al., 2004):
System: Which city do you want to fly to?
Caller: London
System: Which airport: Gatwick, Heathrow, Luton, Stansted, or City?
Caller: Gatwick
System: What day do you want to depart?
Caller: Monday next week.
System: Is that Monday, May 5?
Caller: Yes
Other systems are more flexible, allowing the user to take more initiative and specify
more information in one sentence (for example, “I’d like to go to Paris next Monday for two
weeks”). The problem with this approach is that there is more chance for error, since the
caller might assume that the system can follow all of their needs in one pass as a real travel
agent would (for example, “I’d like to go to Paris next Monday for two weeks, and would
like the cheapest possible flight, preferably leaving from Gatwick airport and definitely with
no stop-overs …”). The list is simply too long and would overwhelm the system’s parser.
Carefully guided prompts can be used to get callers back on track and help them speak
appropriately (for instance, “Sorry, I did not get all that. Did you say you wanted to fly next
Monday?”).
A number of speech-based phone apps exist that enable people to use them while mobile,
making them more convenient to use than text-based entry. For example, people can voice
queries into their phone using Google Voice or Apple Siri rather than entering text manually.
Mobile translators allow people to communicate in real time with others who speak a dif-
ferent language by letting a software app on their phone do the talking (for example, Google
Translate). People speak in their own language using their phone while the software trans-
lates what each person is saying into the language of the other one. Potentially, this means
people from all over the world (there are more than 6,000 languages) can talk to one another
without having to learn another language.
Voice assistants, like Amazon’s Alexa and Google Home, can be instructed by users to
entertain in the home by telling jokes, playing music, keeping track of time, and enabling
users to play games. Alexa also offers a range of “skills,” which are voice-driven capabilities
intended to provide a more personalized experience. For example, “Open the Magic Door” is
an interactive story skill that allows users to choose their path in a story by selecting differ-
ent options through the narrative. Another, “Kids Court,” allows families to settle arguments
in an Alexa-run court while learning about the law. Many of the skills are designed
to support multiple users taking part at the same time, offering the potential for families to
play together. Social interaction is encouraged by the smart speaker itself: it sits in a common
space for all to use (similar to a toaster or refrigerator). In contrast, handheld devices, such
as a smartphone or tablet, tend to support a single user and owner.
Despite advances in speech recognition, conversational interaction is limited mainly to
answering questions and responding to requests. It can be difficult for VUIs to recognize
children's speech, which is not as articulate as that of adults. For example, Druga et al. (2017) found
that young children (3–4 years old) experienced difficulty interacting with conversational
and chat agents, resulting in them becoming frustrated. Also, voice assistants don't always
recognize who is talking in a group, such as a family, and they need to be addressed by name
each time someone wants to interact with them. There is still a way to go before voice
assistant interaction resembles human conversation.
Research and Design Considerations
Key research questions are what conversational mechanisms to use to structure the voice user
interface and how human-like it should be. Some researchers focus on how to make it
appear natural (that is, like human conversation), while others are concerned more with how
to help people navigate efficiently through a menu system: enabling them to recover easily
from errors (their own or the system's), to escape and go back to the main menu
(similar to the undo button of a GUI), and to guide those who are vague or ambiguous in
their requests for information or services using prompts. The type of voice actor (male, female,
neutral, or with a particular dialect) and the form of pronunciation are also topics of research. Do people
prefer, and are they more patient with, a female or a male voice? What about a jolly voice versus
a serious one?
Michael Cohen et al. (2004) discuss the pros and cons of using different techniques for
structuring the dialogue and managing the flow of voice interactions, the different ways of
expressing errors, and the use of conversational etiquette—all still relevant for today's VUIs.
A number of commercial guidelines are available for voice interfaces. For example, Cathy
Pearl (2016) has written a practical book that provides a number of VUI design principles and
topics, including which speech recognition engine to use, how to measure the performance of
VUIs, and how to design VUIs for different interfaces, for example, a mobile app, toy, or voice
assistant.
7.2.9 Pen-Based Devices
Pen-based devices enable people to write, draw, select, and move objects on an interface using
light pens or styluses that capitalize on the well-honed drawing and writing skills developed
from childhood. They have been used to interact with tablets and large displays, instead
of mouse, touch, or keyboard input, for selecting items and supporting freehand sketching.
Digital ink, such as Anoto, uses a combination of an ordinary ink pen and a digital camera that
digitally records everything written with the pen on special paper (see Figure 7.18). The pen
works by recognizing a special nonrepeating dot pattern that is printed on the paper. The non-
repeating nature of the pattern means that the pen is able to determine which page is being
written on and where on the page the pen is pointing. When writing on digital paper with a
digital pen, infrared light from the pen illuminates the dot pattern, which is then picked up by
a tiny sensor. The pen decodes the dot pattern as it moves across the paper and stores the
data temporarily, later transferring it to a computer via Bluetooth or a USB port.
Handwritten notes can also be converted and saved
as standard typeface text. This can be useful for applications that require people to fill in paper-
based forms and also for taking notes during meetings.
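The position-from-pattern idea can be illustrated with a toy one-dimensional "page": because no snippet of the pattern repeats, any small patch seen by the pen's camera pins down an absolute position. (Anoto's actual two-dimensional dot pattern is vastly larger and is not reproduced here.)

```python
def build_index(pattern: str, window: int):
    """Map each length-`window` snippet of a nonrepeating pattern to its offset.

    Because no snippet repeats, seeing any small patch of the pattern is
    enough to recover the absolute position -- the principle (much
    simplified) behind digital-paper dot patterns.
    """
    index = {}
    for i in range(len(pattern) - window + 1):
        snippet = pattern[i:i + window]
        assert snippet not in index, "pattern must be nonrepeating"
        index[snippet] = i
    return index

# Toy 1-D "page": a de Bruijn-style string in which every 3-symbol window is unique.
page = "0001011100"
index = build_index(page, 3)

def locate(camera_view: str) -> int:
    """Return where on the page the pen's camera is looking."""
    return index[camera_view]
```

Seeing the patch `"101"`, for instance, is enough to place the pen at offset 3; the real pattern extends this idea to two dimensions across many distinct pages.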
Another advantage of digital pens is that they allow users to annotate existing docu-
ments, such as spreadsheets, presentations, and diagrams quickly and easily in a similar
way to how they would do this when using paper-based versions. This is useful for a team
working together and communicating from different locations. One problem with
pen-based interaction on small screens, however, is that it can be difficult to see options on
the screen because the user's hand can obscure part of it when writing.
Figure 7.18 The Anoto pen being used to fill in a paper form and a schematic showing its internal
components
Source: www.grafichewanda.it/anoto.php?language=EN
7.2.10 Touchscreens
Single-touch screens, used in walk-up kiosks (such as ticket machines or museum guides), ATMs,
and cash registers (for instance, in restaurants), have been around for a while. They work by
detecting the presence and location of a person’s touch on the display; options are selected by
tapping on the screen. Multitouch surfaces, on the other hand, support a much wider range
of more dynamic fingertip actions, such as swiping, flicking, pinching, pushing, and tapping.
They do this by registering touches at multiple locations using a grid (see Figure 7.19). This
multitouch method enables devices, such as smartphones and tabletops, to recognize and
respond to more than one touch at the same time. This enables users to use multiple digits to
perform a variety of actions, such as zooming in and out of maps, moving photos, selecting
letters from a virtual keyboard when writing, and scrolling through lists. Two hands can also
be used together to stretch and move objects on a tabletop surface, similar to how both hands
are used to stretch an elastic band or scoop together a set of objects.
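The core of multitouch gesture recognition is interpreting how several tracked points move relative to one another. A minimal sketch of pinch-to-zoom, assuming the hardware reports each finger's (x, y) position:

```python
import math

def pinch_zoom(touch_start, touch_end):
    """Infer a zoom factor from two fingers' start and end positions.

    Multitouch hardware reports each finger's (x, y) location; the ratio of
    finger separation after vs. before the gesture gives the zoom factor
    (>1 when the fingers spread apart to zoom in, <1 when pinched together
    to zoom out).
    """
    def spread(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)

    return spread(touch_end) / spread(touch_start)
```

Doubling the distance between two fingers yields a zoom factor of 2.0; halving it yields 0.5. Other gestures (swipe, rotate, flick) are recognized from the same per-finger position streams, just with different geometry.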
BOX 7.4
Electronic Ink
Digital ink is not to be confused with the term electronic ink (or e-ink). Electronic ink is a
display technology, used in e-readers such as the Kindle, that is designed to mimic the
appearance of ordinary ink on paper. The display reflects light in the way ordinary paper does.
Figure 7.19 A multitouch interface
Source: www.sky-technology.eu/en/blog/article/item/multi-touch-technology-how-it-works.html
The flexibility of interacting with digital content afforded by finger gestures has resulted
in many ways of experiencing digital content. This includes reading, scanning, zooming, and
searching interactive content on tablets, as well as creating new digital content.
Research and Design Considerations
Touchscreens have become pervasive, increasingly becoming the main interface that many
people use on a daily basis. However, they are different from GUIs, and a central design concern
is what types of interaction techniques best support different activities. For
example, what is the optimal way to enable users to choose from menu options, find files, save
documents, and so forth, when using a touch interface? These operations are well mapped to
the interaction styles available in a GUI, but it is not as obvious how to support them on a touch
interface. Alternative conceptual models have been developed for carrying out these
actions on the interface, such as the use of cards, carousels, and stacks (see Chapter 3). These
objects enable users to swipe and move through digital content quickly; however,
it is also easy to swipe too far when using a carousel. Typing on a virtual keyboard with two
thumbs or one fingertip is also not as fast or efficient as using both hands on a conventional
keyboard, although many people have become very adept at pecking at virtual
keys on a smartphone. Predictive text can also be used to help people type faster.
Both hands may be used on multitouch tabletops to enable users to make digital objects
larger and smaller or to rotate them. Dwelling touches (pressing and holding a finger down)
can also be used to perform dragging actions and to bring up pop-up menus.
One or more fingers can also be used together with a dwell action to provide a wider range
of gestures. However, these can be quite arbitrary, requiring users to learn them rather than
being intuitive. Another limitation of touchscreens is that they do not provide tactile feedback
in the way that keys or mice do when pressed. To compensate, visual, audio, and haptic
feedback can be used. See also the section on shareable interfaces for more background on
multitouch design considerations.
7.2.11 Gesture-Based Systems
Gestures involve moving arms and hands to communicate (for instance, waving to say goodbye
or raising an arm to speak in class) or to provide information to someone (for example,
holding two hands apart to show the size of something). There has been much interest in how
technology can be used to capture and recognize a user's gestures for input, by tracking them
using cameras and then analyzing them using machine learning algorithms.
David Rose (2018) created a video that depicts many sources of inspiration for where
gesture is used in a variety of contexts, including those made by cricket umpires, live concert
signers for the deaf, rappers, Charlie Chaplin, mime artists, and Italians. His team at IDEO
developed a system to recognize a small set of gestures and used these to control a
Philips HUE light set and a Spotify station. They found that gestures need to be sequential
to be understood, in the way a sentence is composed of a noun, then a verb, and an object plus
operation. For example, for “speaker, on,” they used a gesture on one hand to designate the
noun, and another on the other hand to designate the verb. So, to change the volume, the
user needs to point to a speaker with their left hand while raising their right hand to signal
turn the volume up.
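This noun-then-verb sequencing can be sketched as a tiny interpreter. The gesture names and device vocabulary below are invented for illustration; the actual IDEO gesture set is not specified here.

```python
# Hypothetical vocabularies: one hand designates the noun (target device),
# the other designates the verb (operation).
NOUNS = {"point_at_speaker": "speaker", "point_at_light": "light"}
VERBS = {"raise_hand": "volume_up", "lower_hand": "volume_down", "flat_palm": "off"}

def interpret(noun_gesture: str, verb_gesture: str):
    """Compose a command from a noun gesture on one hand and a verb gesture on the other."""
    noun = NOUNS.get(noun_gesture)
    verb = VERBS.get(verb_gesture)
    if noun is None or verb is None:
        return None  # unrecognized -- likely incidental hand waving
    return (noun, verb)
```

Pointing at the speaker while raising the other hand would thus compose the command ("speaker", "volume_up"), while an unrecognized gesture on either hand is safely ignored.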
One area where gesture interaction has been developed is in the operating room. Sur-
geons need to keep their hands sterile during operations but also need to be able to look at
X-rays and scans during an operation. However, after being scrubbed and gloved, they need
to avoid touching any keyboards, phones, and other nonsterile surfaces. A far from ideal
workaround is to pull their surgical gown over their hands and manipulate a mouse through
the gown. As an alternative, Kenton O’Hara et al. (2013) developed a touchless gesture-based
system, using Microsoft’s Kinect technology, which recognized a range of gestures that sur-
geons could use to interact with and manipulate MRI or CT images, including single-handed
gestures for moving forward or backward through images, and two-handed gestures for
zooming and panning (see Figure 7.20).
Figure 7.20 Touchless gesturing in the operating theater
Source: Used courtesy of Kenton O’Hara
Watch David Rose’s inspirations for gesture video at https://vimeo.com/224522900.
7.2.12 Haptic Interfaces
Haptic interfaces provide tactile feedback by applying vibration and forces to the person,
using actuators that are embedded in their clothing or in a device they are carrying, such
as a smartphone or smartwatch. Gaming consoles have also employed vibration to enrich the
experience. For example, car steering wheels that are used with driving simulators can vibrate
in various ways to provide the feel of the road. As the driver makes a turn, the steering wheel
can be programmed to feel like it is resisting—in the way that a real steering wheel does.
Vibrotactile feedback can also be used to simulate the sense of touch between remote
people who want to communicate. Actuators embedded in clothing can be designed to re-
create the sensation of a hug or a squeeze by being buzzed on various parts of the body.
Another use of haptics is to provide real-time feedback to guide people when learning a
musical instrument, such as a violin or drums. For example, the MusicJacket (van der Linden
et al., 2011) was developed to help novice violin players learn how to hold their instrument
correctly and develop good bowing action. Vibrotactile feedback was provided via the jacket
to give nudges at key places on the arm and torso to inform the student when they were either
holding their violin incorrectly or their bowing trajectory had deviated from a desired path
(see Figure 7.21). A user study with novice players showed that they were able to react to the
vibrotactile feedback and adjust their bowing or their posture in response.
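The feedback loop in a system like the MusicJacket can be sketched as comparing a measured trajectory against a target one and firing an actuator whenever the deviation exceeds a tolerance. The numbers and units below are illustrative, not taken from the actual study.

```python
def bowing_feedback(measured, target, tolerance=1.0):
    """Return the indices of samples at which to buzz an actuator.

    `measured` and `target` are sequences of bow positions over time (say,
    in cm from a motion tracker); an actuator fires at each sample where
    the deviation from the desired path exceeds the tolerance.
    """
    return [i for i, (m, t) in enumerate(zip(measured, target))
            if abs(m - t) > tolerance]
```

A run where the bow drifts off path only at the third sample would trigger a single nudge there, letting the player correct mid-stroke rather than after the fact.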
Another form of feedback is called ultrahaptics, which creates the illusion of touch in
midair. It does this by using ultrasound to make three-dimensional shapes and textures that
can be felt but not seen by the user (www.ultrahaptics.com). This technique can be used
to create the illusion of buttons and sliders that appear in midair. One potential
use is in the automotive industry, to replace existing physical buttons and knobs or
touchscreens. The ultrahaptic buttons and knobs can be designed to appear next to the driver
when needed, for example, when the system detects that the driver wants to turn down the volume or
change the radio station.
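The physics behind midair haptics can be illustrated with a small calculation: firing each transducer with a delay proportional to its extra distance from the focal point makes all of the wavefronts arrive in phase and reinforce there. This is a simplified sketch of the focusing principle, not Ultrahaptics' implementation.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def focus_delays(emitters, focal_point):
    """Per-emitter trigger delays (seconds) that focus ultrasound at a point.

    Each transducer fires late by the difference between the farthest
    emitter's distance to the focal point and its own, so all wavefronts
    meet at the same instant -- felt as a point of pressure in midair.
    """
    dists = [math.dist(e, focal_point) for e in emitters]
    longest = max(dists)
    return [(longest - d) / SPEED_OF_SOUND for d in dists]
```

Emitters equidistant from the focal point need no relative delay; moving the focal point simply means recomputing the delays, which is how a midair "button" can be repositioned next to the driver's hand.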
Research and Design Considerations
A key design concern for using gestural input is to consider how a computer system recognizes
and delineates the user’s gestures. In particular, how does it determine the start and end point
of a hand or arm movement, and how does it know the difference between a deictic gesture (a
deliberate pointing movement) and hand waving (an unconscious gesticulation) that is used
to emphasize what is being said verbally?
In addition to being used as a form of input, gestures can be represented as output to
show real-time avatar movement or someone’s own arm movements. Smartphones, laptops,
and some smart speakers (for example, Facebook’s Portal) have cameras that can perceive
three dimensions and record a depth for every pixel. This can be used to create a represen-
tation of someone in a scene, for example, how they are posing and moving, and also to
respond to their gestures. One design question that this raises is how realistic the mirrored
graphical representation of the user must be for it to be believable and for the user to
connect their gestures with what they are seeing on the screen.
7 INTERFACES 232
Haptics are also being embedded into clothing to create what are sometimes called exoskeletons.
Inspired by the techno-trousers in the Wallace and Gromit animated short film The Wrong
Trousers, Jonathan Rossiter and his team (2018) developed a new kind of exoskeleton that can
help people stand up and move around, using artificial muscles that consist of air bubbles
activated by tiny electric motors (see Figure 7.22). These are stiffened or relaxed using
graphene components to make the trousers move. One application area is to help people who
have walking difficulties and those who need to exercise but find it difficult to do so.
7.2.13 Multimodal Interfaces
Multimodal interfaces are intended to enrich user experiences by multiplying the way infor-
mation is experienced and controlled at the interface through using different modalities, such
as touch, sight, sound, and speech (Bouchet and Nigay, 2004). Interface techniques that have
been combined for this purpose include speech and gesture, eye-gaze and gesture, haptic
and audio output, and pen input and speech (Dumas et al., 2009). The assumption is that
multimodal interfaces can support more flexible, efficient, and expressive means of human–
computer interaction that are more akin to the multimodal experiences that humans encoun-
ter in the physical world (Oviatt, 2017). Different input/outputs may be used at the same
time, for example, using voice commands and gestures simultaneously to move through a
virtual environment, or alternately using speech commands followed by gesturing. The most
common combination of technologies used for multimodal interfaces is speech and vision
processing (Deng and Huang, 2004). Multimodal interfaces can also be combined with mul-
tisensor input to enable other aspects of the human body to be tracked. For example, eye
gaze, facial expressions, and lip movements can also be tracked to provide data about a user’s
attention or other behavior. This kind of sensing can provide input for customizing user
interfaces and experiences to the perceived need, desire, or level of interest.
Figure 7.21 The MusicJacket with embedded actuators that nudge the player to move their arm up
to be in the correct position
Source: Helen Sharp
Figure 7.22 Trousers with artificial muscles that use a new kind of bubble haptic feedback
Source: Used courtesy of The Right Trousers Project: Wearable Soft Robotics for Independent Living
Research and Design Considerations
Haptics are now commonly used in gaming consoles, smartphones, and controllers to alert
or heighten a user experience. Haptic feedback is also being developed in clothing and other
wearables as a way of simulating being touched, stroked, prodded, or buzzed. A promising
application area is sensory-motor skills, such as in sports training and learning to play a musical
instrument. For example, patterns of vibrations have been placed across snowboarders'
bodies to indicate which moves to take while snowboarding. A study reported faster reaction
times than when the same instructions were given verbally (Spelmezan et al., 2009). Other
uses are posture trainers that buzz when a user slouches and fitness trackers that also buzz
when they detect that their users have not taken enough steps in the past hour.
A person's body movement can also be tracked so that it can be represented back to them
on a screen in the form of an avatar that appears to move just like them. For example, the
Kinect was developed as a gesture and body movement gaming input system for the Xbox.
Although now defunct in the gaming industry, it proved effective at detecting multimodal
input in real time. It consisted of an RGB camera for facial and gesture recognition, a depth
sensor (an infrared projector paired with a monochrome camera) for movement tracking,
and downward-facing mics for voice recognition (see Figure 7.23). The Kinect looked for
someone's body. On finding it, it locked onto it and measured the three-dimensional positioning
of the key joints in their body. This information was converted into a graphical avatar of
the user that could be programmed to move just like them. Many people readily saw themselves
as the avatar and learnt how to play games in this manner.
Figure 7.23 Microsoft's Xbox Kinect
Source: Stephen Brashear / Invision for Microsoft / AP Images
A key design question is where best to place the actuators on the body, whether to use a
single touch or a sequence of touches, when to activate them, and at what intensity and how
often to use them to make the feeling of being touched convincing (e.g., Jones and Sarter, 2008).
Providing continuous haptic feedback would simply be too annoying, and people would
habituate to it too quickly. Intermittent buzzes can be effective at key moments when a person
needs to attend to something, though they do not necessarily tell the person what to do. For
example, a study by Johnson et al. (2010) of a commercially available haptic device, intended
to improve posture by giving people a vibrotactile buzz when they slouched, found that while
the buzzing did not show them how to improve their posture, it did improve their body awareness.
Different kinds of buzzes can also be used to indicate different tactile experiences that
map to events. For example, a smartphone could transmit a feeling of slow tapping, like
water dropping, to indicate that it is about to rain, and a sensation of heavy tapping to
indicate that a thunderstorm is looming.
7.2.14 Shareable Interfaces
Shareable interfaces are designed for more than one person to use. Unlike PCs, laptops, and
mobile devices, which are aimed at single users, shareable interfaces typically provide mul-
tiple inputs and sometimes allow simultaneous input by collocated groups. These include
large wall displays, for example SmartBoards (see Figure 7.24a), where people use their own
pens or gestures, and interactive tabletops, where small groups can interact with informa-
tion being displayed on the surface using their fingertips. Examples of interactive tabletops
include Smart’s SmartTable and Circle Twelve’s DiamondTouch (Dietz and Leigh, 2001; see
Figure 7.24b). The DiamondTouch tabletop is unique in that it can distinguish between dif-
ferent users touching the surface concurrently. An array of antennae is embedded in the touch
surface and each one transmits a unique signal. Each user has their own receiver embedded in
a mat on which they're standing or a chair in which they're sitting. When a user touches the
tabletop, a very small signal is sent through the user's body to their receiver, which identifies
which antenna has been touched and sends this to the computer. Multiple users can interact
simultaneously with digital content using their fingertips.
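The attribution scheme just described can be sketched as a lookup from the signals each user's receiver picks up to the touched cells of the surface. The antenna ids, cell layout, and function names below are invented for illustration; the real DiamondTouch works with analog signal strengths rather than clean sets:

```python
# Sketch of DiamondTouch-style user identification: each antenna under
# the surface transmits a unique signal; a touch couples that signal
# through the user's body to the receiver in their own chair or mat,
# so every touch can be attributed to a specific user.

# antenna id -> (row, column) cell of the touch surface it covers
ANTENNA_CELLS = {17: (0, 0), 18: (0, 1), 42: (3, 5)}

def attribute_touches(receiver_reports):
    """receiver_reports: {user: set of antenna ids detected through that
    user's body}. Returns {user: sorted list of touched cells}."""
    return {
        user: sorted(ANTENNA_CELLS[a] for a in antennas)
        for user, antennas in receiver_reports.items()
    }
```

Because attribution comes from the receiver, two people can touch the same region at the same moment and still be told apart, which is exactly what single-user touchscreens cannot do.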
An advantage of shareable interfaces is that they provide a large interactional space that
can support flexible group working, enabling groups to create content together at the same
time. Compared with a co-located group trying to work around a single-user PC or laptop,
where typically one person takes control, making it more difficult for others to take part,
multiple users can interact with a large display. Users can point to and touch the information
being displayed, while simultaneously viewing the interactions and having the same
shared point of reference (Rogers et al., 2009). There are now a number of tabletop apps that
have been developed for museums and galleries which enable visitors to learn about various
aspects of the environment (see Clegg et al., 2019).
Research and Design Considerations
Multimodal systems rely on recognizing aspects of a user's behavior, including handwriting,
speech, gestures, eye movements, or other body movements. In many ways, this is much harder
to accomplish and calibrate than single-modality systems that are programmed to recognize
one aspect of a user's behavior. The most researched modes of interaction are speech, gesture,
and eye-gaze tracking. A key research question is what is actually gained from combining different
inputs and outputs and whether talking and gesturing as humans do with other humans
is a natural way of interacting with a computer (see Chapter 4). Guidelines for multimodal
design can be found in Reeves et al. (2004) and Oviatt et al. (2017).
Watch this video of Circle Twelve's demonstration of the DiamondTouch tabletop:
http://youtu.be/S9QRdXlTndU.
Figure 7.24 (a) A SmartBoard in use during a meeting and (b) Mitsubishi’s interactive tabletop
interface
Source: (a) Used courtesy of SMART Technologies Inc. (b) Mitsubishi Electric Research Labs
Another type of shareable interface is software platforms that enable groups of people
to work together simultaneously even when geographically apart. Early examples included
shared editing tools developed in the 1980s (for example, ShRedit). Various commercial
products now exist that enable multiple remote people to work on the same document at
the same time (such as Google Docs and Microsoft Excel). Some enable up to 50 people to
edit the same document simultaneously, with many more able to watch. These software programs
provide various functions, such as synchronous editing, tracking changes, annotating, and
commenting. Another collaborative tool is the Balsamiq Wireframes editor, which provides a
range of shared functions, including collaborative editing, threaded comments with callouts,
and project history.
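The book does not detail how such editors reconcile simultaneous edits; one classic technique is operational transformation (OT), where each site transforms an incoming remote operation against the operations it has already applied so that every copy converges. The toy below handles only concurrent insertions and is not the actual algorithm used by Google Docs or Balsamiq:

```python
# Minimal operational-transformation sketch for two concurrent inserts.

def transform_insert(pos, other_pos, other_len, tie_break_after=False):
    """Shift a local insert position to account for a concurrent remote
    insert of other_len characters at other_pos."""
    if other_pos < pos or (other_pos == pos and tie_break_after):
        return pos + other_len
    return pos

def apply_insert(text, pos, s):
    return text[:pos] + s + text[pos:]

# Both sites start from "helo". Site A inserts "l" at 3; site B inserts "!" at 4.
doc_a = apply_insert("helo", 3, "l")                         # site A's own edit
doc_a = apply_insert(doc_a, transform_insert(4, 3, 1), "!")  # B's edit, shifted to 5
doc_b = apply_insert("helo", 4, "!")                         # site B's own edit
doc_b = apply_insert(doc_b, transform_insert(3, 4, 1), "l")  # A's edit, unshifted
```

After exchanging and transforming each other's operations, both sites hold the same text even though the edits arrived in different orders, which is the convergence property that makes synchronous editing feel seamless.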
Research and Design Considerations
Early research on shareable interfaces focused largely on interactional issues, such as how
to support electronically based handwriting and drawing, and the selecting and moving of
objects around the display (Elrod et al., 1992). The PARCTAB system (Schilit et al., 1993)
investigated how information could be communicated between palm-sized, A4-sized, and
whiteboard-sized displays using shared software tools, such as Tivoli (Rønby-Pedersen et al.,
1993). Another concern was how to develop fluid and direct styles of interaction with large
displays, both wall-based and tabletop, involving freehand and pen-based gestures (see Shen
et al., 2003). Current research is concerned with how to support ecologies of devices so that
groups can share and create content across multiple devices, such as tabletops and wall dis-
plays (see Brudy et al., 2016).
A key research issue is whether shareable surfaces can facilitate new and enhanced forms
of collaborative interaction compared with what is possible when groups work together using
their own devices, like laptops and PCs (see Chapter 5, “Social Interaction”). One benefit is
easier sharing and more equitable participation. For example, tabletops have been designed to
support more effective joint browsing, sharing, and manipulation of images during decision-
making and design activities (Shen et al., 2002; Yuill and Rogers, 2012). Core design concerns
include whether size, orientation, and shape of the display have an effect on collaboration.
User studies have shown that horizontal surfaces compared with vertical ones support more
turn-taking and collaborative working in co-located groups (Rogers and Lindley, 2004), while
providing larger-sized tabletops does not necessarily improve group working but can encour-
age a greater division of labor (Ryall et al., 2004).
The need for both personal and shared spaces has been investigated to see how best
to enable users to move between working on their own and together as a group. Several
researchers have designed cross-device systems, where a variety of devices, such as tablets,
smartphones, and digital pens can be used in conjunction with a shareable surface. For
example, SurfaceConstellations was developed for linking mobile devices to create novel
cross-device workspace environments (Marquardt et al., 2018). Design guidelines and sum-
maries of empirical research on tabletops and multitouch devices can be found in Müller-
Tomfelde (2010).
7.2.15 Tangible Interfaces
Tangible interfaces use sensor-based interaction, where physical objects, such as bricks, balls,
and cubes, are coupled with digital representations (Ishii and Ullmer, 1997). When a person
manipulates the physical object(s), it is detected by a computer system via the sensing mecha-
nism embedded in the physical object, causing a digital effect to occur, such as a sound, ani-
mation, or vibration (Fishkin, 2004). The digital effects can take place in a number of media
and places, or they can be embedded in the physical object itself. For example, Oren Zucker-
man and Mitchel Resnick’s (2005) early Flow Blocks prototype depicted changing numbers
and lights that were embedded in the blocks, depending on how they were connected. The
flow blocks were designed to simulate real-life dynamic behavior and react when arranged
in certain sequences.
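The coupling between physical action and digital effect can be sketched in a few lines: sensing how blocks are connected drives what each block displays. The class and numbering behavior below are a loose illustration of the Flow Blocks idea, not Zuckerman and Resnick's implementation:

```python
# Sketch of tangible coupling: connecting physical blocks (the sensed
# action) causes each block's embedded display to update (the digital
# effect), here by numbering the blocks along the chain.

class Block:
    def __init__(self, name):
        self.name = name
        self.display = None  # number shown on the block's embedded light

def propagate(chain, start=1):
    """Simulate a pulse flowing through a chain of connected blocks:
    each block displays its position in the sequence."""
    events = []
    for offset, block in enumerate(chain):
        block.display = start + offset
        events.append((block.name, block.display))
    return events
```

Rearranging the physical chain and re-running the propagation is the tangible equivalent of editing a program: the configuration of objects is the "source code."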
Another type of tangible interface is where a physical model, for example, a puck, a
piece of clay, or a model, is superimposed on a digital desktop. Moving one of the physical
pieces around the tabletop causes digital events to take place on the tabletop. One of the
earliest tangible interfaces, Urp, was built to facilitate urban planning; miniature physical
models of buildings could be moved around on the tabletop and used in combination with
tokens for wind and shadow-generating tools, causing digital shadows surrounding them to
change over time and visualizations of airflow to vary. Tangible interfaces differ from other
approaches, such as mobile, insofar as the representations are artifacts in their own right that
the user can directly act upon, lift up, rearrange, sort, and manipulate.
The technologies that have been used to create tangibles include RFID tags and sensors
embedded in physical objects and digital tabletops that sense the movements of objects and
subsequently provide visualizations surrounding the physical objects. Many tangible systems
have been built with the goal of encouraging learning, design activities, playfulness, and col-
laboration. These include planning tools for landscape and urban planning (see Hornecker,
2005; Underkoffler and Ishii, 1998). Another example is Tinkersheets, which combine tangi-
ble models of shelving with paper forms for exploring and solving warehouse logistics prob-
lems (Zufferey et al., 2009). The underlying simulation allows students to set parameters by
placing small magnets on the form.
Tangible computing has been described as having no single locus of control or interac-
tion (Dourish, 2001). Instead of just one input device, such as a mouse, there is a coordinated
interplay of different devices and objects. There is also no enforced sequencing of actions and
no modal interaction. Moreover, the design of the interface objects exploits their affordances
to guide the user in how to interact with them. A benefit of tangibility is that physical objects
and digital representations can be positioned, combined, and explored in creative ways, ena-
bling dynamic information to be presented in different ways. Physical objects can also be held
in both hands and combined and manipulated in ways not possible using other interfaces.
This allows for more than one person to explore the interface together and for objects to be
placed on top of each other, beside each other, and inside each other; the different configura-
tions encourage different ways of representing and exploring a problem space. In so doing,
people are able to see and understand situations differently, which can lead to greater insight,
learning, and problem-solving than with other kinds of interfaces (Marshall et al., 2003).
A number of toolkits have been developed to encourage children to learn coding,
electronics, and STEM subjects. These include littleBits (https://littlebits.com/), MicroBit
(https://microbit.org/), and MagicCubes (https://uclmagiccube.weebly.com/). The toolkits
provide children with opportunities to connect physical electronic components and sen-
sors to make digital events occur. For example, the MagicCubes can be programmed to
change color depending on the speed at which they are shaken; slow is blue and very fast is
multicolor. Research has shown that the tangible toolkits provide many opportunities for
discovery learning, exploration, and collaboration (Lechelt et al., 2018). The Cubes have
been found to encourage a diverse range of children, between the ages of 6 and 16, including
those with cognitive disabilities, to learn through collaboration, frequently showing and telling
each other and their instructors about their discoveries. These moments are facilitated by
the cube’s form factor, making it easy to show off to others, for example, by waving a cube
in the air (see Figure 7.25).
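The shake-to-color behavior described for the MagicCubes can be sketched as a simple mapping from accelerometer magnitude to a color. The thresholds and color bands below are invented, since the toolkit lets children define the mapping themselves:

```python
# Sketch: classify shake intensity from raw accelerometer readings and
# map it to a cube color (slow = blue, very fast = multicolor).

def shake_to_color(ax, ay, az, gravity=9.8):
    """ax, ay, az: accelerometer readings in m/s^2."""
    magnitude = (ax * ax + ay * ay + az * az) ** 0.5
    shake = abs(magnitude - gravity)  # remove the constant pull of gravity
    if shake < 2.0:
        return "blue"         # slow shaking
    if shake < 10.0:
        return "green"        # moderate shaking
    return "multicolor"       # very fast shaking
```

A real cube would smooth the readings over a short window before classifying, since a single sample spikes even when the cube is held still.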
Tangible toolkits have also been developed for the visually impaired. For example, Torino
(renamed by Microsoft to Code Jumper) was developed as a programming language for
teaching programming concepts to children age 7–11, regardless of level of vision (Morrison
et al., 2018). It consists of a set of beads that can be connected and manipulated to create
physical strings of code that play stories or music.
Figure 7.25 Learning to code with the MagicCubes; sharing, showing, and telling
Source: Elpida Makriyannis
BOX 7.5
VoxBox—A Tangible Questionnaire Machine
Traditional methods for gathering public opinions, such as surveys, involve approaching people
in situ, but doing so can disrupt the positive experience they are having. VoxBox (see Figure 7.26)
is a tangible system designed to gather opinions on a range of topics in situ at an event
through playful and engaging interaction (Golsteijn et al., 2015). It is intended to encour-
age wider participation by grouping similar questions, encouraging completion, gathering
answers to open and closed questions, and connecting answers and results. It was designed
as a large physical system that provides a range of tangible input mechanisms through which
people give their opinions, instead of using, for example, text messages or social media input.
The various input mechanisms include sliders, buttons, knobs, and spinners, with which people
are all familiar. In addition, the system has a transparent tube at the side that drops a ball
step by step as sets of questions are completed to act as an incentive for completion and as a
progress indicator. The results of the selections are aggregated and presented as simple digital
visualizations on the other side (for example, 95 percent are engaged; 5 percent are bored).
VoxBox has been used at a number of events drawing in the crowds, who become completely
absorbed in answering questions in this tangible format.
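The aggregation step behind the simple visualizations (for example, 95 percent engaged, 5 percent bored) amounts to tallying the closed-question selections into percentages. A minimal sketch, with invented answer labels:

```python
# Sketch of VoxBox-style aggregation: tally the options selected on the
# tangible inputs for one question and report each as a percentage.

def aggregate(answers):
    """answers: list of selected option labels for one closed question.
    Returns {option: percentage of respondents}, rounded to integers."""
    counts = {}
    for a in answers:
        counts[a] = counts.get(a, 0) + 1
    total = len(answers)
    return {option: round(100 * n / total) for option, n in counts.items()}
```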
Figure 7.26 VoxBox—front and back of the tangible machine questionnaire
Source: Yvonne Rogers
7.2.16 Augmented Reality
Augmented reality (AR) became an overnight success with the arrival of Pokémon Go in
2016. The smartphone app became an instant hit worldwide. Using a player’s smartphone
camera and GPS signal, the AR game makes it seem as if virtual Pokémon characters are
appearing in the real world—popping up all over the place, such as on buildings, on streets,
and in parks. As players walk around a given place, they may be greeted with rustling bits
of grass that signal a Pokémon nearby. If they walk closer, a Pokémon may pop up on their
smartphone screen, as if by magic, and look as if they are actually in front of them. For exam-
ple, one might be spotted sitting on a branch of a tree or a garden fence.
AR works by superimposing digital elements, such as Pokémon characters, onto physical devices and
objects. Closely related to AR is the concept of mixed reality, where views of the real world
are combined with views of a virtual environment (Drascic and Milgram, 1996). To begin with,
augmented reality was mostly a subject of experimentation within medicine, where virtual
objects, for example X-rays and scans, were overlaid on part of a patient’s body to aid the
physician’s understanding of what was being examined or operated on.
AR was then used to aid controllers and operators in rapid decision-making. One exam-
ple is air traffic control, where controllers are provided with dynamic information about the
aircraft in their section that is overlaid on a video screen showing real planes landing, taking
off, and taxiing. The additional information enables the controllers to identify planes easily,
Research and Design Considerations
Researchers have developed conceptual frameworks that identify the novel and specific fea-
tures of a tangible interface (see Fishkin, 2004; Ullmer et al., 2005; Shaer and Hornecker,
2010). A key design concern is what kind of coupling to use between the physical action
and digital effect. This includes determining where the digital feedback is provided in rela-
tion to the physical artifact that has been manipulated. For example, should it appear on top
of the object, beside it, or in some other place? The type and placement of the digital media
will depend to a large extent on the purpose of using a tangible interface. If it is to support
learning, then an explicit mapping between action and effect is critical. In contrast, if it is for
entertainment purposes, for example, playing music or storytelling, then it may be better to
design them to be more implicit and unexpected. Another key design question is what kind
of physical artifact to use to enable the user to carry out an activity in a natural way. Bricks,
cubes, and other component sets are most commonly used because of their flexibility and
simplicity, enabling people to hold them in both hands and to construct new structures that
can be easily added to or changed. Sticky notes and cardboard tokens can also be used for
placing material onto a surface that is transformed or attached to digital content (Klemmer
et al., 2001; Rogers et al., 2006).
Another research question is with what types of digital outputs should tangible interfaces
be combined? Overlaying physical objects with graphical feedback that changes in response
to how the object is manipulated has been the main approach. In addition, audio and haptic
feedback has also been used. Tangibles can also be designed to be an integral part of a multi-
modal interface.
which might otherwise be difficult to make out—something especially useful in poor weather conditions.
Similarly, head-up displays (HUDs) are used in military and civil planes to aid pilots when
landing during poor weather conditions. A HUD provides electronic directional markers on
a fold-down display that appears directly in the field of view of the pilot. A number of high-
end cars now provide AR windshield technology, where navigation directions can literally
look like they are painted on the road ahead of the driver (see Chapter 2, “The Process of
Interaction Design”).
Instructions for building or repairing complex equipment, such as photocopiers and car
engines, have also been designed to replace paper-based manuals, where drawings are super-
imposed upon the machinery itself, telling the mechanic what to do and where to do it.
There are also many AR apps available now for a range of contexts, from education to car
navigation, where digital content is overlaid on geographic locations and objects. To reveal
the digital information, users open the AR app on a smartphone or tablet and the content
appears superimposed on what is viewed through the screen.
Other AR apps have been developed to aid people walking in a city or town. Directions
(in the form of a pointing hand or arrow) and local information (for instance, the nearest
bakery) are overlaid on the image of the street ahead that appears on someone’s smart-
phone screen. These change as the person walks up the street. Virtual objects and infor-
mation are also being combined to make more complex augmented realities. Figure 7.27
shows a weather alert with animated virtual lightning effects alongside information about
a nearby café and the price of properties for sale or rent on a street. Holograms of people
and other objects are also being introduced into AR environments that can appear to move
and/or talk. For example, virtual tour guides are beginning to appear in museums, cities,
and theme parks; they can move, talk, or gesture to visitors who are using an AR app.
Figure 7.27 Augmented reality overlay used on a car windshield
Source: https://wayray.com
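A minimal sketch of how such a walking-navigation app might decide where on the screen to draw a label: compare the compass bearing to the point of interest with the phone's heading, and map the signed angle onto the camera's horizontal field of view. The field of view, flat-grid coordinates, and function names are assumptions for the sketch:

```python
# Sketch: place a geo-located label (e.g., "nearest bakery") on the
# camera image based on where the point of interest lies relative to
# the direction the phone is facing.

import math

def label_screen_x(user, poi, heading_deg, screen_w=1080, fov_deg=60.0):
    """user, poi: (x, y) positions in metres on a local flat grid.
    heading_deg: compass heading the camera is facing (0 = +y axis).
    Returns the horizontal pixel for the label, or None if the POI is
    outside the camera's field of view."""
    dx, dy = poi[0] - user[0], poi[1] - user[1]
    bearing = math.degrees(math.atan2(dx, dy))          # bearing to the POI
    offset = (bearing - heading_deg + 180) % 360 - 180  # signed angle [-180, 180)
    if abs(offset) > fov_deg / 2:
        return None
    return round(screen_w / 2 + (offset / (fov_deg / 2)) * (screen_w / 2))
```

Recomputing this every frame as the GPS position and compass heading update is what makes the arrow or label appear pinned to the street ahead while the person walks.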
The availability of mapping platforms, such as those provided by Niantic and
Google, together with Apple’s ARKit, SparkAR Studio, and Google’s ARCore, has made it
easier for developers and students alike to develop new kinds of AR games and AR apps.
Another popular AR game that has emerged since Pokémon Go is Jurassic World Alive,
where players walk around in the real world to find as many virtual dinosaurs as they can.
It is similar to Pokémon Go but with different gaming mechanisms. For example, players
have to study the dinosaurs they come across by collecting their DNA and then re-creating
it. Microsoft's HoloLens toolkit has also enabled new mixed-reality user experiences to be
created, allowing users to create or interact with virtual elements in their surroundings.
Most AR apps use the backward-facing camera on a smartphone or tablet to overlay
the virtual content onto the real world. Another approach is to use the forward-facing
camera (used for selfies) to superimpose digital content onto the user’s face or body. The
most popular app that has used this technique is Snapchat, which provides numerous
filters with which people can experiment plus the opportunity to create their own filters.
Adding accessories such as ears, hair, moving lips, and headgear enables people to trans-
form their physical appearance in all sorts of fun ways.
These kinds of virtual try-ons work by analyzing the user’s facial features and build-
ing a 2-D or 3-D model in real time. So, when they move their head, the make-up or acces-
sories appear to move with them as if they are really on their face. Several AR mirrors now
exist in retail that allow shoppers to try on sunglasses, jewelry, and make-up. The goal is
to let them try on as many different products as they like to see how they look with them
on. Clearly, there are advantages to virtual try-ons: they can be more convenient and
engaging than trying on the real thing. There are disadvantages too, however,
in that they only give an impression of what you look like. For example, the user cannot
feel the weight of a virtual accessory on their head or the texture of virtual make-up on
their face.
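The tracking-and-anchoring idea can be sketched by pinning a virtual accessory to a pair of facial landmarks so that it translates and scales with the face. Real try-on systems fit a full 2-D or 3-D face model; here two eye landmarks and the width factor are stand-in assumptions:

```python
# Sketch: anchor a pair of virtual glasses to tracked eye landmarks so
# that the overlay moves and scales with the face in the camera image.

def place_glasses(left_eye, right_eye):
    """left_eye, right_eye: (x, y) pixel landmarks from a face tracker.
    Returns (centre, width) for drawing the virtual glasses."""
    cx = (left_eye[0] + right_eye[0]) / 2
    cy = (left_eye[1] + right_eye[1]) / 2
    eye_dist = ((right_eye[0] - left_eye[0]) ** 2 +
                (right_eye[1] - left_eye[1]) ** 2) ** 0.5
    width = 2.2 * eye_dist   # glasses drawn a bit wider than the eye spacing
    return (cx, cy), width
```

Because the placement is recomputed from the landmarks on every frame, the accessory appears to stay on the face as the head moves; when the model drifts, the eye-shadow-on-the-ear failures mentioned later in this section are exactly what results.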
The same technology can be used to enable people to step into historical, famous,
film, or stage characters (for instance, David Bowie or Queen Victoria). For example, a
virtual try-on app that was developed as part of a cultural experience was the MagicFace
(Javornik et al., 2017). The goal was to enable audiences to experience firsthand
what it was like to try on the make-up of a character from an opera. The opera chosen
was Philip Glass's Akhnaten, set in ancient Egyptian times (see Figure 7.28a). The virtual
make-up designs developed were for a Pharaoh and his wife. The app was developed by
University College London researchers alongside the English National Opera and AR
company, Holition. To provide a real-world context, the app was designed to run on
a tablet display that was disguised as a real mirror and placed in an actor’s dressing
room (see Figure 7.28b). On encountering the mirror in situ, visiting school children
were fascinated by the way the virtual make-up made them look like Akhnaten and his
wife, Nefertiti. The singers and make-up artists who were in the production also tried
it out and saw great potential for using the app to enhance their existing repertoire of
rehearsal and make-up tools.
Figure 7.28 (a) A principal singer trying on the virtual look of Akhnaten and (b) a framed AR mirror
in the ENO dressing room
Source: Used courtesy of Ana Javornik
Research and Design Considerations
A key research concern when designing augmented reality is what form the digital augmenta-
tion should take and when and where it should appear in the physical environment (Rogers
et al., 2005). The information (such as navigation cues) needs to stand out but not distract the
person from their ongoing activity in the physical world. It also needs to be simple and align
with the real-world objects, taking into account that the user will be moving. Another concern
is how much digital content to overlay on the physical world and how to attract the user’s
attention to it. There is the danger that the physical world becomes overloaded with digital
ads and information “polluting” it to the extent that people will turn the AR app off.
One of the limitations of current AR technology is that sometimes the modeling can be
slightly off so that the overlaying of the digital information appears in the wrong place or is
out of sync with what is being overlaid. This may not be critical for fun applications, but it
may be disconcerting if eye shadow appears on someone’s ear. It may also break the magic
of the AR experience. Ambiguity and uncertainty may be exploited to good effect in mixed
reality games, but it could be disastrous in a more serious context, such as the military or
medical setting.
7.2.17 Wearables
Wearables are a broad category of devices that are worn on the body. These include smart-
watches, fitness trackers, fashion tech, and smart glasses. Since the early experimental days
of wearable computing, where Steve Mann (1997) donned head and eye cameras to enable
him to record what he saw while also accessing digital information on the move, there have
been many innovations and inventions, including Google Glass.
New flexible display technologies, e-textiles, and physical computing (for example,
Arduino) provide opportunities to design wearables that people will actually want to wear.
Jewelry, caps, glasses, shoes, and jackets have all been the subject of experimentation designed
to provide the user with a means of interacting with digital information while on the move
in the physical world. Early wearables focused on convenience, enabling people to carry out
a task (for example, selecting music) without having to take out and control a handheld
device. Examples included a ski jacket with integrated music player controls that enabled
the wearer to simply touch a button on their arm with their glove to change a music track.
More recent applications have focused on how to combine textiles, electronics, and haptic
technologies to promote new forms of communication. For example, CuteCircuit developed
the KineticDress, which was embedded with sensors that followed the body of the wearer
to capture their movements and interaction with others. These were then displayed through
electroluminescent embroidery that covered the external skirt section of the dress. Depend-
ing on the amount and speed of the wearer’s movement, it changed patterns, displaying the
wearer’s mood to the audience and creating a magic halo around her.
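The kind of movement-to-display mapping that the KineticDress illustrates can be sketched as a simple function. Everything in the sketch below is invented for illustration; the thresholds, scale, and pattern names are not CuteCircuit's actual design.

```python
def dress_pattern(movement_speed):
    """Map a wearer's movement speed (arbitrary 0.0-1.0 scale) to a
    display pattern, in the spirit of the KineticDress example.
    Thresholds and pattern names are hypothetical."""
    if movement_speed < 0.2:
        return "slow pulsing glow"       # wearer is still or calm
    elif movement_speed < 0.6:
        return "rippling waves"          # moderate movement
    else:
        return "rapid sparkling bursts"  # dancing or fast movement

print(dress_pattern(0.1))  # slow pulsing glow
print(dress_pattern(0.8))  # rapid sparkling bursts
```

The interesting design work is in choosing a mapping that wearers and onlookers can read as an expression of mood rather than as an arbitrary light show.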
Exoskeleton clothing (see section 7.2.12) is also an area where fashion meets technology
in order to augment and assist people who have difficulty walking, by literally walking or
exercising for the person wearing them. In this way, it combines haptics with a wearable.
Within the construction industry, exoskeleton suits have also been developed to provide
additional power to workers (a bit like Superman), where metal frameworks are fitted with
motorized muscles to multiply the wearer's strength. They can make lifting objects feel lighter
and in doing so protect the worker from physical injuries.
DILEMMA
Google Glass: Seeing Too Much?
Google Glass was a wearable that went on sale in 2014 in various fashion styles (see
Figure 7.29). It was designed to look like a pair of glasses, but with one lens of the glass being
an interactive display with an embedded camera that could be controlled with speech input.
It allowed the wearer to take photos and videos on the move and look at digital content, such
as email, texts, and maps. The wearer could also search the web using voice commands, and
the results would appear on the screen. A number of applications were developed beyond
those for everyday use, including WatchMeTalk, which provided live captions to help the
hearing-impaired in their day-to-day conversations, and Preview for Glass, which enabled the
wearer to watch a movie trailer the moment they looked at a movie poster.
Watch the interesting video of London through Google Glass at http://youtu.be/
Z3AIdnzZUsE and the Talking Shoe concept at http://youtu.be/VcaSwxbRkcE.
However, being in the company of someone wearing a Google Glass was felt by many to
be unnerving, as the wearer looked up and to the right to view what was on the glass screen
rather than looking at you and into your eyes. One of the criticisms of wearers of Google
Glass was that it made them appear to be staring into the distance. Others were worried that
those wearing Google Glass were recording everything that was happening in front of them.
As a reaction, a few bars and restaurants in the United States implemented a “no Glass” policy
to prevent customers from recording other patrons.
The original Google Glass was retired after a couple of years. Since then, other types of
smart glasses have come onto the market that sync a user's smartphone with the display and
camera on the glasses via Bluetooth. These include the Vuzix Blade, which has an onboard
camera and voice control connected to Amazon Echo devices, along with turn-by-turn
navigation and location-based alerts; and Snap's Spectacles, which simply allow the wearer
to capture photos and video while wearing the glasses and share them with friends
on Snapchat.
Figure 7.29 Google Glass
Source: Google Inc.
Research and Design Considerations
A core design concern specific to wearable interfaces is comfort. Users need to feel comfortable
wearing clothing that is embedded with technology. It needs to be light, small, not get
in the way, fashionable, and (with the exception of the displays) preferably hidden in the
clothing. Another related issue is hygiene. Is it possible to wash or clean the clothing once
worn? How easy is it to remove the electronic gadgetry and replace it? Where are the batteries
going to be placed, and how long is their lifetime? A key usability concern is how the
user controls the devices that are embedded in their clothing. Are touch, speech, or more
conventional buttons and dials preferable?
A number of technologies can be developed and combined to create wearables, including
LEDs, sensors, actuators, tangibles, and AR. There is abundant scope for thinking creatively
about when and whether to make something wearable as opposed to mobile. In Chapter 1,
"What Is Interaction Design?" we mentioned how assistive technology can be designed to be
fashionable in order to overcome stigmas of having to wear a monitoring device (for instance,
for glucose levels), a substitution device (for example, a prosthetic), or an amplifying device
(for example, hearing aids).
7.2.18 Robots and Drones
Robots have been around for some time, most notably as characters in science-fiction movies,
but they also play an important role as part of manufacturing assembly lines, as remote
investigators of hazardous locations (for example, nuclear power stations and bomb
disposal), and as search and rescue helpers in disasters (for instance, forest fires) or faraway
places (like Mars). Console interfaces have been developed to enable humans to control and
navigate robots in remote terrains, using a combination of joysticks and keyboard controls
together with cameras and sensor-based interactions (Baker et al., 2004). The focus has been
on designing interfaces that enable users to steer and move a remote robot effectively with
the aid of live video and dynamic maps.
Domestic robots that help with cleaning and gardening have become popular. Robots
are also being developed to help the elderly and disabled with certain activities, such as
picking up objects and cooking meals. Pet robots, in the guise of human companions, have
been commercialized. Several research teams have taken the "cute and cuddly" approach to
designing robots, signaling to humans that the robots are more pet-like than human-like. For
example, Mitsubishi developed Mel the penguin (Sidner and Lee, 2005), whose role was to
host events, while the Japanese inventor Takanori Shibata developed Paro in 2004, a baby
harp seal that looks like a cute furry cartoon animal, and whose role was as a companion
(see Figure 7.30). Sensors were embedded in the pet robots, enabling them to detect certain
human behaviors and respond accordingly. For example, they can open, close, and move
their eyes, giggle, and raise their flippers. The robots encourage being cuddled or spoken to,
as if they were real pets or animals. The appeal of pet robots is thought to be partially due to
their therapeutic qualities, being able to reduce stress and loneliness among the elderly and
infirm (see Chapter 6, "Emotional Interaction," for more on cuddly robot pets). Paro has
since been used to help patients with dementia feel more at ease and comforted
(Griffiths, 2014). Specifically, it has been used to encourage social behavior among patients,
who often anthropomorphize it. For example, they might say as a joke "it's farted on me!"
which makes them and others around them laugh, leading to further laughter and joking.
This form of encouraging social interaction is thought to be therapeutic.
Drones are a form of unmanned aircraft that are controlled remotely. They were first used
by hobbyists and then by the military. Since then, they have become more affordable, acces-
sible, and easier to fly. As a result, they have begun to be used in a wider range of contexts.
These include entertainment, such as carrying drinks and food to people at festivals and par-
ties; agricultural applications, such as flying them over vineyards and fields to collect data that
is useful to farmers (see Figure 7.31); and helping to track poachers in wildlife parks in Africa
(Preece, 2016). Compared with other forms of data collection, they can fly low and stream
photos to a ground station, where the images can be stitched together into maps and then used
to determine the health of a crop or the best time to harvest it.
Figure 7.30 (a) Mel, the penguin robot, designed to host activities; (b) Japan’s Paro, an interactive
seal, designed as a companion, primarily for the elderly and sick children
Source: (a) Mitsubishi Electric Research Labs (b) Parorobots.com
Watch the video of Robot Pets of the Future at http://youtu.be/wBFws1lhuv0.
Watch the video of Rakuten delivering beer via drone to golfers on a golf course at
https://youtu.be/ZameOVS2Skw.
Figure 7.31 A drone being used to survey the state of a vineyard
Source: Drone inspecting vineyard / Shutterstock
Research and Design Considerations
An ethical concern is whether it is acceptable to create robots that exhibit behaviors that
humans will consider to be human- or animal-like. While this form of attribution also occurs
for agent interfaces (see Chapter 3), having a physical embodiment—as robots do—can make
people suspend their disbelief even more, viewing the robots as pets or humans.
This raises the moral question as to whether such anthropomorphism should be encour-
aged. Should robots be designed to be as human-like as possible, looking like us with human
features, such as eyes and a mouth, behaving like us, communicating like us, and emotionally
responding like us? Or, should they be designed to look like robots and behave like robots,
for instance, vacuum cleaner robots that serve a clearly defined purpose? Likewise, should
the interaction be designed to enable people to interact with the robot as if it were another
human being, for example, by talking to it, gesturing at it, holding its hand, and smiling at it?
Or, should the interaction be designed to be more like human–computer interaction, in other
words, by pressing buttons, knobs, and dials to issue commands?
For many people, the cute pet approach to robotic interfaces seems preferable to one that
seeks to design them to be more like fully fledged human beings. Humans know where they
stand with pets and are less likely to be unnerved by them and, paradoxically, are more likely
to suspend their disbelief in the companionship they provide.
Another ethical concern is whether it is acceptable to use unmanned drones to take a
series of images or videos of fields, towns, and private property without permission or people
knowing what is happening. They are banned from certain areas such as airports, where they
present a real danger. Another potential problem is the noise they make when flying. Having a
drone constantly buzzing past your house or delivering drinks to golf players or festival goers
nearby can be very annoying.
7 I N T E R F A C E S250
7.2.19 Brain–Computer Interfaces
Brain–computer interfaces (BCI) provide a communication pathway between a person’s brain
waves and an external device, such as a cursor on a screen or a tangible puck that moves via
airflow. The person is trained to concentrate on the task (for example, moving the cursor or
the puck). Several research projects have investigated how this technique can be used to assist
and augment human cognitive or sensory-motor functions. The way BCIs work is by detect-
ing changes in the neural functioning of the brain. Our brains are filled with neurons that
comprise individual nerve cells connected to one another by dendrites and axons. Every time
we think, move, feel, or remember something, these neurons become active. Small electric
signals rapidly move from neuron to neuron, which to a certain extent can be detected by
electrodes that are placed on a person’s scalp. The electrodes are embedded in specialized
headsets, hairnets, or caps.
Brain–computer interfaces have also been developed to control various games. For
example, Brainball was developed as a game to be controlled by players’ brain waves in
which they compete to control a ball’s movement across a table by becoming more relaxed
and focused. Other possibilities include controlling a robot and being able to fly a virtual
plane. Pioneering medical research, conducted by the BrainGate research group at Brown
University, has started using brain-computer interfaces to enable people who are paralyzed
to control robots (see Figure 7.32). For example, a robotic arm controlled by a tethered
BCI has enabled patients who are paralyzed to feed themselves (see video mentioned next).
Another startup company, NextMind, is developing a noninvasive brain-sensing device
intended for the mass market to enable users to play games and control electronic and
mobile devices in real time using just their thoughts. It is researching how to combine
brain-sensing technology with innovative machine-learning algorithms that can translate
brain waves into digital commands.
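The signal-processing idea behind a relaxation-controlled game like Brainball can be sketched in a few lines: estimate the power in the alpha band (8–12 Hz, associated with relaxation) relative to the beta band (13–30 Hz) and act when alpha dominates. The sketch below is a simplified illustration on a synthetic signal with an arbitrary threshold, not a description of any actual BCI product.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of the signal within [f_lo, f_hi] Hz via a naive DFT."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

def is_relaxed(eeg, fs, ratio=2.0):
    """Classify the wearer as 'relaxed' when alpha power (8-12 Hz)
    dominates beta power (13-30 Hz). The ratio is arbitrary."""
    return band_power(eeg, fs, 8, 12) > ratio * band_power(eeg, fs, 13, 30)

fs = 128  # samples per second
t = [i / fs for i in range(fs)]  # one second of synthetic "EEG"
relaxed_eeg = [math.sin(2 * math.pi * 10 * ti) for ti in t]  # strong 10 Hz alpha
focused_eeg = [math.sin(2 * math.pi * 20 * ti) for ti in t]  # strong 20 Hz beta
print(is_relaxed(relaxed_eeg, fs), is_relaxed(focused_eeg, fs))  # True False
```

Real systems face far noisier signals, so they rely on careful electrode placement, filtering, and machine-learning classifiers rather than a fixed threshold.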
Watch a video of a woman who is paralyzed moving a robot with her mind at
http://youtu.be/ogBX18maUiM.
7.2.20 Smart Interfaces
The motivation for many new technologies is to make them smart, whether it is a smartphone,
smartwatch, smart building, smart home, or smart appliance (for example smart lighting,
smart speakers, or virtual assistants). The adjective is often used to suggest that the device
has some intelligence and it is connected to the Internet. More generally, smart devices are
designed to interact with users and other devices connected to a network, many of which are auto-
mated, not requiring users to interact with them directly (Silverio-Fernández et al., 2018).
The goal is to make them context-aware, that is, to understand what is happening around
them and execute appropriate actions. To achieve this, some have been programmed with
AI so that they can learn the context and a user’s behavior. Using this intelligence, they then
change settings or switch things on according to the user’s assumed preferences. An example
is the smart Nest thermostat that is designed to learn from a householder’s behavior. Rather
than make the interface invisible, the designers chose to turn it into an aesthetically pleasing
one that could be easily viewed (see Box 6.2).
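The kind of learning involved can be illustrated with a deliberately simple sketch: record the setpoints a householder chooses manually at each hour and use their average as the learned preference for that hour. A real product such as Nest uses far more sophisticated models; the class, method names, and default temperature below are all invented for illustration.

```python
class LearningThermostat:
    """Toy sketch of a thermostat that 'learns' a householder's schedule
    by averaging the setpoints they chose manually at each hour."""

    def __init__(self, default_temp=20.0):
        self.default_temp = default_temp
        self.manual_setpoints = {}  # hour of day -> list of chosen temps

    def record_manual_setpoint(self, hour, temp):
        self.manual_setpoints.setdefault(hour, []).append(temp)

    def target_for(self, hour):
        temps = self.manual_setpoints.get(hour)
        if not temps:
            return self.default_temp  # no history yet: fall back to default
        return sum(temps) / len(temps)

stat = LearningThermostat()
stat.record_manual_setpoint(7, 21.0)  # householder turns the heat up at 7 a.m.
stat.record_manual_setpoint(7, 23.0)
print(stat.target_for(7))  # 22.0 (learned morning preference)
print(stat.target_for(3))  # 20.0 (default: no data for 3 a.m.)
```

Even this toy version shows the design tension discussed above: once the device acts on its learned model, the user must be able to see and override what it has inferred.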
Smart buildings have been designed to be more energy efficient and cost-effective.
Architects are motivated to use state-of-the-art sensor technology to control building
systems, such as ventilation, lighting, security, and heating. Often, the inhabitants of such
buildings are considered to be the ones at fault for wasting energy, as they may leave the
lights and heating on overnight when not needed, or they forget to lock a door or window.
One benefit of having automated systems take control of building services is to reduce these
kinds of human errors—a phrase often used by engineers is to take the human “out of the
loop.” While some smart buildings and homes have improved how they are managed and cut
costs, they can also be frustrating to the user, who sometimes would like to be able to open a
window to let fresh air in or raise a blind to let in natural lighting. Taking the human out of
the loop means that these operations are no longer available. Windows are locked or sealed,
and heating is controlled centrally.
Figure 7.32 A brain-computer interface being used by a woman who is paralyzed to select letters
on a screen (developed by the BrainGate research group)
Source: Brown University
Instead of simply introducing ever more automation that takes the human out of the
loop further, another approach is to consider the needs of the inhabitants in conjunction
with introducing smart technology. For example, a new approach that focuses on inhabitants
is called human–building interaction (HBI). It is concerned with understanding and shap-
ing people’s experiences with, and within, built environments (Alavi et al., 2019). The focus
is on human values, needs, and priorities in addressing people’s interactions with “smart”
environments.
7.3 Natural User Interfaces and Beyond
As we have seen, there are many kinds of interfaces that can be used to design for user expe-
riences. The staple for many years was the GUI, then the mobile device interface, followed
by touch, and now wearables and smart interfaces. Without question, they have been able to
support all manner of user activities. What comes next? Will other kinds of interfaces that are
projected to be more natural become more mainstream?
A natural user interface (NUI) is designed to allow people to interact with a computer
in the same way that they interact with the physical world—using their voice, hands, and
bodies. Instead of using a keyboard, mouse, or touchpad (as is the case with GUIs), NUIs
enable users to speak to machines, stroke their surfaces, gesture at them in the air, dance on
mats that detect feet movements, smile at them to get a reaction, and so on. The naturalness
refers to the use of everyday skills humans have developed and learned, such as talking,
writing, gesturing, walking, and picking up objects. In theory, they should be easier to learn
and should map more readily onto how people interact with the world than a GUI.
Instead of having to remember which function keys to press to open a file, a NUI means a
person only has to raise their arm or say “open.” But how natural are NUIs? Is it more natural
to say “open” than to flick a switch when you want to open a door? And is it more natural to
raise both arms to change a channel on the TV than to press a button on a remote device or
tell it what to do by speaking to it? Whether a NUI is natural depends on a number of fac-
tors, including how much learning is required, the complexity of the app or device’s interface,
and whether accuracy and speed are needed (Norman, 2010). Sometimes a gesture is worth a
thousand words. Other times, a word is worth a thousand gestures. It depends on how many
functions the system supports.
Consider the sensor-based faucets that were described in Chapter 1. The gesture-based
interface works mostly (with the exception of people wearing black clothing that cannot be
detected) because there are only two functions: (1) turning on the water by waving one’s
hands under the tap, and (2) turning off the water by removing them from the sink. Now
think about other functions that faucets usually provide, such as controlling water tem-
perature and flow. What kind of a gesture would be most appropriate for changing the
temperature and then the flow? Would one decide on the temperature first by raising one’s
left arm and the flow by raising one’s right arm? How would someone know when to stop
raising their arm to get the right temperature? Would they need to put a hand under the tap
to check? But if they put their right hand under the tap, might that have the effect of
decreasing the flow? And when does the system know that the desired temperature and flow
has been reached? Would it require having both arms suspended in midair for a few seconds
to register that was the desired state? It is a difficult design problem to support these choices
through gestures alone, which is probably why sensor-based faucets in public bathrooms all
have their temperature and flow set to a default.
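The faucet discussion can be made concrete by sketching why the two-function case is easy: the interaction reduces to a two-state machine with one unambiguous gesture per transition. Adding temperature and flow control would multiply the states and demand gestures the sensor can reliably tell apart. The event names and class below are invented for illustration.

```python
class SensorFaucet:
    """Two-state sketch of a gesture-controlled faucet: the only events
    are hands appearing under the tap and hands being removed."""

    def __init__(self):
        self.water_on = False

    def handle(self, event):
        if event == "hands_detected":
            self.water_on = True
        elif event == "hands_removed":
            self.water_on = False
        # Any other gesture is ignored: with only two functions, there is
        # nothing else the faucet needs to recognize.

faucet = SensorFaucet()
faucet.handle("hands_detected")
print(faucet.water_on)  # True
faucet.handle("hands_removed")
print(faucet.water_on)  # False
```

Each extra function (hotter, colder, more flow, less flow) would add events to `handle`, and the real difficulty lies in finding arm or hand movements that map onto those events as unambiguously as "hands under the tap" maps onto "water on."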
Our overview of different interface types in this chapter has highlighted how gestural,
voice, and other kinds of NUIs have made controlling input and interacting with digital
content easier and more enjoyable, even though sometimes they can be less than perfect. For
example, using gestures and whole-body movements has proven to be highly enjoyable as a
form of input for computer games and physical exercises. Furthermore, new kinds of gesture,
voice, and touch interfaces have made the web and online tools more accessible to those who
are visually impaired. For example, the iPhone’s VoiceOver control features have empowered
visually impaired individuals to be able to easily send email, use the web, play music, and so
on, without having to buy an expensive customized phone or screen reader. Moreover, being
able to purchase a regular phone means not being singled out for special treatment. And
while some gestures may feel cumbersome for sighted people to learn and use, they may not
be so for blind or visually impaired people. The iPhone VoiceOver press and guess feature
that reads out what you tap on the screen (for example, “messages,” “calendar,” “mail: 5
new items”) can open up new ways of exploring an application while a three-finger tap can
become a natural way to turn the screen off.
An emerging class of human–computer interfaces are those that rely largely on subtle,
gradual, and continuous changes triggered by information obtained implicitly from the user
together with the use of AI algorithms that are coded to learn about the user’s behavior
and preferences. These are connected with lightweight, ambient, context-aware, affective,
and augmented cognition interfaces (Solovey et al., 2014). Using brain, body, behavioral, and
environmental sensors, it is now possible to capture subtle changes in people’s cognitive and emo-
tional states in real time. This opens up new doors in human–computer interaction. In par-
ticular, it allows for information to be used as both continuous and discrete input, potentially
enabling new outputs to match and be updated with what people might want and need at any
given time. Adding AI to the mix will also enable a new type of interface to emerge that goes
beyond simply being natural and smart—one that allows people to develop new superpow-
ers that will enable them to work synergistically with technology to solve ever-more complex
problems and undertake unimaginable feats.
7.4 Which Interface?
This chapter presented an overview of the diversity of interfaces that is now available or
currently being researched. There are many opportunities to design for user experiences
that are a far cry from those originally developed using the command-based interfaces of
the 1980s. An obvious question this raises is which interface to choose and how to design it. In many
contexts, the requirements for the user experience that have been identified will determine
what kind of interface might be appropriate and what features to include. For example, if
a healthcare app is being developed to enable patients to monitor their dietary intake, then a
mobile device that has the ability to scan barcodes and/or take pictures of food items that
can be compared with a database would be a good interface to use, enabling mobility,
effective object recognition, and ease of use. If the goal is to design a work environment to
support collocated group decision-making activities, then combining shareable technolo-
gies and personal devices that enable people to move fluidly among them would be good
to consider using.
But how to decide which interface is preferable for a given task or activity? For example,
is multimedia better than tangible interfaces for learning? Is voice effective as a command-
based interface? Is a multimodal interface more effective than a single media interface? Are
wearable interfaces better than mobile interfaces for helping people find information in for-
eign cities? How does VR differ from AR, and which is the ultimate interface for playing
games? In what way are tangible environments more challenging and captivating than vir-
tual worlds? Will shareable interfaces, such as interactive furniture, be better at supporting
communication and collaboration compared with using networked desktop technologies?
And so forth. These questions are currently being researched. In practice, which interface
is most appropriate, most useful, most efficient, most engaging, most supportive, and so on
will depend on the interplay of a number of factors, including reliability, social acceptability,
privacy, ethics, and location.
In-Depth Activity
Choose a game that you or someone you know plays a lot on a smartphone (for example,
Candy Crush Saga, Fortnite, or Minecraft). Consider how the game could be played using
different interfaces other than the smartphone’s. Select three different interfaces (for instance,
tangible, wearable, and shareable) and describe how the game could be redesigned for each of
these, taking into account the user group being targeted. For example, the tangible game could
be designed for children, the wearable interface for young adults, and the shareable interface
for older people.
1. Go through the research and design considerations for each interface and consider whether
they are relevant for the game setting and what considerations they raise.
2. Describe a hypothetical scenario of how the game would be played for each of the three
interfaces.
3. Consider specific design issues that will need to be addressed. For example, for the share-
able surface would it be best to have a tabletop or a wall-based surface? How will the users
interact with the game elements for each of the different interfaces—by using a pen, finger-
tips, voice, or other input device? How do you turn a single-player game into a multiple
player one? What rules would you need to add?
4. Compare the pros and cons of designing the game using the three different interfaces with
respect to how it is played on the smartphone.
Summary
This chapter provided an overview of the diversity of interfaces that can be designed for user
experiences, identifying key design issues and research questions that need to be addressed. It
has highlighted the opportunities and challenges that lie ahead for designers and researchers
who are experimenting with and developing innovative interfaces. It also explained some of
the assumptions behind the benefits of different interfaces—some that are currently supported
and others that are still unsubstantiated. The chapter presented a number of interaction
techniques that are particularly suited (or not) for a given interface type. It also discussed the
dilemmas facing designers when using a particular kind of interface, for example, abstract
versus realism, menu selection versus free-form text input, and human-like versus non-human-like.
Finally, it presented pointers to specific design guidelines and exemplary systems that have
been designed using a given interface.
Key Points
• Many interfaces have emerged since the WIMP/GUI era, including voice, wearable, mobile,
tangible, brain-computer, smart, robot, and drone interfaces.
• A range of design and research questions need to be considered when deciding which interface
to use and what features to include.
• Natural user interfaces may not be any more natural than graphical user interfaces—it
depends on the task, user, and context.
• An important concern that underlies the design of any kind of interface is how information
is represented to the user (be it speech, multimedia, virtual reality, or augmented reality) so
that they can make sense of it with respect to their ongoing activity, for example, playing a
game, shopping online, or interacting with a pet robot.
• Increasingly, new interfaces that are context-aware or that monitor people raise ethical
issues concerned with what data is being collected and what it is being used for.
Further Reading
Many practical books have been published on interface design. Some have been revised into
second editions. Publishers like New Riders and O'Reilly frequently offer up-to-date books
for a specific interface area (for example, web or voice). Some are updated on a regular basis,
while others are published when a new area emerges. There are also a number of excellent
online resources, sets of guidelines, and thoughtful blogs and articles.
DASGUPTA, R. (2019) Voice User Interface Design: Moving from GUI to Mixed Modal
Interaction. Apress. This guide covers the challenges of moving from GUI design to
mixed-modal interactions. It describes how our interactions with devices are rapidly changing,
illustrating this through a number of case studies and design principles of VUI design.
ROWLAND, C., GOODMAN, E., CHARLIER, M., LIGHT, A. and LUI, A. (2015) Designing
Connected Products. O'Reilly. This collection of chapters covers the challenges of designing
connected products that go beyond the traditional scope of interaction design and software
development. It provides a road map and covers a range of aspects, including pairing devices,
new business models, and the flow of data in products.
GOOGLE Material Design https://material.io/design/ This online resource is a living
document that visually illustrates essential interface design principles. It is beautifully
laid out, and it is very informative to click through all of the interactive examples it provides.
It shows how to add physical properties to the digital world to make it feel more intuitive
to use across platforms.
KRISHNA, G. (2015) The Best Interfaces Are No Interfaces. New Riders. This polemical and
funny book challenges the reader to think beyond the screen when designing new interfaces.
KRUG, S. (2014) Don't Make Me Think! (3rd edn). New Riders Press. The third edition of
this very accessible classic guide presents up-to-date principles and examples on web design
with a focus on mobile usability. It is highly entertaining, with lots of great illustrations.
NORMAN, D. (2010) Natural interfaces are not natural. interactions, May/June, 6–10. This
thought-provoking essay by Don Norman explains how what seems natural may not be, and
it is still very relevant today.
INTERVIEW with
Leah Buechley
Leah Buechley is an independent designer,
engineer, and educator. She has a PhD in
computer science and a degree in physics.
She began her studies as a dance major and
has also been deeply engaged in theater,
art, and design over the years. She was the
founder and director of the high-low tech
group at the MIT media lab from 2009
to 2014. She has always blended the sci-
ences and the arts in her education and her
career, as witnessed by her current work,
which consists of computer science, indus-
trial design, interaction design, art, and
electrical engineering.
What is the focus of your work?
I’m most interested in changing the culture
of technology and engineering to make it
more diverse and inclusive. To achieve that
goal, I blend computation and electronics
with a range of different materials and em-
ploy techniques drawn from art, craft, and
design. This approach leads to technol-
ogies and learning experiences that appeal
to a diverse group of people.
Can you give me some examples of how
you mesh the digital with physical materi-
als?
My creative focus for the last several years
has been computational design—a process
in which objects are designed via an algo-
rithm and then constructed with a combi-
nation of fabrication and hand building.
I’m especially excited about computational
ceramics and have been developing a set of
tools and techniques that enable people to
integrate programming and hand building
with clay.
I’ve also been working on a project
called LilyPad Arduino (or LilyPad) for
over 10 years. LilyPad is a construction
kit that enables people to embed com-
puters and electronics into fabric. It’s a set
of sewable electronic pieces, including mi-
crocontrollers, sensors, and LEDs, that are
stitched together with conductive thread.
People can use the kit to make singing pil-
lows, glow-in-the-dark handbags, and in-
teractive ball gowns.
Another example is the work my for-
mer students and I have done in paper-
based computing. My former student Jie
Qi developed a kit called Chibitronics
circuit stickers that lets you build inter-
active paper-based projects. Based on her
years of research in my group at MIT, the
kit is a set of flexible peel-and-stick elec-
tronic stickers. You can connect ultra-thin
LEDs, microcontrollers, and sensors with
conductive ink, tape, or thread to quickly
make beautiful electronic sketches.
The LilyPad and Chibitronics kits are
now used by people around the world to
learn computing and electronics. It’s been
fascinating and exciting to see this research
have a tangible impact.
Why would anyone want to wear a com-
puter in their clothing?
Computers open up new creative possibil-
ities for designers. Computers are simply
a new tool, albeit an especially powerful
one, in a designer’s toolbox. They allow
clothing designers to make garments that
are dynamic and interactive. Clothing that
can, for example, change color in response
to pollution levels, sparkle when a loved
one calls you on the phone, or notify you
when your blood pressure increases.
How do you involve people in your research?
I engage with people in a few different
ways. First, I design hardware and soft-
ware tools to help people build new and
different kinds of technology. The LilyPad
is a good example of this kind of work. I
hone these designs by teaching workshops
to different groups of people. And once a
tool is stable, I work hard to disseminate it
to users in the real world. The LilyPad has
been commercially available since 2007,
and it has been fascinating and exciting to
see how a group of real-world designers—
who are predominantly female—is using it
to build things like smart sportswear, plush
video game controllers, soft robots, and in-
teractive embroideries.
I also strive to be as open as possi-
ble with my own design and engineering
explorations. I document and publish as
much information as I can about the mate-
rials, tools, and processes I use. I apply an
open source approach not only to the soft-
ware and hardware I create but, as much
as I can, to the entire creative process. I
develop and share tutorials, classroom and
workshop curricula, materials references,
and engineering techniques.
What excites you most about your work?
I am infatuated with materials. There is
nothing more inspiring than a sheet of
heavy paper, a length of wool felt, a slab of
clay, or a box of old motors. My thinking
about design and technology is largely
driven by explorations of materials and
their affordances. So, materials are always
delightful. For example, the shape and sur-
face pattern of the cup in Figure 7.33 were
computationally designed. A template of
the design was then laser cut and pressed
into a flat sheet or “slab” of clay. Finally,
the clay was folded into shape and then
fired and glazed using traditional ceramic
techniques. But the real-world adoption of
tools I’ve designed and the prospect this
presents for changing technology culture
is perhaps what’s most exciting. My most
dearly held goal is to expand and diversify
technology culture, and it’s tremendously
rewarding to see evidence that my work is
doing that.
Figure 7.33 An example of a computational cup
Source: Used courtesy of Leah Buechley
Chapter 8
D A T A G A T H E R I N G
Objectives
The main goals of the chapter are to accomplish the following:
• Discuss how to plan and run a successful data gathering program.
• Enable you to plan and run an interview.
• Empower you to design a simple questionnaire.
• Enable you to plan and carry out an observation.
8.1 Introduction
Data is everywhere. Indeed, it is common to hear people say that we are drowning in data
because there is so much of it. So, what is data? Data can be numbers, words, measurements,
descriptions, comments, photos, sketches, films, videos, or almost anything that is useful for
understanding a particular design, user needs, and user behavior. Data can be quantitative or
qualitative. For example, the time it takes a user to find information on a web page and the
number of clicks to get to the information are forms of quantitative data. What the user says
about the web page is a form of qualitative data. But what does it mean to collect these and
other kinds of data? What techniques can be used, and how useful and reliable is the data
that is collected?
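To make the idea of quantitative data concrete, the short sketch below computes two such measures (time to find information and number of clicks) from a small, invented event log. The log format and event names are illustrative assumptions only, not the output of any particular logging tool:

```python
from datetime import datetime

# Hypothetical usability log: timestamped events from one user's attempt
# to find information on a web page. The format is invented for illustration.
events = [
    ("2024-05-01 10:00:00", "page_load"),
    ("2024-05-01 10:00:12", "click"),
    ("2024-05-01 10:00:25", "click"),
    ("2024-05-01 10:00:41", "click"),
    ("2024-05-01 10:00:58", "info_found"),
]

fmt = "%Y-%m-%d %H:%M:%S"
timestamps = [datetime.strptime(t, fmt) for t, _ in events]

# Two simple quantitative measures: time taken to find the information,
# and the number of clicks needed to get there.
task_time_seconds = (timestamps[-1] - timestamps[0]).total_seconds()
click_count = sum(1 for _, kind in events if kind == "click")

print(task_time_seconds)  # 58.0
print(click_count)        # 3
```

What the user says about the same web page would, by contrast, be recorded as free text and analyzed qualitatively rather than computed.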
This chapter presents some techniques for data gathering that are commonly used in
interaction design activities. In particular, data gathering is a central part of discovering
requirements and evaluation. Within the requirements activity, data gathering is conducted
to collect sufficient, accurate, and relevant data so that design can proceed. Within evalua-
tion, data gathering captures user reactions and their performance with a system or proto-
type. All of the techniques that we will discuss can be done with little to no programming or
technical skills. Recently, techniques for scraping large volumes of data from online activi-
ties, such as Twitter posts, have become available. These and other techniques for managing
huge amounts of data, and the implications of their use, are discussed in Chapter 10, “Data
at Scale.”
Three main techniques for gathering data are introduced in this chapter: interviews,
questionnaires, and observation. The next chapter discusses how to analyze and interpret
the data collected. Interviews involve an interviewer asking one or more interviewees a
set of questions, which may be highly structured or unstructured; interviews are usually
synchronous and are often face-to-face, but they don’t have to be. Increasingly, interviews
are conducted remotely using one of the many teleconferencing systems, such as Skype or
Zoom, or on the phone. Questionnaires are a series of questions designed to be answered
asynchronously, that is, without the presence of the investigator. These questionnaires may
be paper-based or available online. Observation may be direct or indirect. Direct obser-
vation involves spending time with individuals observing their activities as they happen.
Indirect observation involves making a record of the user’s activity as it happens, to be
studied at a later date. All three techniques may be used to collect qualitative or quanti-
tative data.
Although this is a small set of basic techniques, they are flexible and can be combined
and extended in many ways. Indeed, it is important not to focus on just one data gathering
technique, if possible, but to use them in combination so as to avoid biases that are inherent
in any one approach.
8.2 Five Key Issues
Five key issues require attention for any data gathering session to be successful: goal setting,
identifying participants, the relationship between the data collector and the data provider,
triangulation, and pilot studies.
8.2.1 Setting Goals
The main reason for gathering data is to glean information about users, their behavior,
or their reaction to technology. Examples include understanding how technology fits into
family life, identifying which of two icons representing “send message” is easier to use,
and finding out whether the planned redesign for a handheld meter reader is headed in the
right direction. There are many different reasons for gathering data, and before beginning,
it is important to set specific goals for the study. These goals will influence the nature of
data gathering sessions, the data gathering techniques to be used, and the analysis to be
performed (Robson and McCartan, 2016).
The goals may be expressed more or less formally, for instance, using some structured
or even mathematical format or using a simple description such as the ones in the previous
paragraph. Whatever the format, however, they should be clear and concise. In interaction
design, it is more common to express goals for data gathering informally.
8.2.2 Identifying Participants
The goals developed for the data gathering session will indicate the types of people from
whom data is to be gathered. Those people who fit this profile are called the population
or study population. In some cases, the people from whom to gather data may be clearly
identifiable—maybe because there is a small group of users and access to each one is easy.
However, it is more likely that the participants to be included in data gathering need to be
chosen, and this is called sampling. The situation where all members of the target population
are accessible is called saturation sampling, but this is quite rare. Assuming that only a por-
tion of the population will be involved in data gathering, then there are two options: prob-
ability sampling or nonprobability sampling. In the former case, the most commonly used
approaches are simple random sampling or stratified sampling; in the latter case, the most
common approaches are convenience sampling or volunteer panels.
Random sampling can be achieved by using a random number generator or by choosing
every nth person in a list. Stratified sampling relies on being able to divide the population into
groups (for example, classes in a secondary school) and then applying random sampling. Both
convenience sampling and volunteer panels rely less on choosing the participants and more on
the participants being prepared to take part. The term convenience sampling is used to describe
a situation where the sample includes those who were available rather than those specifically
selected. Another form of convenience sampling is snowball sampling, in which a current par-
ticipant finds another participant and that participant finds another, and so on. Much like a
snowball adds more snow as it gets bigger, the population is gathered up as the study progresses.
The crucial difference between probability and nonprobability methods is that in the
former you can apply statistical tests and generalize to the whole population, while in the
latter such generalizations are not robust. Using statistics also requires a sufficient number of
participants. Vera Toepoel (2016) provides a more detailed treatment of sampling, particu-
larly in relation to survey data.
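The sampling approaches described above can be sketched in a few lines of Python. The population of 100 participant IDs, the strata, and the sample sizes below are invented purely for illustration:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical study population of 100 participant IDs.
population = [f"P{i:03d}" for i in range(100)]

# Simple random sampling: every member has an equal chance of selection.
simple_random = random.sample(population, 10)

# Choosing every nth person in the list (here n = 10) is another way
# to draw a probability sample.
every_nth = population[::10]

# Stratified sampling: divide the population into groups (strata),
# then apply random sampling within each group. Strata are invented here.
strata = {
    "year_1": population[:40],
    "year_2": population[40:75],
    "year_3": population[75:],
}
stratified = {name: random.sample(group, 5) for name, group in strata.items()}

print(len(simple_random))           # 10
print(every_nth[:3])                # ['P000', 'P010', 'P020']
print([len(s) for s in stratified.values()])  # [5, 5, 5]
```

Convenience and snowball sampling, by contrast, are not driven by a selection rule like this, which is why statistical generalization from them is not robust.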
BOX 8.1
How Many Participants Are Needed?
A common question is, how many participants are needed for a study? In general, having
more participants is better because interpretations of statistical test results can be stated with
higher confidence. What this means is that any differences found among conditions are more
likely to be caused by a genuine effect rather than being due to chance.
More formally, there are many ways to determine how many participants are needed.
Four of these are saturation, cost and feasibility analysis, guidelines, and prospective power
analysis (Caine, 2016).
• Saturation relies on data being collected until no new relevant information emerges, and
so it is not possible to know the number in advance of the saturation point being reached.
• Choosing the number of participants based on cost and feasibility constraints is a prac-
tical approach and is justifiable; this kind of pragmatic decision is common in industrial
projects but rarely reported in academic research.
• Guidelines may come from experts or from “local standards,” for instance, from an accepted
norm in the field.
• Prospective power analysis is a rigorous method used in statistics that relies on existing
quantitative data about the topic; in interaction design, this data is often unavailable, such as
when a new technology is being developed, making this approach infeasible.
Kelly Caine (2016) investigated the sample size (number of participants) for papers published
at the international Computer-Human Interaction (CHI) conference in 2014. She found that
several factors affected the sample size, including the method being used and whether the data
was collected in person or remotely. In this set of papers, the sample size varied from 1 to
916,000, with the most common size being 12. A “local standard” for interaction design
would therefore suggest 12 as a rule of thumb.
8.2.3 Relationship with Participants
One significant aspect of any data gathering is the relationship between the person (people)
doing the gathering and the person (people) providing the data. Making sure that this
relationship is clear and professional will help to clarify the nature of the study. How this is
achieved varies in different countries and different settings. In the United States and the United
Kingdom, for example, it is achieved by asking participants to sign an informed consent
form, while in Scandinavia such a form is not required. The details of this form will vary, but
it usually asks the participants to confirm that the purpose of the data gathering and how the
data will be used have been explained to them and that they are willing to continue. It usually
explains that their data will be kept private and stored in a secure place. It also often includes a
statement that participants may withdraw at any time and that in this case none of their data
will be used in the study.
The informed consent form is intended to protect the interests of both the data gatherer
and the data provider. The gatherer wants to know that the data they collect can be used in
their analysis, presented to interested parties, and published in reports. The data provider
wants reassurance that the information they give will not be used for other purposes or in
any context that would be detrimental to them. For example, they want to be sure that
personal contact information and other personal details are not made public. This is especially
true when people with disabilities or children are being interviewed. In the case of children,
using an informed consent form reassures parents that their children will not be asked
threatening, inappropriate, or embarrassing questions, or be asked to look at disturbing or
violent images. In these cases, parents are asked to sign the form. Figure 8.1 shows an example
of a typical informed consent form.
This kind of consent is not generally required when gathering requirements data
for a commercial company where a contract usually exists between the data collector and
the data provider. An example is where a consultant is hired to gather data from company
staff during the course of discovering requirements for a new interactive system to support
timesheet entry. The employees of this company would be the users of the system, and the
consultant would therefore expect to have access to the employees to gather data about
the timesheet activity. In addition, the company would expect its employees to cooperate
in this exercise. In this case, there is already a contract in place that covers the data
gathering activity, and therefore an informed consent form is less likely to be required. As
with most ethical issues, the important thing is to consider the situation and make a judg-
ment based on the specific circumstances. Increasingly, projects and organizations that
collect personal data from people need to demonstrate that it is protected from unauthorized
access. For example, the European Union’s General Data Protection Regulation (GDPR)
came into force in May 2018. It applies to all EU organizations and offers the individual
unprecedented control over their personal data.
Crowdsourcing Design for Citizen Science Organizations
SHORT VERSION OF CONSENT FORM for participants at the University of Maryland –
18 YEARS AND OLDER
You are invited to participate in a research project being conducted by the researchers listed on the
bottom of the page. In order for us to be allowed to use any data you wish to provide, we must have
your consent.
In the simplest terms, we hope you will use the mobile phone, tabletop, and project website at
the University of Maryland to
• Take pictures
• Share observations about the sights you see on campus
• Share ideas that you have to improve the design of the phone or tabletop application or website
• Comment on pictures, observations, and design ideas of others
The researchers and others using CampusNet will be able to look at your comments and pictures
on the tabletop and/or website, and we may ask if you are willing to answer a few more questions
(either on paper, by phone, or face-to-face) about your whole experience. You may stop participating
at any time.
A long version of this consent form is available for your review and signature, or you may opt to
sign this shorter one by checking off all the boxes that reflect your wishes and signing and dating
the form below.
___I agree that any photos I take using the CampusNet application may be uploaded to the tabletop
at the University of Maryland and/or a website now under development.
___I agree to allow any comments, observations, and profile information that I choose to share with
others via the online application to be visible to others who use the application at the same time
or after me.
___I agree to be videotaped/audiotaped during my participation in this study.
___I agree to complete a short questionnaire during or after my participation in this study.
NAME [Please print]
SIGNATURE
DATE
[Contact information of Senior Researcher responsible for the project]
Figure 8.1 Example of an informed consent form
Incentives to take part in data gathering sessions may also be needed. For example, if
there is no clear advantage to the respondents, incentives may persuade them to take part; in
other circumstances, respondents may see it as part of their job or as a course requirement
to take part. For example, if support sales executives are asked to complete a questionnaire
about a new mobile sales application, then they are likely to agree if the new device will
impact their day-to-day lives. In this case, the motivation for providing the required informa-
tion is clear. However, when collecting data to understand how appealing a new interactive
app is for school children, different incentives would be appropriate. Here, the advantage for
individuals to take part is not so obvious.
8.2.4 Triangulation
Triangulation is a term used to refer to the investigation of a phenomenon from (at least)
two different perspectives (Denzin, 2006; Jupp, 2006). Four types of triangulation have been
defined (Jupp, 2006).
• Triangulation of data means that data is drawn from different sources at different times, in
different places, or from different people (possibly by using a different sampling technique).
• Investigator triangulation means that different researchers (observers, interviewers, and so
on) have been involved in collecting and interpreting the data.
• Triangulation of theories means the use of different theoretical frameworks through which
to view the data or findings.
• Methodological triangulation means to employ different data gathering techniques.
The last of these is the most common form of triangulation—to validate the results of
some inquiry by pointing to similar results yielded through different perspectives. However,
validation through true triangulation is difficult to achieve. Different data gathering methods
result in different kinds of data, which may or may not be compatible. Using different theo-
retical frameworks may or may not result in complementary findings, but to achieve theo-
retical triangulation would require the theories to have similar philosophical underpinnings.
Using more than one data gathering technique, and more than one data analysis approach, is
good practice because it leads to insights from the different approaches even though it may
not be achieving true triangulation.
Triangulation has sometimes been used to make up for the limitations of another type of
data collection (Mackay and Fayard, 1997). This is a different rationale from the
original idea, which has more to do with the verification and reliability of data. Furthermore,
a kind of triangulation is being used increasingly in crowdsourcing and other studies involving
large amounts of data to check that the data collected from the original study is real and
reliable. This is known as checking for “ground truth.”
For more information about GDPR and data protection law in Europe and the
United Kingdom, see:
https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/
8.2.5 Pilot Studies
A pilot study is a small trial run of the main study. The aim is to make sure that the proposed
method is viable before embarking on the real study. For example, the equipment and instruc-
tions can be checked, the questions for an interview or in a questionnaire can be tested for
clarity, and an experimental procedure can be confirmed as viable. This can identify potential
problems in advance so that they can be corrected. Distributing 500 questionnaires and then
being told that two of the questions were very confusing wastes time, annoys participants,
and is an expensive error that could be avoided by doing a pilot study.
If it is difficult to find participants or access to them is limited, asking colleagues or peers
to participate can work as an alternative for a pilot study. Note that anyone involved in a
pilot study cannot be involved in the main study itself. Why? Because they will know more
about the study and this can distort the results.
For an example of methodological triangulation, see:
https://medium.com/design-voices/the-power-of-triangulation-in-design-research-64a0957d47d2
For more information about ground truth and how ground truth databases are
used to check data obtained in autonomous driving, see “The HCI Bench Mark
Suite: Stereo and Flow Ground Truth with Uncertainties for Urban Autonomous
Driving” at https://ieeexplore.ieee.org/document/7789500/.
BOX 8.2
Data, Information, and Conclusions
There is an important difference between raw data, information, and conclusions. Data is
what you collect; this is then analyzed and interpreted and conclusions drawn. Information
is gained from analyzing and interpreting the data and conclusions represent the actions to
be taken based on the information. For example, consider a study to determine whether a
new screen layout for a local leisure center has improved the user’s experience when booking
a swimming lesson. In this case, the data collected might include a set of times to complete
the booking, user comments regarding the new screen layout, biometric readings of the user’s
heart rate while booking a lesson, and so on. At this stage, the data is raw. Information will
emerge once this raw data has been analyzed and the results interpreted. For example, analyzing
the data might indicate that users who have been using the leisure center for more than
five years find the new layout frustrating and take longer to book, while those who have been
using it for less than two years find the new layout helpful and can book lessons more quickly.
This indicates that the new layout is good for newcomers but not so good for long-term users
of the leisure center; this is information. A conclusion from this might be that a more extensive
help system is needed for more experienced users to become used to the changes.
8.3 Data Recording
Capturing data is necessary so that the results of a data gathering session can be analyzed
and shared. Some forms of data gathering, such as questionnaires, diaries, interaction
logging, scraping, and collecting work artifacts, are self-documenting, and no further
recording is necessary. For other techniques, however, there is a choice of recording
approaches. The most common of these are taking notes, taking photographs, and recording
audio or video. Often, several data recording approaches are used together. For example, an
interview may be voice recorded, and then, to help the interviewer in later analysis, a
photograph of the interviewee may be taken to remind the interviewer about the context
of the discussion.
Which data recording approaches are used will depend on the goal of the study and
how the data will be used, the context, the time and resources available, and the sensitivity
of the situation; the choice of data recording approach will affect the level of detail collected
and how intrusive the data gathering will be. In most settings, audio recording, photographs, and
notes will be sufficient. In others, it is essential to collect video data so as to record in detail
the intricacies of the activity and its context. Three common data recording approaches are
discussed next.
8.3.1 Notes Plus Photographs
Taking notes (by hand or by typing) is the least technical and most flexible way of recording
data, even if it seems old-fashioned. Handwritten notes may be transcribed in whole or
in part, and while this may seem tedious, it is usually the first step in analysis, and it gives
the analyst a good overview of the quality and contents of the data collected. Tools exist for
supporting data collection and analysis, but the advantages of handwritten notes include
that using pen and paper can be less intrusive than typing and is more flexible, for example,
for drawing diagrams of work layouts. Furthermore, researchers often comment that writing
notes helps them to focus on what is important and starts them thinking about what the
data is telling them. The disadvantages of notes include that it can be difficult to capture
the right highlights, and it can be tiring to write and listen or observe at the same time.
It is easy to lose concentration, biases creep in, handwriting can be difficult to decipher, and
the speed of writing is limited. Working with a colleague can reduce some of these problems
while also providing another perspective.
If appropriate, photograph(s) and short videos (captured via smartphones or other hand-
held devices) of artifacts, events, and the environment can supplement notes and hand-drawn
sketches, providing that permission has been given to collect data using these approaches.
8.3.2 Audio Plus Photographs
Audio recording is a useful alternative to note-taking and is less intrusive than video. Dur-
ing observation, it allows observers to focus on the activity rather than on trying to capture
every spoken word. In an interview, it allows the interviewer to pay more attention to the
interviewee rather than trying to take notes as well as listening. It isn’t always necessary to
transcribe all of the data collected—often only sections are needed, depending on the goals
of the study. Many studies do not need a great level of detail, and instead recordings are used
as a reminder and as a source of anecdotes for reports. It is surprising how evocative audio
recordings of people or places from the data session can be, and those memories provide
added context to the analysis. If audio recording is the main or only data collection tech-
nique, then the quality needs to be good; performing interviews remotely, for example using
Skype, can be compromised because of poor connections and acoustics. Audio recordings are
often supplemented with photographs.
8.3.3 Video
Smartphones can be used to collect short video clips of activity. They are easy to use and
less obtrusive than setting up sophisticated cameras. But there are occasions when a video is
needed for long periods of time or when holding a phone is unreliable, for example, recording
how designers collaborate in a workshop or how teens interact in a “makerspace,”
in which people can work on projects while sharing ideas, equipment, and knowledge. For
these kinds of sessions, more professional video equipment that clearly captures both visual
and audio data is more appropriate. Other ways of recording facial expressions together with
verbal comments are also being used, such as GoToMeeting, which can be operated both in-
person and remotely. Using such systems can create additional planning issues that have to
be addressed to minimize how intrusive the recording is, while at the same time making sure
that the data is of good quality (Denzin and Lincoln, 2011). When considering whether to use
a camera, Heath et al. (2010) suggest the following issues to consider:
• Deciding whether to fix the camera’s position or use a roving recorder. This decision
depends on the activity being recorded and the purpose to which the video data will be put,
for example, for illustrative purposes only or for detailed data analysis. In some cases, such
as pervasive games, a roving camera is the only way to capture the required action. For
some studies, the video on a smartphone may be adequate and require less effort to set up.
• Deciding where to point the camera in order to capture what is required. Heath and his
colleagues suggest carrying out fieldwork for a short time before starting to video record
in order to become familiar with the environment and be able to identify suitable recording
locations. Involving the participants themselves in deciding what and where to record also
helps to capture relevant action.
• Understanding the impact of the recording on participants. It is often assumed that video
recording will have an impact on participants and their behavior. However, it is worth taking
an empirical approach to this issue and examining the data itself to see whether there is any
evidence of people changing their behavior such as orienting themselves toward the camera.
ACTIVITY 8.1
Imagine that you are a consultant who is employed to help develop a new augmented reality
garden planning tool to be used by amateur and professional garden designers. The goal is
to find out how garden designers use an early prototype as they walk around their clients’
gardens sketching design ideas, taking notes, and asking the clients about what they like and
how they and their families use the garden. What are the advantages and disadvantages of the
three approaches (note-taking, audio recording with photographs, and video) for data recording
in this environment?
Comment
Handwritten notes do not require specialized equipment. They are unobtrusive and flexible
but difficult to do while walking around a garden. If it starts to rain, there is no equipment to
get wet, but notes may get soggy and difficult to read (and write!). Garden planning is a highly
visual, aesthetic activity, so supplementing notes with photographs would be appropriate.
Video captures more information, for example, continuous panoramas of the landscape,
what the designers are seeing, sketches, comments, and so on, but it is more intrusive and will
also be affected by the weather. Short video sequences recorded on a smartphone may be
sufficient, as the video is unlikely to be used for detailed analysis. Audio may be a good
compromise, but synchronizing audio with activities such as looking at sketches and other
artifacts later can be tricky and error prone.
8.4 Interviews
Interviews can be thought of as a “conversation with a purpose” (Kahn and Cannell, 1957). How
much like an ordinary conversation the interview will be depends on the type of interview. There
are four main types of interviews: open-ended or unstructured, structured, semi-structured, and
group interviews (Fontana and Frey, 2005). The first three types are named according to how
much control the interviewer imposes on the conversation by following a predetermined set of
questions. The fourth type, which is often called a focus group, involves a small group guided by
a facilitator. The facilitation may be quite informal or follow a structured format.
The most appropriate approach to interviewing depends on the purpose of the interview,
the questions to be addressed, and the interaction design activity. For example, if the goal is first
to gain impressions about users’ reactions to a new design concept, then an informal, open-ended
interview is often the best approach. But if the goal is to get feedback about a particular
design feature, such as the layout of a new web browser, then a structured interview or
questionnaire is often better. This is because the goals and questions are more specific in the
latter case.
8.4.1 Unstructured Interviews
Open-ended or unstructured interviews are at one end of a spectrum of how much control the
interviewer has over the interview process. They are exploratory and are similar to conversations
around a particular topic; they often go into considerable depth. Questions posed by
the interviewer are open, meaning that there is no particular expectation about the format or
content of answers. For example, the first question asked of all participants might be: “What
are the pros and cons of having a wearable?” Here, the interviewee is free to answer as fully
or as briefly as they want, and both the interviewer and interviewee can steer the interview.
For example, often the interviewer will say: “Can you tell me a bit more about . . .” This is
referred to as probing.
Despite being unstructured and open, the interviewer needs a plan of the main topics to
be covered so that they can make sure that all of the topics are discussed. Going into an inter-
view without an agenda should not be confused with being open to hearing new ideas (see
section 8.4.5, “Planning and Conducting an Interview”). One of the skills needed to conduct
an unstructured interview is getting the balance right between obtaining answers to relevant
questions and being prepared to follow unanticipated lines of inquiry.
A benefit of unstructured interviews is that they generate rich data that is often interre-
lated and complex, that is, data that provides a deep understanding of the topic. In addition,
interviewees may mention issues that the interviewer has not considered. A lot of unstruc-
tured data is generated, and the interviews will not be consistent across participants since
each interview takes on its own format. Unstructured interviews can be time-consuming
to analyze, but they can also produce rich insights. Themes can be identified across inter-
views using techniques from grounded theory and other analytic approaches, as discussed in
Chapter 9, “Data Analysis, Interpretation, and Presentation.”
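To make the idea of cross-interview themes concrete, suppose each transcript has already been hand-coded with theme labels (the coding itself is the hard, human part, covered in Chapter 9). Counting how many interviews each label appears in is then mechanical; the sketch below is illustrative, and the theme labels are invented:

```python
from collections import Counter

def tally_themes(coded_transcripts):
    """Count how many interviews each hand-assigned theme label
    appears in, across a set of coded transcripts."""
    counts = Counter()
    for labels in coded_transcripts:
        counts.update(set(labels))  # count each theme once per interview
    return counts

# Hypothetical codes from three unstructured interviews
transcripts = [
    ["privacy", "battery life", "privacy"],
    ["battery life", "comfort"],
    ["privacy", "comfort"],
]
themes = tally_themes(transcripts)
# Every theme here recurs in two of the three interviews
```

A tally like this only surfaces candidate themes; deciding what counts as a theme, and whether two labels mean the same thing, remains an analytic judgment.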
8.4.2 Structured Interviews
In structured interviews, the interviewer asks predetermined questions similar to those in
a questionnaire (see section 8.5, “Questionnaires”), and the same questions are used with
each participant so that the study is standardized. The questions need to be short and clearly
worded, and they are typically closed questions, which means that they require an answer
from a predetermined set of alternatives. (This may include an “other” option, but ideally
this would not be chosen often.) Closed questions work well if the range of possible answers
is known or if participants don’t have much time. Structured interviews are useful only when
the goals are clearly understood and specific questions can be identified. Example questions
for a structured interview might be the following:
• “Which of the following websites do you visit most frequently: Amazon.com, Google.com,
or msn.com?”
• “How often do you visit this website: every day, once a week, once a month, less often than
once a month?”
• “Do you ever purchase anything online: Yes/No? If your answer is Yes, how often do you
purchase things online: every day, once a week, once a month, less frequently than once
a month?”
Questions in a structured interview are worded the same for each participant and are
asked in the same order.
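To make the idea of standardization concrete, a structured interview script can be thought of as data: fixed question text plus a fixed set of alternatives, applied identically to every participant. A minimal Python sketch (the representation is illustrative, not from the book; the question text is taken from the examples above):

```python
def make_closed_question(text, options, allow_other=False):
    """Represent a closed question as its text plus a predetermined
    set of permitted answers, optionally with an 'other' option."""
    opts = list(options) + (["other"] if allow_other else [])
    return {"text": text, "options": opts}

def record_answer(question, answer):
    """Accept an answer only if it is one of the predetermined
    alternatives, so responses are comparable across participants."""
    if answer not in question["options"]:
        raise ValueError(f"{answer!r} is not a permitted alternative")
    return answer

q = make_closed_question(
    "How often do you visit this website?",
    ["every day", "once a week", "once a month",
     "less often than once a month"],
)
record_answer(q, "once a week")  # accepted: it is in the fixed set
```

Keeping the alternatives in the script, rather than in the interviewer's head, is what makes the interview "structured": every participant sees the same wording, the same options, in the same order.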
8.4.3 Semi-structured Interviews
Semi-structured interviews combine features of structured and unstructured interviews and
use both closed and open questions. The interviewer has a basic script for guidance so that
the same topics are covered with each interviewee. The interviewer starts with preplanned
8 DATA GATHERING 270
questions and then probes the interviewee to say more until no new relevant information is
forthcoming. Here’s an example:
Which music websites do you visit most frequently?
Answer: Mentions several but stresses that they prefer hottestmusic.com
Why?
Answer: Says that they like the site layout
Tell me more about the site layout.
Answer: Silence, followed by an answer describing the site’s layout
Anything else that you like about the site?
Answer: Describes the animations
Thanks. Are there any other reasons for visiting this site so often that you haven’t mentioned?
It is important not to pre-empt an answer by phrasing a question to suggest that a par-
ticular answer is expected. For example, “You seemed to like this use of color . . .” assumes
that this is the case and will probably encourage the interviewee to answer that this is true
so as not to offend the interviewer. Children are particularly prone to behave in this way (see Box 8.3, “Working with Different Kinds of Users”). The body language of the interviewer, for
example whether they are smiling, scowling, looking disapproving, and so forth, can have a
strong influence on whether the interviewee will agree with a question, and the interviewee
needs to have time to speak and not be rushed.
Probes are a useful device for getting more information, especially neutral probes such as “Do you want to tell me anything else?” Prompts, which remind interviewees of terms or names they have forgotten, also help to move the interview along. Semi-structured interviews are intended to be broadly replicable, so probing and prompting aim to advance the interview without introducing bias.
BOX 8.3
Working with Different Kinds of Users
Focusing on the needs of users and including users in the design process is a central theme of
this book. But users vary considerably based on their age, educational, life, and cultural experi-
ences, and physical and cognitive abilities. For example, children think and react to situations
differently than adults. Therefore, if children are to be included in data gathering sessions, then
child-friendly methods are needed to make them feel at ease so that they will communicate with
you. For very young children of pre-reading or early reading age, data gathering sessions need
to rely on images and chat rather than written instructions or questionnaires. Researchers who
work with children have developed sets of “smileys,” such as those shown in Figure 8.2, so that
children can select the one that most closely represents their feelings (see Read et al., 2002).
Awful – Not very good – Good – Really good – Brilliant
Figure 8.2 A smileyometer gauge for early readers
Source: Read et al. (2002)
Similarly, different approaches are needed when working with users from different cultures (Winschiers-Theophilus et al., 2012). In their work with local communities in Namibia, Heike Winschiers-Theophilus and Nicola Bidwell (2013) had to find ways of communicating with local participants, which included developing a variety of visual and other techniques to communicate ideas and collect data about the collective understanding and feelings inherent in the local cultures of the people with whom they worked.
Laurianne Sitbon and Shanjana Farhin (2017) report a study in which researchers interacted with people with intellectual disabilities, where they involved caregivers who knew each participant well and could appropriately make the researchers’ questions more concrete. This made it more understandable for the participants. An example of this was when the interviewer assumed that the participant understood the concept of a phone app to provide information about bus times. The caregiver made their questions more concrete for the participant by relating the concept of the phone app to familiar people and circumstances and bringing in a personal example (for instance, “So you don’t have to ring your mom to say ‘Mom, I am lost’”).
Another group of technology users is studied by the field of Animal-Computer Interaction (Mancini et al., 2017). Data gathering with animals poses additional and different challenges. For example, in their study of dogs’ attention to TV screens, Ilyena Hirskyj-Douglas et al. (2017) used a combination of observation and tracking equipment to capture when a dog turns their head. But interpreting the data, or checking that the interpretation is accurate, requires animal behavior expertise.
The examples in Box 8.3 demonstrate that technology developers need to adapt their data collection techniques to suit the participants with whom they work. As the saying goes, “One size doesn’t fit all.”
8.4.4 Focus Groups
Interviews are often conducted with one interviewer and one interviewee, but it is also common to interview people in groups. One form of group interview that is sometimes used in interaction design activities is the focus group. Normally, three to ten people are involved, and the discussion is led by a trained facilitator. Participants are selected to provide a representative sample of the target population. For example, in the evaluation of a university website, a group of administrators, faculty, and students may form three separate focus groups because they use the web for different purposes. In requirements activities, a focus group may be held in order to identify conflicts in expectations or terminology from different stakeholders.
The benefit of a focus group is that it allows diverse or sensitive issues to be raised that
might otherwise be missed, for example in the requirements activity to understand multiple
points within a collaborative process or to hear different user stories (Unger and Chandler,
2012). The method is more appropriate for investigating shared issues rather than individual
experiences. Focus groups enable people to put forward their own perspectives. A preset
agenda is developed to guide the discussion, but there is sufficient flexibility for the facili-
tator to follow unanticipated issues as they are raised. The facilitator guides and prompts
discussion, encourages quiet people to participate, and stops verbose ones from dominating
the discussion. The discussion is usually recorded for later analysis, and participants may be
invited to explain their comments more fully at a later date.
The format of focus groups can be adapted to fit within local cultural settings. For
example, a study with the Mbeere people of Kenya aimed to find out how water was being
used, any plans for future irrigation systems, and the possible role of technology in water
management (Warrick et al., 2016). The researcher met with the elders from the commu-
nity, and the focus group took the form of a traditional Kenyan “talking circle,” in which
the elders sit in a circle and each person gives their opinions in turn. The researcher, who
was from the Mbeere community, knew that it was impolite to interrupt or suggest that the
conversation needed to move along, because traditionally each person speaks for as long as
they want.
8.4.5 Planning and Conducting an Interview
Planning an interview involves developing the set of questions or topics to be covered, col-
lating any documentation to give to the interviewee (such as consent form or project descrip-
tion), checking that recording equipment works, structuring the interview, and organizing a
suitable time and place.
Developing Interview Questions
Questions may be open-ended (or open) or closed-ended (or closed). Open questions are best
suited where the goal of the session is exploratory; closed questions are best suited where
the possible answers are known in advance. An unstructured interview will usually consist
mainly of open questions, while a structured interview will usually consist of closed ques-
tions. A semi-structured interview may use a combination of both types.
Focus groups can be useful, but only if used for the right kind of activities. For a discussion of when focus groups don’t work, see the following links:
https://www.nomensa.com/blog/2016/are-focus-groups-useful-research-technique-ux
http://gerrymcgovern.com/why-focus-groups-dont-work/
The following guidelines help in developing interview questions (Robson and
McCartan, 2016):
• Long or compound questions can be difficult to remember or confusing, so split them
into two separate questions. For example, instead of “How do you like this smartphone
app compared with previous ones that you have used?” say, “How do you like this
smartphone app?” “Have you used other smartphone apps?” If so, “How did you like
them?” This is easier for the interviewee to respond to and easier for the interviewer
to record.
• Interviewees may not understand jargon or complex language and might be too embar-
rassed to admit it, so explain things to them in straightforward ways.
• Try to keep questions neutral, both when preparing the interview script and in conversa-
tion during the interview itself. For example, if you ask “Why do you like this style of
interaction?” this question assumes that the person does like it and will discourage some
interviewees from stating their real feelings.
DILEMMA
What They Say and What They Do
What users say isn’t always what they do. People sometimes give the answers that they think
show them in the best light, they may have forgotten what happened, or they may want to
please the interviewer by answering in the way they think will satisfy them. This may be
problematic when the interviewer and interviewee don’t know each other, especially if the
interview is being conducted remotely by Skype, Cisco Webex, or another digital conferenc-
ing system.
For example, Yvonne Rogers et al. (2010) conducted a study to investigate whether a
set of twinkly lights embedded in the floor of an office building could persuade people to
take the stairs rather than the lift (or elevator). In interviews, participants told the research-
ers that they did not change their behavior but logged data showed that their behavior did,
in fact, change significantly. So, can interviewers believe all of the responses they get? Are the
respondents telling the truth, or are they simply giving the answers that they think the inter-
viewer wants to hear?
It isn’t possible to avoid this behavior, but an interviewer can be aware of it and reduce
such biases by choosing questions carefully, by getting a large number of participants, or by
using a combination of data gathering techniques.
ACTIVITY 8.2
Several devices are available for reading ebooks, watching movies, and browsing photographs
(see Figure 8.3). The design differs between makes and models, but they are all aimed at pro-
viding a comfortable user experience. An increasing number of people also read books and
watch movies on their smartphones, and they may purchase phones with larger screens for
this purpose.
The developers of a new device for reading books online want to find out how appeal-
ing it will be to young people aged 16–18, so they have decided to conduct some interviews.
1. What is the goal of this data gathering session?
2. Suggest ways of recording the interview data.
Figure 8.3 (a) Sony’s eReader, (b) Amazon’s Kindle, (c) Apple’s iPad, and (d) Apple’s iPhone
Source: (a) Sony Europe Limited, (b) Martyn Landi / PA Archive / PA Images, (c) Mark Lennihan / AP Images, and (d) Helen Sharp
3. Suggest a set of questions for use in an unstructured interview that seeks to understand the appeal of reading books online to young people in the 16–18 year old age group.
4. Based on the results of the unstructured interviews, the developers of the new device have found that an important acceptance factor is whether the device can be handled easily. Write a set of semi-structured interview questions to evaluate this aspect based on an initial prototype and run a pilot interview with two of your peers. Ask them to comment on your questions and refine them based on their comments.
Comment
1. The goal is to understand what makes devices for reading books online appealing to people aged 16–18.
2. Audio recording will be less cumbersome and distracting than taking notes, and all important points will be captured. Video recording is not needed in this initial interview as it isn’t necessary to capture any detailed interactions. However, it would be useful to take photographs of any devices referred to by the interviewee.
3. Possible questions include the following: Why do you read books online? Do you ever read print-based books? If so, what makes you choose to read online versus a print-based format? Do you find reading a book online comfortable? In what way(s) does reading online versus reading from print affect your ability to become engrossed in the story you are reading?
4. Semi-structured interview questions may be open or closed-ended. Some closed-ended questions that you might ask include the following:
• Have you used any kind of device for reading books online before?
• Would you like to read a book online using this device?
• In your opinion, is the device easy to handle?
Some open-ended questions, with follow-on probes, include the following:
• What do you like most about the device? Why?
• What do you like least about the device? Why?
• Please give me an example of where the device was uncomfortable or difficult to use.
It is helpful when collecting answers to closed-ended questions to list possible responses together with boxes that can be checked. Here’s one way to convert some of the questions from Activity 8.2:
1. Have you used a device for reading books online before? (Explore previous knowledge.)
Interviewer checks box: □ Yes □ No □ Don’t remember/know
2. Would you like to read a book using a device designed for reading online? (Explore initial reaction; then explore the response.)
Interviewer checks box: □ Yes □ No □ Don’t know
3. Why?
If response is “Yes” or “No,” interviewer asks, “Which of the following statements represents your feelings best?”
For “Yes,” interviewer checks one of these boxes:
⬜ I don’t like carrying heavy books.
⬜ This is fun/cool.
⬜ My friend told me they are great.
⬜ It’s the way of the future.
⬜ Another reason (interviewer notes the reason).
For “No,” interviewer checks one of these boxes:
⬜ I don’t like using gadgets if I can avoid it.
⬜ I can’t read the screen clearly.
⬜ I prefer the feel of paper.
⬜ Another reason (interviewer notes the reason).
4. In your opinion, is the device for reading online easy to handle or cumbersome?
Interviewer checks one of these boxes:
⬜ Easy to handle
⬜ Cumbersome
⬜ Neither
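Once each interviewee’s checked box has been recorded, tallying the responses across participants is mechanical. A minimal Python sketch (the helper and the sample data are illustrative, not from the book; the reason labels are abbreviated from the checklists above):

```python
def tally_checkboxes(responses):
    """Count how many interviewees' checked box fell on each
    predetermined alternative."""
    counts = {}
    for choice in responses:
        counts[choice] = counts.get(choice, 0) + 1
    return counts

# Hypothetical recorded answers to the "Yes" follow-up question
yes_reasons = [
    "I don't like carrying heavy books.",
    "This is fun/cool.",
    "I don't like carrying heavy books.",
    "It's the way of the future.",
]
tally = tally_checkboxes(yes_reasons)
```

Because every answer comes from the same fixed set of boxes, the counts are directly comparable across interviewees, which is exactly what closed-ended formats buy you.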
Running the Interview
Before starting, make sure that the goals of the interview have been explained to the inter-
viewee and that they are willing to proceed. Finding out about the interviewee and their
environment before the interview will make it easier to put them at ease, especially if it is an
unfamiliar setting.
During the interview, it is better to listen more than to talk, to respond with sympathy
but without bias, and to appear to enjoy the interview. The following is a common sequence
for an interview (Robson and McCartan, 2016):
1. An introduction in which the interviewer introduces themselves and explains why they
are doing the interview, reassures interviewees regarding any ethical issues, and asks
if they mind being recorded, if appropriate. This should be exactly the same for each
interviewee.
2. A warm-up session where easy, nonthreatening questions come first. These may include
questions about demographic information, such as “What area of the country do you
live in?”
3. A main session in which the questions are presented in a logical sequence, with the more
probing ones at the end. In a semi-structured interview, the order of questions may vary
between participants, depending on the course of the conversation, how much probing is
done, and what seems more natural.
4. A cooling-off period consisting of a few easy questions (to defuse any tension that may
have arisen).
5. A closing session in which the interviewer thanks the interviewee and switches off the
recorder or puts their notebook away, signaling that the interview has ended.
8.4.6 Other Forms of Interview
Conducting face-to-face interviews and focus groups can be impractical, but the prevalence
of Skype, Cisco WebEx, Zoom, and other digital conferencing systems, email, and phone-
based interactions (voice or chat), sometimes with screen-sharing software, make remote
interviewing a good alternative. These are carried out in a similar fashion to face-to-face
sessions, but poor connections and acoustics can cause different challenges, and participants
may be tempted to multitask rather than focus on the session at hand. Advantages of remote
focus groups and interviews, especially when done through audio-only channels, include the
following:
• The participants are in their own environment and are more relaxed.
• Participants don’t have to travel.
• Participants don’t need to worry about what they wear.
• For interviews involving sensitive issues, interviewees can remain anonymous.
In addition, participants can leave the conversation whenever they want to by just cut-
ting the connection, which adds to their sense of security. From the interviewer’s perspective,
a wider set of participants can be reached easily, but a potential disadvantage is that the
facilitator does not have a good view of the interviewees’ body language.
Retrospective interviews, that is, interviews that reflect on an activity or a data gathering
session in the recent past, may be conducted with participants to check that the interviewer
has correctly understood what was happening. This is a common practice in observational
studies where it is sometimes referred to as member checking.
8.4.7 Enriching the Interview Experience
Face-to-face interviews often take place in a neutral location away from the interviewee’s
normal environment. This creates an artificial context, and it can be difficult for interviewees
to give full answers to the questions posed. To help combat this, interviews can be enriched
by using props such as personas, prototypes, or work artifacts that the interviewee or interviewer brings along, or descriptions of common tasks (examples of these kinds of props are
scenarios and prototypes, which are covered in Chapter 11, “Discovering Requirements,”
and Chapter 12, “Design, Prototyping, and Construction”). These props can be used to pro-
vide context for the interviewees and help to ground the data in a real setting. Figure 8.4
illustrates the use of personas in a focus group setting.
For more information and some interesting thoughts on remote usability testing,
see http://www.uxbooth.com/articles/hidden-benefits-remote-research/.
As another example, Clara Mancini et al. (2009) used a combination of questionnaire
prompts and deferred contextual interviews when investigating mobile privacy. A simple
multiple-choice questionnaire was sent electronically to the participants’ smartphones, and
they answered the questions using these devices. Interviews about the recorded events were
conducted later, based on the questionnaire answers given at the time of the event.
8.5 Questionnaires
Questionnaires are a well-established technique for collecting demographic data and users’
opinions. They are similar to interviews in that they can have closed or open-ended questions,
but once a questionnaire is produced, it can be distributed to a large number of participants
without requiring additional data gathering resources. Thus, more data can be collected than
would normally be possible in an interview study. Furthermore, participants who are located
in remote locations or those who cannot attend an interview at a particular time can be
involved more easily. Often a message is sent electronically to potential participants directing
them to an online questionnaire.
Effort and skill are needed to ensure that questions are clearly worded and the data col-
lected can be analyzed efficiently. Well-designed questionnaires are good for getting answers
to specific questions from a large group of people. Questionnaires can be used on their own
or in conjunction with other methods to clarify or deepen understanding. For example, information obtained through interviews with a small selection of interviewees might be corroborated by sending a questionnaire to a wider group to confirm the conclusions.
Figure 8.4 Enriching a focus group with personas displayed on the wall for all participants to see
Questionnaire questions and structured interview questions are similar, so which technique
is used when? Essentially, the difference lies in the motivation of the respondent to answer the
questions. If their motivation is high enough to complete a questionnaire without anyone else
present, then a questionnaire will be appropriate. On the other hand, if the respondents need
some persuasion to answer the questions, a structured interview format would be better. For
example, structured interviews are easier and quicker to conduct if people will not stop to com-
plete a questionnaire, such as at a train station or while walking to their next meeting.
It can be harder to develop good questionnaire questions compared with structured
interview questions because the interviewer is not available to explain them or to clarify any
ambiguities. Because of this, it is important that questions are specific; when possible, ask
closed-ended questions and offer a range of answers, including a “no opinion” or “none of
these” option. Finally, use negative questions carefully, as they can be confusing and may lead
to false information. Some questionnaire designers, however, use a mixture of negative and
positive questions deliberately because it helps to check the users’ intentions.
8.5.1 Questionnaire Structure
Many questionnaires start by asking for basic demographic information (gender, age, place
of birth) and details of relevant experience (the number of hours a day spent searching on the
Internet, the level of expertise within the domain under study, and so on). This background
information is useful for putting the questionnaire responses into context. For example, if two
responses conflict, these different perspectives may be because of their level of experience—a
group of people who are using a social networking site for the first time are likely to express
different opinions than another group with five years’ experience of using such sites. However,
only contextual information that is relevant to the study goal needs to be collected. For exam-
ple, it is unlikely that a person’s height will provide relevant context to their responses about
Internet use, but it might be relevant for a study concerning wearables.
Specific questions that contribute to the data-gathering goal usually follow these demo-
graphic questions. If the questionnaire is long, the questions may be subdivided into related
topics to make it easier and more logical to complete.
The following is a checklist of general advice for designing a questionnaire:
• Think about the ordering of questions. The impact of a question can be influenced by
question order.
• Consider whether different versions of the questionnaire are needed for different populations.
• Provide clear instructions on how to complete the questionnaire, for example, whether
answers can be saved and completed later. Aim for both careful wording and good typography.
• Think about the length of the questionnaire, and avoid questions that don’t address the
study goals.
• If the questionnaire has to be long, consider allowing respondents to opt out at different stages.
It is usually better to get answers to some sections than no answers at all because of dropout.
• Think about questionnaire layout and pacing; for instance, strike a balance between using
white space, or individual web pages, and the need to keep the questionnaire as compact
as possible.
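On the first point in the checklist, one common tactic for checking order effects (an illustration of standard practice, not a procedure prescribed by the text) is to counterbalance: give different respondent groups differently ordered versions of the substantive questions, then compare their answers. A minimal sketch:

```python
import random

def ordered_versions(questions, n_versions, seed=0):
    """Generate several versions of a questionnaire with the
    substantive questions shuffled, so that order effects can be
    checked across respondent groups."""
    rng = random.Random(seed)  # fixed seed keeps versions reproducible
    versions = []
    for _ in range(n_versions):
        qs = list(questions)
        rng.shuffle(qs)
        versions.append(qs)
    return versions

# Hypothetical question identifiers; demographics would stay first
versions = ordered_versions(["Q1", "Q2", "Q3"], n_versions=2)
```

If responses to, say, Q2 differ noticeably between versions, the question order, rather than the question itself, may be driving the answers.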
8.5.2 Question and Response Format
Different formats of question and response can be chosen. For example, with a closed-ended
question, it may be appropriate to indicate only one response, or it may be appropriate to
indicate several. Sometimes, it is better to ask users to locate their answer within a range.
Selecting the most appropriate question and response format makes it easier for respondents
to answer clearly. Some commonly used formats are described next.
Check Boxes and Ranges
The range of answers to demographic questions is predictable. Nationality, for example, has
a finite number of alternatives, and asking respondents to choose a response from a prede-
fined list makes sense for collecting this information. A similar approach can be adopted if
details of age are needed. But since some people do not like to give their exact age, many
questionnaires ask respondents to specify their age as a range. A common design error arises
when the ranges overlap. For example, specifying two ranges as 15–20 and 20–25 will cause
confusion; that is, which box do people who are 20 years old check? Making the ranges
15–19 and 20–24 avoids this problem.
A frequently asked question about ranges is whether the interval must be equal in all
cases. The answer is no—it depends on what you want to know. For example, people who
might use a website about life insurance are likely to be employed individuals who are 21–65
years old. The question could, therefore, have just three ranges: under 21, 21–65, and over 65.
In contrast, to see how the population’s political views vary across generations might require
10-year cohort groups for people over 21, in which case the following ranges would be
appropriate: under 21, 21–30, 31–40, and so forth.
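Both design rules above (no overlapping ranges, intervals chosen to fit the question) can be enforced mechanically. A small Python sketch, under the assumption that each bucket is defined by the lowest age of every bucket after the first, so overlaps such as 15–20 and 20–25 cannot arise by construction:

```python
import bisect

def bucket_for_age(age, boundaries, labels):
    """Assign an age to exactly one bucket. `boundaries` holds the
    lowest age of each bucket after the first, so ranges cannot
    overlap the way 15-20 and 20-25 do."""
    return labels[bisect.bisect_right(boundaries, age)]

# Unequal ranges from the life-insurance example:
# under 21, 21-65, and over 65
labels = ["under 21", "21-65", "over 65"]
boundaries = [21, 66]  # first ages of the second and third buckets

bucket_for_age(20, boundaries, labels)  # "under 21"
bucket_for_age(21, boundaries, labels)  # "21-65"
```

Note that a 20-year-old falls unambiguously into one bucket, which is precisely what the non-overlapping 15–19 / 20–24 style of range guarantees on a paper questionnaire.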
Rating Scales
There are a number of different types of rating scales, each with its own purpose (see Oppen-
heim, 2000). Two commonly used scales are the Likert and semantic differential scales. Their
purpose is to elicit a range of responses to a question that can be compared across respondents.
They are good for getting people to make judgments, such as how easy, how usable, and the like.
Likert scales rely on identifying a set of statements representing a range of possible opin-
ions, while semantic differential scales rely on choosing pairs of words that represent the
range of possible opinions. Likert scales are more commonly used because identifying suitable
statements that respondents will understand consistently is easier than identifying semantic
pairs that respondents interpret as intended.
Likert Scales
Likert scales are used for measuring opinions, attitudes, and beliefs, and consequently they
are widely used for evaluating user satisfaction with products. For example, users’ opinions
about the use of color in a website could be evaluated with a Likert scale using a range of
numbers, as in question 1 here, or with words as in question 2:
1. The use of color is excellent (where 1 represents strongly agree and 5 represents strongly
disagree):
1 2 3 4 5
□ □ □ □ □
2. The use of color is excellent:
Strongly agree Agree OK Disagree Strongly disagree
□ □ □ □ □
In both cases, respondents would be asked to choose the right box, number, or phrase.
Designing a Likert scale involves the following steps:
1. Gather a pool of short statements about the subject to be investigated. Examples are “This
control panel is clear” and “The procedure for checking credit rating is too complex.”
A brainstorming session with peers is a good way to identify key aspects to be investigated.
2. Decide on the scale. There are three main issues to be addressed here: How many points
does the scale need? Should the scale be discrete or continuous? How can the scale be rep-
resented? See Box 8.4 What Scales to Use: Three, Five, Seven, or More? for more on this.
3. Select items for the final questionnaire, and reword as necessary to make them clear.
In the first example above, the scale is arranged with 1 as the highest choice on the left
and 5 as the lowest choice on the right. The logic for this is that first is the best place to be
in a race and fifth would be the worst place. While there is no absolute right or wrong way
of ordering the numbers, other researchers prefer to arrange the scales the other way around,
with 1 as the lowest on the left and 5 as the highest on the right. They argue that intuitively
the highest number suggests the best choice and the lowest number suggests the worst choice.
Another reason for going from lowest to highest is that when the results are reported, it is
more intuitive for readers to see high numbers representing the best choices. The important
thing is to be consistent.
Semantic Differential Scales
Semantic differential scales explore a range of bipolar attitudes about a particular item, each
of which is represented as a pair of adjectives. The participant is asked to choose a point
between the two extremes to indicate agreement with the poles, as shown in Figure 8.5. The
score for the investigation is found by summing the scores for each bipolar pair. Scores are
then computed across groups of participants. Notice that in this example the poles are mixed
so that good and bad features are distributed on the right and the left. In this example, there
are seven positions on the scale.
Ugly … Attractive
Confusing … Clear
Colorful … Dull
Boring … Exciting
Pleasing … Annoying
Unhelpful … Helpful
Poor … Well designed
Figure 8.5 An example of a semantic differential scale
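Because the poles are mixed, items whose positive pole sits on the left must be reverse-scored before summing, so that a high total consistently means a more positive judgment. A minimal scoring sketch (the function and item names are our own, not from the text):

```python
def score_semantic_differential(responses, reversed_items, n_points=7):
    """Sum scores across bipolar pairs, reverse-coding items whose
    positive pole is on the left, so a higher total is always more positive.

    responses: dict mapping item name -> raw position (1..n_points,
               counted from the left end of the scale).
    reversed_items: set of item names whose scale runs positive-to-negative.
    """
    total = 0
    for item, raw in responses.items():
        if not 1 <= raw <= n_points:
            raise ValueError(f"{item}: position {raw} outside 1..{n_points}")
        # Reverse-code: position 1 becomes n_points, 2 becomes n_points-1, etc.
        total += (n_points + 1 - raw) if item in reversed_items else raw
    return total
```

For instance, a mark two steps from the left on a reversed seven-point item contributes 6, the same as a mark two steps from the right on an unreversed item.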
8 Data Gathering
BOX 8.4
What Scales to Use: Three, Five, Seven, or More?
Issues to address when designing Likert and semantic differential scales include the following:
how many points are needed on the scale, how should they be presented, and in what form?
Many questionnaires use seven- or five-point scales, and there are also three-point scales.
Some even use nine-point scales. Arguments for the number of points go both ways. Advocates
of long scales argue that they help to show discrimination. Rating features on an interface is
more difficult for most people than, say, selecting among different flavors of ice cream, and
when the task is difficult, there is evidence to show that people “hedge their bets.” Rather
than selecting the poles of the scale when there is no clear right or wrong answer, respondents
tend to select values nearer the center. The counterargument is that people cannot be expected to discern
accurately among points on a large scale, so any scale of more than five points is unnecessarily
difficult to use.
Another aspect to consider is whether to give the scale an even or odd number of points.
An odd number provides a clear central point, while an even number forces participants to
decide and prevents them from sitting on the fence.
We suggest the following guidelines:
How many points on the scale?
Use a small number, three, for example, when the possibilities are very limited, as in Yes/No
type answers.
Yes Don’t know No
□ □ □
Use a medium-sized range, five, for example, when making judgments that involve like/
dislike or agree/disagree statements.
Strongly agree Agree OK Disagree Strongly disagree
□ □ □ □ □
Use a longer range, seven or nine, for example, when asking respondents to make subtle
judgments, such as when asking about a user experience dimension such as “level of appeal”
of a character in a video game.
very appealing ok repulsive
Discrete or continuous?
Use boxes for discrete choices and scales for finer judgments.
What order?
Decide which way to order the scale, and be consistent.
8.5.3 Administering Questionnaires
Two important issues when using questionnaires are reaching a representative sample of par-
ticipants and ensuring a reasonable response rate. For large surveys, potential respondents
need to be selected using a sampling technique. However, interaction designers commonly use
a small number of participants, often fewer than 20 users. Completion rates of 100 percent are
often achieved with these small samples, but with larger or more remote populations, ensuring
that surveys are returned is a well-known problem. A 40 percent return is generally acceptable
for many surveys, but much lower rates are common. Depending on your audience, you might
want to consider offering incentives (see section 8.2.3, “Relationship with Participants”).

ACTIVITY 8.3
Spot four poorly designed features in the excerpt from a questionnaire in Figure 8.6.

2. State your age in years
3. How many hours a day do you spend searching online?
   <1 hour   1–3 hours   3–5 hours   >5 hours
4. Which of the following do you do online?
   purchase goods   send e-mail   visit chatrooms   use bulletin boards   find information   read the news
5. How useful is the Internet to you?
Figure 8.6 A questionnaire with poorly designed features

Comment
Some of the features that could be improved upon include the following:
• Question 2 requests an exact age. Many people prefer not to give this information and
would rather position themselves within a range.
• In question 3, the number of hours spent searching is indicated with overlapping scales, that
is, 1–3 and 3–5. How would someone answer if they spend 3 hours a day searching online?
• For question 4, the questionnaire doesn’t say how many boxes to check.
• The space left for people to answer open-ended question 5 is too small, which will annoy
some people and deter them from giving their opinions.
Many online survey tools prevent users from making some of these design errors. It is important,
however, to be aware of such things because paper is still sometimes used.
While questionnaires are often online, paper questionnaires may be more convenient in some
situations, for example, if participants do not have Internet access or if access is expensive.
Occasionally, short questionnaires are sent within the body of an email, but more often the advantages
of the data being compiled automatically and either partly or fully analyzed make online ques-
tionnaires attractive. Online questionnaires are interactive and can include check boxes, radio
buttons, pull-down and pop-up menus, help screens, graphics, or videos (see Figure 8.7). They can
also provide immediate data validation (for example, checking that an entry is a number between
1 and 20) and can automatically skip questions that are irrelevant to some respondents, such as
questions aimed only at teenagers. Other advantages of online questionnaires include faster response rates
and automatic transfer of responses into a database for analysis (Toepoel, 2016).
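The validation and skip-logic behaviors described here can be sketched in a few lines. This is a simplified illustration; the field names and question flow are invented, not those of any particular survey tool:

```python
def validate_hours(entry: str) -> int:
    """Immediate validation: the entry must be a number between 1 and 20."""
    value = int(entry)  # raises ValueError for non-numeric input
    if not 1 <= value <= 20:
        raise ValueError("Please enter a number between 1 and 20.")
    return value


def next_question(current: str, answers: dict) -> str:
    """Skip questions that are irrelevant to some respondents,
    such as a block of questions aimed only at teenagers."""
    if current == "age" and answers.get("age", 0) >= 20:
        return "q_general"  # adults skip the teenager-only block
    order = {"age": "q_teen", "q_teen": "q_general"}
    return order.get(current, "end")
```

A respondent who reports an age of 25 is routed straight to the general questions, while a 15-year-old sees the teenager-only block first.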
The main problem with online questionnaires is the difficulty of obtaining a random
sample of respondents; online questionnaires usually rely on convenience sampling, and
hence their results cannot be generalized. In some countries, online questionnaires, often delivered
via smartphones, are frequently used in conjunction with television to elicit viewers’ opinions
of programs and political events.
Figure 8.7 An excerpt from a web-based questionnaire showing check boxes, radio buttons, and
pull-down menus
Deploying an online questionnaire involves the following steps (Toepoel, 2016, Chapter 10):
1. Plan the survey timeline. If there is a deadline, work backward from the deadline and plan
what needs to be done on a weekly basis.
2. Design the questionnaire offline. Using plain text is useful as this can then be copied more
easily into the online survey tool.
3. Program the online survey. How long this will take depends on the complexity of the
design, for example, how many navigational paths it contains or if it has a lot of interac-
tive features.
4. Test the survey, both to make sure that it behaves as envisioned and to check the ques-
tions themselves. This includes getting feedback from content experts, survey experts, and
potential respondents. This last group forms the basis of a pilot study.
5. Recruit respondents. As mentioned earlier, participants may have different reasons for
taking part in the survey, but especially when respondents need to be encouraged, make
the invitations intriguing, simple, friendly, respectful, trustworthy, motivating, interesting,
informative, and short.
There are many online questionnaire templates available that provide a range of options,
including different question types (for example open-ended, multiple choice), rating scales
(such as Likert, semantic differential), and answer types (for example, radio buttons, check
boxes, drop-down menus).
The following activity asks you to make use of one of these templates. Apart from being
able to administer an online questionnaire widely, these templates also enable the question-
naire to be segmented. For example, airline satisfaction questionnaires often have different
sections for check-in, baggage handling, airport lounge, inflight movies, inflight food service,
and so forth. If you didn’t use an airport lounge or check your baggage, you can skip those
sections. This avoids respondents getting frustrated by having to go through questions that
are not relevant to them. It is also a useful technique for long questionnaires, as it ensures
that if a respondent opts out for lack of time or gets tired of answering the questions, the
data that has been provided already is available to be analyzed.
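The segmentation idea can be sketched as follows, assuming answers are saved per section so that partial responses remain analyzable. The class and method names are our own invention, not part of any survey tool:

```python
class SegmentedSurvey:
    """Sketch of a segmented questionnaire: respondents skip whole
    sections that do not apply to them, and answers are saved per
    section so that partial responses remain available for analysis."""

    def __init__(self, sections):
        self.sections = sections  # e.g. ["check-in", "baggage", "lounge"]
        self.saved = {}           # section name -> answers dict

    def submit(self, section, answers, applicable=True):
        if not applicable:
            return                # respondent skips this section entirely
        self.saved[section] = answers  # persisted as soon as it is submitted

    def available_data(self):
        """Sections with data, even if the respondent quit partway through."""
        return dict(self.saved)
```

If a respondent completes only the check-in section and then opts out, that section's data is still there to analyze.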
ACTIVITY 8.4
Go to questionpro.com, surveymonkey.com, or a similar survey site and design your own
questionnaire using the set of widgets that is available for a free trial period.
Create an online questionnaire for the set of questions that you developed for Activity 8.2.
For each question, produce two different designs; for example, use radio buttons and drop-
down menus for one question, and provide a 10-point semantic differential scale and a 5-point
scale for another question.
What differences (if any) do you think the two designs will have on a respondent’s behav-
ior? Ask a number of people to answer one or the other of your questions and see whether the
answers differ for the two designs.
BOX 8.5
Do people answer online questionnaires differently than paper and
pencil? If so, why?
There has been much research examining how people respond to surveys when using a com-
puter compared with paper and pencil methods. Some studies suggest that people are more
revealing and consistent in their responses when using a computer to report their habits and
behaviors, such as eating, drinking, and amount of exercise (see Luce et al., 2003). Students
have also been found to rate their instructors less favorably when online (Chang, 2004).
In a Danish study in which 3,600 people were invited to participate, the researchers concluded
that although response rates for web-based invitations were lower, they were more cost-effective
(by a factor of 10) and had only slightly lower numbers of missing values than questionnaires sent
via paper (Ebert et al., 2018). Similarly, in a study by Diaz de Rada and Dominguez-Alvarez (2014),
which analyzed the quality of the information collected from a survey given to citizens of Andalusia in
Spain, several advantages of using online versus paper-based questionnaires were
identified. These included a low number of unanswered questions, more detailed answers to
open-ended questions, and longer answers to questions in the online questionnaires than in the
paper questionnaires. In the five open-ended questions, respondents wrote 63 characters more on
average on the online questionnaires than on the paper questionnaires. For the questions in which
participants had to select from a drop-down menu, there was a better response rate than when
the selection was presented on paper with blank spaces.
One factor that can influence how people answer questions is the way the information
is structured, such as the use of headers, the ordering, and the placement of questions. Online
questionnaires provide more options for presenting information, including the use of drop-
down menus, radio buttons, and jump-to options, which may influence how people read
and navigate a questionnaire. But do these issues affect respondents’ answers? Smyth et al.
(2005) have found that providing forced choice formats results in more options being selected.
Another example is provided by Funcke et al. (2011), who found that continuous sliders ena-
bled researchers to collect more accurate data because they support continuous rather than
discrete scales. They also encouraged higher response rates. What can be concluded from these
investigations is that the details of questionnaire design can impact how respondents react.
Comment on Activity 8.4
Respondents may have used the response types in different ways. For example, they may select
the end options more often from a drop-down menu than from a list of options that are cho-
sen via radio buttons. Alternatively, you may find no difference and that people’s opinions are
not affected by the widget style used. Some differences, of course, may be due to the variation
between individual responses rather than being caused by features in the questionnaire design.
To tease the effects apart, you would need to ask a large number of participants (for instance,
in the range 50–100) to respond to the questions for each design.
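One simple way to make that comparison concrete is a permutation test on the difference in mean ratings between respondents who saw each design. This is a generic sketch of the technique, not a procedure prescribed by the chapter:

```python
import random


def mean(xs):
    return sum(xs) / len(xs)


def permutation_test(group_a, group_b, n_iter=10_000, seed=0):
    """Estimate how often a difference in mean ratings at least as large
    as the observed one arises by chance, by repeatedly reshuffling the
    pooled ratings into two groups of the original sizes."""
    rng = random.Random(seed)  # seeded for reproducible results
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_iter  # small values suggest a real design effect
```

A value near 1.0 means the two designs' ratings are indistinguishable from chance variation between individuals; a small value suggests the widget style itself influenced the answers.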
8.6 Observation
Observation is useful at any stage during product development. Early in design, observation
helps designers understand the users’ context, tasks, and goals. Observation conducted later
in development, for example, in evaluation, may be used to investigate how well a prototype
supports these tasks and goals.
Users may be observed directly by the investigator as they perform their activities or indi-
rectly through records of the activity that are studied afterward (Bernard, 2017). Observation
may also take place in the field or in a controlled environment. In the former case, individuals
are observed as they go about their day-to-day tasks in the natural setting. In the latter case,
individuals are observed performing specified tasks within a controlled environment such as
a usability laboratory.
ACTIVITY 8.5
To appreciate the different merits of observation in the field and observation in a controlled
environment, read the following scenarios and answer the questions that follow.
Scenario 1 A usability consultant joins a group of tourists who have been given a wear-
able navigation device that fits onto a wrist strap to test on a visit to Stockholm. After sight-
seeing for the day, they use the device to find a list of restaurants within 2 kilometers of their
current position. Several are listed, and they find the phone numbers of a few, call them to ask
about their menus, select one, make a booking, and head off to the restaurant. The usability
consultant observes some difficulty operating the device, especially on the move. Discussion
with the group supports the evaluator’s impression that there are problems with the interface,
but on balance the device is useful, and the group is pleased to get a table at a good restau-
rant nearby.
Scenario 2 A usability consultant observes how participants perform a preplanned task
using the wearable navigation device in a usability laboratory. The task requires the partici-
pants to find the phone number of a restaurant called Matisse. It takes them several minutes
to do this, and they appear to have problems. The video recording and interaction log suggest
that the interface is quirky and the audio interaction is of poor quality. This is supported by
participants’ answers on a user satisfaction questionnaire.
1. What are the advantages and disadvantages of these two types of observation?
2. When might each type of observation be useful?
Comment
1. The advantages of the field study are that the observer saw how the device could be used
in a real situation to solve a real problem. They experienced the delight expressed with
the overall concept and the frustration with the interface. By watching how the group
used the device on the move, they gained an understanding of what the participants liked
and what was lacking. The disadvantage is that the observer was an insider in the group,
so how objective could they be? The data is qualitative, and while anecdotes can be very
persuasive, how useful are they? Maybe they were having such a good time that their
judgment was clouded and they missed hearing negative comments and didn’t notice
some of the participants’ annoyance. Another study could be done to find out more, but
it is not possible to replicate the exact conditions of this study. The advantages of the lab
study are that it is easier to replicate, so several users could perform the same task, specific
usability problems can be identified, users’ performance can be compared, and averages
for such measures as the time it took to do a specific task and the number of errors can
be calculated. The observer could also be more objective as an outsider. The disadvantage
is that the study is artificial and says nothing about how the device would be used in the
real environment.
2. Both types of study have merits. Which is better depends on the goals of the study. The
lab study is useful for examining details of the interaction style to make sure that usability
problems with the interface and button design are diagnosed and corrected. The field
study reveals how the navigation device is used in a real-world context and how it
integrates with or changes users’ behavior. Without this study, it is possible that developers
might not have discovered the enthusiasm for the device because the reward for doing
laboratory tasks is not as compelling as a good meal! In fact, according to Kjeldskov and
Skov (2014), there is no definitive answer to which kind of study is preferable for mobile
devices. They suggest that the real question is when and how to engage with longitudinal
field studies.

8.6.1 Direct Observation in the Field
It can be difficult for people to explain what they do or to describe accurately how they
achieve a task. It is unlikely that an interaction designer will get a full and true story using
interviews or questionnaires. Observation in the field can help fill in details about how users
behave and use technology, and nuances that are not elicited from other forms of investigation
may be observed. Understanding the context provides important information about why
activities happen the way that they do. However, observation in the field can be complicated
and harder to do well than at first appreciated. Observation can also result in a lot of data,
some of which may be tedious to analyze and not very relevant.
All data gathering should have a clearly stated goal, but it is particularly important to
have a focus for an observation session because there is always so much going on. On the
other hand, it is also important to be prepared to change the plan if circumstances change.
For example, the plan may be to spend one day observing an individual performing a task,
but an unexpected meeting crops up, which is relevant to the observation goal and so it
makes sense to attend the meeting instead. In observation, there is a careful balance between
being guided by goals and being open to modifying, shaping, or refocusing the study as more
is learned about the situation. Being able to keep this balance is a skill that develops with
experience.
Structuring Frameworks for Observation in the Field
During an observation, events can be complex and rapidly changing. There is a lot for observ-
ers to think about, so many experts have a framework to structure and focus their observa-
tion. The framework can be quite simple. For example, this is a practitioner’s framework for
use in evaluation studies that focuses on just three easy-to-remember items:
The person: Who is using the technology at any particular time?
The place: Where are they using it?
The thing: What are they doing with it?
Even a simple framework such as this one based on who, where, and what can be surprisingly
effective in helping observers keep their goals and questions in sight. Experienced observers
may prefer a more detailed framework, such as the following (Robson and McCartan,
2016, p. 328), which encourages them to pay greater attention to the context of the activity:
Space: What is the physical space like, and how is it laid out?
Actors: What are the names and relevant details of the people involved?
Activities: What are the actors doing, and why?
Objects: What physical objects are present, such as furniture?
Acts: What are specific individual actions?
Events: Is what you observe part of a special event?
Time: What is the sequence of events?
Goals: What are the actors trying to accomplish?
Feelings: What is the mood of the group and of individuals?
This framework was devised for any type of observation, so when used in the context of
interaction design, it might need to be modified slightly. For example, if the focus is going to
be on how some technology is used, the framework could be modified to ask the following:
Objects: What physical objects, in addition to the technology being studied, are present, and
do they impact on the technology use?
Both of these frameworks are relatively general and could be used in many different
types of study, or as a basis for developing a new framework for a specific study.
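When field notes are recorded digitally, the detailed framework can double as a note-taking template. A minimal sketch follows; the class and method names are our invention, though the fields mirror the framework's items:

```python
from dataclasses import dataclass


@dataclass
class ObservationNote:
    """One structured field note, with a slot for each item in the
    detailed observation framework."""
    space: str = ""       # what the physical space is like, and its layout
    actors: str = ""      # names and relevant details of the people involved
    activities: str = ""  # what the actors are doing, and why
    objects: str = ""     # physical objects present, including the technology
    acts: str = ""        # specific individual actions
    events: str = ""      # whether this is part of a special event
    time: str = ""        # the sequence of events
    goals: str = ""       # what the actors are trying to accomplish
    feelings: str = ""    # mood of the group and of individuals

    def unfilled(self):
        """Framework items not yet covered in this note, so the observer
        can see at a glance what still needs attention."""
        return [name for name, value in vars(self).items() if not value]
```

An observer filling in notes between sessions can call `unfilled()` to spot framework items that were overlooked.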
ACTIVITY 8.6
1. Find a small group of people who are using any kind of technology, for example, smart-
phones, household appliances, or video game systems, and try to answer the question,
“What are these people doing?” Watch for three to five minutes, and write down what you
observe. When finished, note down how it felt to be doing this and any reactions in the
group of people observed.
2. If you were to observe the group again, what would you do differently?
3. Observe this group again for about 10 minutes using the detailed framework given above.
Comment
1. What problems did this exercise highlight? Was it hard to watch everything and remember
what happened? How did the people being watched feel? Did they know they were being
watched? Perhaps some of them objected and walked away. If you didn’t tell them that
they were being watched, should you have?
2. The initial goal of the observation, that is, to find out what the people are doing, was
vague, and chances are that it was quite a frustrating experience not knowing what
was significant and what could be ignored. The questions used to guide observation
need to be more focused. For example, you might ask the following: What are the
people doing with the technology? Is everyone in the group using it? Are they looking
pleased, frustrated, serious, happy? Does the technology appear to be central to the
users’ goals?
3. Ideally, you will have felt more confident this second time, partly because it is the second
time doing some observation and partly because the framework provided a structure for
what to look at.

Degree of Participation
Depending on the type of study, the degree of participation within the study environment
varies across a spectrum, which can be characterized as insider at one end and outsider at
the other. Where a particular study falls along this spectrum depends on its goal and on the
practical and ethical issues that constrain and shape it.
An observer who adopts an approach right at the outsider end of the spectrum is called a
passive observer, and they will not take any part in the study environment at all. It is difficult
to be a truly passive observer in the field, simply because it’s not possible to avoid interacting
with the activities. Passive observation is more appropriate in lab studies.
An observer who adopts an approach at the insider end of this spectrum is called a
participant observer. This means that they attempt, at various levels depending on the
type of study, to become a member of the group being studied. This can be a difficult role
to play since being an observer also requires a certain level of detachment, while being a
participant assumes a different role. As a participant observer, it is important to keep the
two roles clear and separate so that observation notes are objective while participation is
also maintained. It may not be possible to take a full participant observer approach for
other reasons. For example, the observer may not be skilled enough in the task at hand,
the organization/group may not be prepared for an outsider to take part in their activities,
or the timescale may not provide sufficient opportunity to become familiar enough with
the task to participate fully. Similarly, if observing activity in a private place such as the
home, full participation would be difficult even if, as suggested by some researchers
(for example, Bell et al., 2005), you have spent time getting to know the family before
starting the study. Chandrika Cycil et al. (2013) overcame this issue in their study of in-car
conversations between parents and children by traveling with the families initially for a
week and then asking family members to video relevant episodes of activity. In this way,
they had gained an understanding of the context and family dynamics and then collected
more detailed data to study activity in depth.
Planning and Conducting an Observation in the Field
The frameworks introduced in the previous section are useful for providing focus and also for
organizing the observation and data gathering activity. Choosing a framework is important,
but there are other decisions that need to be made, including the level of participation to
adopt, how to make a record of the data, how to gain acceptance in the group being studied,
how to handle sensitive issues such as cultural differences or access to private spaces, and how
to ensure that the study uses different perspectives (people, activities, job roles, and so forth).
One way to achieve this last point is to work as a team. This can have several benefits.
• Each person can agree to focus on different people or different parts of the context, thereby
covering more ground.
• Observation and reflection can be interwoven more easily when there is more than
one observer.
• More reliable data is likely to be generated because observations can be compared.
• Results will reflect different perspectives.
Once in the throes of an observation, there are other issues that need to be considered.
For example, it will be easier to relate to some people than others. Although it will be
tempting to pay more attention to them, attention needs to be paid to everyone
in the group. Observation is a fluid activity, and the study will need to be refocused as it
progresses in response to what is learned. Having observed for a while, interesting phenom-
ena that seem relevant will start to emerge. Gradually, ideas will sharpen into questions that
guide further observation.
Observing is also an intense and tiring activity, so it is important to check notes and records
and review observations and experiences at the end of each day. If this is not
done, then valuable information may be lost as the next day’s events override the previous
day’s findings. Writing a diary or private blog is one way of achieving this. Any documents
or other artifacts that are collected or copied (such as minutes of a meeting or discussion
items) can be annotated, describing how they are used during the observed activity. Where
an observation lasts several days or weeks, time can be taken out of each day to go through
notes and other records.
As notes are reviewed, separate personal opinion from observation and mark issues for
further investigation. It is also a good idea to check observations and interpretations with an
informant or members of the participant group for accuracy.
DILEMMA
When to Stop Observing?
Knowing when to stop doing any type of data gathering can be difficult for novices, but it is
particularly tricky in observational studies because there is no obvious ending. Schedules often
dictate when your study ends. Otherwise, stop when nothing new is emerging. Two indica-
tions of having done enough are when similar patterns of behavior are being seen and when
all of the main stakeholder groups have been observed and a good understanding of their
perspectives has been achieved.
Ethnography
Ethnography has traditionally been used in the social sciences to uncover the organization
of societies and their activities. Since the early 1990s, it has gained credibility in interaction
design, and particularly in the design of collaborative systems; see Box 8.6, “Ethnography
in Requirements” and Crabtree (2003). A large part of most ethnographic studies is direct
observation, but interviews, questionnaires, and studying artifacts used in the activities also
feature in many ethnographic studies. A distinguishing feature of ethnographic studies com-
pared with other data gathering is that a situation is observed without imposing any a priori
structure or framework upon it, and everything is viewed as “strange.” In this way, the aim is
to capture and articulate the participants’ perspective of the situation under study.
BOX 8.6
Ethnography in Requirements
The MERboard is a tool scientists and engineers use to display, capture, annotate, and share
information in support of the operation of two Mars Exploration Rovers (MERs) on the sur-
face of Mars. The MER (see Figure 8.8) acts like a human geological explorer by collecting
and analyzing samples and then transmitting the results to the scientists on Earth. The scien-
tists and engineers collaboratively analyze the data received, decide what to study next, create
plans of action, and send commands to the robots on the surface of Mars.
The requirements for MERboard were identified partly through ethnographic field-
work, observations, and analysis (Trimble et al., 2002). The team of scientists and engi-
neers ran a series of field tests that simulated the process of receiving data, analyzing it,
creating plans, and transmitting them to the MERs. The main problems they identified
stemmed from the scientists’ limitations in displaying, sharing, and storing information
(see Figure 8.9a).
Figure 8.8 Mars Exploration Rover
Source: NASA Jet Propulsion Laboratory (NASA-JPL)
These observations led to the development of MERboard (see Figure 8.9b), which contains
four core applications: a whiteboard for brainstorming and sketching, a browser for displaying
information from the web, the capability to display personal information and information
across several screens, and a file storage space linked specifically to MERboard.
Figure 8.9 (a) The situation before MERboard; (b) a scientist using MERboard to present
information
Source: Trimble et al. (2002)

Ethnography has become popular within interaction design because it allows designers
to obtain a detailed and nuanced understanding of people’s behavior and the use of technology
that cannot be obtained by other methods of data gathering (Lazar et al., 2017). While
there has been much discussion of how big data can address many design issues, big data is
likely to be most powerful when combined with ethnography to explain how and why people
do what they do (Churchill, 2018).
The observer in an ethnographic study adopts a participant observer (insider) role as
much as possible (Fetterman, 2010). While participant observation is a hallmark of
ethnographic studies, it is also used within other methodological frameworks such as action
research (Hayes, 2011), where one of the goals is to improve the current situation.
Ethnographic data is based on what is available, what is “ordinary”: what people do and
say, and how they work. The data collected therefore has many forms: documents, notes
taken by the observer(s), pictures, and room layout sketches. Notes may include snippets of
conversations and descriptions of rooms, meetings, what someone did, or how people reacted
to a situation. Data gathering is opportunistic, and observers make the most of opportunities
as they present themselves. Often, interesting phenomena do not reveal themselves immediately
but only later, so it is important to gather as much as possible within the framework of
observation. Initially, spend time getting to know people in the participant group and bonding
with them. Participants need to understand why the observers are there, what they hope
to achieve, and how long they plan to be there. Going to lunch with them, buying coffee,
and bringing small gifts, for example, cookies, can greatly help this socialization process.
Moreover, key information may be revealed during one of these informal gatherings.
It is important to show interest in the stories, gripes, and explanations that are provided and
to be prepared to step back if a participant’s phone rings or someone else enters the workspace.
A good tactic is to explain to one of the participants during a quiet moment what you think is
happening and then let them correct any misunderstandings. However, asking too many ques-
tions, taking pictures of everything, showing off your knowledge, and getting in their way can be
very off-putting. Putting up cameras on tripods on the first day may not be a good idea. Listening
and watching while sitting on the sidelines and occasionally asking questions is a better approach.
The following is an illustrative list of materials that might be recorded and collected dur-
ing an ethnographic study (adapted from Crabtree, 2003, p. 53):
• Activity or job descriptions
• Rules and procedures (and so on) that govern particular activities
• Descriptions of activities observed
• Recordings of the talk taking place between parties involved in observed activities
• Informal interviews with participants explaining the detail of observed activities
• Diagrams of the physical layout, including the position of artifacts
• Photographs of artifacts (documents, diagrams, forms, computers, and so on) used in the
course of observed activities
• Videos of artifacts as used in the course of observed activities
• Descriptions of artifacts used in the course of observed activities
• Workflow diagrams showing the sequential order of tasks involved in observed activities
• Process maps showing connections between activities
Traditionally, ethnographic studies in this field aim to understand what people do and how
they organize action and interaction within a particular context of interest to designers. However,
recently there has been a trend toward studies that draw more on ethnography’s anthropological
roots and the study of culture. This trend has been brought about by the perceived need for different approaches, because computers and other digital technologies, especially mobile devices, are now embedded in everyday activity and not just in the workplace, as they were in the 1990s.
BOX 8.7
Doing Ethnography Online
As collaboration and social activity online have increased, ethnographers have adapted their
approach to study social media and the various forms of computer-mediated communication
(Rotman et al., 2013; Bauwens and Genoud, 2014). This practice has various names, the most
common of which are online ethnography (Rotman et al., 2012), virtual ethnography (Hine,
2000), and netnography (Kozinets, 2010). Where a community or activity has both an online and
offline presence, it is common to incorporate both online and offline techniques within the data
gathering program. However, where the community or activities of interest exist almost exclu-
sively online, then mostly online techniques are used and virtual ethnography becomes central.
Why is it necessary to distinguish between online and face-to-face ethnography? It is important because interaction online differs from interaction in person. For example, communication in person is richer (through gesture, facial expression, tone of voice, and so on) than online communication, and anonymity is more easily achieved online. In addition, virtual worlds have a persistence, due to regular archiving, that does not typically occur in face-to-face situations. These differences change the character of the communication, which in turn affects how ethnographers introduce themselves to the community, how they act within it, and how they report their findings. For these reasons, some researchers who work primarily online also try to meet some of the participants face-to-face, particularly when working on sensitive topics (Lingel, 2012).
Special tools may be developed to support ethnographic data collection. Mobilab is an online collaborative platform developed for citizens living in Switzerland to report and discuss their daily mobility over an eight-week period using their mobile phones, tablets, and computers (Bauwens and Genoud, 2014). Mobilab enabled the researchers to engage more easily in discussion with participants on a variety of topics, including trucks parking on a bikeway.
For observational studies in large social spaces, such as digital libraries or Facebook, there are different ethical issues to consider. For example, it is unrealistic to ask everyone using a digital library to sign a form agreeing to be involved in the study, yet participants do need to understand the observer's role and the purpose of the study. The presentation of results needs to be modified too. Quotes from participants in the community, even if anonymized in the report, can easily be attributed by a simple search of the community archive or the IP address of the sender, so care is needed to protect their privacy.
8.6.2 Direct Observation in Controlled Environments
Observing users in a controlled environment may occur within a purpose-built usability lab, but portable labs that can be set up in any room are quite common. Portable laboratories can mean that more participants take part because they don't have to travel away from their normal environment. Observation in a controlled environment inevitably takes on a more formal character than observation in the field, and the user may feel more apprehensive. As with interviews, it is a good idea to prepare a script to guide how participants will be greeted, told about the goals of the study and how long it will last, and have their rights explained. Use of a script ensures that each participant is treated in the same way, which brings more credibility to the results obtained from the study.
The same basic data recording techniques are used for direct observation in the laboratory and in field studies (that is, capturing photographs, taking notes, collecting video, and so on), but the way in which these techniques are used differs. In the lab the emphasis is on the details of what individuals do, while in the field the context is important, and the focus is on how people interact with each other, the technology, and their environment.
The arrangement of equipment with respect to the participant is important in a controlled study because details of the person's activity need to be captured. For example, one camera might record facial expressions, another might focus on mouse and keyboard activity, and another might record a broad view of the participant to capture body language. The streams of data from the cameras can be fed into a video editing and analysis suite where they are coordinated, time-stamped, annotated, and partially edited.
The Think-Aloud Technique
One of the problems with observation is that the observer doesn’t know what users are think-
ing and can only guess from what they see. Observation in the field should not be intrusive,
as this will disturb the context the study is trying to capture. This limits the questions being
asked of the participant. However, in a controlled environment, the observer can afford to
be a little more intrusive. The think-aloud technique is a useful way of understanding what
is going on in a person’s head.
Imagine observing someone who has been asked to evaluate the interface of the web
search engine Lycos.com. The user, who does not have much experience of web searches,
is told to look for a phone for a 10-year-old child. They are told to type www.lycos.com
and then proceed however they think best. They type the URL and get a screen similar to
the one in Figure 8.10.
Next, they type child’s phone in the search box. They get a screen similar to the one
shown in Figure 8.11. They are silent. What is going on? What are they thinking? One
way around the problem of knowing what they are doing is to collect a think-aloud pro-
tocol, a technique developed by Anders Ericsson and Herbert Simon (1985) for examin-
ing people’s problem-solving strategies. The technique requires people to say out loud
everything that they are thinking and trying to do so that their thought processes are
externalized.
So, let’s imagine an action replay of the situation just described, as follows, but this time
the user has been instructed to think aloud:
"I'm typing in www.lycos.com, as you told me."
"Now I am typing child's phone and then clicking the search button."
"Oh! Now I have a choice of other websites to go to. Hmm, I wonder which one I should select. Well, it's for a young child so I want a 'child-safe phone.' This one mentions safe phones."
"Gosh, there's a lot more models to select from than I expected! Hmm, some of these are for older children. I wonder what I do next to find one for a 10-year-old."
Figure 8.10 Home page of Lycos search engine
Source: https://www.lycos.com
Now you know more about what the user is trying to achieve, but they are silent again.
They are looking at the screen, but what are they thinking now? What are they looking at?
The occurrence of these silences is one of the biggest problems with the think-aloud technique. If a user falls silent during a think-aloud protocol, the observer could interrupt and remind them to think out loud, but that would be intrusive. Another solution is to have two people work together so that they talk to each other. Working with another person (called constructive interaction [Miyake, 1986]) is often more natural and revealing because participants talk in order to help each other along. This technique has proved particularly successful with children, and it also avoids possible cultural influences on concurrent verbalization (Clemmensen et al., 2008).
Figure 8.11 The screen that appears in response to searching for "child's phone"
Source: https://www.lycos.com
ACTIVITY 8.7
Try a think-aloud exercise yourself. Go to a website, such as Amazon or eBay, and look for something to buy. Think aloud as you search, and notice how you feel and behave.
Afterward, reflect on the experience. Was it difficult to keep speaking all the way through the task? Did you feel awkward? Did you stop talking when you got stuck?
Comment
Feeling self-conscious and awkward doing this is a common response, and some people say they feel really embarrassed. Many people forget to speak out loud and find it difficult to do so when the task becomes difficult. In fact, you probably stopped speaking when the task became demanding, and that is exactly the time when an observer is most eager to hear what's happening.
8.6.3 Indirect Observation: Tracking Users' Activities
Sometimes direct observation is not possible because it is too intrusive or because observers cannot be present for the duration of the study, so activities are tracked indirectly. Diaries and interaction logs are two techniques for doing this.
Diaries
Participants are asked to write a diary of their activities on a regular basis, including things like what they did, when they did it, what they found hard or easy, and what their reactions to the situation were. For example, Sohn et al. (2008) asked 20 participants to record their mobile information needs through text messages and then to use these messages as prompts to help them answer six questions on a website at the end of each day. From the data collected, they identified 16 categories of mobile information needs, the most frequent of which was "trivia."
Diaries are useful when participants are scattered and unreachable in person; when the activity is private, for example, in the home; or when it relates to feelings, such as emotions or motivation. For example, Jang et al. (2016) used diaries with interviews to collect data about users' experiences with smart TVs in the home as compared with a controlled lab setting. The study in the home was conducted over several weeks, during which participants were asked to keep a diary of their experiences and feelings. Surveys were also collected. This mixed-methods study informed the user experience design of future systems.
Diaries have several advantages: they do not take up much researcher time to collect data; they do not require special equipment or expertise; and they are suitable for long-term studies. In addition, templates, like those used in open-ended online questionnaires, can be created online to standardize the data entry format so that the data can be entered directly into a database for analysis. However, diary studies rely on participants being reliable and remembering to complete their entries at the assigned time and as instructed, so incentives may be needed, and the process has to be straightforward.
Determining how long to run a diary study can be tricky. If the study goes on for too long, participants may lose interest and need incentives to continue. In contrast, if the study is too short, important data may be missed. For example, in a study of children's experiences of a game, Elisa Mekler et al. (2014) used diaries to collect data after each gaming session in a series. After the first few sessions, all of the children in the study showed a loss of motivation for the game. However, by the end of the study, those who completed the game were more motivated than those who did not. Had the data been collected only once, the researchers might not have observed the impact of game completion on the children's motivation.
Another problem is that participants' memories of events may be exaggerated or details may be forgotten; for example, they may remember events as better or worse than they really were, or as taking more or less time than they actually did. One way of mitigating this problem is to collect other data in diaries (such as photographs, including selfies, and audio and video clips). Scott Carter and Jennifer Mankoff (2005) considered whether capturing events through pictures, audio, or artifacts related to the event affects the results of a diary study. They found that images resulted in more specific recall than other media, but audio was useful for capturing events when taking a photo was too awkward. Tangible artifacts, such as those shown in Figure 8.12, also encouraged discussion about wider beliefs and attitudes.
The experience sampling method (ESM) is similar to a diary in that it relies on partici-
pants recording information about their everyday activities. However, it differs from more
traditional diary studies because participants are prompted at random times via email, text
message, or similar means to answer specific questions about their context, feelings, and
actions (Hektner et al., 2006). These prompts have the benefit of encouraging immediate
data capture. Niels van Berkel et al. (2017) provide a comprehensive survey of ESM and its
evolution, tools, and uses across a wide range of studies.
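To make the prompting mechanism concrete, here is a minimal sketch (not from the book; the function name, times, and gap constraint are invented for illustration) of how an ESM study might draw random prompt times across a participant's waking day:

```python
import random
from datetime import datetime, timedelta

def esm_schedule(day_start, day_end, n_prompts, min_gap_minutes=30, max_tries=1000):
    """Draw n_prompts random times in [day_start, day_end), at least
    min_gap_minutes apart, in the spirit of experience sampling."""
    span = int((day_end - day_start).total_seconds() // 60)
    for _ in range(max_tries):
        minutes = sorted(random.sample(range(span), n_prompts))
        # Accept the draw only if consecutive prompts are far enough apart
        if all(b - a >= min_gap_minutes for a, b in zip(minutes, minutes[1:])):
            return [day_start + timedelta(minutes=m) for m in minutes]
    raise ValueError("could not fit prompts with the requested gap")

# Example: five prompts between 09:00 and 21:00 on one study day
start = datetime(2024, 5, 1, 9, 0)
end = datetime(2024, 5, 1, 21, 0)
for t in esm_schedule(start, end, n_prompts=5):
    print(t.strftime("%H:%M"))
```

In a real study, each generated time would trigger an email or text message asking the participant about their current context, feelings, and actions.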
Interaction Logs, Web Analytics, and Data Scraping
Interaction logging uses software to record users’ activity in a log that can be examined later.
A variety of actions may be recorded, such as key presses and mouse or other device move-
ments, time spent searching a web page, time spent looking at help systems, and task flow through software modules. A key advantage of logging activity is that it is unobtrusive, provided system performance is not affected, but it also raises ethical concerns about observing participants if this is done without their knowledge. Another advantage is that large volumes of data can be logged automatically. Visualization tools are therefore helpful for exploring and analyzing this data quantitatively and qualitatively. Algorithmic and statistical methods may also be used.
Figure 8.12 Some tangible objects collected by participants involved in a study about a jazz festival
Source: Carter and Mankoff (2005). Reproduced with permission of ACM Publications
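As an illustration of what interaction logging can look like in practice (this is a sketch, not taken from any particular logging tool; the class and field names are invented), a logger might append timestamped events to a JSON Lines file for later analysis:

```python
import json
import time

class InteractionLogger:
    """Appends timestamped user-interface events to a log file, one JSON record per line."""

    def __init__(self, path):
        self.path = path

    def log(self, event_type, **details):
        # Each record carries a timestamp, an event name, and arbitrary details
        record = {"t": time.time(), "event": event_type, **details}
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")

# Example: record a key press and a page view during a session
logger = InteractionLogger("session.log")
logger.log("keypress", key="a")
logger.log("page_view", url="/help", dwell_seconds=12.4)
```

Because each line is a self-contained record, the resulting log can be streamed into visualization or statistical tools without further parsing effort.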
Examining the trail of activity that people leave behind when they are active on websites,
Twitter, or Facebook is also a form of indirect observation. You can see an example of this by
looking at a Twitter feed to which you have access, for example, that of a friend, president,
prime minister, or some other leader. These trails allow examination of discussion threads on
a particular topic, such as climate change, or reactions to comments made by a public figure
or to a topic that is trending today. If there are just a few posts, then it is easy to see what
is going on, but often the most interesting posts are those that generate a lot of comments.
Examining thousands, tens of thousands, and even millions of posts requires automated tech-
niques. Web analytics and data scraping are discussed further in Chapter 10.
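By way of illustration (the posts below are invented), even a few lines of code can tally topics across a collection of scraped posts; the automated techniques mentioned above apply the same idea at a much larger scale:

```python
import re
from collections import Counter

# A tiny, made-up sample of scraped posts
posts = [
    "Wildfires again this summer #climatechange #heatwave",
    "New IPCC report out today #climatechange",
    "Loving the sunshine #heatwave",
]

# Extract hashtags from each post and tally how often each occurs
tags = Counter(tag.lower() for post in posts
               for tag in re.findall(r"#(\w+)", post))

for tag, count in tags.most_common():
    print(tag, count)
```

Swapping the sample list for millions of records is where tool support and the web analytics techniques of Chapter 10 become essential.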
8.7 Choosing and Combining Techniques
Combining data gathering techniques into a single data gathering program is common practice,
for example, when collecting case study data (see Box 8.8). The benefit of using a combination
of methods is to provide multiple perspectives. Choosing which data gathering techniques to
use depends on a variety of factors related to the study goals. There is no right technique or
combination of techniques, but some will undoubtedly be more appropriate than others. The
decision about which to use will need to be made after taking all of the factors into account.
Table 8.1 provides an overview to help choose a set of techniques for a specific project.
It lists the kind of information obtained (such as answers to specific questions) and the
type of data (for example, mostly qualitative or mostly quantitative). It also includes some
advantages and disadvantages for each technique. Note that different modalities can be used
for some of these techniques. For example, interviews and focus groups can be conducted
face-to-face, by phone, or through teleconferencing, so when considering advantages and
disadvantages of the techniques, this should also be taken into account.
In addition, technique choice is influenced by practical issues:
• The focus of the study. What kind of data will support the focus and goal of the study? This
will be influenced by the interaction design activity and the level of maturity of the design.
• The participants involved. Characteristics of the target user group including their location
and availability.
• The nature of the technique. Does the technique require specialist equipment or training,
and do the investigators have the appropriate knowledge and experience?
• Available resources. Expertise, tool support, time, and money.
Table 8.1 Overview of data gathering techniques and their use

Interviews
Good for: Exploring issues.
Kind of data: Some quantitative but mostly qualitative.
Advantages: Interviewer can guide the interviewee if necessary. Encourages contact between developers and users.
Disadvantages: An artificial environment may intimidate the interviewee. It also removes them from the environment where work is typically done.

Focus groups
Good for: Collecting multiple viewpoints.
Kind of data: Some quantitative but mostly qualitative.
Advantages: Highlights areas of consensus and conflict. Encourages contact between developers and users.
Disadvantages: Possibility of dominant characters.

Questionnaires
Good for: Answering specific questions.
Kind of data: Quantitative and qualitative.
Advantages: Can reach many people with low resource requirements.
Disadvantages: The design is key. Response rates may be low. Unless carefully designed, the responses may not provide suitable data.

Direct observation in the field
Good for: Understanding the context of user activity.
Kind of data: Mostly qualitative.
Advantages: Observing gives insights that other techniques don't provide.
Disadvantages: Very time-consuming. Huge amounts of data are produced.

Direct observation in a controlled environment
Good for: Capturing the detail of what individuals do.
Kind of data: Quantitative and qualitative.
Advantages: Can focus on the details of a task without interruption.
Disadvantages: Results may have limited use in the normal environment because the conditions were artificial.

Indirect observation
Good for: Observing users without disturbing their activity; data captured automatically.
Kind of data: Quantitative (logging) and qualitative (diary).
Advantages: The user isn't distracted by the data gathering; automatic recording means it can extend over long periods of time.
Disadvantages: Large amounts of quantitative data need tool support to analyze (logging); participants' memories may exaggerate (diary).
ACTIVITY 8.9
For each of the following products, consider what kinds of data gathering would be appropri-
ate and how to use the different techniques introduced earlier. Assume that product develop-
ment is just starting and that there is sufficient time and resources to use any of the techniques.
1. A new software app to support a small organic produce shop. There is a system running
already with which the users are reasonably happy, but it is looking dated and needs upgrading.
2. An innovative device for diabetes sufferers to help them record and monitor their blood
sugar levels.
3. An ecommerce website that sells fashion clothing for young people.
Comment
1. As this is a small shop, there are likely to be few stakeholders. Some period of observation
would be important to understand the context of the new and the old systems. Interview-
ing the staff rather than giving them questionnaires is likely to be appropriate because
there aren’t very many of them, and this will yield richer data and give the developers a
chance to meet the users. Organic produce is regulated by a variety of laws, so looking at
this documentation will help you understand any legal constraints that have to be taken
into account. This suggests a series of interviews with the main users to understand the
positive and negative features of the existing system, a short observation session to under-
stand the context of the system, and a study of documentation surrounding the regulations.
2. In this case, the user group is quite large and spread out geographically, so talking to all
of them is not feasible. However, interviewing a representative sample of potential users,
possibly at a local diabetic clinic, is feasible. Observing current practices for monitoring blood sugar levels will help you understand what is required. An additional group of stakeholders would be those who use or have used the other products on the market. These stakeholders can be questioned about their experience with their existing devices so that the new device can be an improvement. A questionnaire sent to a wider group in order to confirm the findings from the interviews would be appropriate, as might a focus group where possible.
3. Again, the user group is quite large and spread out geographically. In fact, the user group may not be very well defined. Interviews backed up by questionnaires and focus groups would be appropriate. In this case, identifying similar or competing sites and evaluating them will help provide information for an improved product.
BOX 8.8
Collecting Case Study Data
Case studies often use a combination of methods, for example, direct and indirect observations and interviews. Although people frequently use the term case study colloquially to refer to a study that they are using as a case example, there is also a case study methodology that collects field study data over days, months, or even years. There is a body of literature that provides advice on how to do good case studies. Robert Yin (2013), for example, identifies these data collection sources: documentation, archival records, interviews, direct observations, participant observation, and physical artifacts. Case studies are good for integrating multiple perspectives, for example, studying new technology in the wild, and for giving meaning to first impressions. The data collection process tends to be intensive, concurrent, interactive, and iterative.
In a study of how local communities organize and adapt technology for managing their local rivers and streams, approaching it as a case study allowed a detailed contextual analysis of events and relationships that occurred over multiple groups of volunteers during a two-year period (Preece et al., 2019). From this study, the researchers learned about the volunteers' needs for highly flexible software to support the diverse groups of participants working on a wide range of water-related topics.
In-Depth Activity
The aim of this in-depth activity is to practice data gathering. Assume that you have been employed to improve the user experience of an interactive product such as a smartphone app, a digital media player, a Blu-ray player, computer software, or some other type of technology. This existing product may be redesigned, or a completely new product may be created. To do the assignment, find a group of people or a single individual prepared to be the user group. These could be your family, friends, peers, or people in a local community group.
For this assignment:
(a) Clarify the basic goal of improving the product by considering what this means in your circumstances.
(b) Watch the group (or person) casually to get an understanding of any issues that might create challenges for this activity and any information to help refine the study goals.
(c) Explain how you would use each of the three data gathering techniques (interview, questionnaire, and observation) in your data gathering program. Explain how your plan takes account of triangulation.
(d) Consider your relationship with the user group and decide whether an informed consent form is required. (Figure 8.1 will help you to design one if needed.)
(e) Plan your data gathering program in detail.
• Decide what kind of interview to run and design a set of interview questions. Decide how to record the data, then acquire and test any equipment needed and run a pilot study.
• Decide whether to include a questionnaire in your data gathering program, and design appropriate questions for it. Run a pilot study to check the questionnaire.
• Decide whether to use direct or indirect observation and where on the outsider/insider spectrum the observers should be. Decide how to record the data, then acquire and test any equipment needed and run a pilot study.
(f) Carry out the study, but limit its scope. For example, interview only two or three people, or plan only two half-hour observation periods.
(g) Reflect on this experience and suggest what you would do differently next time.
Keep the data gathered, as this will form the basis of the in-depth activity in Chapter 9.
Summary
This chapter has focused on three main data gathering methods that are commonly used in interaction design: interviews, questionnaires, and observation. It has described the planning and execution of each in detail. In addition, five key issues of data gathering were presented, and how to record the data gathered was discussed.
Key Points
• All data gathering sessions should have clear goals.
• Depending on the study context, an informed consent form and other permissions may be needed to run the study.
• Running a pilot study helps to test the feasibility of a planned data gathering session and associated instruments, such as questions.
• Triangulation involves investigating a phenomenon from different perspectives.
• Data may be recorded using handwritten notes, audio or video recording, a camera, or any combination of these.
• There are three styles of interviews: structured, semi-structured, and unstructured.
• Questionnaires may be paper-based, via email, or online.
• Questions for an interview or questionnaire can be open or closed-ended. Closed-ended questions require the interviewee to select from a limited range of options. Open-ended questions accept a free-range response.
• Observation may be direct or indirect.
• In direct observation, the observer may adopt different levels of participation, ranging from insider (participant observer) to outsider (passive observer).
• Choosing appropriate data gathering techniques depends on the focus of the study, the participants involved, the nature of the technique, and the resources available.
Further Reading
FETTERMAN, D. M. (2010). Ethnography: Step by Step (3rd ed.). Applied Social Research Methods Series, Vol. 17. Sage. This book introduces the theory and practice of ethnography, and it is an excellent guide for beginners. It covers both data gathering and data analysis in the ethnographic tradition.
FULTON SURI, J. (2005). Thoughtless Acts? Chronicle Books. This intriguing little book invites you to consider how people react to their environment. It is a good introduction to the art of observation.
HEATH, C., HINDMARSH, J., AND LUFF, P. (2010). Video in Qualitative Research: Analyzing Social Interaction in Everyday Life. Sage. This accessible book provides practical advice and guidance about how to set up and perform data gathering using video recording. It also covers data analysis, presenting findings, and potential implications of video research, based on the authors' own experience.
OLSON, J. S. AND KELLOGG, W. A. (eds.) (2014). Ways of Knowing in HCI. Springer. This edited collection contains useful chapters on a wide variety of data collection and analysis techniques. Topics that are particularly relevant to this chapter include ethnography, experimental design, log data collection and analysis, and ethics in research.
ROBSON, C. AND McCARTAN, K. (2016). Real World Research (4th ed.). John Wiley & Sons. This book provides comprehensive coverage of data gathering and analysis techniques and how to use them. Earlier and related books by Robson also address topics discussed in this chapter.
TOEPOEL, V. (2016). Doing Surveys Online. Sage. This book is a "hands-on guide" for preparing and conducting a wide range of surveys, including surveys for mobile devices, opt-in surveys, panels, polls, and more. It also discusses details about sampling that can be applied to other data gathering techniques.
Chapter 9
DATA ANALYSIS, INTERPRETATION, AND PRESENTATION
9.1 Introduction
9.2 Qualitative and Quantitative
9.3 Basic Quantitative Analysis
9.4 Basic Qualitative Analysis
9.5 What Kind of Analytic Framework to Use
9.6 Tools to Support Data Analysis
9.7 Interpreting and Presenting the Findings
Objectives
The main goals of this chapter are to accomplish the following:
• Discuss the difference between qualitative and quantitative data and analysis.
• Enable you to analyze data gathered from questionnaires.
• Enable you to analyze data gathered from interviews.
• Enable you to analyze data gathered from observation studies.
• Make you aware of software packages that are available to help your analysis.
• Identify some of the common pitfalls in data analysis, interpretation, and presentation.
• Enable you to interpret and present your findings in a meaningful and appropriate
manner.
9.1 Introduction
The kind of analysis that can be performed on a set of data will be influenced by the goals
identified at the outset and the data gathered. Broadly speaking, a qualitative analysis approach,
a quantitative analysis approach, or a combination of qualitative and quantitative approaches
may be taken. The last of these is very common, as it provides a more comprehensive account
of the behavior being observed or the performance being measured.
Most analysis, whether it is quantitative or qualitative, begins with the initial reactions
or observations from the data. This may involve identifying patterns or calculating simple
numerical values such as ratios, averages, or percentages. For all data, but especially when
dealing with large volumes of data (that is, Big Data), it is useful to look over the data to
check for anomalies that are likely to be erroneous, such as a participant whose recorded age
is 999 years. This process is known as data cleansing, and there are often digital tools to help
with it. This initial analysis is followed by more detailed work using structured frameworks
or theories to support the investigation.
Interpretation of the findings often proceeds in parallel with analysis, but there are differ-
ent ways to interpret results, and it is important to make sure that the data supports any con-
clusions. A common mistake is for the investigator’s existing beliefs or biases to influence the
interpretation of results. Imagine that an initial analysis of the data has revealed a pattern of
responses to customer care questionnaires that indicates that inquiries from customers routed
through the Sydney office of an organization take longer to process than those routed through
the Moscow office. This result can be interpreted in many different ways. For example, the
customer care operatives in Sydney are less efficient, they provide more detailed responses,
the technology supporting the inquiry process in Sydney needs to be updated, customers
reaching the Sydney office demand a higher level of service, and so on. Which one is correct?
To determine whether any of these potential interpretations is accurate, it would be appropri-
ate to look at other data such as customer inquiry details and maybe to interview staff.
Another common mistake is to make claims that go beyond what the data can support.
This is a matter of interpretation and of presentation. Using words such as many or often
or all when reporting conclusions needs to be carefully considered. An investigator needs to
remain as impartial and objective as possible if the conclusions are to be trusted. Showing
that the conclusions are supported by the results is an important skill to develop.
Finally, finding the best way to present findings is equally skilled, and it depends on the
goals but also on the audience for whom the study was performed. For example, a formal
notation may be used to report the results for the requirements activity, while a summary
of problems found, supported by video clips of users experiencing those problems, may be
better for presentation to the team of developers.
This chapter introduces a variety of methods, and it describes in more detail how to
approach data analysis and presentation using some of the common approaches taken in
interaction design.
9.2 Quantitative and Qualitative
Quantitative data is in the form of numbers, or data that can easily be translated into
numbers. Examples are the number of years’ experience the interviewees have, the number
of projects a department handles at a time, or the number of minutes it takes to perform a
task. Qualitative data is in the form of words and images, and it includes descriptions, quotes
from interviewees, vignettes of activity, and photos. It is possible to express qualitative data
in numerical form, but it is not always meaningful to do so (see Box 9.1).
It is sometimes assumed that certain forms of data gathering can only result in quantitative
data and that others can only result in qualitative data. However, this is a fallacy. All forms
of data gathering discussed in the previous chapter may result in qualitative and quantitative
data. For example, on a questionnaire, questions about the participant’s age or number of
software apps they use in a day will result in quantitative data, while any comments will
result in qualitative data. In an observation, quantitative data that may be recorded includes
the number of people involved in a project or how many hours someone spends sorting out
a problem, while notes about feelings of frustration, or the nature of interactions between
team members, are qualitative data.
Quantitative analysis uses numerical methods to ascertain the magnitude, amount, or
size of something; for example, the attributes, behavior, or strength of opinion of the partici-
pants. For example, in describing a population, a quantitative analysis might conclude that
the average person is 5 feet 11 inches tall, weighs 180 pounds, and is 45 years old. Qualita-
tive analysis focuses on the nature of something and can be represented by themes, patterns,
and stories. For example, in describing the same population, a qualitative analysis might
conclude that the average person is tall, thin, and middle-aged.
BOX 9.1
Use and Abuse of Numbers
Numbers are infinitely malleable and can make a convincing argument, but it is important
to justify the manipulation of quantitative data and what the implications will be. Before
adding a set of numbers together, finding an average, calculating a percentage, or performing
any other kind of numerical translation, consider whether the operation is meaningful in the
specific context.
Qualitative data can also be turned into a set of numbers. Translating non-numerical data
into a numerical or ordered scale is appropriate at times, and this is a common approach in
interaction design. However, this kind of translation also needs to be justified to ensure that it
is meaningful in the given context. For example, assume you have collected a set of interviews
from sales representatives about their use of a new mobile app for reporting sales queries.
One way of turning this data into a numerical form would be to count the number of words
uttered by each interviewee. Conclusions might then be drawn about how strongly the sales
representatives feel about the app; for example, the more they had to say about the product,
the stronger they felt about it. But do you think this is a wise way to analyze the data? Does
it help to answer the study questions?
Other, less obvious, abuses include translating small population sizes into percentages.
For example, saying that 50 percent of users take longer than 30 minutes to place an order
through an e-commerce website carries a different meaning than saying that two out of four
users had the same problem. It is better not to use percentages unless the number of data
points is at least 10, and even then it is appropriate to use both percentages and raw numbers
to make sure that the claim is not misunderstood.
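This advice can be captured in a small helper. The following is an illustrative Python sketch (the function name `report_share` is ours, not from the text) that always reports raw numbers alongside the percentage:

```python
def report_share(count, total):
    """Format a proportion as raw numbers plus percentage, so that small
    samples (e.g. 2 out of 4) are not hidden behind '50 percent'."""
    pct = 100 * count / total
    return f"{count} out of {total} ({pct:.0f} percent)"

print(report_share(2, 4))    # 2 out of 4 (50 percent)
print(report_share(14, 26))  # 14 out of 26 (54 percent)
```

Pairing both forms in a single string makes it harder to quote the percentage without its sample size.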
It is possible to perform legitimate statistical calculations on a set of data and still present
misleading results by not making the context clear or by choosing the particular calculation
that gives the most favorable result (Huff, 1991). In addition, choosing and applying the best
statistical test requires careful thinking (Cairns, 2019), as using an inappropriate test can
unintentionally misrepresent the data.
9.2.1 First Steps in Analyzing Data
Having collected the data, some initial processing is normally required before data analy-
sis can begin in earnest. For example, audio data may be transcribed by hand or by
using an automated tool, such as Dragon; quantitative data, such as time taken or errors
made, is usually entered into a spreadsheet, like Excel. Initial analysis steps for data typi-
cally collected through interviews, questionnaires, and observation are summarized in
Table 9.1.
Interviews
Interviewer notes need to be written up and expanded as soon as possible after the interview
has taken place so that the interviewer’s memory is clear and fresh. An audio or video record-
ing may be used to help in this process, or it may be transcribed for more detailed analysis.
| | Usual raw data | Example qualitative data | Example quantitative data | Initial processing steps |
|---|---|---|---|---|
| Interviews | Audio recordings. Interviewer notes. Video recordings. | Responses to open-ended questions. Video pictures. Respondent's opinions. | Age, job role, years of experience. Responses to closed-ended questions. | Transcription of recordings. Expansion of notes. Entry of answers to closed-ended questions into a spreadsheet. |
| Questionnaires | Written responses. Online database. | Responses to open-ended questions. Responses in "further comments" fields. Respondent's opinions. | Age, job role, years of experience. Responses to closed-ended questions. | Clean up data. Filter into different data sets. |
| Observation | Observer's notes. Photographs. Audio and video recordings. Data logs. Think-aloud recordings. Diaries. | Records of behavior. Description of a task as it is undertaken. Copies of informal procedures. | Demographics of participants. Time spent on a task. The number of people involved in an activity. How many different types of activity are undertaken. | Expansion of notes. Transcription of recordings. Synchronization between data recordings. |

Table 9.1 Data gathered and typical initial processing steps for interviews, questionnaires, and observation
Transcription takes significant effort, as people talk more quickly than most people can type
(or write), and the recording is not always clear. It is worth considering whether to transcribe
the whole interview or just sections of it that are relevant. Deciding what is relevant, however,
can be difficult. Revisiting the goals of the study to see which passages address the research
questions can guide this process.
Closed-ended questions are usually treated as quantitative data and analyzed using basic
quantitative analysis (see Section 9.3 “Basic Quantitative Analysis”). For example, a question
that asks for the respondent’s age range can easily be analyzed to find out the percentage of
respondents in each. More complicated statistical techniques are needed to identify relation-
ships between responses that can be generalized, such as whether there is an interaction
between the condition being tested and a demographic. For example, do people of different
ages use Facebook for different lengths of time when first logging on in the morning or at
night before they go to bed? Open-ended questions typically result in qualitative data that
might be searched for categories or patterns of response.
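For instance, the tally for a closed-ended age-range question can be computed in a few lines of Python; the response values below are invented for illustration:

```python
from collections import Counter

# Hypothetical responses to a closed-ended age-range question
responses = ["18-24", "25-34", "25-34", "35-44", "18-24", "25-34"]

counts = Counter(responses)
# Percentage of respondents in each age range, to one decimal place
percentages = {age: round(100 * n / len(responses), 1) for age, n in counts.items()}
print(percentages)  # {'18-24': 33.3, '25-34': 50.0, '35-44': 16.7}
```

Identifying interactions between responses (for example, between age and usage time) would require the statistical techniques mentioned above rather than simple tallies like this.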
Questionnaires
Increasingly, questionnaire responses are provided using online surveys, and the data is auto-
matically stored in a database. The data can be filtered according to respondent subpopula-
tions (for instance, everyone under 16) or according to a particular question (for example,
to understand respondents’ reactions to one kind of robot personality rather than another).
This allows analyses to be conducted on subsets of the data and hence to draw specific con-
clusions for more targeted goals. To conduct this kind of analysis requires sufficient data
from a large enough sample of participants.
Observation
Observation can result in a wide variety of data including notes, photographs, data logs,
think-aloud recordings (often called protocols), video, and audio recordings. Taken together,
these different types of data can provide a rich picture of the observed activity. The difficult
part is working out how to combine the different sources to create a coherent narrative
of what has been recorded; analytic frameworks, discussed in section 9.5, can help with
this. Initial data processing includes writing up and expanding notes and transcribing ele-
ments of the audio and video recordings and the think-aloud protocols. For observation in
a controlled environment, initial processing might also include synchronizing different data
recordings.
Transcriptions and the observer’s notes are most likely to be analyzed using qualitative
approaches, while photographs provide contextual information. Data logs and some ele-
ments of the observer’s notes would probably be analyzed quantitatively.
9.3 Basic Quantitative Analysis
Explaining statistical analysis requires a whole book on its own (for example, see Cairns,
2019). Here, we introduce two basic quantitative analysis techniques that can be used effec-
tively in interaction design: averages and percentages. Percentages are useful for standard-
izing the data, particularly to compare two or more large sets of responses.
Averages and percentages are fairly well-known numerical measures. However, there
are three different types of average, and using the wrong one can lead to the misinter-
pretation of the results. These three are: mean, median, and mode. Mean refers to the
commonly understood interpretation of average; that is, add together all the figures and
divide by the number of figures with which you started. Median and mode averages are
less well-known but are very useful. The median is the middle value of the data when the
numbers are ranked. The mode is the most commonly occurring number. For example, in
a set of data (2, 3, 4, 6, 6, 7, 7, 7, 8), the median is 6 and the mode is 7, while the mean
is 50/9 = 5.56. In this case, the difference between the different averages is not that great.
However, consider the set (2, 2, 2, 2, 450). Now the median is 2, the mode is 2, and the
mean is 458/5 = 91.6!
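All three averages can be checked with Python's statistics module, using the two data sets from the text:

```python
from statistics import mean, median, mode

# The first example set: the three averages are close together
data = [2, 3, 4, 6, 6, 7, 7, 7, 8]
print(round(mean(data), 2), median(data), mode(data))  # 5.56 6 7

# The second set: one extreme value drags the mean far from the
# median and mode
skewed = [2, 2, 2, 2, 450]
print(round(mean(skewed), 2), median(skewed), mode(skewed))  # 91.6 2 2
```

Running the same three functions over both sets makes the effect of the outlier on the mean immediately visible.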
Use of simple averages can provide a useful overview, but they need to be used with
caution. Evangelos Karapanos et al. (2009) go further, suggesting that averaging treats
diversity among participants as error, and propose the use of a multidimensional scaling
approach instead.
Before any analysis can take place, the data needs to be collated into analyzable data sets.
Quantitative data can usually be translated into rows and columns, where one row equals
one record, such as respondent or interviewee. If these are entered into a spreadsheet such
as Excel, this makes simple manipulations and data set filtering easier. Before entering data
in this way, it is important to decide how to represent the different possible answers. For
example, “don’t know” represents a different response from no answer at all, and they need
to be distinguished, for instance, with separate columns in the spreadsheet. Also, if dealing
with options from a closed-ended question, such as job role, there are two different possible
approaches that affect the analysis. One approach is to have a column headed “Job role” and
to enter the job role as it is given by the respondent or interviewee. The alternative approach
is to have a column for each possible answer. The latter approach lends itself more easily to
automatic summaries. Note, however, that this option will be open only if the original ques-
tion was designed to collect the appropriate data (see Box 9.2).
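The two approaches can be sketched in Python; the respondents and job roles below are hypothetical, used only to show the difference in structure:

```python
# Sketch: two ways to tabulate a closed-ended "job role" question.
responses = [
    ("A", "designer"),
    ("B", "developer"),
    ("C", "designer"),
]

# Approach 1: a single "Job role" column, with the answer entered
# as given by the respondent.
single_column = {resp: role for resp, role in responses}

# Approach 2: one column per possible answer. This lends itself more
# easily to automatic summaries such as column totals.
roles = ["designer", "developer", "manager"]
one_hot = {
    resp: {r: int(r == role) for r in roles}
    for resp, role in responses
}

# Column totals, as a spreadsheet SUM over each column would give
totals = {r: sum(row[r] for row in one_hot.values()) for r in roles}
print(totals)  # {'designer': 2, 'developer': 1, 'manager': 0}
```

As the text notes, the second layout only works if the question offered a fixed set of alternatives in the first place.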
BOX 9.2
How Question Design Affects Data Analysis
Different question designs affect the kinds of analyses that can be performed and the kinds
of conclusions that can be drawn. To illustrate this, assume that some interviews have
been conducted to evaluate a new app that lets you try on virtual clothes and see yourself
in real time as a 3D holograph. This is an extension of the Memory Mirror described at
http://memorymirror.com.
Assume that one of the questions asked is: “How do you feel about this new app?”
Responses to this will be varied and may include that it is cool, impressive, realistic, clunky,
technically complex, and so on. There are many possibilities, and the responses would need
to be treated qualitatively. This means that analysis of the data must consider each individual
response. If there are only 10 or so responses, then this may not be too bad, but if there
are many more, it becomes harder to process the information and harder to summarize
the findings. This is typical of open-ended questions; that is, answers are not likely to be
homogeneous and so they will need to be treated individually. In contrast, answers to a
closed-ended question, which gives respondents a fixed set of alternatives from which to choose,
can be treated quantitatively. So, for example, instead of asking “How do you feel about the
virtual try-on holograph?” assume that you have asked “In your experience, are virtual try-on
holographs realistic, clunky, or distorted?” This clearly reduces the number of options and the
responses would be recorded as “realistic,” “clunky,” or “distorted.”
When entered in a spreadsheet, or a simple table, initial analysis of this data might look
like the following:
| Respondent | Realistic | Clunky | Distorted |
|------------|-----------|--------|-----------|
| A | 1 | | |
| B | 1 | | |
| C | | 1 | |
| . . . | | | |
| Z | | | 1 |
| Total | 14 | 5 | 7 |
Based on this, we can then say that 14 out of 26 (54 percent) of the respondents think
virtual try-on holographs are realistic, 5 out of 26 (19 percent) think they are clunky, and 7
out of 26 (27 percent) think they are distorted. Note also that in the table, respondents’ names
are replaced by letters so that they are identifiable but anonymous to any onlookers. This
strategy is important for protecting participants’ privacy.
Another alternative that might be used in a questionnaire is to phrase the question in
terms of a Likert scale, such as the following one. This again alters the kind of data and hence
the kind of conclusions that can be drawn.

Virtual try-on holographs are realistic:

strongly agree □   agree □   neither □   disagree □   strongly disagree □

The data could then be analyzed using a simple spreadsheet or table:

| Respondent | Strongly agree | Agree | Neither | Disagree | Strongly disagree |
|------------|----------------|-------|---------|----------|-------------------|
| A | 1 | | | | |
| B | | 1 | | | |
| C | | | 1 | | |
| . . . | | | | | |
| Z | | | | | 1 |
| Total | 5 | 7 | 10 | 1 | 3 |

In this case, the kind of data being collected has changed. Based on this second set, nothing
can be said about whether respondents think the virtual try-on holographs are clunky or dis-
torted, as that question has not been asked. We can only say that, for example, 4 out of 26
(15 percent) disagreed with the statement that virtual try-on holographs are realistic, and of
those, 3 (11.5 percent) strongly disagreed.

For simple collation and analysis, spreadsheet software such as Excel or Google
Sheets is often used as it is commonly available, is well understood, and offers a variety
of numerical manipulations and graphical representations. Basic analysis might involve
finding out averages and identifying outliers, in other words, values that are significantly
different from the majority, and hence not common. Producing a graphical representation
provides an overall view of the data and any patterns it contains. Other tools are avail-
able for performing specific statistical tests, such as online t-tests and A/B testing tools.
Data visualization tools can create more sophisticated representations of the data such
as heatmaps.

For example, consider the set of data shown in Table 9.2, which was collected during an
evaluation of a new photo sharing app. This data shows the users' experience of social media
and the number of errors made while trying to complete a controlled task with the new app.
It was captured automatically and recorded in a spreadsheet; then the totals and averages
were calculated. The graphs in Figure 9.1 were generated using the spreadsheet package.
They show an overall view of the data set. In particular, it is easy to see that there are no
significant outliers in the error rate data.

Adding one more user to Table 9.2 with an error rate of 9 and plotting the new data as a
scatter graph (see Figure 9.2) illustrates how graphs can help to identify outliers. Outliers are
usually removed from the main data set because they distort the general patterns. However,
outliers may also be interesting cases to investigate further in case there are special circum-
stances surrounding those users and their session.
These initial investigations also help to identify other areas for further investigation. For
example, is there something special about users with error rate 0 or something distinctive
about the performance of those who use the social media only once a month?
| User | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Number of errors made | 4 | 2 | 1 | 0 | 2 | 3 | 2 | 0 | 3 | 2 | 1 | 2 | 4 | 2 | 1 | 1 | 0 | 0 |

Each user also recorded their social media use; the totals across the 18 users were: more than once a day, 4; once a day, 7; once a week, 2; two or three times a week, 3; once a month, 2. Total errors: 30; mean 1.67 (to 2 decimal places).

Table 9.2 Data gathered during a study of a photo sharing app
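A minimal sketch of outlier checking on the error counts from Table 9.2, after adding the extra user with an error rate of 9 discussed alongside Figure 9.2. The two-standard-deviation cutoff is a common rule of thumb, an assumption here rather than a rule from the text:

```python
from statistics import mean, stdev

# Error counts for the 18 users in Table 9.2
errors = [4, 2, 1, 0, 2, 3, 2, 0, 3, 2, 1, 2, 4, 2, 1, 1, 0, 0]
errors.append(9)  # the extra user added for the scatter-graph example

m, s = mean(errors), stdev(errors)
# Flag values more than two standard deviations from the mean
outliers = [e for e in errors if abs(e - m) > 2 * s]
print(outliers)  # [9]
```

As the text cautions, a flagged value should be investigated before it is removed, since the surrounding circumstances may themselves be interesting.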
Figure 9.1 Graphical representations of the data in Table 9.2: (a) the distribution of errors made (take note of the scale used in these graphs, as seemingly large differences may be much smaller in reality); (b) the spread of social media experience within the participant group
Figure 9.2 Using a scatter diagram helps to identify outliers in your data quite quickly
ACTIVITY 9.1
The data in the following table represents the time taken for a group of users to select and buy
an item from an online shopping website.
Using a spreadsheet application to which you have access, generate a bar graph and a scatter
diagram to provide an overall view of the data. From this representation, make two initial
observations about the data that might form the basis of further investigation.

| User | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Time to complete (mins) | 15 | 10 | 12 | 10 | 14 | 13 | 11 | 18 | 14 | 17 | 20 | 15 | 18 | 24 | 12 | 16 | 18 | 20 | 26 |
Comment
The bar graph and scatter diagram are shown here.

[Bar graph: time to complete task, in minutes, for users A through S.]

[Scatter diagram: time to complete task, in minutes, for users A through S.]

From these two diagrams, there are two areas for further investigation. First, the values for user
N (24) and user S (26) are higher than the others and could be looked at in more detail. In addition,
there appears to be a trend that the users at the beginning of the testing time (particularly
users B, C, D, E, F, and G) performed faster than those toward the end of the testing time. This
is not a clear-cut situation, as O also performed well, and I, L, and P were almost as fast, but
there may be something about this later testing time that has affected the results, and it is worth
investigating further.

It is fairly straightforward to compare two sets of results, for instance from the evalua-
tion of two interactive products, using these kinds of graphical representations of the data.
Semantic differential data can also be analyzed in this way and used to identify trends, pro-
vided that the format of the question is appropriate. For example, the following question was
asked in a questionnaire to evaluate two different smartphone designs:

For each pair of adjectives, place a cross at the point between them that reflects the
extent to which you believe the adjectives describe the smartphone design. Please place
only one cross between the marks on each line.

Annoying        |___|___|___|___|___|  Pleasing
Easy to use     |___|___|___|___|___|  Difficult to use
Value-for-money |___|___|___|___|___|  Expensive
Attractive      |___|___|___|___|___|  Unattractive
Secure          |___|___|___|___|___|  Not secure
Helpful         |___|___|___|___|___|  Unhelpful
Hi-tech         |___|___|___|___|___|  Lo-tech
Robust          |___|___|___|___|___|  Fragile
Inefficient     |___|___|___|___|___|  Efficient
Modern          |___|___|___|___|___|  Dated
Table 9.3 and Table 9.4 show the tabulated results from 100 respondents. Note that the
responses have been translated into five categories, numbered from 1 to 5, based on where
the respondent marked the line between each pair of adjectives. It is possible that respond-
ents may have intentionally put a cross closer to one side of the box than the other, but it is
acceptable to lose this nuance in the data, provided that the original data is not lost, and any
further analysis could refer back to it.
The graph in Figure 9.3 shows how the two smartphone designs varied according to the
respondents’ perceptions of how modern the design is. This graphical notation shows clearly
how the two designs compare.
| | 1 | 2 | 3 | 4 | 5 | |
|---|---|---|---|---|---|---|
| Annoying | 35 | 20 | 18 | 15 | 12 | Pleasing |
| Easy to use | 20 | 28 | 21 | 13 | 18 | Difficult to use |
| Value-for-money | 15 | 30 | 22 | 27 | 6 | Expensive |
| Attractive | 37 | 22 | 32 | 6 | 3 | Unattractive |
| Secure | 52 | 29 | 12 | 4 | 3 | Not secure |
| Helpful | 33 | 21 | 32 | 12 | 2 | Unhelpful |
| Hi-tech | 12 | 24 | 36 | 12 | 16 | Lo-tech |
| Robust | 44 | 13 | 15 | 16 | 12 | Fragile |
| Inefficient | 28 | 23 | 25 | 12 | 12 | Efficient |
| Modern | 35 | 27 | 20 | 11 | 7 | Dated |

Table 9.3 Phone 1
| | 1 | 2 | 3 | 4 | 5 | |
|---|---|---|---|---|---|---|
| Annoying | 24 | 23 | 23 | 15 | 15 | Pleasing |
| Easy to use | 37 | 29 | 15 | 10 | 9 | Difficult to use |
| Value-for-money | 26 | 32 | 17 | 13 | 12 | Expensive |
| Attractive | 38 | 21 | 29 | 8 | 4 | Unattractive |
| Secure | 43 | 22 | 19 | 12 | 4 | Not secure |
| Helpful | 51 | 19 | 16 | 12 | 2 | Unhelpful |
| Hi-tech | 28 | 12 | 30 | 18 | 12 | Lo-tech |
| Robust | 46 | 23 | 10 | 11 | 10 | Fragile |
| Inefficient | 10 | 6 | 37 | 29 | 18 | Efficient |
| Modern | 3 | 10 | 45 | 27 | 15 | Dated |

Table 9.4 Phone 2
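One way to compare the two designs numerically is to collapse each "Modern ... Dated" row into a weighted mean between 1 (modern) and 5 (dated). This is our simplification for illustration, not a calculation the text prescribes:

```python
# Category counts (1..5) from the "Modern ... Dated" rows of
# Tables 9.3 and 9.4
phone1_modern = [35, 27, 20, 11, 7]
phone2_modern = [3, 10, 45, 27, 15]

def weighted_score(counts):
    """Weighted mean of the 1..5 categories: lower means 'more modern'."""
    total = sum(counts)
    return sum(cat * n for cat, n in enumerate(counts, start=1)) / total

print(round(weighted_score(phone1_modern), 2))  # 2.28
print(round(weighted_score(phone2_modern), 2))  # 3.41
```

The scores mirror what Figure 9.3 shows graphically: respondents perceived Phone 1 as distinctly more modern than Phone 2. Collapsing a distribution into one number loses its shape, so a score like this should accompany, not replace, the full counts.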
Data logs that capture users’ interactions automatically, such as with a website or smart-
phone, can also be analyzed and represented graphically, thus helping to identify patterns
in behavior. Also, more sophisticated manipulations and graphical images can be used to
highlight patterns in collected data.
9.4 Basic Qualitative Analysis
Three basic approaches to qualitative analysis are discussed in this section: identifying
themes, categorizing data, and analyzing critical incidents. Critical incident analysis is a way
to isolate subsets of data for more detailed analysis, perhaps by identifying themes or apply-
ing categories. These three basic approaches are not mutually exclusive and are often used in
combination, for example, when analyzing video material critical incidents may first be iden-
tified and then a thematic analysis undertaken. Video analysis is discussed further in Box 9.3.
As with quantitative analysis, the first step in qualitative analysis is to gain an overall
impression of the data and to start looking for interesting features, topics, repeated obser-
vations, or things that stand out. Some of these will have emerged during data gathering,
and this may already have suggested the kinds of patterns to look for, but it is important to
confirm and re-confirm findings to make sure that initial impressions don't bias the analysis.
For example, you might notice from the logged data of people visiting TripAdvisor.com that
they often look first for reviews of hotels that are rated "terrible." Or, you might notice that
many respondents say how frustrating it is to have to answer so many security questions
when logging onto an online banking service. During this first pass, it is not necessary to
capture all of the findings but instead to highlight common features and record any surprises
that arise (Blandford, 2017).
Figure 9.3 A graphical comparison of two smartphone designs according to whether they are perceived as modern or dated

For observations, the guiding framework used in data gathering will give some structure
to the data. For example, the practitioner's framework for observation introduced in
Chapter 8, "Data Gathering," will have resulted in a focus on who, where, and what, while
using the more detailed framework will result in patterns relating to physical objects, people's
goals, sequences of events, and so on.
Qualitative data can be analyzed inductively, that is, extracting concepts from the data,
or deductively, in other words using existing theoretical or conceptual ideas to categorize
data elements (Robson and McCartan, 2016). Which approach is used depends on the data
obtained and the goal of the study, but the underlying principle is to classify elements of the
data in order to gain insights toward the study’s goal. Identifying themes (thematic analysis)
takes an inductive approach, while categorizing data takes a deductive approach. In practice,
analysis is often performed iteratively, and it is common for themes identified inductively
then to be applied deductively to new data, and for an initial, pre-existing categorization
scheme to be enhanced inductively when applied to a new situation or new data. One of the
most challenging aspects of identifying themes or new categories is determining meaningful
codes that are orthogonal (that is, codes which do not overlap). Another is deciding on the
appropriate granularity for them, for example at the word, phrase, sentence, or paragraph
level. This is also dependent on the goal of the study and the data being analyzed.
Whether an inductive or deductive approach is used, an objective is to produce a reli-
able analysis, that is, one that can be replicated by someone else if they were to use the same
type of approach. One way to achieve this is to train another person to do the coding. When
training is complete, both researchers analyze a sample of the same data. If there is a large
discrepancy between the two analyses, either training was inadequate or the categorization
is not working and needs to be refined. When a high level of reliability is reached between
the two researchers, it can be quantified by calculating the inter-rater reliability. This is the
percentage of agreement between the analyses of the two researchers, defined as the number
of items of agreement, for example the number of categories or themes arising from the data
that have been identified consistently by both researchers, expressed as a percentage of the
total number of items examined. An alternative measure where two researchers have been
used is Cohen’s kappa, (κ), which considers the possibility that agreement has occurred due
to chance (Cohen, 1960).
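Both measures can be sketched in a few lines of Python; the two coders' code assignments below are hypothetical, invented only to exercise the calculation:

```python
from collections import Counter

def percent_agreement(coder1, coder2):
    """Inter-rater reliability: share of items both coders labeled identically."""
    matches = sum(x == y for x, y in zip(coder1, coder2))
    return matches / len(coder1)

def cohens_kappa(coder1, coder2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder1)
    observed = percent_agreement(coder1, coder2)
    c1, c2 = Counter(coder1), Counter(coder2)
    # Expected chance agreement from each coder's marginal code frequencies
    expected = sum(c1[code] * c2[code] for code in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two researchers to ten data segments
a = ["nav", "nav", "search", "error", "nav", "search", "error", "nav", "nav", "search"]
b = ["nav", "nav", "search", "nav", "nav", "search", "error", "nav", "search", "search"]

print(percent_agreement(a, b))           # 0.8
print(round(cohens_kappa(a, b), 2))      # 0.67
```

Kappa is lower than the raw 80 percent agreement because some of that agreement would be expected by chance given how often each coder used each code.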
Using more sophisticated analytical frameworks to structure the analysis of qualitative
data can lead to additional insights that go beyond the results of these basic techniques.
Section 9.5 introduces frameworks that are commonly used in interaction design.
BOX 9.3
Analyzing Video Material
A good way to start a video analysis is to watch what has been recorded all the way through
while writing a high-level narrative of what happens, noting down where in the video there
are any potentially interesting events. How to decide which is an interesting event will depend
on what is being observed. For example, in a study of the interruptions that occur in an open
plan office, an event might be each time that a person takes a break from an ongoing activity,
for instance, when a phone rings, someone walks into their cubicle, or email arrives. If it is a
study of how pairs of students use a collaborative learning tool, then activities such as turn-
taking, sharing of input devices, speaking over one another, and fighting over shared objects
would be appropriate to record.
Chronological and video times are used to index events. These may not be the same, since recordings can run at different speeds from real time and video can be edited. Labels for certain routine events are also used, for instance lunchtime, coffee break, staff meeting, and doctor’s rounds. Spreadsheets are used to record the classification and description of events, together with annotations and notes of how the events began, how they unfolded, and how they ended.

Video can be augmented with captured screens or logged data of people’s interactions with a computer display, and sometimes transcription is required. There are various logging and screen capture tools available for this purpose, which enable interactions to be played back as a movie, showing screen objects being opened, moved, selected, and so on. These can then be played in parallel with the video to provide different perspectives on the talk, physical interactions, and the system’s responses that occur. Having a combination of data streams can enable more detailed and fine-grained patterns of behavior to be interpreted (Heath et al., 2010).

9 DATA ANALYSIS, INTERPRETATION, AND PRESENTATION

9.4.1 Identifying Themes
Thematic analysis is considered an umbrella term to cover a variety of different approaches to examining qualitative data. It is a widely used analytical technique that aims to identify, analyze, and report patterns in the data (Braun and Clarke, 2006). More formally, a theme is something important about the data in relation to the study goal. A theme represents a pattern of some kind, perhaps a particular topic or feature found in the data set, which is considered to be important, relevant, and even unexpected with respect to the goals driving the study. Themes that are identified may relate to a variety of aspects: behavior, a user group, events, places or situations where those events happen, and so on. Each of these kinds of themes may be relevant to the study goals. For example, descriptions of typical users may be an outcome of data analysis that focuses on participant characteristics. Although thematic analysis is described in this section on qualitative analysis, themes and patterns may also emerge from quantitative data.

After an initial pass through the data, the next step is to look more systematically for themes across participants’ transcripts, seeking further evidence both to confirm and disconfirm initial impressions in all of the data. This more systematic analysis focuses on checking for consistency; in other words, do the themes occur across all participants, or is it only one or two people who mention something? Another focus is on finding further themes that may not have been noticed the first time. Sometimes, the refined themes resulting from this systematic analysis form the primary set of findings for the analysis, and sometimes they are just the starting point.

The study’s goal provides an orienting focus for the identification and formulation of themes in the first and subsequent passes through the data. For example, consider a survey to evaluate whether the information displayed on a train travel website is appropriate and sufficient. Several of the respondents suggest that the station stops in between the origin and destination stations should be displayed. This is relevant to the study’s goal and would be reported as a main theme. In another part of the survey, under further comments you might notice that several respondents say the company’s logo is distracting. Although this too is a theme in the data, it is not directly relevant to the study’s goals and may be reported only as a minor theme.
9.4 BASIC QUALITATIVE ANALYSIS
Once a number of themes have been identified, it is usual to step back from the set
of themes to look at the bigger picture. Is an overall narrative starting to emerge, or are the
themes quite disparate? Do some seem to fit together with others? If so, is there an over-
arching theme? Can you start to formulate a meta-narrative, that is, an overall picture of
the data? In doing this, some of the original themes may not seem as relevant and can be
removed. Are there some themes that contradict each other? Why might this be the case?
This can be done individually, but more often this is applied in a group using brainstorming
techniques with sticky notes.
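Once each transcript has been coded, the consistency question raised earlier (do themes occur across all participants, or only one or two?) can be tallied mechanically. The following sketch is illustrative only; the participants and themes are invented, loosely echoing the train-website example.

```python
# Hypothetical coded data: for each participant, the set of themes
# identified in their transcript.
themes_by_participant = {
    "P1": {"route detail", "booking", "logo distracting"},
    "P2": {"route detail", "booking"},
    "P3": {"route detail", "logo distracting"},
    "P4": {"route detail"},
}

def theme_consistency(themes_by_participant):
    """For each theme, list the participants whose transcripts mention it."""
    all_themes = set().union(*themes_by_participant.values())
    return {
        theme: sorted(p for p, ts in themes_by_participant.items()
                      if theme in ts)
        for theme in sorted(all_themes)
    }

total = len(themes_by_participant)
for theme, who in theme_consistency(themes_by_participant).items():
    print(f"{theme}: {len(who)}/{total} participants ({', '.join(who)})")
```

A theme mentioned by all participants (here, "route detail") is a stronger candidate for a main finding than one raised by only one or two people.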
A common technique for exploring data, identifying themes, and looking for an overall
narrative is to create an affinity diagram. The approach seeks to organize individual ideas
and insights into a hierarchy showing common structures and themes. Notes are grouped
together when they are similar in some fashion. The groups are not predefined, but rather
they emerge from the data. This process was originally introduced into the software quality
community from Japan, where it is regarded as one of the seven quality processes. The affin-
ity diagram is built gradually. One note is put up first, and then the team searches for other
notes that are related in some way.
Affinity diagrams are used in Contextual Design (Beyer and Holtzblatt, 1998; Holtz-
blatt, 2001), but they have also been adopted widely in interaction design (Lucero, 2015).
For example, Madeline Smith et al. (2018) conducted interviews to design a web app for
co-watching videos across a distance, and they used affinity diagramming to identify require-
ments from interviewee transcripts (see Figure 9.4). Despite the prevalence of digital col-
laboration tools, the popularity of physical affinity diagramming using sticky notes drawn
by hand, has persisted for many years (Harboe and Huang, 2015).
Figure 9.4 Section of an affinity diagram built during the design of a web application
Source: Smith (2018). Used courtesy of Madeline Smith
9.4.2 Categorizing Data
Inductive analysis is appropriate when the study is exploratory, and it is important to let the
themes emerge from the data itself. Sometimes, the analysis frame (the set of categories used)
is chosen beforehand, based on the study goal. In that case, analysis proceeds deductively. For
example, in a study of novice interaction designer behavior in Botswana, Nicole Lotz et al.
(2014) used a set of predetermined categories based on Schön’s (1983) design and reflection
cycle: naming, framing, moving, and reflecting. This allowed the researchers to identify detailed
patterns in the designers’ behavior, which provided implications for education and support.
To illustrate categorization, we present an example derived from a set of studies look-
ing at the use of different navigation aids in an online educational setting (Ursula Armi-
tage, 2004). These studies involved observing users working through some online educational
material (about evaluation methods), using the think-aloud technique. The think-aloud
protocol was recorded and then transcribed before being analyzed from various perspec-
tives, one of which was to identify usability problems that the participants were having with
the online environment known as Nestor Navigator (Zeiliger et al., 1997). An excerpt from the
transcription is shown in Figure 9.5.
I’m thinking that it’s just a lot of information to absorb from the screen. I just I don’t concentrate
very well when I’m looking at the screen. I have a very clear idea of what I’ve read so far . . .
but it’s because of the headings I know OK this is another kind of evaluation now and before it
was about evaluation which wasn’t anyone can test and here it’s about experts so it’s like it’s
nice that I’m clicking every now and then coz it just sort of organizes the thoughts. But it would
still be nice to see it on a piece of paper because it’s a lot of text to read.
Am I supposed to, just one question, am supposed to say something about what I’m reading
and what I think about it the conditions as well or how I feel reading it from the screen, what is
the best thing really?
Observer: What you think about the information that you are reading on the screen . . . you
don’t need to give me comments . . . if you think this bit fits together.
There’s so much reference to all those previously said like I’m like I’ve already forgotten the
name of the other evaluation so it said unlike the other evaluation this one like, there really is
not much contrast with the other it just says what it is may be . . . so I think I think of . . .
Maybe it would be nice to have other evaluations listed to see other evaluations you know
here, to have the names of other evaluations other evaluations just to, because now when
I click previous I have to click it several times so it would be nice to have this navigation,
extra links.
Figure 9.5 Excerpt from a transcript of a think-aloud protocol when using an online educational
environment. Note the prompt from the observer about halfway through.
Source: Armitage (2004). Used courtesy of Ursula Armitage
To read more about the use of affinity diagrams in interaction design, see the fol-
lowing page:
https://uxdict.io/design-thinking-methods-affinity-diagrams-357bd8671ad4
This excerpt was analyzed using a categorization scheme derived from a set of negative
effects of a system on a user (van Rens, 1997) and was iteratively extended to accommo-
date the specific kinds of interaction observed in these studies. The categorization scheme is
shown in Figure 9.6.
This scheme developed and evolved as the transcripts were analyzed and more cat-
egories were identified inductively. Figure 9.7 shows the excerpt from Figure 9.5 coded
using this categorization scheme. Note that the transcript is divided up using square
brackets to indicate which element is being identified as showing a particular usabil-
ity problem.
Having categorized the data, the results can be used to answer the study goals. In the
earlier example, the study allowed the researchers to quantify the number of
usability problems encountered overall by participants, the mean number of problems per
participant for each of the test conditions, and the number of unique problems of each type
per participant. This also helped to identify patterns of behavior and recurring problems.
Having the think-aloud protocol meant that the overall view of the usability problems could
take context into account.
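This kind of tallying can be automated once transcripts carry codes like those in Figure 9.7. The sketch below is not the study’s actual tooling; the coded fragment is abbreviated and the regular expression simply assumes the [... UP n.n] convention used in the figures.

```python
import re
from collections import Counter

# An illustrative coded fragment in the style of Figure 9.7; codes such
# as "UP 1.1" mark usability problems from the categorization scheme.
coded = """
[I'm thinking that it's just a lot of information to absorb from the
screen. UP 1.1] [But it would still be nice to see it on a piece of
paper UP 1.10] [because now when I click previous I have to click it
several times UP 1.1, 1.7] [so it would be nice to have this
navigation, extra links UP 1.10]
"""

# Match every problem number following a "UP" marker, allowing
# comma-separated lists such as "UP 1.1, 1.7".
code_pattern = re.compile(r"UP\s+((?:\d+\.\d+)(?:\s*,\s*\d+\.\d+)*)",
                          re.IGNORECASE)

counts = Counter()
for group in code_pattern.findall(coded):
    for code in re.split(r"\s*,\s*", group):
        counts[code] += 1

print(counts.most_common())
# [('1.1', 2), ('1.10', 2), ('1.7', 1)]
```

From counts like these per participant and per condition, the overall totals, means, and unique problem types described above follow directly.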
Figure 9.6 Criteria for identifying usability problems from verbal protocol transcriptions
Source: Armitage (2004). Used courtesy of Ursula Armitage
1. Interface Problems
1.1. Verbalizations show evidence of dissatisfaction about an aspect of the
interface.
1.2. Verbalizations show evidence of confusion/uncertainty about an
aspect of the interface.
1.3. Verbalizations show evidence of confusion/surprise at the outcome of
an action.
1.4. Verbalizations show evidence of physical discomfort.
1.5. Verbalizations show evidence of fatigue.
1.6. Verbalizations show evidence of difficulty in seeing particular aspects
of the interface.
1.7. Verbalizations show evidence that they are having problems achieving
a goal that they have set themselves, or the overall task goal.
1.8. Verbalizations show evidence that the user has made an error.
1.9. The participant is unable to recover from error without external help
from the experimenter.
1.10. The participant suggests a redesign of the interface of the elec-
tronic texts.
2. Content Problems
2.1. Verbalizations show evidence of dissatisfaction about aspects of the
content of the electronic text.
2.2. Verbalizations show evidence of confusion/uncertainty about aspects
of the content of the electronic text.
2.3. Verbalizations show evidence of a misunderstanding of the electronic
text content (the user may not have noticed this immediately).
2.4. The participant suggests re-writing the electronic text content.
Identified problems should be coded as [UP, <<problem no.>>].
ACTIVITY 9.2
The following is a think-aloud extract from the same study of users working through online educa-
tional material. Using the categorization scheme in Figure 9.6, code this extract for usability prob-
lems. It is useful to put brackets around the complete element of the extract that you are coding.
Well, looking at the map, again there’s no obvious start point, there should be something high-
lighted that says ‘start here.’
Ok, the next keyword that’s highlighted is evaluating, but I’m not sure that’s where I want to go
straight away, so I’m just going to go back to the introduction.
Yeah, so I probably want to read about usability problems before I start looking at evaluation. So,
I, yeah. I would have thought that the links in each one of the pages would take you to the next
logical point, but my logic might be different to other people’s. Just going to go and have a look at
usability problems.
Ok, again I’m going to flip back to the introduction. I’m just thinking if I was going to do this
myself I would still have a link back to the introduction, but I would take people through the
logical sequence of each one of these bits that fans out, rather than expecting them to go back
all the time.
Going back . . . to the introduction. Look at the types. Observation, didn’t really want to go there.
What’s this bit [pointing to Types of UE on map]? Going straight to types of . . .
Ok, right, yeah, I’ve already been there before. We’ve already looked at usability problems, yep
that’s ok, so we’ll have a look at these references.
I clicked on the map rather than going back via introduction, to be honest I get fed up going back to introduction all the time.

Comment
Coding transcripts takes practice, but this activity will give you an idea of the kinds of decisions involved in applying categories. Our coded extract is shown here:

[Well, looking at the map, again there’s no obvious start point UP 1.2, 2.2], [there should be something highlighted that says ‘start here’ UP 1.1, 1.10].

Ok, the next keyword that’s highlighted is evaluating, but [I’m not sure that’s where I want to go straight away UP 2.2], so I’m just going to go back to the introduction.

Yeah, so I probably want to read about usability problems before I start looking at evaluation. So, I, yeah. [I would have thought that the links in each one of the pages would take you to the next logical point, but my logic might be different to other people’s UP 1.3]. Just going to go and have a look at usability problems.

Ok, again I’m going to flip back to the introduction. [I’m just thinking if I was going to do this myself I would still have a link back to the introduction, but I would take people through the logical sequence of each one of these bits that fans out, rather than expecting them to go back all the time UP 1.10].

Going back . . . to the introduction. [Look at the types. Observation, didn’t really want to go there. What’s this bit [pointing to Types of UE on map]? UP 2.2] Going straight to types of . . .

Ok, right, yeah, I’ve already been there before. We’ve already looked at usability problems, yep that’s ok, so we’ll have a look at these references.

I clicked on the map rather than going back via introduction, [to be honest I get fed up going back to introduction all the time. UP 1.1].

[I’m thinking that it’s just a lot of information to absorb from the screen. UP 1.1] [I just I don’t concentrate very well when I’m looking at the screen UP 1.1]. I have a very clear idea of what I’ve read so far . . . [but it’s because of the headings UP 1.1] I know OK this is another kind of evaluation now and before it was about evaluation which wasn’t anyone can test and here it’s about experts so it’s like it’s nice that I’m clicking every now and then coz it just sort of organises the thoughts. [But it would still be nice to see it on a piece of paper UP 1.10] [because it’s a lot of text to read UP 1.1].

Am I supposed to, just one question, am supposed to say something about what I’m reading and what I think about it the conditions as well or how I feel reading it from the screen, what is the best thing really?

Observer: What you think about the information that you are reading on the screen . . . you don’t need to give me comments . . . if you think this bit fits together.

[There’s so much reference to all those previously said UP 2.1] [like I’m like I’ve already forgotten the name of the other evaluation so it said unlike the other evaluation this one like, there really is not much contrast with the other it just says what it is may be . . . so I think I think of . . . UP 2.2]

[Maybe it would be nice to have other evaluations listed to see other evaluations you know here, to have the names of other evaluations other evaluations UP 1.10] just to, [because now when I click previous I have to click it several times UP 1.1, 1.7] [so it would be nice to have this navigation, extra links UP 1.10].

Figure 9.7 The excerpt in Figure 9.5 coded using the categorization scheme in Figure 9.6
Source: Armitage (2004). Used courtesy of Ursula Armitage

9.4.3 Critical Incident Analysis
Data gathering sessions can often result in a lot of data. Analyzing all of this data in any detail is very time-consuming and often not necessary. Critical incident analysis is one approach that helps to identify significant subsets of the data for more detailed analysis. This technique is a set of principles that emerged from work carried out in the United States Army Air Forces where the goal was to identify the critical requirements of good and bad performance by pilots (Flanagan, 1954). It has two basic principles: “(a) reporting facts regarding behavior is preferable to the collection of interpretations, ratings, and opinions based on general impressions; (b) reporting should be limited to those behaviors which, according to competent observers, make a significant contribution to the activity” (Flanagan, 1954, p. 355). In the interaction design context, the use of well-planned observation sessions satisfies the first principle. The second principle refers to critical incidents, that is, incidents that are significant or pivotal to the activity being observed, in either a desirable or an undesirable way.

In interaction design, critical incident analysis has been used in a variety of ways, but the main focus is to identify specific incidents that are significant and then to focus on these and analyze them in detail, using the rest of the data collected as context to inform interpretation. These may be identified by the users during a retrospective discussion of a recent event or by an observer either through studying video footage or in real time. For example, in an evaluation study, a critical incident may be signaled by times when users were obviously
stuck—usually marked by a comment, silence, looks of puzzlement, and so on. These are
indications only. Whether the incident is significant enough to be worthy of further investiga-
tion depends on the severity of the problem identified.
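If observation sessions are logged with timestamps, long silences (one of the indications just mentioned) can be flagged automatically as candidates for closer review. This is a hedged sketch with an invented log and an arbitrary 20-second threshold; it only surfaces candidates, it does not judge significance.

```python
# Hypothetical think-aloud log: (seconds elapsed, utterance or event).
log = [
    (0, "ok, starting the booking task"),
    (8, "clicking on departures"),
    (12, "hmm, where do I enter the date?"),
    (47, "I can't find the date field at all"),   # long silence before this
    (52, "oh, it was hidden behind the calendar icon"),
]

def candidate_incidents(log, silence_threshold=20):
    """Flag gaps longer than the threshold as possible critical incidents.

    A long silence is only an indication; each flagged gap still needs to
    be reviewed in context to decide whether it is significant.
    """
    incidents = []
    for (t_prev, _), (t_next, utterance) in zip(log, log[1:]):
        gap = t_next - t_prev
        if gap > silence_threshold:
            incidents.append((t_prev, t_next, gap, utterance))
    return incidents

for t0, t1, gap, said in candidate_incidents(log):
    print(f"{gap}s silence between {t0}s and {t1}s, then: {said!r}")
```

The analyst then returns to the recording around each flagged gap and uses the surrounding data as context, in line with the technique described above.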
Tuomas Kari et al. (2017) used the critical incident technique in a study of the
location-based augmented reality game Pokémon GO. They were interested in identifying
the types of behavior change that playing the game induced in players. To do this, they
distributed a survey through social media channels asking experienced players to identify
and describe one outstanding positive or negative experience. The 262 valid responses were
themed and categorized into eight groups. Apart from expected behavior change such as
increased physical activity, they also found that players were more social, found their routines
more meaningful, expressed more positive emotions, and were more motivated to explore
their surroundings. In another study, Elise Grison et al. (2013) used the critical incident
technique to investigate specific factors that influence travelers’ choices of transport mode in
Paris in order to adapt new tools and services for mobility, such as dynamic route planners.
Participants were asked to report on positive and negative real events they experienced in the
context of their route to work or study, and whether they regretted or were satisfied with this
choice of transport. Their findings included that contextual factors have a great influence on
choice, that people were more likely to choose an alternative route to return home than when
setting out, and that emotional state is important when planning a route.
To read more on critical incident analysis in usability studies where the emphasis
is on understanding the cause of problems, see this site:
www.usabilitybok.org/critical-incident-technique.
ACTIVITY 9.3
Assign yourself or a friend the task of identifying the next available theater performance or
movie that you’d like to attend in your area. As you perform this task, or watch your friend
do it, make a note of critical incidents associated with the activity. Remember that a critical
incident may be a positive or a negative event.
Comment
Information about entertainment may be available through the local newspaper, searching
online, looking at social media to see what is recommended by friends, or contacting local
cinemas or theaters directly. When this author asked her daughter to attempt this task, several
critical incidents emerged, including the following:
1. After checking her social media channels, nothing in the recommendations appealed to her
and so she decided to search online.
2. She found that one of her all-time favorite movie classics was playing at the local movie
theater for just one week, and tickets were still available.
3. When trying to buy the tickets, she discovered that she needed a credit card, which she
didn’t have, and so she had to ask me to complete the purchase!
9.5 Which Kind of Analytic Framework to Use?
There are several different analytical frameworks that can be used to analyze and interpret data
from a qualitative study. In this section, six different approaches are outlined, ordered roughly
in terms of their granularity, that is, the level of detail involved. For example, conversation
analysis has a fine level of granularity, and it allows the details of what is said and how dur-
ing a short fragment of conversation to be examined, while systems-based frameworks take a
broader scope and have a coarser level of granularity, such as what happens when a new digital
technology is introduced into an organization, like a hospital. Conversation analysis may result
in insights related to users’ interactions through a collaboration technology, while systems-
based analyses may result in insights related to changes in work practices, worker satisfac-
tion, improvements in workflow, impact on an office culture, and so on. Table 9.5 lists the six
approaches in terms of the main types of data, focus, expected outcomes, and level of granularity.
Conversation analysis
  Data: Recordings of spoken conversations
  Focus: How conversations are conducted
  Expected outcomes: Insights into how conversations are managed and how they progress
  Level of granularity: Word-level, or finer, for instance, pauses and inflection

Discourse analysis
  Data: Recordings of speech or writing from individuals or several participants
  Focus: How words are used to convey meaning
  Expected outcomes: Implicit or hidden meanings in texts
  Level of granularity: Word, phrase, or sentence-level

Content analysis
  Data: Any form of “text,” including written pieces, video and audio recordings, or photographs
  Focus: How often something is featured or is spoken about
  Expected outcomes: Frequency of items appearing in a text
  Level of granularity: A wide range of levels, from words, to feelings or attitudes, to artifacts or people

Interaction analysis
  Data: Video recordings of a naturally occurring activity
  Focus: Verbal and non-verbal interactions between people and artifacts
  Expected outcomes: Insights about how knowledge and action are used within an activity
  Level of granularity: At the level of artifact, dialogue, and gesture

Grounded theory
  Data: Empirical data of any kind
  Focus: Constructing a theory around the phenomenon of interest
  Expected outcomes: A theory grounded in empirical data
  Level of granularity: Varying levels, depending on the phenomenon of interest

Systems-based frameworks
  Data: Large-scale and heterogeneous data
  Focus: Large-scale settings involving people and technology, such as a hospital or airport
  Expected outcomes: Insights about organizational effectiveness and efficiency
  Level of granularity: Macro-level, organizational level

Table 9.5 Overview of analytical frameworks used in interaction design
9.5.1 Conversation Analysis
Conversation analysis (CA) examines the semantics of a conversation in fine detail. The focus
is on how a conversation is conducted (Jupp, 2006). This technique is used in sociological
studies, and it examines how conversations start and how turn-taking is structured, together
with other rules of conversation. It has been used to analyze interactions in a range of settings,
and it has influenced designers’ understanding of users’ needs in these environments. It can
also be used to compare conversations that take place through different media, for example,
face-to-face conversations versus those conducted through social media. More recently, it
has been used to analyze the conversations that take place with voice-assisted technologies
and chatbots.
Voice assistants (also called smart speakers), like Amazon Echo, have become increas-
ingly popular in domestic settings, providing a limited kind of conversational interaction,
mainly by answering questions and responding to requests. But how do families orient and
adapt to them? Does using this device change the way they talk, or do they talk to the device
as if it was another human being?
Martin Porcheron et al. (2018) carried out a study examining how such devices were
being used by families in their own homes, and they used conversation analysis with excerpts
from selected conversations. A sample fragment of a conversation they analyzed is presented
in Figure 9.8. This uses a particular type of syntax for marking up the minutiae of interac-
tions and speech that took place during an approximate 10-second period. Square parenthe-
ses are used to show overlapping talk, round parentheses to indicate a pause, and physical
spacing to show temporal sequencing of what is said. This level of detail enables the analysis
to reveal subtle cues and mechanisms that are used during the conversations.
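As a small illustration of working with this markup programmatically, the sketch below extracts the timed pauses, such as (1.1), from transcript lines in the style of Figure 9.8. It handles only the timed-pause notation, not overlaps, micro-pauses, or the other symbols; the lines are copied from the fragment for illustration.

```python
import re

# Transcript lines using the markup described above: round parentheses
# holding a number indicate a timed pause, in seconds.
lines = [
    "03 SUS [ alexa ][ (1.1) ] beat the in[tro",
    "06 CAR (0.6) it's mother's day? (0.4)",
    "11 SUS alexa (1.3) alexa (0.5)=",
]

# Match only timed pauses like (0.6); untimed pauses such as ( ) and
# micro-pauses (.) are deliberately ignored.
pause_pattern = re.compile(r"\((\d+\.\d+)\)")

for line in lines:
    speaker = line.split()[1]           # e.g. "SUS" from "03 SUS ..."
    pauses = [float(p) for p in pause_pattern.findall(line)]
    print(speaker, pauses)
```

Even this crude pass hints at why the markup matters: the pauses after Susan’s calls to “Alexa” are exactly where Carl’s interjections and the device’s non-response become analytically interesting.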
01 SUS i’d like to play beat the intro in a minute
02 LIA [ oh no: ]
03 SUS [ alexa ][ (1.1) ] beat the in[tro
04 CAR [ °yeah°; ]
05 LIA [°no:::…°
06 CAR (0.6) it’s mother’s day? (0.4)
07 SUS it’s ( ) yep (.) listen (.) you need to keep
08 on eating your orange stuff (.) liam
09 (0.7)
10 CAR and your green stuff
11 SUS alexa (1.3) alexa (0.5)=
12 CAR =°and your brown stuff
13 SUS play beat the intro

Figure 9.8 An extract of the conversation between the family and Alexa, marked up for conversation analysis
Source: Porcheron et al. (2018), fragment 1. Reproduced with permission of ACM Publications

In this fragment, Susan (who is the mother) announces to Liam (her son) and Carl (the father) her desire to play a particular game (called Beat the Intro) with their Amazon Echo. Liam does not want to play (expressed by his long “no” cry in response). Susan, however, has
already called “Alexa” to wake up the device. Carl shows his support for her, as indicated
by his quick “yeah” during the pause after she says “Alexa.” Alexa, however, appears not to
respond. At this point, Susan returns to the ongoing family conversation, telling Liam to keep
eating his “orange stuff.” Carl also chips in after her and says to Liam that he should also
eat the “green stuff.” At the same time, Susan has another go at getting Alexa to wake up.
She calls out “Alexa” twice in a questioning voice. In the pause between Susan’s two calls to
Alexa, Carl tells Liam to keep eating again, but this time the “brown stuff.” Having succeeded
in waking up Alexa, Susan then asks it to play the game.
So, what insights does this fine level of analysis provide? Martin Porcheron et al. (2018)
point out that it demonstrates how a family’s interaction with the Amazon Echo is seamlessly
interwoven with other ongoing activities, in this case, the parents trying to get their child to
eat his food. At a more general level, it illustrates how our conversations with each other and
voice-assisted technologies interleave in nuanced ways rather than being separate conversa-
tions between members of the family or the family and the device, which jump from one to
another. They also show how their analysis led them to think that the term conversational
interaction fails to distinguish between the interactional embeddedness of voice-assisted
interfaces and human conversation. Instead, they suggest that current voice-assisted technologies be designed using a conceptual model more akin to instructing than to conversing.
9.5.2 Discourse Analysis
Discourse analysis focuses on dialogue, in other words, the meaning of what is said and
how words are used to convey meaning. Discourse analysis is strongly interpretive, pays
great attention to context, and views language not only as reflecting psychological and social
aspects but also as constructing them (Coyle, 1995). An underlying assumption of discourse
analysis is that there is no objective scientific truth. Language is a form of social reality that
is open to interpretation from different perspectives. In this sense, the underlying philosophy
of discourse analysis is similar to that of ethnography. Language is viewed as a constructive
tool, and discourse analysis provides a way of focusing on how people use language to con-
struct versions of their worlds (Fiske, 1994).
Small changes in wording can change meaning, as the following excerpts indicate
(Coyle, 1995):
Discourse analysis is what you do when you are saying that you are doing discourse
analysis. . .
According to Coyle, discourse analysis is what you do when you are saying that you are
doing discourse analysis. . .
By adding just three words, “According to Coyle,” the sense of authority changes, depend-
ing on what the reader knows about Coyle’s work and reputation.
Discourse analysis is useful when trying to identify subtle and implicit meaning in what
people are writing about, what is trending, what is fake news, and so on. It can be used with
data from interviews; in social media such as Facebook, Twitter, and WhatsApp; and in
emails. For example, Carlos Roberto Teixeira et al. (2018) proposed a taxonomy for analyz-
ing tweets as a way of understanding the types of conversations that take place online and
identifying patterns of behavior within them. The tweets they analyzed were those posted
during the political scandals that were happening in Brazil during 2016. Using discourse
analysis and by understanding the cultural background and how technology was adopted
and used by the Brazilian society, they were able to interpret the meaning of the tweets,
beyond the words that were said.
They scraped the raw data of the tweets from the web and ranked them in order of those
most retweeted. Then they selected 100 of the most influential postings, which were messages
that had been retweeted between 9,000 and 47,000 times. Finally, they manually classified
the tweets using Excel spreadsheets, identifying the most dominant discourse characteristics.
They note, however, that since the most retweeted messages are often captured out of context (that is, outside the conversational thread), they sometimes had difficulty interpreting the context, such as the particular news story associated with a tweet.
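The selection step described above (ranking scraped tweets by retweet count and keeping the most influential postings) can be sketched in a few lines of Python. The list structure and field names below are invented for illustration and are not the authors' actual pipeline:

```python
# Hypothetical sketch: rank scraped tweets by retweet count and keep
# the most influential ones, mirroring the selection step described above.

def select_influential(tweets, n=100, min_retweets=9000):
    """Sort tweets by retweet count (descending) and keep up to n
    tweets that meet a minimum retweet threshold."""
    ranked = sorted(tweets, key=lambda t: t["retweets"], reverse=True)
    return [t for t in ranked if t["retweets"] >= min_retweets][:n]

tweets = [
    {"id": 1, "retweets": 47_000},
    {"id": 2, "retweets": 120},
    {"id": 3, "retweets": 9_500},
]
influential = select_influential(tweets)
print([t["id"] for t in influential])  # → [1, 3]
```

In practice, the filtering thresholds and the fields available depend entirely on how the raw data was scraped.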
The tweets were then classified into a number of provisional themes that were whittled
down to five general ones. These were (1) “support” (tweets that promote either side of the
political dispute); (2) “criticism and protest” (tweets that showed disapproval or an objec-
tion); (3) “humor” (tweets that were witty and had cartoons or jokes in them); (4) “news”
(tweets that referred to news stories and were neutral in tone); and (5) “neutral” (tweets that were
indifferent or their position could not be inferred). For reliability, two different research-
ers classified each tweet using these themes. Once classified, the data was analyzed using
descriptive statistics and simple visualizations, like word (or tag) clouds and pie charts. These
allowed them to draw conclusions about the tweets that they had analyzed, including the
relative size of the different themes and how the size changed over time. Tweets in the criti-
cism and protest theme were the most popular overall, followed by humor. More generally,
they discuss how this kind of discourse analysis shows that humor, protest, and criticism are
highly dominant in this kind of online discourse. They suggest that people who tweet often express their feelings about ongoing political events through criticism and humor in equal measure. Moreover, they found that tweets including images, videos, and animated GIFs
were most frequently classified as humor.
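Having two researchers classify each tweet makes it possible to quantify their agreement. One common measure is Cohen's kappa, which corrects raw agreement for the agreement expected by chance; the chapter does not say which reliability statistic the authors used, so this sketch is purely illustrative:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

# Invented labels using the study's five themes.
a = ["humor", "support", "news", "humor", "criticism"]
b = ["humor", "support", "neutral", "humor", "criticism"]
print(round(cohens_kappa(a, b), 2))  # → 0.74
```

Values near 1.0 indicate strong agreement; low or negative values suggest the coding scheme needs refinement.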
This kind of analysis of public discourse, when done by hand, is extremely time-consuming.
To help, there are new software tools being developed that can automatically process computer-
mediated discourses (Ecker, 2016). The advantage, as you will see in the next chapter, is that
much larger data sets can be analyzed. The downside is that the analyst is no longer “hands-on”
and loses touch with the surrounding context, meaning different interpretations arise.
9.5.3 Content Analysis
Content analysis typically involves classifying the data into themes or categories and then
studying the frequency of category occurrences (Krippendorff, 2013). The technique can
be used for any text, where “text” refers to a range of media including video, newspapers,
advertisements, survey responses, images, sounds, and so on. It can be used to analyze any
online content, including the text of tweets, links, animated GIFs, videos, and images. For
example, Mark Blythe and Paul Cairns (2009) analyzed 100 videos from a YouTube search
by relevance for “iPhone 3G” using content analysis. They categorized the videos into seven
categories: review, reportage, “unboxing,” demonstration, satire, advertisement, and vlog
commentaries (such as complaints about queues). As with discourse analysis, an important
aspect of content analysis is how it considers the wider context.
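The core of content analysis, classifying items into categories and then counting category occurrences, can be sketched very simply. The category labels below mirror the seven from the YouTube study, but the assignments are invented:

```python
from collections import Counter

# Invented classifications of ten videos into the study's seven categories.
labels = ["review", "unboxing", "satire", "review", "demonstration",
          "review", "vlog commentary", "advertisement", "reportage", "review"]

counts = Counter(labels)
for category, n in counts.most_common():
    print(f"{category}: {n}")
# 'review' is the most frequent category (4 of 10 videos).
```

Real analyses add a codebook defining each category and, typically, a reliability check across coders.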
9.5 WHICH KIND OF ANALYTIC FRAMEWORK TO USE?
Content analysis is often used in conjunction with other analysis techniques as well. For example, Weixin Zhai and Jean-Claude Thill (2017) analyzed social media data from Weibo (the Chinese equivalent of Twitter) to investigate the emotions, attitudes, and views of citizens around a rainstorm that hit Beijing on July 21, 2012, causing 79 deaths. They
used content analysis alongside sentiment analysis, an approach that extracts emotional and
subjective information from natural language. From their results, they found how feelings of
sorrow and sadness were shared across the entire city because of the trauma associated with
entrapment indoors during the deluge.
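As a rough illustration of the idea behind sentiment analysis, a minimal lexicon-based scorer counts positive and negative words. The word lists below are tiny and invented; real systems, including whatever tooling Zhai and Thill used, rely on much larger lexicons or trained models:

```python
# Minimal lexicon-based sentiment scoring (illustrative word lists only).
POSITIVE = {"relieved", "safe", "grateful", "hopeful"}
NEGATIVE = {"sorrow", "sadness", "trapped", "fear", "loss"}

def sentiment_score(text):
    """Return (#positive - #negative) word matches; values below zero
    suggest a negative emotional tone."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("sorrow and sadness after being trapped indoors"))  # → -3
```

Combining such scores with content-analysis categories is what lets researchers map how emotions varied across themes, locations, or time.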
9.5.4 Interaction Analysis
Interaction analysis was developed by Brigitte Jordan and Austin Henderson (1995) as a
way of investigating and understanding the interactions of human beings with each other
and objects in their environment. The technique focuses on both talk and nonverbal inter-
actions with artifacts and technologies, and it is based on video recordings. An underlying
assumption of this approach is that knowledge and action are fundamentally social. The
goal is to derive generalizations from videos of naturally occurring activities, focusing
on how the people being observed make sense of each other’s actions and their collective
achievements.
Interaction analysis is an inductive process, where teams of researchers suggest state-
ments about general patterns from multiple examples of empirical observations. Rather than
individual researchers conducting separate analyses and then comparing their results for
consistency, interaction analysis is conducted collaboratively; teams discuss together their
observations and interpretations of the videos being watched as they watch them. The first
step involves creating a content log, comprising headings and rough summaries of what has
been observed. No predetermined categories are used during this stage. Instead, they emerge
from repeated playing and discussion of the video material. Hypotheses are also generated
by group members about what they think is happening. This process includes suggesting the
intentions, motivations, and understandings of the people who are being viewed in the vid-
eos. These suggestions have to be tied to the actions of the people rather than being purely
speculative. For example, if an analyst thinks someone’s motivation is to take control during
a board meeting, they need to provide actual examples that demonstrate how the person is
achieving this (for instance, taking over the data projector, as the locus of control, for long
periods of time and presenting their ideas for others to view).
The videos are then cannibalized, as it is called, by extracting interesting materials, reclas-
sifying some of them in terms of what they represent, while removing others. Instances of a
salient event are assembled and played one after another to determine whether a phenomenon
is a robust theme or a one-off incident. An example Brigitte Jordan and Austin Henderson
(1995) use to illustrate this process is their study of people around a pregnant woman who
was having her first contraction. They noticed that, at the point of the first contraction, the
medical staff and family all shifted their attention away from the woman to the monitor-
ing equipment. They were able to find many more examples of this phenomenon, providing
strong evidence that the presence of high-tech equipment changes the practice of caregiving,
specifically that caregiving is mediated by the real-time data presented through the equipment.
An example of where interaction analysis has been used in HCI is Anna Xambo et al.’s
(2013) study of how groups of musicians improvise and learn to play together when using a
novel collaborative tabletop technology. Video data was collected for four groups of musi-
cians using this technology in a number of jamming sessions. Representative video extracts
were repeatedly viewed and discussed by a team of researchers, focusing on verbal commu-
nication and nonverbal communication themes. These themes were categorized into whether
they were musical, physical, or interface-related. The themes that arose during the repeated
video viewing process were diverse, including error/repair situations, territory-related behav-
iors, emergent coordination mechanisms, and mimicking behaviors. Anna Xambo et al.
(2013) then transformed these into a set of design considerations for developing tabletop
technologies to support hands-on collaborative learning.
9.5.5 Grounded Theory
The goal of grounded theory is to develop theory from a systematic analysis and interpreta-
tion of empirical data; that is, the derived theory is grounded in the data. In this respect, it
is an inductive approach to developing theory. The approach was originally developed by
Barney Glaser and Anselm Strauss (1967) and since then has been adopted by several
researchers, with some adaptations to different circumstances. In particular, both of them
have individually (and with others) developed grounded theory in slightly different ways,
but the objective of this approach remains the same. Barney Glaser (1992) provides further
information about the differences and areas of controversy.
In this context, theory is: “a set of well-developed concepts related through statements of
relationship, which together constitute an integrated framework that can be used to explain
or predict phenomena” (Strauss and Corbin, 1998). Development of a “grounded” theory
progresses through alternating data collection and data analysis: first data is collected and
analyzed to identify themes, then that analysis may lead to further data collection and analy-
sis to extend and refine the themes, and so on. During this cycle, parts of the data may
be reanalyzed in more detail. Data gathering and subsequent analysis are hence driven by
the emerging theory. This approach continues until no new insights emerge and the theory
is well-developed. During this process, the researcher seeks to maintain a balance between
objectivity and sensitivity. Objectivity is needed to maintain accurate and impartial inter-
pretation of events; sensitivity is required to notice the subtleties in the data and identify
relationships between concepts.
The thrust of the analysis undertaken is to identify and define the properties and dimen-
sions of relevant themes called categories in grounded theory. According to Juliet Corbin and
Anselm Strauss (2014), this coding has three aspects, which are iteratively performed through
the cycle of data collection and analysis:
1. Open coding is the process through which categories, their properties, and dimensions are
discovered in the data. This process is similar to our discussion of thematic analysis above,
including the question of granularity of coding (at the word, line, sentence, conversation
level, and so on).
2. Axial coding is the process of systematically fleshing out categories and relating them to
their subcategories.
3. Selective coding is the process of refining and integrating categories to form a larger theo-
retical scheme. The categories are organized around one central category that forms the
backbone of the theory. Initially, the theory will contain only an outline of the categories,
but as more data is collected, they are refined and developed further.
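One way to picture how the three coding stages build on one another is as successive groupings of a code book. The sketch below uses invented codes, in the spirit of the idle-games example later in the chapter, and plain Python data structures; it illustrates the structure of the process rather than any tool the authors describe:

```python
# Hypothetical code book showing the three coding stages as data structures.

# Open coding: raw codes attached to fragments of the data.
open_codes = ["clicking", "waiting", "automate", "upgrades",
              "one resource", "multiple resources"]

# Axial coding: codes fleshed out into categories with subcategories.
axial = {
    "interaction": ["clicking", "waiting"],
    "progression": ["automate", "upgrades"],
    "resources": ["one resource", "multiple resources"],
}

# Selective coding: categories integrated around one central category.
theory = {
    "central_category": "incremental play",
    "related_categories": sorted(axial),
}

print(theory["related_categories"])  # → ['interaction', 'progression', 'resources']
```

In a real grounded theory study, these structures evolve across many cycles of data collection, supported by memos and diagrams rather than fixed lists.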
Early books on grounded theory say little about what data collection techniques
should be used but focus instead on the analysis. Some later books place more emphasis
on data collection. For example, Kathy Charmaz (2014) discusses interviewing techniques
and collection and analysis of documents for grounded theory analysis. When analyzing
data, Juliet Corbin and Anselm Strauss (2014) encourage the use of written records of
analysis and diagrammatic representations of categories (which they call memos and dia-
grams). These memos and diagrams evolve as data analysis progresses. Some researchers
also look to digital tools such as spreadsheets and diagramming tools, but many like
to develop their own physical code books such as the one Dana Rotman et al. (2014)
constructed in a study to understand the motivations of citizens to contribute to citizen
science projects. The data that she analyzed was derived from in-depth semistructured
interviews of 33 citizen scientists and 11 scientists from the United States, India, and
Costa Rica (see Figure 9.9).
The following analytic tools are used to help stimulate the analyst’s thinking and identify
and characterize relevant categories:
1. The Use of Questioning: In this context, this refers to questioning the data, not your
participants. Questions can help an analyst to generate ideas or consider different ways of
looking at the data. It can be useful to ask questions when analysis appears to be in a rut.
2. Analysis of a Word, Phrase, or Sentence: Considering in detail the meaning of an utter-
ance can also help to trigger different perspectives on the data.
3. Further Analysis Through Comparisons: Comparisons may be made between objects or
between abstract categories. In either case, comparing one with the other brings alterna-
tive interpretations.
Grounded theory uses thematic analysis; that is, themes are identified from the data, but as data analysis informs data collection, it also relies on categorizing new data according to the existing thematic set and then evolving that set to accommodate new findings. As Virginia Braun and Victoria Clarke (2006) point out, “Thematic analysis differs from other analytic methods that seek to describe patterns across qualitative data . . . such as grounded theory . . . [where analysis is directed toward developing a] plausible—and useful—theory of the phenomena that is grounded in the data.”
Figure 9.9 Code book used in a grounded theory analysis of citizens’ motivations to contribute to citizen science
Source: Jennifer Preece
A Grounded Theory Example
So-called idle games are rising in popularity (Cutting et al., 2019). Idle games are minimalist
games that require little or even no interaction in order for the game to progress. For example,
an idle game may involve repeating a simple action like clicking an icon to accumulate
resources. Examples include the Kittens Game, a text-based game (that is, one with no graphical user interface) that involves managing a village of kittens, and Cookie Clicker, which involves baking and selling cookies. Idle games also include mechanisms to automate
game play so that progress may continue for extended periods of time without the player
doing anything (Purkiss and Khaliq, 2015). An extreme example studied by Joe Cutting et al.
(2019) is Neko Atsume, a game about collecting cats, in which progress can be made only if
the game is switched off. In their study, they were interested in the notion of “engagement”
and the implications of this new genre of games for current theories of engagement.
To understand more about the idle games genre, Sultan Alharthi et al. (2018) used
grounded theory, specifically to develop a taxonomy and a set of characteristics for them. By
defining the essential features of this genre of games and clustering them, the authors hoped
to produce design implications for each type of game.
The three stages of coding (open, axial, and selective) are illustrated in Figure 9.10. Note that, in this case, the research started with the researchers playing each of the games under study.
[Figure detail: 57 games were gathered from Kongregate and 61 from Almost Idle, with additional idle and non-idle games added and duplicates/non-playable games excluded; in Phase 1, playing sessions were held in which observations were recorded and preliminary coding was done; Phase 2 involved open coding and forming concepts; and in Phase 3 the researchers analyzed the data, discussed observations, and found relationships, from which categories emerged, leading to the final classifications and definitions.]
Figure 9.10 The process used by Alharthi et al., showing Phase 2 and Phase 3 using the three
stages of grounded theory coding
Each game was played by two researchers who recorded their observations in a spread-
sheet. These observations focused on gameplay, game mechanics, rewards, interactivity, pro-
gress rate, and game interface. Then they rated the games using an 11-point interactivity scale
(0–10), where 0 meant that play progressed without any interaction from the player, while 10 meant that the game progressed only slowly without player interaction. Progress through the levels of the game was also rated on the same scale.
At the end of each game session and observation, the researchers wrote a brief overview
of the game and conducted preliminary open coding of their observations (see Figure 9.11
for an example of preliminary open coding).
Axial and selective coding progressed iteratively. The researchers held several discussion
sessions to explore the relationship between the codes, the emergent concepts, and the initial
categories. During this process, some of the games were re-observed, and related literature
was drawn upon to help refine the concepts. For example, existing literature on game taxon-
omies, prior terms, and definitions related to idle games were incorporated into the analysis.
Based on this analysis, Sultan Alharthi et al. produced a taxonomy with two basic ways to
characterize the games: one based on key features and one based on interactivity. From the
former, they defined incremental games as idle games in which a player selects resources to
generate, waits for them to accumulate, and spends the resources to automate resource gen-
eration. Figure 9.12 illustrates the open codes, resulting concepts, and categories developed
for incremental games. This shows that four categories of incremental games emerge from
this analysis: micromanagement, derivative, single-resource, and multiplayer.
Game name: AdVenture Capitalist [G38]
Play description: You start clicking on a lemonade stand and collect money. Spend money to make upgrades, increase production per click. Start hiring workers and increase production per second. When you have enough money, you can buy new businesses, automate all your businesses to increment more money, and leave the game to progress.
Game mechanics: Click to gain money, automate production, make upgrades to damage/sec.
Rewards: One currency, which is money, is rewarded in return.
Interface: Graphical
Interactivity level: 7
Progress rate: 9
Overview: This is a single-player game, which requires long cycles of clicking at the start and making a number of upgrades. Production rate reaches $390/sec in less than 10 minutes, and you gain 1M in cash, making the game progress faster.
Figure 9.11 An illustration of preliminary open coding. Words in small capitals are identified by the
researcher as potential codes.
One of the surprises from their analysis is that the resulting taxonomy is based on game
rules and their basic underlying structure rather than mechanisms in the game. This is in contrast
to other gaming taxonomies that feature interactivity and interaction strategies. Of course, idle
games are minimal, and so they don’t have much interaction. However, the grounded theory
approach allowed the development of a taxonomy that reflects the style and purpose of the genre.
9.5.6 Systems-Based Frameworks
For large projects where the researcher is interested in investigating how a new technology
should best be introduced and what its impact is afterward, it is necessary to analyze many
sources of data collected over a long period of time. Conducting analyses of small fragments
of conversation or identifying themes from interviews may be useful for highlighting specific
working practices, but understanding how a whole socio-technical system (for example a
hospital, corporation, local council, or airport) works at scale requires a different kind of
analytical framework. Two such frameworks are introduced next: socio-technical systems
theory (Eason, 1987) and distributed cognition (Hutchins, 1995), as applied through the
Distributed Cognition of Teamwork framework (Furniss and Blandford, 2006).
Socio-technical Systems Theory
Socio-technical systems (STS) theory makes explicit the fact that the technology and the peo-
ple in a work system are interdependent (Klein, 2014). Rather than trying to optimize either
the technical system or the social system independently of each other, STS suggests that this
interdependency be recognized, and the “system” be treated as a whole.
Figure 9.12 The grounded theory process showing the development of open coding, through concepts to categories
[Figure detail: open codes (clicking, waiting, automate, progress, increment per second, increment per click, upgrades, fight boss, multiple game levels, one resource, multiple resources, internal economy, resource generator, long cycles of clicking, constantly interact, constantly return, shared resource pool) are grouped into concepts (technology tree, economy building, increment generators, high interactivity, click-to-manage, click-to-progress, click together), which in turn yield the categories micromanagement, derivative, single-resource, and multiplayer; each category is part of the incremental games super-category. Open coding of the observations creates multiple codes; concepts are discovered by analyzing the open codes and identifying common features, an iterative process in which new codes are added, combined, or deleted; each code is connected to one or more games and can be combined to form new concepts; concepts are then analyzed to find common relationships, and thus categories emerge.]
The ideas behind socio-technical theory were first conceptualized around coal mining in the 1950s (see Trist and
Bamforth, 1951, for example), but it also has a long history of being applied in hospitals
and healthcare settings (Waterson, 2014) as well as manufacturing and social media systems.
Martin Maguire (2014) highlights the importance of the socio-technical perspective with the
rise of virtual organizations. Ken Eason (2014) identifies five significant and enduring aspects of STS theory:
1. Task interdependencies: If people are focused on one large task, then the division of sub-
tasks between them inevitably sets up interdependencies that are critical to understand.
Understanding these interdependencies is particularly useful for recognizing the implica-
tions of change.
2. Socio-technical systems are “open systems”: STS are influenced by environmental fac-
tors including physical disturbances and financial, market, regulatory, and technical
developments.
3. Heterogeneity of system components: The overall task is undertaken by humans in the
social subsystem using technical resources in the technical subsystem. Both need to be
resilient. Technical components can evolve while humans can learn, develop, and change
the technical components to address challenges of the future.
4. Practical contributions: STS theory makes practical contributions in the analysis of existing systems, in the summative evaluation of major changes, in predicting challenges before changes are made, and in designing socio-technical systems that are co-optimized.
5. Fragmentation of design processes: In a complex socio-technical system, there are
different design processes, and these can result in fragmentation. Flexibility in speci-
fication, local focus in design, user-centered design, and system evolution will help over-
come these.
STS is a philosophy rather than a concrete set of methods or analytical tools. But sev-
eral socio-technical design methods provide more concrete tools for using a socio-technical
framework. For example, see Baxter and Sommerville (2011) and Mumford (2006).
Distributed Cognition of Teamwork
Distributed cognition and Distributed Cognition of Teamwork (DiCoT) were introduced
in Chapter 4, “Cognitive Aspects,” as an approach to studying the nature of cognitive phe-
nomena across individuals, artifacts, and internal and external representations. Investigating
how information is propagated through different media is a key goal of this approach, and
while distributed cognition provides a good theoretical framework for analyzing systems, it
can be difficult to apply in practice. The DiCoT framework was developed as a method to
support the application of distributed cognition. It provides a framework of models that can
be constructed from a set of collected data, for example, ethnographic data, interview transcripts,
artifacts, photographs, and so on. Underlying each model is a set of principles distilled from
the distributed cognition theory. The models are as follows:
• An information flow model that shows how information flows through the system and
is transformed. This model captures the information channels and hubs together with the
sequence of activities and communication between different team roles.
• A physical model that captures how physical structures support communication between
the team roles and facilitate access to artifacts. This model helps to describe the factors
that influence the performance of the system at a physical level.
• An artifact model that captures how artifacts in this system support cognition. This model
can be used to represent the key characteristics of an artifact and how its design, structure,
and use can support team members.
• A social structure model that examines how cognition is socially distributed. This model
maps the social structures to the goal structures, shows how work is shared, and can be
used to consider the robustness of the system.
• A system evolution model that depicts how the system has evolved over time. This model
provides some explanation for why the work is the way it is. Any design recommendations
need to take this context into account.
While the form of the models is not prescribed, the underlying principles support the
models’ development. For example, underlying the physical model are principles such as
the following:
• Horizon of observation: What an individual can see or hear.
• Perceptual: How spatial representations aid computation.
• Arrangement of equipment: How the physical arrangement of the environment affects
access to information.
DiCoT has been used to understand collaborative work in remote and co-located software development teams, as shown in Figure 9.13 (Deshpande et al., 2016; Sharp et al., 2009),
and has been found to be especially useful for studying how medical teams work and man-
age with ever-changing technologies that are introduced into their work environment. For
example, Atish Rajkomar and Ann Blandford (2012) examined how healthcare technologies
are used; specifically, they examined the use of infusion pumps by nurses in an intensive care unit (ICU).
Figure 9.13 An information flow diagram from a DiCoT analysis of software development remote work, based on ethnographic data
[Figure detail: co-located office workers and a remote worker communicate both formally and informally (ad hoc) through tools such as PHPStorm, SourceTree, HipChat, Skype for Business, Appear.in, WebEx, Bitbucket, Jira, and Hatjitsu; co-located workers also share whiteboards, a physical scrumboard, and walls of information (suggestions, UX changes, release schedule, and Agile-practice-related information), to which the remote worker has only non-line-of-sight connectivity.]
They gathered data through ethnographic observations and interviews, which
they analyzed by constructing representational models that focused on information flows,
physical layouts, social structures, and artifacts. They note that “the findings showed that
there was significant distribution of cognition in the ICU: socially, among nurses; physically,
through the material environment; and through technological artefacts.” Based on the results
of this study, they were able to suggest changes that would improve the safety and efficiency
of the nurses’ interactions with the infusion technology.
9.6 Tools to Support Data Analysis
While it is possible to perform these kinds of data analysis using only manual techniques,
most people would agree that it is quicker, easier, and more accurate to use a software tool
of some kind in the majority of cases. Using a simple spreadsheet application is surprisingly
effective, but there are other more sophisticated tools available to support the organization,
coding, and manipulation of data, and to perform statistical tests.
Tools in the former category (to support the organization of data) include facilities for cat-
egorization, theme-based analysis, and quantitative analysis. These typically provide facilities to
associate labels (categories, themes, and so on) with sections of data, search the data for key
words or phrases, investigate the relationships between different themes or categories, and help
to develop the coding scheme further. Some tools can also generate graphical representations. In addition, some provide help with techniques such as content analysis and sometimes mechanisms to show the occurrence and co-occurrence of words or phrases. Searching, coding, project management, writing and annotating, and report generation facilities are also common.
Two well-known tools that support some of these data analysis activities are Nvivo
and Dedoose. For example, Nvivo supports the annotation and coding of data includ-
ing PDF documents, photos, and video and audio files. Using Nvivo, field notes can be
searched for key words or phrases to support coding or content analysis; codes and data
can be explored, merged, and manipulated in several ways. The information can also be
printed in a variety of forms such as a list of every occasion a word or phrase is used in the
data, and a tree structure showing the relationships among codes. Like all software pack-
ages, Nvivo has advantages and disadvantages, but it is particularly powerful for handling
large sets of data and can generate output for statistical packages such as SAS and SPSS.
Statistical Analysis Software (SAS) and Statistical Package for the Social Sciences (SPSS)
are popular quantitative analysis packages that support the use of statistical tests. SPSS, for
example, is a sophisticated package offering a wide range of statistical tests such as frequency
distributions, rank correlations (to determine statistical significance), regression analysis, and
cluster analysis. SPSS assumes that the user knows and understands statistical analysis.
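As a taste of the kind of test such packages automate, Spearman's rank correlation can be computed directly from its textbook definition (valid when there are no tied ranks). This sketch is illustrative and omits the significance testing that SPSS or SAS would provide alongside the coefficient:

```python
# Spearman's rank correlation from its definition (no tied ranks):
#   rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
# where d_i is the difference between the ranks of each paired observation.

def spearman_rho(xs, ys):
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Perfectly monotonic data gives rho = 1.0.
print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # → 1.0
```

Statistical packages also handle tied ranks and report p-values, which is why analysts rarely compute this by hand.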
Additional tools to support the analysis of very large sets of data are discussed in
Chapter 10, “Data at Scale.”
More information about software tools designed to support the analysis of quali-
tative data can be found through the CAQDAS Networking Project, based at the
University of Surrey.
https://www.surrey.ac.uk/computer-assisted-qualitative-data-analysis
9.7 Interpreting and Presenting the Findings
Previous sections in this chapter have illustrated a range of different ways to present find-
ings—as tables of numbers and text, through various graphical devices and diagrams, as a
set of themes or categories, and so on. Choosing an appropriate way to present the findings
of a study is as important as choosing the right analytical approach. This choice will depend
on the data gathering and analysis techniques used as well as the audience and the original
goals of the study. In some situations, the details of data collection and analysis will be
needed, for example, when working with others to make sense of a large collection of data,
or when trying to convince an audience about a controversial conclusion. This detail may
include snippets of data such as photographs of the context of use or videos of participants
using the product. In other situations, only the salient trends, headlines, and overall implica-
tions are needed, so the style of presentation can be leaner. Where possible, a set of different
complementary representations will be chosen to communicate the findings since any one
representation will emphasize some aspects and de-emphasize others.
This section focuses on three kinds of presentation styles that haven’t yet been empha-
sized to this point: using structured notations, stories, and summarizing.
Activity 9.4
Consumer organizations and technology companies regularly conduct investigations about
technology use. One such report is from DScout investigating smartphone use, available from
https://blog.dscout.com/mobile-touches. Note that the report is a downloadable PDF from
this web page. Using this report or another report that you find online, look to see how the
results are reported.
1. What kinds of presentation are used?
2. What is left out of the report?
3. What is the effect of presenting findings this way?
Comment
The study from DScout found that “users tapped, swiped, and clicked a whopping 2,617 times
each day on average.” Note that the web page and the report are presented differently. The
web page includes more text and an edited video of participants’ responses to the findings,
while the PDF report is presented in a style similar to a PowerPoint presentation. Focusing on
the PDF report:
1. Graphs and pie charts are used to present demographics and app use, along with lists,
tables, and typographic styles that emphasize certain findings. The report also uses two other
representations (see Figure 9.14): a bubble diagram showing the relative use of different apps,
and a timeline showing how the touches of a "heavy user" and an "average user" are spread
across the day. These two representations illustrate that it is acceptable to develop new
representations (or modify old ones) in order to communicate the findings you want to highlight.
Figure 9.14 Results representation styles used in DScout's report about smartphone use: (a) a
bubble diagram in which the size of each circle represents the number of uses of an app
(Facebook 15%, Messages 11%, home screen 9%, Chrome 5%, and so on) and (b) a timeline of
phone sessions across the day for an average user (Elizabeth B., 25, Chicago, IL) and a heavy
user (Lori L., 45, Grandville, MI)
2. There is very little text or description in the PDF report and minimal detail about the way
in which the data was collected or analyzed. Further details are included on the web page
but not in the report.
3. The way in which the findings are presented in the report has quite an impact. The bold,
clear graphical images together with minimal but highlighted text mean that the messages
are communicated in a straightforward fashion.
9.7.1 Structured Notations
A number of structured notations have been developed to analyze, capture, and present
information for interaction design. These notations follow a clear syntax and semantics, which
have been developed to capture particular viewpoints. Some are relatively straightforward,
such as the work models promoted in contextual design (Beyer and Holtzblatt, 1998), which
use simple conventions for representing flows, breakdowns, individual roles, and so on.
Others, such as the Unified Modeling Language (UML), have a stricter and more precise
syntax to be followed and are often used to represent requirements (see Chapter 11,
"Discovering Requirements"); its activity diagrams, for example, are very expressive when
detailed interactions need to be captured.
The advantages of using a structured notation are that the meaning of the different symbols
is well-defined, so the notation provides clear guidance on what to look for in the data and
what to highlight, and it enforces precision in expression. The disadvantages are that, by
highlighting specific elements, a notation inevitably de-emphasizes or ignores other aspects,
and the precision it expresses may be lost on an audience that does not know the notation
well. Producing diagrams or expressions in these notations may require further analysis of
the findings in order to identify the specific characteristics and properties that the notation
highlights. To overcome these disadvantages, structured notations are usually used in
combination with stories or other easily accessible formats.
9.7.2 Using Stories
Storytelling is an easy and intuitive way for people to communicate ideas and experiences.
It is not surprising, then, that stories (also called narratives) are used extensively in
interaction design, both to communicate the findings of investigative studies and as the basis
for further development, such as product design or system enhancements.
Storytelling may be employed in three different ways. First, participants (such as
interviewees, questionnaire respondents, and those you have observed) may have told stories
of their own during data gathering. These stories can be extracted and compared, and they may
be used to communicate findings to others, for example, to illustrate points.
Second, stories (or narratives) based on observation, such as ethnographic field studies,
may be employed for further data gathering. For example, Valeria Righi et al. (2017) used
stories as the basis of co-design workshops in their study exploring the design and use of
technologies to support older people. The scenarios were developed on the basis of
ethnographic studies and previous co-design activities and were presented through storytelling
to facilitate understanding. Note that, in this case, the audience was a group of participants
in the ongoing study.
Including specific stories gives authenticity to the findings, and it can add to their cred-
ibility provided that the conclusions are not overstated. Making a multimedia presentation of
the story by adding video or audio excerpts and photographs will illustrate the story further.
This kind of approach can also be effective if presenting data from an evaluation study that
involves observation, as it is hard to contest well-chosen video excerpts of users interacting
with technology or extracts from interview transcripts.
Third, stories may be constructed from smaller snippets or repeated episodes that are
found in the data. In this case, stories provide a way of rationalizing and collating data to
form a representative account of a product’s use or a certain type of event.
Any stories collected through data gathering may be used as the basis for construct-
ing scenarios that can then be used for requirements and design activities. See Chapters 11
and 12 for more information on scenarios.
9.7.3 Summarizing the Findings
Presentation styles will usually be used in combination to produce a summary of the findings;
for instance, a story may be expanded with graphical representations of activity or demo-
graphics, and data excerpts from transcripts or videos may be used to illustrate particular
points. Tables of numerical data may be represented as graphs, diagrams, or rigorous nota-
tions, together with workflows or quotations.
Careful interpretation and presentation of the study results are just as important as choos-
ing the right analysis technique, so that findings are not over-emphasized and evidence is not
misrepresented. Over-generalizing results without good evidence is a common pitfall, especially
with qualitative analyses; for example, think carefully before using words such as most, all,
majority, and none, and be sure that the justifications reflect the data. As discussed in Box 9.1,
even statistical results can be interpreted in a misleading way. For example, if 8 out of 10 users
preferred design A over design B, this does not mean that design A is 80 percent more attrac-
tive than design B. If you found 800 out of 1,000 users preferred design A, then you have more
evidence to suggest that design A is better, but there are still other factors to consider.
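The "more evidence" point can be made concrete with an exact binomial calculation. The sketch below, in plain Python and using the numbers from the example above, computes the probability of seeing a preference split at least this extreme if users were actually indifferent (a 50/50 split):

```python
from math import comb

def binom_tail(k, n, p=0.5):
    # P(X >= k) for X ~ Binomial(n, p): the chance of a preference split
    # at least this lopsided if users were really indifferent
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binom_tail(8, 10))      # ~0.055: 8 out of 10 could plausibly be chance
print(binom_tail(800, 1000))  # vanishingly small: much stronger evidence
```

The same 80 percent split is thus weak evidence with 10 participants but overwhelming with 1,000, which is exactly why the raw proportion alone cannot be read as a measure of how much "better" design A is.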
Activity 9.5
Consider each of the following findings and the associated summary statement about it. For
each one, comment on whether the finding supports the statement.
1. Finding: Two out of four people who filled in the questionnaire checked the box that said
they prefer not to use the ring-back facility on their smartphone.
Statement: Half of the users don’t use the ring-back facility.
2. Finding: One day, Joan who works in the design department was observed walking for 10
minutes to collect printouts from the high-quality color printer.
Statement: Significant time is wasted by designers who have to walk a long distance to
collect printouts.
3. Finding: A data log of 1,000 hours of interaction with a website recorded during January,
February, and March records eight hours spent looking at the help files.
Statement: The website’s help files were used less than 1 percent of the time during the first
quarter of the year.
Comment
1. The questionnaire didn’t ask if they use the ring-back, just whether they preferred to use
the ring-back facility. In addition, two users out of four is a very small number of partici-
pants, and it would be better to state the actual numbers.
2. Observing one designer on one day having to walk to get printouts does not mean that this
is a general problem. There may be other reasons why this happened on this day, and other
information is needed to make a clear statement.
3. This statement is justified as the log was recorded for a significant period of time and using
percentages to represent this finding is appropriate as the numbers are large.
In-Depth Activity
The goal of this in-depth activity is to practice data analysis and presentation. Assume that
you are assigned to analyze and present the findings of your data gathering in-depth activity
from Chapter 8 to a group of peers, for instance, via a seminar.
1. Review the data that you gathered and identify any qualitative data and any quantitative
data in the data set.
2. Is there any qualitative data that could sensibly and helpfully be translated into quantita-
tive measures? If so, do the translation and add this data to your quantitative set.
3. Consider your quantitative data.
(a) Decide how best to enter it into spreadsheet software, for example, how to handle
answers to close-ended questions. Then enter the data and generate some graphical
representations. As the data set is likely to be small, think carefully about what, if any,
graphical representations will provide meaningful summaries of the findings.
(b) Is there any data for which simple measures, such as percentages or averages, will be
helpful? If so, calculate the three different types of average.
4. Consider your qualitative data.
(a) Based on your refinement of the study question “improving the product,” identify some
themes in the qualitative data, for example, what features of the product cause people
difficulties? Did any of the participants suggest alternative designs or solutions? Refine
your themes and collate extracts of data that support the theme.
(b) Identify any critical incidents in the data. These may arise from interviews, question-
naire responses, or observation. Describe these incidents carefully and choose one or
two to analyze in more depth, focusing on the context in which they occurred.
5. Collate your findings as a presentation and deliver them to a group of peers.
6. Review the presentation and any questions from the audience. Consider how to improve
the analysis and presentation.
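As a worked example for step 3(b), Python's standard library computes all three kinds of average directly (the task-completion times below are invented):

```python
import statistics

# Invented data: task-completion times (seconds) for nine participants
times = [12, 15, 15, 17, 20, 22, 25, 15, 48]

print(statistics.mean(times))    # mean = 21; pulled upward by the 48s outlier
print(statistics.median(times))  # median = 17; robust to the outlier
print(statistics.mode(times))    # mode = 15; the most frequent value
```

Comparing the three values side by side, as here, is a quick way to spot whether outliers are distorting the picture before choosing which average to report.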
Summary
This chapter described in detail the difference between qualitative and quantitative data and
between qualitative and quantitative analysis.
Quantitative and qualitative data can be analyzed for patterns and trends using simple
techniques and graphical representations. Qualitative data may be analyzed inductively or
deductively using a variety of approaches. Thematic analysis (an example of inductive anal-
ysis) and data categorization (an example of deductive analysis) are common approaches.
Analytical frameworks include conversation analysis, discourse analysis, content analysis,
interaction analysis, grounded theory, and systems-based approaches.
It was noted that presenting the results is just as important as analyzing the data; hence
it is important to make sure that any summary or claim arising from the analysis is carefully
contextualized and can be justified by the data.
Key Points
• The kind of data analysis that can be done depends on the data gathering techniques used.
• Qualitative and quantitative data may be collected from any of the main data gathering
techniques: interviews, questionnaires, and observation.
• Quantitative data analysis for interaction design usually involves calculating percentages
and averages.
• There are three different kinds of average: mean, mode, and median.
• Graphical representations of quantitative data help in identifying patterns, outliers, and the
overall view of the data.
• Analysis of qualitative data may be inductive, in which themes or categories are extracted
from the data, or deductive, in which pre-existing concepts are used to interrogate the data.
• In practice, analysis often proceeds in iterative cycles combining inductive identification of
themes and deductive application of categories and new themes.
• Which analysis approach is used is tightly coupled to the data that is collected and depends
on the goals of the study.
• Several analytical frameworks exist that focus on different levels of granularity and serve
different purposes.
Further Reading
BLANDFORD, A., FURNISS, D. and MAKRI, S. (2017) Qualitative HCI Research: Going
Behind the Scenes. Morgan & Claypool Publishers. This book, in the form of a lecture, dis-
cusses the practical details behind qualitative analysis in HCI. Using the analogy of making
a documentary film, the authors point out that, as with movies, qualitative analysis is often
presented as a finished product while the work "behind the scenes" is rarely discussed.
BRAUN, V. and CLARKE, V. (2006) Using Thematic Analysis in Psychology. Qualitative
Research in Psychology, 3(2), pp. 77–101. This paper focuses on thematic analysis, how it
relates to other qualitative analysis approaches, and how to conduct it in a rigorous fashion.
It also discusses advantages and disadvantages of the approach.
CHARMAZ, K. (2014) Constructing Grounded Theory (2nd ed.). Sage Publications. This
popular book provides a useful account of how to do grounded theory.
CORBIN, J. M. and STRAUSS, A. (2014) Basics of Qualitative Research: Techniques and
Procedures for Developing Grounded Theory. Sage Publications. This presents a readable and
practical account of applying the grounded theory approach. It is not tailored specifically to
interaction design and therefore requires some interpretation, but it is a good discussion of
the basic approach.
HUFF, D. (1991) How to Lie with Statistics. Penguin. This wonderful little book illustrates
the many ways in which numbers can be misrepresented. Unlike some (many) books on sta-
tistics, the text is easy to read and amusing.
ROGERS, Y. (2012) HCI Theory: Classical, Modern, and Contemporary. Morgan & Clay-
pool Publishers. This short book, in the form of a lecture, charts the theoretical developments
in HCI, both past and present, reflecting on how they have shaped the field. It explains how
theory has been conceptualized, the different uses it has in HCI, and which theory has made
the most impact.
Chapter 10
DATA AT SCALE
10.1 Introduction
10.2 Approaches for Collecting and Analyzing Data
10.3 Visualizing and Exploring Data
10.4 Ethical Design Concerns
Objectives
The main goals of the chapter are to accomplish the following:
• Provide an overview of some of the potential impacts of data at scale on society.
• Introduce key methods for collecting data at scale.
• Discuss how data at scale becomes meaningful.
• Review key methods for visualizing and exploring data at scale.
• Introduce design principles for making data at scale ethical.
10.1 Introduction
How do you start your day? How much data do you encounter when first looking at your
smartphone, switching on your laptop, or turning on another device? How much do you
knowingly create and how much do you create unknowingly? Upon waking up, many people
routinely ask their personal assistant something like, "Alexa, what is the weather today?"
or "Alexa, what is the news?" or "Alexa, is the S-Bahn train to Schönefeld Airport running on
time?" Or they ask Siri, "What is my first meeting?" or "Where is the meeting?"
Having oriented themselves for the day, people will walk a few blocks to the subway
entrance, dip their Metro Card in the turnstile to pay the fare, exit the station at their stop,
grab their favorite morning beverage at a nearby cafe, and proceed to their office where they
check in with the employee card at a security gate and take the elevator to their floor.
These are just a few of the things that many of us do to start our workdays. Each activity
involves creating, searching, and storing data in some way or another. We may know that this
is happening, we may suspect that it is happening, or we may be totally unaware of the data
that we are generating and with which we are interacting.
There is also increasing concern about exactly what data is collected about us through
personal assistants such as Amazon Echo, Google Home, Cortana, and Siri. We also know
that many large cities, such as New York and London, have an enormous number of sur-
veillance cameras (CCTV) spread around, especially in busy places such as subway stations
and shopping malls. The video footage from these sources is kept for two weeks or more.
Similarly, we experience being checked into an office, so we know that our movements are
being tracked by security personnel. Our activities are also being tracked more surrepti-
tiously through the technology that we use such as smartphones and credit cards.
What happens to all the data collected about us? How does it improve the services pro-
vided by society? Does it make traveling more efficient? Does it reduce traffic congestion?
Does it make the streets safer? Moreover, how much of the data collected from our smart-
cards, smartphone Wi-Fi signals, and CCTV footage can be tracked back to us and pieced
together to reveal a bigger picture of who we are and where we go? What might that data
reveal about us?
Data at scale, often called big data, describes all kinds of data, including databases of
numbers; images of people, things, and places; recordings of conversations; videos; texts; and
environmentally sensed data (such as air quality). It is also being collected at an exponential
rate; for example, 400 new YouTube videos are uploaded every minute, while millions of
messages circulate through social media. Furthermore, sensors collect billions of bytes of
scientific data.
Data at scale has huge potential for grounding and elucidating problems, and it can be
collected, used, and communicated in a wide variety of ways. For example, it is increasingly
being used for improving a whole range of applications in healthcare, science, education,
city planning, finance, world economics, and other areas. It can also provide new insights
into human behavior by analyzing data collected from people, such as their facial expres-
sions, movements, gait, and tone of voice. These insights can be enhanced further by using
machine learning and machine vision algorithms to make inferences about, for example,
people's emotions, intent, and well-being, which can then be used to inform technology inter-
ventions aimed at changing or improving people's health and well-being. However, beyond
societal benefits, data can also be used in potentially harmful ways.
As mentioned in Chapter 8, “Data Gathering,” and Chapter 9, “Data Analysis,” data can
be either qualitative or quantitative. Some of the methods and tools used to collect, analyze,
and communicate data can be carried out manually or using quite simple tools. What makes
this chapter on data at scale different is that it considers how huge volumes of data can be
analyzed, visualized, and used to inform new interventions. While access to large volumes
of data enables analysts, designers, and researchers to address large, important issues such
as climate change and world economics (assuming that the tools exist to do this), such data
also raises a number of user concerns. These include whether someone's privacy is being
violated by the data collected about them and whether the data corpora being used
to make decisions about people, such as the provision of insurance and loans, are fair and
transparent.
Furthermore, the combination of vast amounts of data from many sources and the avail-
ability of increasingly powerful data analytic tools to analyze that data is now making it
possible to discover new information that is not available from any single data source. This
is enabling new kinds of research to be conducted for understanding human behavior and
environmental problems.
10.2 Approaches to Collecting and Analyzing Data
Collecting data has never been easier. What is challenging is knowing how best to analyze,
collate, and act upon the data in ways that are socially acceptable, beneficial to society, and
ethically sound. Are there certain rules or policies in place on what to reveal about people or
when certain patterns, anomalies, or thresholds are reached in a data stream? For example,
if people-tracking technology is used at an airport, how is that revealed to those at the air-
port? Is it enough only to show data that can help manage people flows and bottlenecks?
When a terminal shows a public display indicating that one section is much busier than
another (Figure 10.1), do travelers ever stop and wonder how this data is being collected?
What else is being collected about them? Do they care?
Another technique for analyzing what people are doing on websites and social media is
to examine the trail of activity that they leave behind. You can see this by looking at your
own Twitter feed or at the feed of someone you follow, for example, a friend, a political
leader, or a celebrity. You can also examine discussions about a particular topic such as
climate change, reactions to comments made by comedians like John Oliver or Stephen
Colbert, or a topic that is trending on a particular day. If there are just a few posts, then it
is easy to see what is going on, but often the most interesting posts are those that generate
lots of comments. When examining thousands or tens of thousands of posts, analysts use
automated techniques (Bostock et al., 2011; Hansen et al., 2019).
Figure 10.1 Heathrow Airport Terminal 5 Public Display in top-right corner of image showing the
relative level of activity using an infographic of North vs. South Security
Source: Marc Zakian / Alamy Stock Photo
10.2.1 Scraping and “Second Source” Data
One way to extract data is by “scraping” it from the web (assuming that this is allowed by
the application). Once the data is scraped, it can be entered into a spreadsheet for study and
analyzed using data science tools. From an interaction design perspective, the focus is less
on the scraping process itself than on how one can interact with the resulting data and how
it is displayed so that it can be analyzed and made sense of.
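The mechanics of scraping vary from site to site (and depend on what the site's terms of service permit), but once a page has been fetched, extraction is mostly a matter of parsing its HTML. The sketch below uses only Python's standard library and runs on an invented HTML fragment rather than a live page:

```python
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    # Collects the text of every <td> cell into a flat list
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell:
            self.cells.append(data.strip())

# Invented fragment standing in for a fetched page
html = ("<table><tr><td>Facebook</td><td>15%</td></tr>"
        "<tr><td>Messages</td><td>11%</td></tr></table>")
parser = CellCollector()
parser.feed(html)
print(parser.cells)  # ['Facebook', '15%', 'Messages', '11%']
```

From here the extracted values can be written to a spreadsheet or fed into analysis tools; the interaction design questions begin once the data is in that usable form.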
In addition, the openly available big data that Google and other companies now provide
for researchers to mine offers a “second source” methodology, meaning search terms, Face-
book posts, Instagram comments, and so on. Analysis of this data can indirectly reveal new
insights about the users’ concerns, desires, behaviors, and habits. For example, the Google
Trends tool can be used for exploring and examining the motivation behind what people ask
when they type something into Google Search. Seth Stephens-Davidowitz (2018) has used it
extensively to reveal what people are interested in finding out. From his analysis of Google
Search data, he discovered that people type into the search box all sorts of intimate ques-
tions about their health among other topics. Moreover, he found that his analysis of search
data revealed things that people would not freely admit to when asked using other research
methods, such as surveys and interviews. He also makes an important assertion: obtaining
new insights from big data requires asking the right questions of it. What matters is not how
much data can be collected or mined but what is done with the data that has been made
available. Simply mining it because a tool is available may yield surprising results, but
well-honed questions that guide the analysis and are used to interpret the data that is found
will be more valuable (see Chapter 8, "Data Gathering").
How do researchers know what the right questions are to ask of this data? This is par-
ticularly pertinent for HCI researchers to understand, especially in terms of how users will
relate to, trust, and confide in the next generation of technologies, including domestic robots,
bots, and virtual agents.
Activity 10.1
What insights do Google Trends searches tell us about ourselves?
Go to Google Trends (https://trends.google.com). Then try typing into the search box
statements such as “I hate my boss,” “I feel sad,” or “I eat too much.” See how many people
have typed this into Google over the last month, year, and for different countries. Then type
in the opposite statements: “I love my boss,” “I am happy,” or “I never eat enough.” How
do the results compare? Which is asked more often? Then type in your name and see what
Google returns.
Comment
It is surprising how many people confide such personal statements in Google. Some people
will tell it anything. Google Trends provides a way of comparing the search data across time,
country, and other topics. When you type in your name (unless you have the same name as a
famous person), it often comes back with “hmm, your search doesn’t have enough data to
show here.”
10.2.2 Collecting Personal Data
Personal data collection started becoming popular through the quantified-self (QS) move-
ment in 2008, when monthly "show and tell" meetings were organized to enable people to
come together to share and discuss their various self-tracking projects. Nowadays, many
apps and wearable devices exist that people can buy off the shelf, which can collect all sorts
of personal data and visualize it. These results can be matched against targets reached, and
recommendations, hints, or tips can also be provided about how to act upon it. Many apps
now come prebundled on a smartphone or smartwatch, including those that quantify health,
screen time, and sleep. Some also allow multiple activities to be tracked, aggregated, and cor-
related. The most common types of apps are for physical and behavioral tracking, including
mood changes, sleep patterns, sedentary behavior, time management, energy levels, mental
health, exercise taken, weight, and diet. A common motivation for tracking some aspect of
one's personal data over time is to see how well one is doing compared to a set threshold
or level (that is, a set target, a comparison with the week before, and so on). The aggregate
data may raise awareness and be revealing to the extent that someone feels compelled to act
upon it (for example, changing their sleeping habits, eating more healthily, or going to the
gym more regularly).
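The threshold comparison described above can be illustrated with a small sketch (the step counts and target are invented):

```python
# Invented data: daily step counts for one week against a 10,000-step target
TARGET = 10_000
steps = {"Mon": 11200, "Tue": 8400, "Wed": 9900,
         "Thu": 12100, "Fri": 7300, "Sat": 14800, "Sun": 10050}

# Which days met the target, and how the week looks on average
days_met = [day for day, count in steps.items() if count >= TARGET]
weekly_average = sum(steps.values()) / len(steps)

print(days_met)               # ['Mon', 'Thu', 'Sat', 'Sun']
print(round(weekly_average))  # 10536
```

An app built on data like this would typically surface the comparison visually (progress rings, streaks) rather than as raw numbers, but the underlying aggregation is this simple.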
Self-tracking is also increasingly being used by people who have a condition or disease
as a form of self-care, such as monitoring blood glucose levels for those who have diabetes
(O'Kane et al., 2015) and the occurrence of migraine triggers (Park and Chen, 2015). This
kind of self-care monitoring has been found to help people engage in reflection when look-
ing at their data and then learning to associate specific indicators with patterns of behavior.
Making these connections can increase self-awareness and provide them with early warning
signs. It can also lead them to avoid certain events or adjust their behavior accordingly. Many
people are also happy to share their tracked data with others in their social networks, which
has been found to enhance their social networking and motivation (Gui et al., 2017).
Quantified-self projects generate lots of data. New kinds of health data, such as heart
rate, can now be collected by mobile health monitors, generating masses of data per person
each month that was simply unavailable before. This raises questions about how much data
should be kept and for how long. Also, how can this data be used to best effect? Should
it signal to the user when their heart rate deviates from normal levels? Given that masses of
data are being collected from many individuals using the same devices, it is possible to col-
late all of the data. Would it be useful for health clinicians and the individuals alike to have
access to all of this data in order to see trends and comparisons? How can this be made to be
both informative and reassuring? Translating someone’s heart rate data that is sampled many
times per second along with their electroencephalogram (EEG) streamed brainwave data into
an early warning sign with an appropriate form of intervention is challenging (Swan, 2013).
It can easily lead to increased anxiety. Much thought needs to go into presenting information
in an interface in a way that will not cause unnecessary panic. New tools should also provide
flexibility in how users might want to customize or annotate their data to meet their specific
needs (Ayobi et al., 2018).
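As a deliberately simplified sketch of the "deviates from normal levels" idea, the function below flags heart-rate samples outside a band around a personal baseline. The tolerance and readings are invented; a real early-warning system would rest on clinically validated thresholds, not an arbitrary percentage:

```python
def flag_deviations(samples, baseline, tolerance=0.25):
    # Flag samples more than `tolerance` (as a fraction) away from the
    # baseline. The 25% band is an illustrative placeholder only.
    low, high = baseline * (1 - tolerance), baseline * (1 + tolerance)
    return [(i, bpm) for i, bpm in enumerate(samples) if not low <= bpm <= high]

resting = 62  # invented personal baseline, beats per minute
readings = [60, 64, 61, 95, 63, 58, 44]
print(flag_deviations(readings, resting))  # [(3, 95), (6, 44)]
```

Even this toy version shows the design tension discussed above: each flagged reading is a candidate alert, and deciding whether, when, and how to surface it to the user is where the risk of unnecessary panic lies.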
10.2.3 Crowdsourcing Data
Increasingly, people crowdsource information or work together using online technologies to
collect and share data. The idea of a crowd working together has been taken one step further
in Crowd Research, where many researchers from all over the world come together to work
on large problems, such as climate science (Vaish et al., 2018). The goal of this approach is
to enable hundreds or even thousands of people to contribute to a single project through
collecting data, ideating, and critiquing each other's designs and research projects. Conducting
research on this scale can help address large problems, such as migration or climate change.
There are also many citizen science and citizen engagement projects (see Chapter 5,
“Social Interaction”) that crowdsource data at scale and in doing so amass billions of differ-
ent types of data (photos, sensor readings, comments, and discussion), which are collected by
millions of people across the world. Most of this data is stored in the cloud as well as on local
machines. Examples of large citizen science projects include iSpotNature, eBird, iNaturalist,
Figure 10.2 Abundance map for the common raven. The darkest area indicates where ravens are
most abundant.
Source: https://ebird.org/science/status-and-trends/comrav. This link will allow you to see how abundance
changes during each week of the year (purple indicates high abundance and yellow indicates low abundance).
and the Zooniverse. There are also thousands of much smaller projects that together generate
huge amounts of data.
eBird.org, for example, collects data about bird sightings that is contributed by naturalists—
amateurs ranging from beginning birders to highly experienced expert birders and professional
scientists. The site was launched in 2002 as a collaboration effort between Cornell University’s
Lab of Ornithology and the National Audubon Society. The data includes bird species data, the
abundance of each species, geolocation data indicating where observations are made, profiles
of the people who contribute, comments, and discussion. There are also smartphone apps and a
website with links to many resources, including identification guides, data analysis tools, maps
and visualizations, reports, and scientific articles. As of June 2018, there were more than 500
million bird observations recorded in a global database. eBird feeds data into aggregator sites
such as the Global Biodiversity Information Facility (GBIF) so that it is available for scientists.
It also provides several maps, many of which are interactive (see Figure 10.2) and other graphic
representations of its data that are available for anyone to access.
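Aggregator sites roll such contributed records up into abundance counts per species and region. A minimal sketch of that idea, using invented records and field names (not eBird's actual schema):

```python
from collections import Counter

# Each record: (species, region) — a simplified stand-in for a checklist entry.
observations = [
    ("common raven", "CA"), ("common raven", "CA"),
    ("common raven", "NY"), ("snow goose", "NY"),
]

def abundance_by_region(records):
    """Count how many times each species was reported in each region."""
    counts = Counter()
    for species, region in records:
        counts[(species, region)] += 1
    return counts

counts = abundance_by_region(observations)
print(counts[("common raven", "CA")])  # 2
```

Real aggregators such as GBIF also normalize species names, deduplicate checklists, and weight by observer effort; this sketch shows only the counting step.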
Crowd projects raise a number of issues as to who owns and manages the data. This is especially pertinent when the collected data can be mined to unearth details about the people who contribute it, or, for example, about endangered species. For researchers and UX designers, there are interesting questions about how to balance making data available for education and research against protecting the privacy of contributors and, in this example, the locations of endangered species. Box 10.1 discusses how one citizen
science project, iNaturalist.org, tries to manage this balance.
BOX 10.1
Citizen Science and UX Design for Privacy
How privacy is interpreted by citizen scientists and their desire and need for privacy regula-
tions differs across projects (Bowser et al., 2017). Being able to share citizen science data has
many advantages as well as some privacy concerns. For example, participants can see what
others are observing in their area. Bird enthusiasts also often like to share first sightings, for
instance, when the first swallows appear in spring or the first snow geese arrive in winter. They
may also want to check identifications with each other. The downside of this community inter-
action is that personal profile and location data can be used to identify particular contributors
and their patterns of behavior. The latter can be especially problematic, as many participants
visit the same places regularly. It is, therefore, important to ask: How important is privacy in citizen science compared with the benefits of community engagement? And how might UX design help protect both participants and rare species while supporting open engagement?
Many citizen science projects and societies post privacy policies like the one shown in the
following link. Other strategies involve making images and locations fuzzy so that they are
not exact. This is also a good strategy for keeping the location of rare species’ observations
confidential. For example, iNaturalist.org has a geoprivacy setting that can be set to “open,”
“obscure,” or “private.” Obscured observations are used to hide the exact location of endan-
gered species, as shown in Figure 10.3.
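One way to picture the "obscure" setting is snapping an observation to a larger grid cell and publishing a random point inside that cell, so the true location cannot be recovered. The sketch below is only illustrative; the cell size and placement logic are assumptions, not iNaturalist's actual algorithm:

```python
import random

CELL_DEGREES = 0.2  # assumed size of the obscuring area, in degrees

def obscure(lat, lon, rng=random):
    """Return a random point inside the grid cell containing (lat, lon)."""
    cell_lat = (lat // CELL_DEGREES) * CELL_DEGREES  # snap to cell corner
    cell_lon = (lon // CELL_DEGREES) * CELL_DEGREES
    return (cell_lat + rng.uniform(0, CELL_DEGREES),
            cell_lon + rng.uniform(0, CELL_DEGREES))

# A true sighting location is replaced by a fuzzy one before publication.
fuzzy_lat, fuzzy_lon = obscure(51.5074, -0.1278)
```

Because every observation in the same cell maps to the same broad area, viewers can still see roughly where a species occurs without being able to pinpoint an individual plant or nest.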
The European Citizen Science Organization’s Data and Privacy Policy can be
seen at https://ecsa.citizen-science.net/about-us/privacy-policy.
BOX 10.2
A Human-Data Design Approach to Sensing Data
There are a number of off-the-shelf sensor toolkits available now that can be placed in some-
one’s home or local community to measure air quality or other aspects of the environment.
One of the earliest open platforms developed was Smart Citizen (Diez and Posada, 2013).
A compact device was built with a number of embedded sensors in it that could measure
nitrogen dioxide (NO2), carbon monoxide (CO), sunlight, noise pollution, temperature, and
humidity levels. The data being collected from the platform was connected to a live website
that could be accessed by anyone. The various data streams were presented via a dashboard
using canonical types of visualizations, such as time-series graphs (see Figure 10.4a). Data
streams from other Smart Citizen devices, set up throughout the world, could also be viewed
via the dashboard (https://smartcitizen.me/), making it easy to compare data collected from
Figure 10.3 iNaturalist.org geoprivacy obscures the location of an observation.
In this example: (1) EN indicates that the organism is endangered, so its location needs to be obscured; (2) shows that obscuring is done by randomly placing the location marker within the broader area; and (3) allows the contributor to verify that this observation has been recorded within iNaturalist.
Source: https://www.inaturalist.org
different locations. While the mass of environmental data accumulating was fascinating to
data scientists and researchers, this was not the case for many of the householders who had
set up a smart citizen device in their home. They found the visualizations presented via the
dashboard to be difficult to understand and were unable to connect the data being sensed with
what was happening in their homes (Balestrini et al., 2015). As a result, they did not find it
useful and quickly stopped looking at it.
The Physikit project took this user problem as its starting point (Houben et al., 2016). A human-data design approach was adopted, with the goal of transforming the sensed data being collected into something meaningful to the general public: users engage with their sensed data by giving it a physical, ambient presence in the location where it is collected. A set of colorful physical cubes was designed that could light up, move parts, or vibrate, depending on how they were configured (see Figure 10.4c).
Householders could easily configure a set of rules to decide which cubes to connect with
which data streams and what each cube should do depending on levels or thresholds being
sensed. This was intended to let them select aspects of their home that they were interested in
knowing more about.
For example, one of the PhysiCubes had a rotating disk on the top that could be set to
move clockwise or counterclockwise and at different speeds. One household decided to use it
to measure the level of humidity in their kitchen throughout the day. They placed a basil plant
on top of the cube (see Figure 10.4b) as a way of visibly showing the level of humidity in the
kitchen. The rule they set up for the cube was for it to rotate only if the humidity level detected
was below 60 percent. At the end of each day, they could tell how humid it had been in the
room by the extent to which the plant remained upright. If it was leaning toward the window,
this suggested to them that the humidity level in the kitchen had been high throughout the
day because the disk had not moved the plant around for the leaves to get an even amount
of light. The household had, in effect, created a naturally growing physical visualization that held historical data.
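The household's humidity rule amounts to a simple threshold mapping from a sensed value to a cube behavior. A minimal sketch of such a rule; the function name and the action vocabulary are invented for illustration, and Physikit's actual configuration interface differs:

```python
def humidity_rule(humidity_percent):
    """Rotate the cube only when the kitchen is dry (below 60% humidity),
    matching the rule the household set up for their basil plant."""
    if humidity_percent < 60:
        return {"action": "rotate", "direction": "clockwise", "speed": "slow"}
    return {"action": "stop"}

print(humidity_rule(45)["action"])  # rotate
print(humidity_rule(72)["action"])  # stop
```

The appeal of this kind of rule is that householders choose both the data stream and the threshold themselves, so the physical behavior maps onto something they care about.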
10.2.4 Sentiment Analysis
Sentiment analysis is a technique that is used to infer the affect of what a group of people or a crowd is feeling or saying. The phrases that people use when offering their opinions or
views are scored as being negative, positive, or neutral. The scales used vary along a con-
tinuum from negative to positive, for example, –10 to +10 (where –10 is the most negative,
0 is neutral, and +10 is the most positive). Some sentiment systems provide more qualitative
measures by identifying if the positive or negative sentiment is associated with a specific feel-
ing, for example anger, sadness, or fear (negative feelings) or happiness, joy, or enthusiasm
(positive feelings). The scores are extracted from people’s tweets and texts, online reviews,
and social media contributions. Their facial expressions (see Chapter 6, "Emotional Interaction") when looking at ads, movies, and other digital content, and customers' voices, can also be analyzed and classified using the same scales. Algorithms are then applied to the labeled data in order to identify and classify it in terms of the level of affect that has been
Figure 10.4 (a) Smart Citizen’s dashboard and visualization, (b) a PhysiMove cube set up in
a householder’s home, and (c) the components of the Physikit toolkit (Houben et al., 2016).
Source: (a) https://www.citizenme.com, (b) and (c) Yvonne Rogers
expressed. There are a number of online tools that can be used to do this, such as DisplayR
and CrowdFlower. MonkeyLearn provides a detailed tutorial on sentiment analysis (https://monkeylearn.com/sentiment-analysis/).
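The scoring scheme just described can be sketched as a toy lexicon-based scorer. The words and values below are invented for illustration; production tools rely on much larger lexicons and machine-learned classifiers:

```python
# Tiny invented lexicon using the -10..+10 scale described above.
LEXICON = {"love": 8, "great": 6, "happy": 5, "slow": -3, "hate": -8, "awful": -7}

def sentiment_score(text):
    """Average the lexicon scores of the words found in the text.
    Positive means positive sentiment; 0 means neutral (or no match)."""
    words = text.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0

print(sentiment_score("i love this great app"))   # 7.0
print(sentiment_score("awful and slow service"))  # -5.0
```

Note that a word absent from the lexicon contributes nothing, which is one reason slang and irony trip up this kind of scoring.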
Sentiment analysis is commonly used by marketing and advertising companies to decide
on what types of ads to design and place. In addition, it is increasingly being used in research
to study social science phenomena. For example, Veronikha Effendy (2018) used sentiment
analysis to study people’s opinions about the use of public transportation from their tweets.
In particular, she was interested in determining the positive and negative opinions toward it, which could then be used as evidence for making a case for how to improve public transportation and increase its use in Indonesia, where there are huge traffic congestion problems.
However, sentiment analysis as a technique is not an exact science and should be viewed
more as a heuristic than as an objective evaluation method. Giving a word a score from −10 to +10 is a crude measure. To assess how good sentiment analysis is as a
method, Nicole Watson and Henry Naish (2018) compared human judgment with computer-
based sentiment analysis for evaluating positive articles about the U.S. economy. They found
that the computer was more often wrong than right compared with the human participants.
Their analysis indicates that humans express their optimism about a topic in much richer
ways. Moreover, it also shows that by focusing on emotive words in phrases, sentiment
analysis misses the nuances of expression that humans understand intuitively. For example,
how would sentiment analysis score the phrase written by a teen in a text to their friends that
said, “I am weak”? It would probably give it a negative score. In fact, the phrase is teen slang
for “That is funny,” which is completely the opposite.
10.2.5 Social Network Analysis
Social network analysis (SNA) is a method based on social network theory (Wellman and
Berkovitz, 1988; Hampton and Wellman, 2003) for analyzing and evaluating the strength
of social ties within a network. While understanding social ties has been a strong interest of
sociologists for many years (for example, Hampton and Wellman, 2003; Putnam, 2000), as
social media became increasingly successful, it also became a key interest for computer and
information scientists (for example, Wasserman and Faust, 1994; Hansen et al., 2019). They
want to understand the relationships that form among people and groups within and across
different social media platforms, and with offline social networks too. Online, trillions of
messages, tweets, pictures, and videos are posted and responded to every second of every
day via Weibo, Tencent, Baidu, Facebook, Twitter, Instagram, and YouTube. Some examples
include families posting pictures of their kids’ birthday parties and family outings, discussion
of hot political issues, and friends and colleagues chatting and keeping in contact with each
other’s travel experiences, hobbies, life’s challenges, and successes.
Social network analysis enables these relationships to be seen more clearly. It helps to
reveal who is most active in a group, who belongs to which groups, and how the groups do
or do not interact and relate to each other. Analyses can also show which topics are hot and
throw light on when, how, and why some topics go viral. Managers, marketers, and politi-
cians are especially interested in how these activities can influence them, their companies, and
their constituents. Many other people like to try to make their posts or YouTube videos go
viral, as discussed in Chapter 5.
So, how does social network analysis work? It is a big topic, but broadly, as the name
suggests, a network is a collection of things and their relationships to each other. A social net-
work is a network of people and groups with relationships to each other. Human beings, like
other primates, have formed networks for as long as our species has existed. Many other spe-
cies, such as elephants, wolves, and meerkats, to name just a few, also rely on social networks
for safety, for collaboratively rearing their young, and when foraging or hunting for food.
Two main entities make up a social network. Nodes, which are also sometimes called enti-
ties or vertices, represent people and topics. The connections between the nodes are called edges,
which are also known as links or ties. The edges show the connections among nodes, for exam-
ple, the members of a family. They can show the direction of relationships; for instance, parents
may have a line with an arrow-head that points to their children, indicating the direction of the
relationship between the two nodes. Similarly, an arrow in the opposite direction indicates that
children have parents. These are known as directional edges. Edges can also indicate relation-
ships in both directions by having arrows at each end. Edges that do not have an arrowhead are
nondirectional; that is, the direction of the relationship between two nodes is not shown.
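The nodes-and-edges structure described above maps directly onto code. The following pure-Python sketch (with invented family names) builds a list of directional parent-to-child edges and computes degree centrality, one of the simpler centrality measures; dedicated tools such as NodeXL and Gephi compute many more metrics:

```python
# Directional edges: parent -> child, as in the family example.
edges = [("Ana", "Ben"), ("Ana", "Cleo"), ("Ana", "Dev"), ("Ben", "Dev")]

def degree_centrality(edge_list):
    """Degree of each node divided by (n - 1), the maximum possible degree,
    so that the most-connected node scores closest to 1.0."""
    degree = {}
    for a, b in edge_list:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    n = len(degree)
    return {node: d / (n - 1) for node, d in degree.items()}

centrality = degree_centrality(edges)
print(centrality["Ana"])  # 1.0
```

Here Ana is connected to every other node, so her centrality is 1.0; nodes on the periphery, like Cleo, score lower. Visualizations such as those in Figure 10.5 use measures like this to size and position nodes.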
Drawing on algorithms and based in statistics, social network analysis offers a range of
metrics for describing the properties of networks. One of the most important sets of metrics
for visualizing networks in big data is measures of centrality. Several different measures of
centrality exist based on different statistical formulae. These and other metrics are used to
create visualizations like the ones shown in Figure 10.5 that show overlapping clusters. The
clusters indicate voting patterns by members of the U.S. Senate in 1989, 1997, and 2013.
Red represents Republicans, and blue represents Democrats. The graphs indicate how, over
almost 25 years, the voting behavior of members of the two parties has become increasingly
siloed, with fewer and fewer members voting with members from the other party. The nodes
representing members on the far right and far left are only connected to the nodes of sena-
tors in their own party, indicating that they don’t vote with members from the other party.
The nodes in the middle indicate that those members sometimes vote with members of the
other party. From being united on some issues back in 1989, bipartisan voting behavior has
decreased over the years as indicated by the social network graphs. By 2013, few members of
the two parties voted with members from the other party.
Some other topics that have been studied using social network analysis include com-
munication during the flood in Louisiana, where Jooho Kim and Makarand Hastak (2018)
examined the role of social media in flood victims’ communication, both with each other
and with emergency services. They found that Facebook was used particularly effectively
to disseminate information. Other studies include one by Dinah Handel and her colleagues
on teachers’ tweets on Twitter (Handel et al., 2016), and Diane Harris Cline has used social
network analysis for a number of studies to examine the relationships between historical
characters (Cline, 2012). In addition, there are many other examples related to a diversity
of topics, including business communication, and even the relationships and activities of
characters in Shakespeare’s plays (Hansen et al., 2019). Revealing as many of these social net-
work graphs are, using the tools effectively to separate and display clusters, outliers, and other
network features takes practice. However, increasing attention to the UX design and support
provided by such tools enables beginners to do straightforward analyses. Two of the most
well-known social network analysis tools are NodeXL (Hansen et al., 2019), which runs
on Windows-based machines, and Gephi, which runs on both Windows and macOS. Many
YouTube videos are available that describe how to use these tools.
US Senator Voting Relationships
1989
1997
2013
Figure 10.5 Voting behavior of U.S. Senators in 1989, 1997, and 2013. Red represents Republi-
cans, and blue represents Democrats.
Source: Forbes Inc.
This video is an introductory tutorial about Gephi (2016) by Jen Golbeck, profes-
sor at the University of Maryland. It is one of a series, so if you continue watching
at the end of the video, the next one progresses to describe more advanced fea-
tures of Gephi, including how to use color to highlight particular features of inter-
est in the network graphs: https://www.youtube.com/watch?v=hJ4hcq3yX4k.
In this YouTube video, Marc Smith, a sociologist and director of the Social Media
Foundation, shares with the relationship mapping workgroup how he has used
NodeXL for social media network analysis and visualization: https://www.youtube.com/watch?v=Ftssu_5x7zk.
DILEMMA
How to Probe People’s Reactions to Tracking
There is often a gulf between the benefits provided to society through tracking and the level of
individual privacy that is being sacrificed. It is important, therefore, to have an open debate about
the costs versus the benefits of using future tracking and monitoring technologies. Ideally, this
should take place before any deployment of the new technology. However, just asking people what
they think about a future tracking technology may not reveal the true extent of their concerns and
feelings. What other methods could be used? One approach is to use a provocative probe.
For example, a project called the Quantified Toilets did this by setting up a fake service
in a public place to disrupt the accepted state of affairs. The team was interested in how a
community would react to having their urine analyzed in a public toilet with the goal of
improving public health. They pretended to be a commercial company called the Quantified
Toilets, which had created a new urine analysis technology infrastructure and installed it in
the public toilets at a convention center. Signage placed throughout the toilets explained the rationale for the initiative (see Figure 10.6). In addition, the
Figure 10.6 Signage posted in the convention center Quantified Toilets
Source: Used courtesy of Quantified Toilets
team created a website (quantifiedtoilets.com) that presented fake real-time data feeds from
each of the toilets in the convention center showing the results of the urine analysis, including
details such as blood alcohol levels, drugs detected, pregnancy, and odor (see Figure 10.7).
All sampled data were anonymized. In addition, a link to a survey was added, and the general
public was invited to give their feedback.
The goal was to observe people’s reactions when coming across this new service on going
to the toilet. Would they become upset, surprised, or outraged, or wouldn’t they mind? Would
they question the reality of the situation and tell others?
So, what happened? A diverse range of responses was observed. These included disapproval (for example, “Health advice? It does not get any creepier.”); approval (“Privacy is
important. But I would like to know if I was sick and this is a good way to do so.”); concern
(for instance, “Imagine if your employer could find out how hard you had partied last night.”); resignation (“I am sure the government has been collecting this data for years.”); voyeurism
(“I just spent the last 10 minutes watching the pee-pee logs. Can’t stop watching them.”); and
even humor, where some people tried to match people entering and exiting the toilets with the
data appearing on the website.
Within an hour of the project going live, #quantifiedtoilets went viral on social media,
triggering a snowball of tweets and retweets. Many face-to-face discussions took place at
the convention center, and articles and blogs were written, some appearing in magazines and
newspapers. Some visitors were duped and tweeted how incensed they were. Arguably, this
range of responses and level of discussion would never have happened if the researchers had
just asked people in the street whether they would mind their urine being analyzed in a public toilet.
What do you think of this type of study? Do you think it is a good way to open up debate
about data tracking in society, or is it a step too far?
Figure 10.7 The real-time data provided on the quantifiedtoilets.com website
Source: Used courtesy of Quantified Toilets
10.2.6 Combining Multiple Sources of Data
A number of researchers have started collecting data from multiple sources by combining
automatic sensing and subjective reporting. The goal is to obtain a more comprehensive pic-
ture about a domain, such as a population’s mental health, than if only one or two sources of
data were used (for instance, interviews or surveys). One of the first comprehensive studies to
do this was StudentLife (Harari et al., 2017), which was concerned with learning more about
students’ mental health. In particular, the research team wanted to know why some students
do better than others in times of stress, why some students burn out, and why still others drop
out. They were also interested in the effect of stress, mood, workload, sociability, sleep, and
mental health on the students’ academic performance. They were especially interested in how
the students’ moods change in response to their workload (such as their assignments, mid-
terms, finals).
During a 10-week term, the researchers collected masses of data about a cohort of 48
students studying at Dartmouth College in the United States (Harari et al., 2017). They devel-
oped an app that ran on the students’ phones, without the students needing to do anything,
to measure the following:
• Wake up time, bed time, and sleep duration
• The number of conversations and duration of each conversation per day
• The kind and amount of physical activity (walking, sitting, running, standing, and so on)
• Where they were located and how long they stayed there (that is, in the dorm, in class, at
a party, in the gym, and so forth)
• The number of people around the student throughout the day
• Student mobility outdoors and indoors (in campus buildings)
• Their stress levels throughout the day, week, and term
• Positive affect (how good they felt about themselves)
• Eating habits (where and when they ate)
• App usage
• Their comments on campus about national events (for example, the Boston bombing)
They also used a number of pre- and post-mental health surveys and collected the stu-
dents’ grades. These were used as ground truth for evaluating mental health and academic
performance, respectively. The researchers went to great lengths to ensure that all of the
data stored was anonymized in a dataset to protect the privacy of the participants. Having
achieved this, the researchers then opened up the dataset for others to examine and use to
conduct further analyses (http://studentlife.cs.dartmouth.edu/dataset.html).
The researchers were able to mine the data that they had collected automatically from the
students’ smartphones and learn several new things about their behavior. In particular, they
found that a number of the behavioral factors that had been tracked from their smartphones
were correlated to their grades, including activity, conversational interaction, mobility, class
attendance, studying, and partying.
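"Correlated" here refers to standard correlation statistics computed between a tracked behavioral measure and grades across the cohort. A minimal sketch of a Pearson correlation, using invented numbers rather than the actual StudentLife data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences:
    +1 is a perfect positive relationship, -1 a perfect negative one."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: class attendance rate vs. grade for five students.
attendance = [0.9, 0.8, 0.6, 0.95, 0.5]
grades = [3.7, 3.4, 2.9, 3.9, 2.5]
r = pearson_r(attendance, grades)
```

With the made-up numbers above, attendance and grades rise and fall together, so r comes out close to +1. Correlation alone, of course, does not establish that attending class causes better grades.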
Figure 10.8 shows a graph indicating the relationship between activity, deadlines, attend-
ance, and sleep. It shows that students are very active at the beginning of the term and get
very little sleep. This suggests that they are out partying a lot. They also have high attend-
ance rates at the beginning of term. As the term progresses, however, their behavior changes.
Toward the end of term, sleep, attendance, and activity all drop off dramatically!
Figure 10.8 Students’ activity, sleep, and attendance levels against deadlines during a term
Source: StudentLife Study
ACTIVITY 10.2
From the two graphs shown in Figure 10.9, what can you say about the students’ activity, their
stress levels, and their level of socializing in relation to deadlines over the course of the term?
Figure 10.9 Student behavioral measures over the course of a term: conversation frequency and duration (top) and gym visits, mood, and stress level (bottom), each plotted against deadlines
Source: StudentLife Study
10.3 Visualizing and Exploring Data
Every day, people interact with different kinds of visualizations, including road signs, maps,
medical images, mathematical abstractions, tables of figures, schematic diagrams, graphs,
scatter plots, and many more. These representations are intended to help us make sense of the
world we live in, but for them to be useful, they have to be presented in ways that are under-
standable for the people who use them. Being able to take meaning from data involves being
able to see it and understand the way that it is represented and its context. What kind of data
is it? What is the data about? Why was it collected? Why was it analyzed and represented in
a particular way? The skills needed to understand and interpret visualizations are referred
to as visual literacy. As with any skill, different people exhibit different levels of visual lit-
eracy, depending on their experience of using visual representations (Sarikaya et al., 2018).
Figure 10.10 shows a simplified path by which data becomes meaningful. Starting with the analyzed data, which is represented in some way, the user perceives and interprets the data representation, taking into account the context of the data. The user is then able to understand and communicate what the data shows to others.
Figure 10.10 A simplified path for data to be meaningful, moving from data analysis through presentation, cognition and perception, and communication, shaped throughout by context and by the user interaction and user experience
Source: Lugmayr et al. (2017). Used courtesy of Dr Artur Lugmayr
Comment
The top figure shows that students start the term by having long social conversations. This
begins to tail off as mid-term approaches. Students resort to having fewer and shorter conversations. After the deadlines are met, students switch back to having many more and longer conversations. The bottom figure shows that students started out upbeat, having returned from
vacation feeling good about themselves. They appear relaxed (high mood level) and are active
(going to the gym a lot). These attributes all start going downhill as the term comes to an
end—presumably as their stress levels rise because of looming deadlines.
Even graphical representations of small amounts of data (for example, 20–100 items)
can be hard to interpret if the people trying to make sense of them don’t understand the way
that the data is being displayed. Furthermore, sometimes representations, such as bar graphs,
line graphs, and scatter plots, are displayed in misleading ways. Danielle Szafir (2018), for
example, asks, “How can we craft visualizations that effectively communicate the right information from our data?” She describes how data displays can mislead users when designers show axes with truncated scales, or when they display data as 3D bars, making it hard to read exact values because it isn’t obvious which side of the 3D column to read against the axis. Interactive visualizations typically include all of the canonical forms of representations
(for instance, bar charts or pie charts) along with tree maps and advanced visualization
techniques that enable users to interact with the data online by panning and zooming in and
out of the displays. With the increased tendency to develop more complex visualizations to
display increasingly large volumes of data, the question about how to craft the data represen-
tations and tools to develop and explore the data is even more relevant.
As Stu Card and his colleagues explained two decades ago, the goal of data visualiza-
tion tools is to amplify human cognition so that users can see patterns, trends, correlations,
and anomalies in the data that lead them to gain new insights and make new discoveries
(Card et al., 1999). Many of the data visualizations and tools that have been developed
since then are now being used by practitioners and researchers from fields including health
and wellness, finance, business, science, educational analytics, decision-making, and personal
exploration. For example, millions of people use interactive maps to find their way, benefit-
ting from their integration into car navigation and car-sharing apps. Physicians and radiolo-
gists compare images from thousands of patients, and financiers examine trends in the stocks
of hundreds of companies. Data visualization tools can help users change and manipulate
variables to see what happens; for example, they can zoom in and out of the data to see an
overview or to get details. Ben Shneiderman (1996) summarizes this behavior in his mantra
“overview first, zoom and filter, and then details on demand.”
While the early UX research on information visualizations still guides UX designers in
their pursuit of designing new interactive visualizations, tools are needed for interacting with
large volumes of data (Whitney, 2012). Many of these tools require expertise beyond that
of most casual users in order to use them effectively. Typically, the data displays consist of
many of the common techniques mentioned earlier (such as graphs and scatter plots) coupled
with 3D interactive maps, time-series data, trees, heat maps, and networks (Munzner, 2014).
Sometimes these visualizations were developed for uses other than those for which they
are used today. For example, tree maps were originally developed to visualize file systems,
enabling users to understand why they are running out of disk space on their hard drives
by seeing how much space different applications and files were using (Shneiderman, 1992).
However, tree maps were soon adopted by media and financial reporters for communicating
changes in the stock market, and they became known as “market maps” (see Figure 10.11).
Like interactive maps, tree maps have become a general-purpose tool embedded in most
widely used applications, such as Microsoft’s Excel (Shneiderman, 2016).
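The defining property of a tree map, area proportional to value, can be illustrated with a one-level "slice" layout that divides a rectangle among items. This is a simplified sketch; real tree maps nest the layout recursively for the tree's hierarchy and often use squarified algorithms to keep rectangles closer to square:

```python
def slice_layout(values, width=100.0, height=100.0):
    """Split a width x height rectangle into horizontal slices, one per
    value, so that each slice's area is proportional to its value."""
    total = sum(values)
    rects, y = [], 0.0
    for v in values:
        h = height * v / total
        rects.append((0.0, y, width, h))  # (x, y, width, height)
        y += h
    return rects

# Three stocks whose market shares are 50%, 30%, and 20% of the sector.
rects = slice_layout([50, 30, 20])
```

In a market map like Figure 10.11, each rectangle would additionally be colored by price change (green up, red down), layering a second variable on top of the area encoding.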
The ability to collect and store large amounts of data easily has encouraged the develop-
ment of visualizations that display different types of data. For example, Figure 10.12 shows
segments of sounds recorded from birds and other organisms collected by Jessie Oliver and
her colleagues (2018). These researchers wanted to see how people investigated and anno-
tated this data and in turn how this approach can be used to find and identify birds and other
10 DATA AT SCALE
animals in the wild by recording their songs and calls. When the visualizations, known as
spectrograms, were shown to birders, the researchers were intrigued to see how they evoked
memories of hearing the birds in the field. The birders also found this data visualization to be
helpful in corroborating their identifications of birds with other birders. From a UX design
Figure 10.11 A market map of the S&P 500, which is a financial index for stocks. Green indicates
stocks that increased in value, and red indicates stocks that decreased in value that day.
Source: Used courtesy of FINVIZ
Figure 10.12 Visualization of different sounds, including birds, owls, and insects, from three areas
of Australia that are displayed so they can be interpreted and compared
Source: Oliver et al. (2018). Reproduced with permission of ACM Publications
10.3 VISUALIZING AND EXPLORING DATA
perspective, Jessie Oliver and her colleagues faced the challenge of how to display long sound
recordings visually. They used a technique developed by Michael Towsey and his colleagues
(2014) in which algorithms compress the spectrograms so that one pixel represents one min-
ute of sound recording. The resulting spectrograms enable birders to get an overview of the
recordings that in turn allows them to see patterns in the bird songs.
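The compression step can be sketched as collapsing each minute of spectrogram frames into a single output column. Note that this is a simplification for illustration; the actual method of Towsey and colleagues computes a set of acoustic indices per minute rather than the per-bin maximum used here:

```python
def compress_spectrogram(frames, frames_per_minute):
    """Collapse a spectrogram (a list of per-frame frequency-bin lists)
    so that each output column summarizes one minute of audio, keeping
    the loudest value seen in each frequency bin for that minute."""
    columns = []
    for start in range(0, len(frames), frames_per_minute):
        chunk = frames[start:start + frames_per_minute]
        # per-bin maximum across all frames in this minute
        columns.append([max(f[b] for f in chunk) for b in range(len(chunk[0]))])
    return columns

# Two "minutes" of toy data: 4 frames per minute, 3 frequency bins each
frames = [[1, 0, 2], [0, 5, 1], [2, 1, 0], [1, 1, 1],
          [0, 0, 9], [3, 2, 1], [1, 1, 1], [0, 4, 0]]
compressed = compress_spectrogram(frames, 4)
```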
ACTIVITY 10.3
This video by Jeff Heer (2017) from the University of Washington gives an overview of
different types of data visualizations and data visualization tools: https://www.youtube.com/
watch?v=hsfWtPH2kDg.
Watch this video and then describe (a) some of the benefits of using interactive visualiza-
tions and (b) some of the UX challenges in designing interactive visualizations.
Comment
1. By working with interactive visualizations, users can interact with data to explore aspects
of interest by going deeper into particular parts of the data. This is demonstrated in the
visualization of airline on-time performance in which a user can filter portions of the data
to view which flights arrive late. From this exploration, the user will discover that flight
delays are associated with later times of day. In other words, the data reveals that as
time goes by, the actual arrival times of flights tend to fall further behind the scheduled
arrival times. Also, by being able to filter and manipulate particular parts of the data,
users can answer questions that would be difficult to investigate without data visualization
tools, such as what causes flights to arrive early?
2. In the video, Jeff Heer talks about some of the human perceptual and cognitive issues
about which UX designers must be aware when they create visualizations. For example,
he mentions the importance of using color appropriately in a visualization of arteries. He
also talks about the challenge of knowing how much detail to include in the visualization
about the structure of the arteries.
In addition, Jeff mentions the power of many current tools for investigating numerous
different variables, but he notes that using some of these tools proficiently requires
programming and data analytics skills. UX visualization tool designers therefore need to find
ways to support users who may not have these skills. He describes how some designers are
tempted to get around this problem by automating the analyses. He points out, however,
that a careful balance is needed in deciding how much automation should be provided and
how much control should be left in the hands of users. Making this judgment is challenging
for designers.
Jeff also mentions that there is much more to analyzing data than visualizing it.
Data has to be cleaned and prepared, a task referred to as data wrangling, which can take
up to 80 percent of a data scientist’s time. Issues of privacy also need to be considered. As
a UX data visualization tool designer, Jeff suggests that all of these issues are challenges
that must be considered when designing visualizations and data viz tools.
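The airline-delay exploration described in the first comment can be mimicked on a toy dataset. The flight records and time windows below are hypothetical, chosen only to show the filter-then-summarize pattern:

```python
# Hypothetical flight records: (scheduled_hour, delay_minutes)
flights = [(7, -2), (9, 0), (11, 5), (13, 8),
           (15, 12), (17, 20), (19, 25), (21, 31)]

def mean_delay(records, hour_min, hour_max):
    """Filter flights to a time-of-day window and average their delays."""
    selected = [d for h, d in records if hour_min <= h <= hour_max]
    return sum(selected) / len(selected)

morning = mean_delay(flights, 0, 11)   # early flights
evening = mean_delay(flights, 17, 23)  # late flights
```

Comparing the two averages reproduces, in miniature, the insight that delays grow as the day goes on.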
Powerful tools and platforms for analyzing and making predictions from large volumes
of data have been designed for marketing, scientific and medical research, finance, business,
and other kinds of professional use. To use these tools typically requires data analytic skills and
statistical knowledge, which makes the potential benefits they offer out of reach for many
people (Mittelstadt, 2012).
Many of these tools have been developed by large companies and research labs (Sakr
et al., 2015). Some examples include Tableau, Qlik, Datapine, Voyager 2, Power BI, Zoho,
and D3. To use these tools effectively, business managers often partner with analysts who
assist them in interactive explorations that can lead to new insights. Together, the analysts
and managers select widgets from those made available in the tool, presented as icons that
represent their underlying functionality, and then create a customized “interactive
dashboard” for use by the manager.
The dashboard is an interactive panel of control widgets that contains sliders, check-
boxes, radio buttons, and coordinated multiple window displays of different kinds of graphi-
cal representations, such as bar and line graphs, heat maps, tree maps, infographics, word
clouds, scatterplots, and other kinds of visualizations. Managers can then use these
customized dashboards to explore the data and make informed decisions. All of the
components of the dashboard are interactive and linked together, drawing from the same
data selected to investigate particular questions of interest (see Figure 10.13). This enables their users to
benefit from seeing data displayed in different ways and to explore how these representations
change as they manipulate sliders and other controls. The displays produced by tools like
Microsoft’s Power BI, Tableau, and similar products are used by managers who can make the
same dashboards available to other employees across their company. Everyone can then see,
discuss, and interact with the same data.
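The coordination between dashboard components can be sketched as a shared filter that every view recomputes from. This illustrative pattern stands in for the declarative linking that products such as Tableau and Power BI actually provide; the class, view names, and sales figures are all hypothetical:

```python
class Dashboard:
    """Minimal sketch of coordinated views: every registered view is
    recomputed from the same filtered data whenever a control changes."""
    def __init__(self, data):
        self.data = data
        self.views = {}
        self.threshold = 0  # the "slider" control

    def add_view(self, name, fn):
        self.views[name] = fn

    def set_threshold(self, value):
        self.threshold = value  # moving the slider updates every view
        return self.render()

    def render(self):
        visible = [x for x in self.data if x >= self.threshold]
        return {name: fn(visible) for name, fn in self.views.items()}

sales = [120, 45, 300, 80, 210]
dash = Dashboard(sales)
dash.add_view("total", sum)
dash.add_view("count", len)
result = dash.set_threshold(100)  # both views update together
```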
Figure 10.13 A dashboard that was created to show changes in sales information
Source: https://www.zoho.com/analytics/tour.html
Another technique for creating interactive visualizations is Data-Driven Documents
(D3) (Bostock et al., 2011). This tool is used to create web-based interactive displays. It is a
powerful, specialist tool that extends JavaScript, and it requires programming expertise to
use it effectively. Journalists use it to create displays that appear in traditional newsprint
and that readers can also interact with online (see Figure 10.14).
A challenge is how to make powerful tools available to people who want to explore such
topics as personal finance and health data but who are not trained as analysts and who do
not want to employ or work with an analyst. Furthermore, some products are expensive and
are unaffordable for many individuals and nonprofit organizations.
In a recent study, Alper Sarikaya and colleagues (2018) pointed out that the term
dashboard requires more precise description and a deeper understanding of how the con-
text of its use can impact the UX design of dashboards. They challenge UX designers to
develop dashboards for different types of use cases for a wide range of users. In their study,
For an overview of Tableau that shows how it is used and how Tableau dashboards
are created, watch this video clip: https://www.tableau.com/#hero-video.
Figure 10.14 An interactive graphic produced using D3 for the New York Times. It shows the tax
rate paid by the different kinds of companies that form the S&P 500 financial index.
Source: Reproduced with permission of PARS International
Watch this New York Times graphic for an article entitled “Across U.S. Compa-
nies, Tax Rates Vary Greatly.” (Navigate to the following link to interact with the
graphic. Try panning over the display.)
https://archive.nytimes.com/www.nytimes.com/interactive/2013/05/25/sunday-review/corporate-taxes.html
they analyzed a range of dashboards, first by reviewing published papers written by other
researchers and then through a qualitative study in which they classified the features of dif-
ferent dashboards and how they are used.
They characterized the dashboards according to their design goals, levels of interaction,
and the ways in which they are used. Figure 10.15 shows examples of the seven kinds of
dashboards that they identified. Each type is named according to how it is used: strate-
gic decision-making, quantified self, static operational, static organizational, operational
decision-making, communication, and dashboards evolved, which was a catchall category
that included features that did not fit into other categories. They note that many of these
examples visually appear as dashboards but may not fit the strictest definition of dashboard
functionality.
Sarikaya et al. also advocate ways of supporting users by telling stories that can help
illustrate the context that the data visualizations represent. They point out challenges in
supporting users who interact with visualizations, such as giving them more
control over how they configure and use dashboards. A further challenge involves finding
ways to support users in developing data and visual and analytic literacy. They also point out
that there is a new opportunity for UX designers that involves finding ways to support users
in making choices about which data and representations to use in different contexts. This
includes understanding the broader social impact of dashboards.
Strategic Decision-Making (1), Quantified Self (2), Static Operational (3), Static Organizational (4), Operational Decision-Making (5), Communication (6), Dashboards Evolved (7)
Figure 10.15 Exemplar dashboards (Sarikaya et al., 2018). Dashboard 1 and Dashboard 5 spe-
cifically target decision-making, while Dashboard 3 and Dashboard 4 target consumer awareness.
Dashboard 2 represents the quantified self (such as a smart home), while Dashboard 6 represents
those dashboards targeting communication. Dashboard 7 captures some novel extensions of tra-
ditional dashboards.
Source: Sarikaya et al. (2018), Graph 1. Reproduced with permission of IEEE.
ACTIVITY 10.4
Study Figure 10.16(a) from the weather site https://www.wunderground.com. It shows
weather data for December 19, 2018, in Washington, D.C. In particular, take note of the
temperature, precipitation, and wind data. What information do they provide?
Now compare this visualization with that depicted in the “wundermap” (see Figure 10.16b).
How do the two displays differ, and which do you prefer?
(a)
(b)
Figure 10.16 (a) Actual weather data and (b) a wundermap of the same area and time
Source: https://www.wunderground.com
Comment
The first display in Figure 10.16(a) contains representations that are fairly standard for con-
veying weather information. The green ring shows the maximum and minimum temperatures
for now and what it feels like. A diagram of a sun indicates that it is a sunny day with some
clouds, even though it is quite cold. It is also easy to see that the wind is from the south,
and presumably the circle represents a compass and the pointed wedge indicates the wind
direction.
The display in Figure 10.16(b) provides similar data using conventional meteorological
symbols to show temperature and wind. It makes local effects easier to see but an overview
of the entire Washington D.C. area harder to get. (If you are able to access the website, try clicking “layer” and selecting other
options not shown in the figures.) Which display is preferable probably depends on how much
detail you want—an overview or detail about a specific area in the Washington D.C. region—
and your tolerance for clutter.
BOX 10.3
Visualizing the Same Sensor Data by Using Different Kinds of
Representations for Environmentalists and the General Public
One of the parks in London that was used to stage the Olympics in 2012 has been
transformed into a “smart park”: a number of sensors were placed throughout it to measure
the park’s health and use. One type was bat call sensors. The goal was to ensure that
the park’s bat conservation program was effective, as well as to connect visitors
and residents to the wildlife around the park (https://
naturesmartcities.com/). Monitoring bat calls is also a technique that was used to assess the
general health of the park.
The data collected was primarily provided to the scientists in the form of spectrograms
(see Figure 10.17b), but it was also presented in a more accessible form to the public via
an interactive display (see Figure 10.17a). As part of a public kiosk, a schematic map was
provided that showed where in the park the bat call data had been collected (Matej et al.,
2018). A slider was provided to enable visitors to interact with the data: moving it to the left
showed bat call data from the night before, while moving it to the right showed bat call data
from the previous 10 nights. The LEDs on the map changed in color and intensity, represent-
ing the varying levels of bat calls. The total number was also shown in the digital display.
The kiosk was deployed in the park, and many passersby stopped for a considerable length
of time to learn about bats and interact with the data. The physical act of using the slider
provided an engaging way of exploring the data rather than just looking at a static visualiza-
tion or dashboard.
10.4 Ethical Design Concerns
In the introduction to this chapter, we mentioned how masses of data are now regularly being
collected from people for a variety of reasons, including improving public services, reduc-
ing congestion, and enhancing security measures. It is usually anonymized and sometimes
aggregated to make it publicly available, for example showing the energy consumption data
for a given space such as a floor of a building. Figure 10.18 shows a floor-by-floor compari-
son for a University of Melbourne building, where the red bar for the basement is the worst
(a)
(b)
Figure 10.17 The same bat call data was made accessible (a) to the general public via an
interactive visualization and (b) as a spectrogram intended for environmental scientists.
Source: (a) Used courtesy of Matej Kaninsky and (b) Used courtesy of Sarah Gallacher
performer in terms of energy usage and the green bar for Level 1 is the best performer. The
idea is to provide feedback on energy consumption in the building to increase awareness
among the inhabitants to encourage them to reduce their energy consumption. However,
what if localized occupancy rates or energy consumption for each office were shown? It
would not take much to figure out who was in that space. Would that be a step too far and
an invasion of their privacy? Would people mind?
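The banding shown in Figure 10.18 amounts to ranking spaces by their consumption. A sketch of that idea with hypothetical kWh figures (the display's actual banding rules are not described in the text):

```python
def band_by_consumption(readings):
    """Assign the best performer green, the worst red, and everything
    else yellow — the kind of banding shown on the public display."""
    best = min(readings, key=readings.get)
    worst = max(readings, key=readings.get)
    return {floor: "green" if floor == best else
                   "red" if floor == worst else "yellow"
            for floor in readings}

# Hypothetical average daily consumption per floor, in kWh
kwh = {"Basement": 420, "Ground": 180, "Level 1": 95, "Level 2": 210}
bands = band_by_consumption(kwh)
```

The privacy question in the text is about granularity: the same code applied per office rather than per floor would make individuals identifiable.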
When deciding how to analyze and act upon data that has been automatically
collected from different sensors, it is important to consider how ethical the data collection
and storage processes are and how the data analysis will be used. “Ethics” is usually
taken to mean “the standards of conduct that distinguish between right and wrong, good and
bad, and so on” (Singer, 2011, p. 14). There are many codes of ethics available from official
bodies that provide guidance. For example, the ACM (2018) and IEEE (2018) have both
developed codes of ethics (https://ethics.acm.org/2018-code-draft-1/; IEEE Ethically Aligned
Design, 2018). They point out that central to any ethical discussion is the importance of
protecting fundamental human rights and respecting the diversity of all cultures. They also
state the need to be fair, honest, trustworthy, and respectful of privacy.
To use data ethically, researchers and companies can limit the data they collect in the first
place. Rather than trying to collect as much data as possible (as has often been the situation
Figure 10.18 Average daily energy consumption depicted on a public display for a building at the
University of Melbourne. Green is best performer, yellow is in the middle, and red is the worst
performer.
Source: Helen Sharp
in research—just in case it might be useful for subsequent analysis), it has been proposed that
researchers and data practitioners follow an approach called privacy by design (Crowcroft
et al., 2018). That way, they can avoid collecting excessive data that might be sensitive but
not needed (see also Chapter 8 and Chapter 14). Furthermore, it may be possible to collect
and analyze the data on the device itself, rather than uploading it to the cloud (Lane and
Georgiev, 2015).
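The on-device idea can be illustrated in a few lines: raw samples stay local, and only a coarsened aggregate is prepared for upload. This is a sketch of the principle with hypothetical heart-rate readings, not any particular system's code:

```python
def local_summary(raw_readings, round_to=10):
    """Compute an aggregate on the device and coarsen it before upload,
    so raw, potentially identifying samples never leave the device."""
    mean = sum(raw_readings) / len(raw_readings)
    return round(mean / round_to) * round_to  # coarsened aggregate only

heart_rates = [62, 64, 61, 70, 68, 71]  # hypothetical on-device samples
uploaded = local_summary(heart_rates)   # only this value would be sent
```

Collecting only what is needed, at the coarsest useful resolution, is the essence of privacy by design.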
ACTIVITY 10.5
Watch the following TEDx talk by Jen Golbeck (the author of the Atlantic article on the
Quantified Toilets) where she discusses why social media “likes” say more than you think.
The talk was given in 2013 and since then has had more than 2.25 million views. Even though the
TEDx talk is a few years old, the issues raised in it are still relevant today. In particular, she
discusses how people’s behavior online enables companies to predict what they like, what they
might be interested in buying, and even their political views.
https://www.ted.com/talks/jennifer_golbeck_the_curly_fry_conundrum_why_social_media_likes_say_more_than_you_might_think?language=en&utm_campaign=tedspread&utm_medium=referral&utm_source=tedcomshare
What do you think the privacy issues are here?
Comment
Jen Golbeck provides two compelling examples in her talk. The first is the well-known exam-
ple of how a teenage girl’s pregnancy was predicted from her online purchases of things like
vitamins. The second example was how data on liking crinkly fries coupled with a knowledge
of the theory of homophily was used to predict that a group of people have above average
intelligence. By understanding that the theory of homophily explains that people who are
similar tend to like the same things, trust each other, and seek out each other’s company, Jen
Golbeck was able to look for relationships in data about “liking” crinkly fries. The crinkly
fries example indicates that even though it is absurd that liking crinkly fries is a predictor
of above average intelligence, in this particular example, the person who created the post
attracted “likes” from friends who were also of above average intelligence. It is an amusing
example, but the main point is to illustrate that information that people contribute in social
media, often unknowingly, can be used to infer all kinds of things about them, such as their
ethnicity, age, gender, shopping behavior, and what they like.
The concerns highlighted in the video remain pressing for politicians and others looking for
ways to protect the general public by controlling what social media companies can and cannot
do with personal data. For example, the General Data Protection Regulation (GDPR) is a law introduced by the European Union that
seeks to protect data privacy (discussed in Chapter 8, “Data Analysis,” and Chapter 14,
“Introducing Evaluation”). Within the United States and in other countries, the need for con-
trolling how personal data is used is being debated by governments and consumer protection
groups, such as the Electronic Privacy Information Center (EPIC; www.epic.org), with the
goal of finding ways to protect data privacy.
An ethical strategy that can be adopted for systems that analyze data is to have an
explicit agreement in place as to how it will be used and acted upon. Such an agreement can
specify in what way the analysis is trustworthy; trustworthiness is usually taken to mean
how credible the analyses performed on the data are (see Davis, 2012). When a decision is
made on behalf of a human by classifying their data with a machine learning algorithm, can
the user be sure that the decision is trustworthy?
Another ethical concern is whether the form of data analysis being used by a system is
socially acceptable (Harper et al., 2008). Have clear boundaries been established between
what is acceptable and not acceptable, especially when it is personal data that is being
analyzed and classified, such as health data or criminal history? Is there a clear understanding
about the data that is being analyzed to provide information about a phenomenon versus
that which is performed to make a decision about someone’s future? Are there agreed poli-
cies in place? How much do the boundaries shift and change over time as new technology
becomes more mainstream and authorities change?
ACTIVITY 10.6
Shoplifting cost U.S. retailers $44 billion in 2014. To help combat shoplifting, DeepCam
developed an intelligent system that passively monitors people coming into a store using
CCTV video footage and identifies potential suspects (see Figure 10.19). To do this, it uses AI
algorithms and facial recognition software. Do you think this practice is socially acceptable?
What might be the privacy concerns? To find out more about their system, check out their
website at https://deepcamai.com/.
Figure 10.19 DeepCam’s face-tracking software used in a store
Source: https://deepcamai.com
The Open Data Institute (https://www.theodi.org) has provided a set of questions to
help researchers, system developers, and data scientists begin to address these concerns.
The questions are grouped into sets as part of a
framework called the data ethics canvas. For example, two subsets are about the positive and
negative effects that a project can have on people. The questions include “Which individu-
als, demographics, and organizations will be positively affected by the project?” and “How
is positive impact being measured?” The negative questions include “Could the manner in
which the data is collected, shared, and used cause harm?” and “Could people perceive it to
be harmful?” Working through each set of questions is intended to help researchers and
organizations identify potential ethical issues for a data project or activity and to encourage
explicit reflection and debate within a project team as to whom the project will impact and
the steps needed to ensure that the project is ethical.
In Chapter 1, “What Is Interaction Design?” we outlined a number of usability and UX
design principles that were transformed into questions, criteria, and examples showing how
to use them in the design process. Here, we introduce some other principles that relate to the
ethics of collecting and using data at scale and that are often talked about in the literature
on ethics, data science, HCI, and AI (see Cramer et al., 2008; Molich et al., 2001; Crowcroft
et al., 2018; Chuang and Pfeil, 2018; and van den Hoven, 2015). We call these data eth-
ics principles (see Box 10.4). Four principles that often appear in the reports, handbooks,
and articles on ethics and interaction design are fairness, accountability, transparency, and
explainability (FATE). They are also included as key principles that lie at the heart of the
General Data Protection Regulation (GDPR). For example, Article 5 of the GDPR requires that
personal data shall be “processed lawfully, fairly, and in a transparent manner in relation to
individuals.” Within the context of HCI, Abdul et al. (2018) have proposed an agenda for
how HCI researchers can help to develop more accountable intelligent systems that don’t just
explain their algorithms but are also usable and useful to people.
It should be noted that the ethics principles are not mutually exclusive but interrelated
as described next.
Comment
To address privacy concerns, the company developed its system so that it does not identify
customers or link them to any sensitive information such as name, address, or date of birth. It
only recognizes faces and identifies patterns of behavior that potentially are worth investigat-
ing. The video footage is indexed and structured similar to how web pages are set up for quick
searching. This enables store detectives to be able to notice potential threats in real time.
Many people might find this form of data analysis creepy, knowing that their faces are being
matched to a database each time they enter a store. Others might find it more socially accept-
able because it has the potential to reduce crime considerably.
BOX 10.4
Data Ethics Principles (FATE)
Fairness Fairness refers to impartial and just treatment or behavior without favoritism or
discrimination. While this is something to which organizations adhere in areas such as pro-
motion and hiring, in the context of data analysis, it refers to how fair a dataset is and what
the impact will be from using the results. For example, sometimes, the dataset is biased
toward a particular demographic that results in unfair decisions being made. If these could
be identified and revealed by the system, it would make it possible to rectify them while also
developing new algorithms that can make the system fairer.
Accountability Accountability refers to whether an intelligent or automated system that uses
AI algorithms can explain its decisions in ways that enable people to believe they are accu-
rate and correct. This involves making clear how decisions are made from the datasets that
are used. A question that arises is who is accountable for doing this? Is it the person provid-
ing the data, the company coding the algorithms, or the organization that is deploying the
algorithms for its own purposes?
Transparency Transparency refers to the extent to which a system makes its decisions visible
and how they were derived (see Maurya, 2018). There has been much debate about whether
AI systems, which typically depend on large datasets when making a decision, should be
designed to be more transparent (see Brkan, 2017). Examples include medical decision-
making systems that can diagnose types of cancer and media service providers (for instance,
Netflix) that suggest new content for you to watch based on their machine learning algo-
rithms. Currently, many are black-box in nature; that is, they come up with solutions and
decisions without any explanation as to how they were derived. Many people think this
practice is unacceptable, especially as AI systems are given more responsibility to act on
behalf of society, for example, deciding who goes to prison, who gets a loan, who gets the
latest medical treatment, and so on. Some of the rules of the GDPR on automated decision-
making are also concerned with how to ensure the transparency of decisions made by
machine learning algorithms (Brkan, 2017).
Explainability Explainability refers to a growing expectation in HCI and AI that systems,
especially those that collect data and make decisions about people, provide explanations
that laypeople can understand. What counts as a good explanation has been the subject of
much research since expert systems came into being in the 1980s.
Following this early work, there was research into what context-aware systems should pro-
vide. For example, Brian Lim et al. (2009) conducted a study that provided different kinds
of explanations for a system that made automated decisions. They found that explanations
describing why a system behaved in a certain way resulted in a better understanding and
stronger feelings of trust. In contrast, explanations describing why the system did not
behave a certain way resulted in lower understanding. More recently, research has investi-
gated the kinds of explanations that are appropriate and helpful for users of automated
systems (see Binns et al., 2018).
The FATE framework suggests that the design of future systems, which use AI algorithms
in combination with personal or societal data, should ensure that they are fair, accountable,
transparent, and explainable. Achieving this goal is complex, and it involves being aware of
the potential for bias, discrimination in big data and algorithms, ethics in big data, legal and
policy implications, data privacy, and transparency (Abdul et al., 2018).
Achieving this objective is inevitably difficult. For example, as pointed out by Cynthia
Dwork at a panel on big data and transparency (transcribed by Maurya, 2018), it is difficult
to know what a good explanation of a decision might be for human beings. She illustrates
this with the example of what a system should say when a user asks, “Why was I turned down
for the loan?” The system might be able to reply, “There is a classifier, we feed your data into
it, and the outcome was that you were turned down.” However, that is of little help to a user,
and it is likely to be more annoying than not having any explanation.
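A more helpful "why" explanation can at least name the specific criteria that drove the decision. The sketch below uses hypothetical loan rules and thresholds purely for illustration; no real classifier works this simply:

```python
# Hypothetical decision rules for a loan classifier (illustrative only)
RULES = {"income": 30000, "credit_score": 650, "years_employed": 2}

def decide_with_explanation(applicant):
    """Return (approved, reasons): a 'why'-style explanation naming each
    threshold the applicant failed, rather than just the bare outcome."""
    failed = [f"{k} {applicant[k]} below required {v}"
              for k, v in RULES.items() if applicant[k] < v]
    return (len(failed) == 0, failed)

approved, reasons = decide_with_explanation(
    {"income": 25000, "credit_score": 700, "years_employed": 3})
```

Making such reasons interrogable, as suggested above, is what turns an explanation from a report into a negotiation.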
Reuben Binns et al. (2018) conducted an experiment to determine what kinds of
explanations users found to be fair, accountable, and transparent for an automated system. In
particular, they compared four different styles of explanation, ranging from being largely
numerical scores to more comprehensive ones that provided a breakdown of the statistics
used for certain demographic categories, including age, gender, income level, or occupation.
The different styles were presented for scenarios in which a decision had been made about
individuals automatically, such as applying for a personal financial loan and where passengers
on overbooked airline flights were selected for rerouting. The results of their experiment
showed that some of the participants engaged with the explanations to assess
the fairness of the decisions being made, but at times they found them impersonal and even
dehumanizing. What constitutes a fair explanation may need to be more than providing an
account of the processes used by the algorithms. From an interaction design perspective, it
might help if the explanations were interactive, enabling the user to interrogate and negotiate
with the system, especially if a decision that has been made is contrary to what they expected
or had hoped.
Jure Leskovec (2018) comments on how the consequences of a system making a decision
on behalf of a human can vary. This will determine whether an explanation is needed to
support a decision made by a system and what it should include. For example, if a decision
is made to pop up an ad for slippers in a user’s browser, based on an analysis of their tracked
online app usage (a common practice used in targeted advertising), it might be mildly annoy-
ing, but it is unlikely to upset them. However, if it means a person is going to be denied a
loan or a visa based on the outcome of an automated algorithm, it may have more dire con-
sequences for someone’s life, and they would want to know why the particular decision was
made. Jure suggests that humans and algorithms need to work together for system decisions
that implicate more important societal concerns.
Another reason why ethics and data have become a big concern is that automated sys-
tems that rely on existing datasets can sometimes make decisions that are erroneous or biased
toward a particular set of criteria. In doing so, they end up being unfair. An example that
caused a public outcry was the misidentification of people with dark skin. Traditional AI
systems have been found to have much higher error rates for this demographic. In particular,
35 percent of darker-skinned females were misidentified compared with 7 percent of lighter-
skinned females (Buolamwini and Gebru, 2018). This difference was exacerbated by the error
rate found for lighter-skinned males, which was less than one percent. One of the main rea-
sons for this large discrepancy in misidentification is thought to be due to the make-up of the
images in the datasets used. One widely used collection of images was estimated to have more
than 80 percent white images of which most were male.
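Auditing a classifier for this kind of disparity is straightforward to sketch. The example below is illustrative only (the records are made up, not data from Buolamwini and Gebru's study): it computes the misidentification rate per demographic group and reports the gap between the best- and worst-served groups.

```python
# Illustrative audit of per-group error rates. Each record pairs a
# demographic group label with whether the classifier was correct.
results = [
    ("darker_female", False), ("darker_female", False), ("darker_female", True),
    ("lighter_female", True), ("lighter_female", True), ("lighter_female", False),
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
]

def error_rates(records):
    """Misidentification rate per group: errors / total for that group."""
    totals, errors = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

rates = error_rates(results)
# The gap between worst- and best-served groups is one simple fairness signal.
disparity = max(rates.values()) - min(rates.values())
```

Even this crude check would have flagged the imbalance described above, which is why auditing datasets and models across demographic groups is now a routine recommendation.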
This bias is clearly unacceptable. The challenge facing companies that want to use or provide these data corpora is to develop fairer, more transparent, and accountable facial analysis algorithms that can classify people more accurately regardless of demographics, such as skin color or gender. A number of AI researchers have begun addressing this problem. Some have started developing 3D facial algorithms that continuously learn multiracial characteristics from 2D pictures. Others have introduced new face datasets that are more balanced (see Buolamwini and Gebru, 2018).

BOX 10.5
The Living Room of the Future: An Ethical Approach to Using Personal Data

There are now more than 250 smart cities projects throughout the world in nearly 200 cities. Each has different aspirations, but a major goal is to make cities more energy efficient, to make them safer, and to improve the quality of life. Most work in partnership with tech companies, central councils, and local communities to realize the benefits of new economic opportunities. IoT technologies and big data are often a central concern. One aspect has been to develop new approaches and toolkits that empower local individuals to measure and act upon what they want to find out about their city. For example, a citizen-sensing approach, adopted in the city of Bristol, brought together local citizens, community workers, and volunteers who together developed an innovative and affordable DIY sensing tool that people could use to record and collect data about the level of dampness in their homes (www.bristolapproach.org/).

In addition to the many smart cities projects, there are others that focus on how people adopt, accept, and approach new sensing technologies in a particular building or home. For example, the Living Room of the Future project is investigating how people will live in a future home that has been embedded with a range of IoT devices (https://www.bbc.co.uk/rd/projects/living-room-of-the-future). The project is researching how to make personal data transparent and trustworthy, while at the same time respecting people's privacy. It is also concerned with designing methods that can offer explicit awareness and transparency of those individuals' data.

A particular challenge that they are addressing is how to enable people to be in control of their own personal data while at the same time letting it be used by the home system to adapt their experiences, for example, choosing what media to play and from which device(s). The devices, like sensors and everyday RFID-enabled objects, have the potential to collect a diversity of personal data, including music preferences, history of where everyone is sitting, what they are doing, and how they are feeling. To ensure privacy, the data is collected and stored, not in the cloud but on a home-fenced data server called a databox. The box collates, curates, and mediates access to each person's data. It only uses verified and audited third-party services to ensure that it cannot be accessed by anyone else. As such, the data never leaves the living room, which ensures that the IoT services provided can be deemed trustworthy. It also provides a platform that allows personalization to occur without needing to ask the users if they are OK with any suggested changes.

The home of the future project is also investigating how data can be collected and how it is being used in the house, for example, to control home heating and lighting.

In-Depth Activity
Go to labinthewild.com, and select the test "What is your privacy profile?" This test has been designed to tell you what you think about data privacy and how your views compare with what others think about this topic. It should take about 10–15 minutes to complete. At the end of the test, it will provide you with your results and classify you in terms of whether you are not concerned, somewhat concerned, or very concerned.
1. Do you consider this to be an accurate reflection of how you view privacy?
2. Did you think the video shown was effective at raising potential problems of what data is collected in a smart building? If not, what other scenario could be used in a video to ask people to consider privacy concerns?
3. What impact do you think the context chosen for the scenario might have on your reactions? For example, if the scenario involved a doctor's surgery, might you have reacted differently, and if so, why?
4. What do you think of labinthewild.com as a platform for conducting large-scale online experiments with volunteers?
5. Did you find any other information on the website interesting?

Summary
This chapter described how data at scale involves bringing together large volumes of data from different sources, which are then analyzed to address new questions and provide insights that could not be gained by analyzing data from a single source. The chapter explained techniques and tools for collecting and analyzing large volumes of data. It also raised some concerns about how data at scale is used, particularly the need for personal data privacy. UX designers are encouraged to consider the impact of their designs on how data is used and to ensure that it is used ethically. Four core principles are advocated for ethical design: fairness, accountability, transparency, and explainability (FATE).
Further Reading
HANSEN, D., SHNEIDERMAN, B., SMITH, M. A., AND HIMELBOIM, I. (2019) Analyzing
Social Media Networks with NodeXL: Insights from a Connected World (2nd ed.). Morgan
Kaufmann. This book provides an introduction to social network analysis. It focuses on
NodeXL, but much of the discussion is helpful when using any social network analysis tool.
SCHRAEFEL, M. C., GOMER, R., ALAN, A., GERDING, E., AND MAPLE, C. (2017)
The Internet of Things: Interaction Challenges to Meaningful Consent at Scale. Interactions,
24, 6 (October 2017), 26–33. This short article discusses how HCI researchers can be
involved in helping users manage their privacy and personal data, especially in view of IoT.
SZAFIR, D. (2018) The Good, the Bad, and the Biases: Five Ways Visualizations Can Mislead
and How to Fix Them. Interactions, 25, 4. As the title suggests, this article discusses some of
the well-known problems and design flaws with visualizations and suggests ways to fix them.
SARIKAYA, A., CORRELL, M., BARTRAM, L., TORY, M., AND FISHER, D. (2018) What Do We
Talk About When We Talk About Dashboards? IEEE Trans. Vis. Comput. Graph. This paper
characterizes dashboards, and it reviews and critiques their design and how they are used.
SHILTON, K. (2018) Values and Ethics in Human-Computer Interaction. Foundations and
Trends in Human-Computer Interaction, Vol. 12, No. 2, 107–171. This article provides a
good overview of issues being debated in HCI about ethics, data, and HCI.
Key Points
• Data at scale concerns very large volumes of data, which is also known as big data.
• A defining feature of data at scale is that it includes different types of data collected from
different sources that are analyzed to address particular questions.
• Data at scale can be quantitative and qualitative; it includes social media messages,
sentiment and facial recognition data, documents, sensor data, sound and sonic data, and
video surveillance data.
• Analyzing data from different sources is powerful because it provides different perspectives
on people’s behavior.
• Analyzing data at scale can have positive outcomes, such as understanding people’s health
problems, but there are also dangers if personal data is revealed and then misused.
• Data at scale is collected and analyzed in many different ways including data scraping,
monitoring oneself and others, crowdsourcing, and sentiment and social network analysis.
• Data visualization provides tools and techniques for representing, understanding, and
exploring data.
• Ethical design principles suggest ways that UX designers can create designs and interaction
processes that make clear how data is being used.
• Ensuring that artificial intelligence systems are transparent is a particularly important eth-
ical design principle.
Chapter 11
D I S C O V E R I N G R E Q U I R E M E N T S
11.1 Introduction
11.2 What, How, and Why?
11.3 What Are Requirements?
11.4 Data Gathering for Requirements
11.5 Bringing Requirements to Life: Personas and Scenarios
11.6 Capturing Interaction with Use Cases
Objectives
The main goals of the chapter are to accomplish the following:
• Describe different kinds of requirements.
• Allow you to identify different kinds of requirements from a simple description.
• Explain additional data gathering techniques and how they may be used to discover
requirements.
• Enable you to develop a persona and a scenario from a simple description.
• Describe use cases as a way to capture interaction in detail.
11.1 Introduction
Discovering requirements focuses on exploring the problem space and defining what will
be developed. In the case of interaction design, this includes: understanding the target users
and their capabilities; how a new product might support users in their daily lives; users’
current tasks, goals, and contexts; constraints on the product’s performance; and so on.
This understanding forms the basis of the product’s requirements and underpins design and
construction.
It may seem artificial to distinguish between requirements, design, and evaluation activi-
ties because they are so closely related, especially in an iterative development cycle like the
one used for interaction design. In practice, they are all intertwined, with some design taking
place while requirements are being discovered and the design evolving through a series of
evaluation—redesign cycles. With short, iterative development cycles, it’s easy to confuse the
purpose of different activities. However, each of them has a different emphasis and specific
goals, and each of them is necessary to produce a quality product.
This chapter describes the requirements activity in more detail, and it introduces some
techniques specifically used to explore the problem space, define what to build, and charac-
terize the target audience.
11.2 What, How, and Why?
This section briefly considers what is the purpose of the requirements activity, how to capture
requirements, and why bother at all.
11.2.1 What Is the Purpose of the Requirements Activity?
The requirements activity sits in the first two phases of the double diamond of design, intro-
duced in Chapter 2, “The Process of Interaction Design.” These two phases involve explor-
ing the problem space to gain insights about the problem and establishing a description of
what will be developed. The techniques described in this chapter support these activities,
and they capture the outcomes in terms of requirements for the product plus any supporting
artifacts.
Requirements may be discovered through targeted activities, or tangentially during
product evaluation, prototyping, design, and construction. Along with the wider interaction
design lifecycle, requirements discovery is iterative, and the iterative cycles ensure that the les-
sons learned from any of these activities feed into each other. In practice, requirements evolve
and develop as the stakeholders interact with designs and learn what is possible and how
features can be used. And, as shown in the interaction design lifecycle model in Chapter 2,
the activity itself will be repeatedly revisited.
11.2.2 How to Capture Requirements Once They Are Discovered?
Requirements may be captured in several different forms. For some products, such as an
exercise monitoring app, it may be appropriate to capture requirements implicitly through
a prototype or operational product. For others, such as process control software in a fac-
tory, a more detailed understanding of the required behavior is needed before prototyping
or construction begins, and a structured or rigorous notation may be used to investigate the
product’s requirements. In all cases, capturing requirements explicitly is beneficial in order to
make sure that key requirements aren’t lost through the iterations. Interactive products span
a wide range of domains with differing constraints and user expectations. Although it may
be disappointing if a new app to alert shoppers about offers on their favorite purchases turns
out to be unusable or slightly inaccurate, if the same happens to an air traffic control system,
the consequences are far more significant and could threaten lives.
As we discuss in this section, there are different kinds of requirements, and each can
be emphasized or de-emphasized by different notations because notations emphasize differ-
ent characteristics. For example, requirements for a product that relies on processing large
amounts of data will be captured using a notation that emphasizes data characteristics. This
means that a range of representations is used including prototypes, stories, diagrams, and
photographs, as appropriate for the product under development.
11.2.3 Why Bother? Avoiding Miscommunication
One of the goals of interaction design is to produce usable products that support the way that
people communicate and interact in their everyday and working lives. Discovering and com-
municating requirements helps to advance this goal, because defining what needs to be built
supports technical developers and allows users to contribute more effectively. If the product
turns out to be unusable or inappropriate, then everyone will be disappointed.
User-centered design, with repeated iteration and evaluation along with user involvement,
mitigates against this happening. The following cartoon illustrates the consequences of
misunderstanding or miscommunication. The goal of an iterative user-centered approach is
to involve different perspectives and make sure that there is agreement. Miscommunication
is more likely if requirements are not clearly articulated.
11.3 What Are Requirements?
A requirement is a statement about an intended product that specifies what it is expected to
do or how it will perform. For example, a requirement for a smartwatch GPS app might be
that the time to load a map is less than half a second. Another, less precise requirement might
be for teenagers to find the smartwatch appealing. In the latter example, the requirements
activity would involve exploring in more detail exactly what would make such a watch
appealing to teenagers.
One of the goals of the requirements activity is to identify, clarify, and capture the require-
ments. The process of discovering requirements is iterative, allowing requirements and their
understanding to evolve. In addition to capturing the requirements themselves, this activity
also involves specifying criteria that can be used to show when the requirements have been
fulfilled. For example, usability and user experience criteria can be used in this way.
Requirements come in different forms and at different levels of abstraction. The exam-
ple requirement shown in Figure 11.1(a) is expressed using a generic requirements struc-
ture called an atomic requirements shell (Robertson and Robertson, 2013); Figure 11.1(b)
describes the shell and its fields. Note the inclusion of a “fit criterion,” which can be used
to assess when the solution meets the requirement, and also note the indications of “cus-
tomer satisfaction,” “dissatisfaction,” and “priority.” This shell indicates the information
about a requirement that needs to be identified in order to understand it. The shell is from
a range of resources, collectively called Volere (http://www.volere.co.uk), which is a generic
requirements framework. Although not specifically designed for interaction design, Volere
is widely used in many different domains and has been extended to include UX analytics
(Porter et al., 2014).
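The fields of an atomic requirements shell map naturally onto a structured record. The sketch below is a loose illustration only (the field set is simplified, and the requirement ID and example values are invented; see the Volere template for the full shell). The fit criterion is expressed as a testable check, using the smartwatch map-load requirement mentioned earlier.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AtomicRequirement:
    """Simplified, illustrative sketch of a Volere-style atomic
    requirements shell; the real shell has more fields."""
    req_id: int
    description: str
    rationale: str
    # Testable check that a measured solution meets the requirement:
    fit_criterion: Callable[[dict], bool]
    customer_satisfaction: int     # 1-5: happiness if implemented
    customer_dissatisfaction: int  # 1-5: unhappiness if absent
    priority: str = "medium"

# Hypothetical instance based on the smartwatch GPS app example,
# with the half-second map load time as its fit criterion.
map_load = AtomicRequirement(
    req_id=75,  # invented for illustration
    description="The app shall load a map of the current location.",
    rationale="Users need maps quickly while on the move.",
    fit_criterion=lambda measured: measured["load_time_s"] < 0.5,
    customer_satisfaction=4,
    customer_dissatisfaction=5,
    priority="high",
)
```

Making the fit criterion executable, rather than prose, is one way to show unambiguously when a requirement has been fulfilled: `map_load.fit_criterion({"load_time_s": 0.3})` returns `True`, while a measured load time of 0.8 seconds fails it.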
An alternative way to capture what a product is intended to do is via user stories. User
stories communicate requirements between team members. Each one represents a unit of
customer-visible functionality and serves as a starting point for a conversation to extend and
clarify requirements. User stories may also be used to capture usability and user experience
goals. Originally, user stories were normally written on physical cards that deliberately con-
strained the amount of information that could be captured in order to prompt conversations
between stakeholders. While these conversations are still highly valued, the use of digital sup-
port tools such as Jira (https://www.atlassian.com/software/jira) has meant that additional
information to elaborate the requirement is often stored with user stories. As an example, this
additional information might be detailed diagrams or screenshots.
A user story represents a small chunk of value that can be delivered during a sprint
(a short timebox of development activity, often about two weeks long), and a common and
simple structure for user stories is as follows:
• As a <type of user>, I want <some goal> so that <some reason>.
Example user stories for a travel organizer would fill in this structure with a role, goal,
and reason specific to that product.
User stories are most prevalent when using an agile approach to product development.
User stories form the basis of planning for a sprint and are the building blocks from which
the product is constructed. Once completed and ready for development, a story consists of
a description, an estimate of the time it will take to develop, and an acceptance test that
determines how to measure when the requirement has been fulfilled. It is common for a
user story such as the earlier ones to be decomposed further into smaller stories, often
called tasks.
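A completed story's three parts (description, estimate, and acceptance test) can be sketched as a small data structure. This is an illustrative model only, not any real tool's schema; the travel-organizer story and its task names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class UserStory:
    """A story ready for development: description, an estimate,
    and an acceptance test that defines 'done'."""
    role: str
    goal: str
    reason: str
    estimate_days: float
    acceptance_test: Callable[[dict], bool]
    tasks: List[str] = field(default_factory=list)  # smaller decomposed stories

    def describe(self) -> str:
        return f"As a {self.role}, I want {self.goal} so that {self.reason}."

story = UserStory(
    role="traveler",
    goal="to sort flights by price",
    reason="I can compare them within my budget",
    estimate_days=2.0,
    # Fulfilled when the returned flight prices come back in ascending order:
    acceptance_test=lambda result: result["flights"] == sorted(result["flights"]),
    tasks=["add sort control to results page", "sort results server-side"],
)

# The acceptance test determines when the requirement has been fulfilled:
done = story.acceptance_test({"flights": [120, 180, 240]})
```

Decomposing the story into its `tasks` list mirrors the way larger stories are broken down before being pulled into a sprint.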
Figure 11.1 (a) An example requirement expressed using an atomic requirements shell from Volere;
(b) the structure of an atomic requirements shell
Source: Atlantic Systems Guild
During the early stages of development, requirements may emerge in the form of epics.
An epic is a user story that may take weeks or months to implement. Epics will be broken
down into smaller chunks of effort (user stories) before they are pulled into a sprint.
Example epics for a travel organizer app would follow the same "As a ..., I want ... so
that ..." structure as user stories, but each would describe a goal too large to complete
in a single sprint.
11.3.1 Different Kinds of Requirements
Requirements come from several sources: from the user community, from the business com-
munity, or as a result of the technology to be applied. Two different kinds of requirements
have traditionally been identified: functional requirements, which describe what the prod-
uct will do, and nonfunctional requirements, which describe the characteristics (sometimes
called constraints) of the product. For example, a functional requirement for a new video
game might be that it will be challenging for a range of user abilities. This requirement
might then be decomposed into more specific requirements detailing the structure of chal-
lenges in the game, for instance, levels of mastery, hidden tips and tricks, magical objects,
and so on. A nonfunctional requirement for this same game might be that it can run on a
variety of platforms, such as the Microsoft Xbox, Sony PlayStation, and Nintendo Switch
game systems. Interaction design involves understanding both functional and nonfunctional
requirements.
There are many more different types of requirements, however. Suzanne and James
Robertson (2013) suggest a comprehensive categorized set of requirements types (see
Table 11.1), while Ellen Gottesdiener and Mary Gorman (2012) suggest seven product
dimensions (see Figure 11.2).
Project Drivers
1. The Purpose of the Product
2. The Stakeholders

Project Constraints
3. Mandated Constraints
4. Naming Conventions and Terminology
5. Relevant Facts and Assumptions

Functional Requirements
6. The Scope of the Work
7. Business Data Model and Data Dictionary
8. The Scope of the Product
9. Functional Requirements

Nonfunctional Requirements
10. Look and Feel Requirements
11. Usability and Humanity Requirements
12. Performance Requirements
13. Operational and Environmental Requirements
14. Maintainability and Support Requirements
15. Security Requirements
16. Cultural Requirements
17. Compliance Requirements

Project Issues
18. Open Issues
19. Off-the-Shelf Solutions
20. New Problems
21. Tasks
22. Migration to the New Product
23. Risks
24. Costs
25. User Documentation and Training
26. Waiting Room
27. Ideas for Solutions

Table 11.1 A comprehensive categorization of requirements types
Source: Atlantic Systems Guild, Volere Requirements Specification Template, Edition 18 (2017), http://www.volere.co.uk/template.htm
The 7 Product Dimensions:
• User: users interact with the product
• Interface: the product connects to users, systems, and devices
• Action: the product provides capabilities for users
• Data: the product includes a repository of data and useful information
• Control: the product enforces constraints
• Environment: the product conforms to physical properties and technology platforms
• Quality Attribute: the product has certain properties that qualify its operation and development

Figure 11.2 The seven product dimensions
Source: Gottesdiener and Gorman (2012), p. 58. Used courtesy of Ellen Gottesdiener
In this section, six of the most common types of requirements are discussed: functional,
data, environment, user, usability, and user experience.
Functional requirements capture what the product will do. For example, a functional
requirement for a robot working in a car assembly plant might be that it is able to place and
weld together the correct pieces of metal accurately. Understanding the functional require-
ments for an interactive product is fundamental.
Data requirements capture the type, volatility, size/amount, persistence, accuracy, and
value of the required data. All interactive products have to handle some data. For example,
if an application for buying and selling stocks and shares is being developed, then the data
must be up-to-date and accurate, and it is likely to change many times a day. In the personal
banking domain, data must be accurate and persist over many months and probably years,
and there will be plenty of it.
Environmental requirements, or context of use, refer to the circumstances in which the
interactive product will operate. Four aspects of the environment lead to different types of
requirements. First is the physical environment, such as how much lighting, noise, move-
ment, and dust is expected in the operational environment. Will users need to wear protec-
tive clothing, such as large gloves or headgear that might affect the choice of interface type?
How crowded is the environment? For example, an ATM operates in a very public physical
environment, thus using a speech interface is likely to be problematic.
The second aspect of the environment is the social environment. Issues regarding the social
aspects of interaction design, such as collaboration and coordination, were raised in Chapter 5,
“Social Interaction.” For example, will data need to be shared? If so, does the sharing have to be
synchronous (for instance, viewing the data at once) or asynchronous (for example, two people
authoring a report taking turns to edit it)? Other factors include the physical location of fellow
team members, such as collaborators communicating across great distances.
The third aspect is the organizational environment, for example, how good is user sup-
port likely to be, how easily can it be obtained, and are there facilities or resources for train-
ing, how efficient or stable is the communications infrastructure, and so on?
Finally, the technical environment will need to be established. For example, what tech-
nologies will the product run on or need to be compatible with, and what technological
limitations might be relevant?
User characteristics capture the key attributes of the intended user group, such as the
users’ abilities and skills, and depending on the product, also their educational background,
preferences, personal circumstances, physical or mental disabilities, and so on. In addition,
a user may be a novice, an expert, a casual user, or a frequent user. This affects the ways in
which interaction is designed. For example, a novice user may prefer step-by-step guidance.
An expert, on the other hand, may prefer a flexible interaction with more wide-ranging pow-
ers of control. The collection of characteristics for a typical user is called a user profile. Any
one product may have several different user profiles.
Usability goals and user experience goals are another kind of requirement, and they
should be captured together with appropriate measures. Chapter 2 briefly introduced
usability engineering, an approach in which specific measures for the usability goals of the
product are agreed upon early in the development process and are used to track progress as
development proceeds. This both ensures that usability is given due priority and facilitates
progress tracking. The same is true for user experience goals. Although it is harder to identify
quantifiable measures that track these qualities, an understanding of their importance is
needed during the requirements activity.

To see how the seven product dimensions can be used to discover requirements,
see https://www.youtube.com/watch?v=x9oIpZaXTDs.
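Agreeing on measures early means usability goals can be tracked like any other project metric. The sketch below is hypothetical (the goal names, targets, and measured values are invented): each goal pairs an agreed target with the latest measurement from a testing session, and overall progress is the fraction of goals currently met.

```python
# Hypothetical usability-engineering tracker: each goal has an agreed
# target and the most recent measurement from user testing.
goals = [
    {"name": "task completion rate", "target": 0.90, "measured": 0.85, "higher_is_better": True},
    {"name": "time to first purchase (s)", "target": 120, "measured": 95, "higher_is_better": False},
    {"name": "error rate per session", "target": 0.5, "measured": 0.7, "higher_is_better": False},
]

def goal_met(g):
    """A goal is met when the measurement is on the right side of its target."""
    if g["higher_is_better"]:
        return g["measured"] >= g["target"]
    return g["measured"] <= g["target"]

met = [g["name"] for g in goals if goal_met(g)]
progress = len(met) / len(goals)  # fraction of usability goals currently met
```

Re-running such a check after each round of evaluation is one concrete way to give usability goals the same visibility as functional progress.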
Different interactive products will be associated with different requirements. For example,
a telecare system designed to monitor an elderly person’s movements and alert relevant care
staff will be constrained by the type and size of sensors that can be easily worn by the users as
they go about their normal activities. Wearable interfaces need to be light, small, fashionable,
preferably hidden, and not get in the way. A desirable characteristic of both an online shopping
site and a robotic companion is that they are trustworthy, but this attribute leads to different
nonfunctional requirements—in the former, security of information would be a priority, while
in the latter behavioral norms would indicate trustworthiness. A key requirement in many sys-
tems nowadays is that they be secure, but one of the challenges is to provide security that does
not detract from the user experience. Box 11.1 introduces usable security.
BOX 11.1
Usable Security
Security is one requirement that most users and designers will agree is important, to some
degree or another, for most products. The wide range of security breaches, in particular of indi-
viduals’ private data, that have occurred in recent years has heightened everyone’s awareness
of the need to be secure. But what does this mean for interaction design, and how can security
measures be suitably robust, while not detracting from the user experience? As long ago as
1999, Anne Adams and Angela Sasse (1999) discussed the need to investigate the usability of
security mechanisms and to take a user-centered approach to security. This included informing
users about how to choose a secure password, but it also highlighted that ignoring a user-
centered perspective regarding security will result in users circumventing security mechanisms.
Many years later, usable security and the role of users in maintaining secure practices is
still being discussed. Users are now bombarded with advice about how to choose a password,
but most adults interact with so many systems and have to maintain a wide variety of login
details and passwords that this can be overwhelming. Instead of improving security, this can
lead to users developing coping strategies to manage their passwords, which may end up com-
promising rather than strengthening security. In their study, Elizabeth Stobert and Robert
Biddle (2018) identify a password lifecycle that shows how passwords are developed, reused,
adapted, discarded, and forgotten. Users are not necessarily ignoring password advice when
they create weak passwords or write them down, but instead they are carefully managing their
resources and expending more effort to protect the most valued accounts. Chapter 4,
“Cognitive Aspects,” highlighted issues around memory and passwords and the move toward
using biometrics instead of passwords. The need to identify usable security requirements,
however, will still exist even with biometrics.
ACTIVITY 11.1
Suggest some key requirements in each category (functional, data, environmental, user charac-
teristics, usability goals, and user experience goals) for each of the following situations:
1. An interactive product for navigating around a shopping center.
2. A wearable interactive product to measure glucose levels for an individual with diabetes.
Comment
You may come up with alternative suggestions. These are merely indicative answers.
1. Interactive product for navigating around a shopping center.
Functional The product will locate places in the shopping center and provide routes for
the user to reach their destination.
Data The product needs access to GPS location data for the user, maps of the shopping
center, and locations of all the places in the center. It also requires knowledge about the
terrain and pathways for people with different needs.
Environmental The product design needs to take into account several environ-
mental aspects. Users may be in a rush, or they may be more relaxed and wandering
about. The physical environment will be noisy and busy, and users may be talking
with friends and colleagues while using the product. Support or help with using the
product may not be readily available, but the user can probably ask a passerby for
directions if the app fails to work.
User Characteristics Potential users are members of the population who have their own
mobile device and for whom the center is accessible. This suggests quite a wide variety of
users with different abilities and skills, a range of educational backgrounds and personal
preferences, and different age groups.
Usability Goals The product needs to be easy to learn so that new users can use it
immediately, and it should be memorable for more frequent users. Users won’t want
to wait around for the product to display fancy maps or provide unnecessary detail,
so it needs to be efficient and safe to use; that is, it needs to be able to deal easily with
user errors.
User Experience Goals Of the user experience goals listed in Chapter 1, “What Is Inter-
action Design?” those most likely to be relevant here are satisfying, helpful, and enhanc-
ing sociability. While some of the other goals may be appropriate, it is not essential for
this product to, for example, be cognitively stimulating.
2. A wearable interactive product to measure glucose levels for an individual with diabetes.
Functional The product will be able to take small blood samples and measure glucose
readings from them.
Data The product will need to measure and display the glucose reading—but possibly
not store it permanently—and it may not need other data about the individual. These
questions would be explored during the requirements activity.
Environmental The physical environment could be anywhere the individual may be—at
home, in hospital, visiting the park, and so on. The product needs to be able to cope with
a wide range of conditions and situations and to be suitable for wearing.
11.4 Data Gathering for Requirements
Data gathering for requirements covers a wide spectrum of issues, including who are the
intended users, the activities in which they are currently engaged and their associated
goals, the context in which the activities are performed, and the rationale for the current
situation. The goals for the data gathering sessions will be to discover all of the types of
requirements relevant for the product. The three data gathering techniques introduced in
Chapter 8, “Data Gathering” (interviews, observation, and questionnaires), are commonly
used throughout the interaction design lifecycle. In addition to these techniques, several
other approaches are used to discover requirements.
For example, documentation, such as manuals, standards, or activity logs, are a good
source of data about prescribed steps involved in an activity, any regulations governing a task,
or where records of activity are already kept for audit or safety-related purposes. Studying
documentation can also be good for gaining background information, and it doesn’t involve
stakeholder time. Researching other products can also help identify requirements. For exam-
ple, Jens Bornschein and Gerhard Weber (2017) analyzed existing nonvisual drawing support
packages to identify requirements for a digital drawing tool for blind users. Xiangping Chen
et al. (2018) propose a recommender system for exploring existing app stores and extracting
common user interface features to identify requirements for new systems.
It is usual for more than one data gathering technique to be used in order to provide
different perspectives. Examples are observation to understand the context of the activity,
interviews to target specific user groups, questionnaires to reach a wider population, and
focus groups to build a consensus view. Many different combinations are used in practice,
and Box 11.2 includes some examples. Note that the example from Orit Shaer et al. (2012)
also illustrates the development of an interactive product for a specialist domain, where users
join the development team to help them understand the domain complexities.
ACTIVITY 11.1 Comment (continued)
User Characteristics Users could be of any age, nationality, ability, and so forth, and
may be novice or expert, depending on how long they have had diabetes. Most users will
move rapidly from being a novice to becoming a regular user.
Usability Goals The product needs to exhibit all of the usability goals. You wouldn’t
want a medical product being anything other than effective, efficient, safe, easy to learn
and remember how to use, and with good utility. For example, outputs from the product,
especially any warning signals and displays, must be clear and unambiguous.
User Experience Goals User experience goals that are relevant here include the device
being comfortable, while being aesthetically pleasing or enjoyable may help encourage
continued use of the product. Making the product surprising, provocative, or challenging
is to be avoided, however.
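The requirement categories used in Activity 11.1 (functional, data, environmental, user characteristics, usability goals, and user experience goals) can be kept in a lightweight structured form so that each requirement stays traceable to its category. The following is a minimal sketch, not a notation prescribed by the book; the class names and example entries are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    FUNCTIONAL = "functional"
    DATA = "data"
    ENVIRONMENTAL = "environmental"
    USER_CHARACTERISTICS = "user characteristics"
    USABILITY_GOAL = "usability goal"
    UX_GOAL = "user experience goal"

@dataclass
class Requirement:
    ident: str          # numbering requirements makes them easy to refer to later
    category: Category
    description: str

# Indicative entries for the shopping-center navigator from Activity 11.1
requirements = [
    Requirement("R1", Category.FUNCTIONAL,
                "Locate places in the center and provide routes to them."),
    Requirement("R2", Category.DATA,
                "Needs GPS position, center maps, and place locations."),
    Requirement("R3", Category.USABILITY_GOAL,
                "Easy to learn, so new users can use it immediately."),
]

# Group requirement identifiers by category for a quick overview
by_category = {}
for r in requirements:
    by_category.setdefault(r.category, []).append(r.ident)

print(by_category[Category.FUNCTIONAL])  # ['R1']
```

Keeping the category explicit makes it easy to check coverage: a product whose list contains no environmental or user-characteristic entries probably has gaps in its data gathering.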
BOX 11.2
Combining Data Gathering in Requirements Activities
The following describes some examples where different data gathering techniques have been
combined to progress requirements activities.
Direct Observation in the Field, Indirect Observation Through Log Files,
Interviews, Diaries, and Surveys
Victoria Hollis et al. (2017) performed a study to inform the design of reflective systems that
promote emotional well-being. Specifically, they wanted to explore the basis for future recom-
mendations to improve a person’s well-being and the effects of reflecting on negative versus
positive past events. They performed two direct observation studies in the field with 165 partici-
pants. In both studies, surveys were administered before and after the field study period to assess
emotional well-being, behaviors, and self-awareness. The first study also performed interviews.
In the first study (60 participants, over 3 weeks), they investigated the relationship between past
mood data, emotional profiles, and different types of recommendations to improve future well-
being. In the second study (105 participants, over 28 days), using a smartphone diary applica-
tion, they investigated the effects of reflection and analysis of past negative and positive events
on well-being. Together, these studies provided insights into requirements for systems to sup-
port the promotion of emotional well-being. Figure 11.3 shows the visualization displayed to
emotion-forecasting participants in week 3 of the first study. The leftmost two points in the line
graph indicate average mood ratings on previous days, and the center point is the average rating
for the immediate day. The two rightmost points indicate predicted mood for upcoming days.
Diaries and Interviews
Tero Jokela et al. (2015) studied how people currently combine multiple information devices
in their everyday lives to inform the design of future interfaces, technologies, and applications
that better support multidevice use. For the purpose of this study, an information device is
any device that can be used to create or consume digital information, including personal com-
puters, smartphones, tablets, televisions, game consoles, cameras, music players, navigation
devices, and smartwatches. They collected diaries over a one-week period and interviews from
14 participants. The study indicates that requirements for the technical environment needed
to improve the user experience of multiple devices, including being able to access any content
with any device and improved reliability and performance for cloud storage.
Interview, Think-Aloud Evaluation of Wireframe Mock-Up,
Questionnaire, and Evaluation of Working Prototype
Carole Chang et al. (2018) developed a memory aid application for traumatic brain injury
(TBI) sufferers. They initially conducted interviews with 21 participants to explore memory
impairments after TBI. From these, they identified common themes in the use of external
memory aids. They also learned that TBI sufferers do not want just another reminder system
but something that helps them to remember and hence can also train their memory, and that
their technology requirements were for something simple, customizable, and discreet.
(Continued)
(Annotations in Figure 11.3(a): Activity Plans, Original Prediction, Updated Prediction)
Figure 11.3 (a) The visualization shown to participants in the first field study and (b) the smartphone diary app for the second study
In Figure 11.3(b), the left panel shows the home screen: participants record a new experience by clicking the large plus sign (+) in the upper left. The center panel shows a completed event record, which consists of a header, a textual entry, an emotion rating, and an image. The right panel shows participant reflection: rating their current emotional reaction to the initial record and providing a new textual reappraisal.
Source: (a) Hollis et al. (2017), Figure 1. Used courtesy of Taylor & Francis and (b) Hollis et al. (2017),
Figure 9. Used courtesy of Taylor & Francis
11.4.1 Using Probes to Engage with Users
Probes come in many forms and are an imaginative approach to data gathering. They are
designed to prompt participants into action, specifically by interacting with the probe in
some way, so that the researchers can learn more about users and their contexts. Probes rely
on some form of logging to gather the data—either automatically in the case of technology
probes or manually in the case of diaries or design probes.
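Automatic logging in a technology probe can be as simple as appending timestamped interaction events to a file for later analysis. The following is a hypothetical sketch; the event names, fields, and file format are assumptions for illustration, not taken from any probe described in this chapter.

```python
import json
import os
import tempfile
import time

class ProbeLog:
    """Append-only log of timestamped probe events (one JSON object per line)."""

    def __init__(self, path):
        self.path = path

    def record(self, event, **details):
        # Each entry carries a timestamp, an event name, and free-form details
        entry = {"t": time.time(), "event": event, **details}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    def load(self):
        # Read the events back in the order they were recorded
        with open(self.path) as f:
            return [json.loads(line) for line in f]

# Hypothetical usage: a sensor probe noting movement events in the home
path = os.path.join(tempfile.mkdtemp(), "probe_events.jsonl")
log = ProbeLog(path)
log.record("movement_detected", room="kitchen")
log.record("alert_sent", channel="sms")
events = log.load()
print([e["event"] for e in events])  # ['movement_detected', 'alert_sent']
```

A log of this kind gives researchers the raw material for the interpretation work described later in the chapter: the events show *what* happened, and follow-up interviews or diaries are still needed to understand *why*.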
The idea of a probe was developed during the Presence Project (Gaver et al., 1999), which
was investigating novel interaction techniques to increase the presence of elderly people in
their local community. They wanted to avoid more traditional approaches, such as question-
naires, interviews, or ethnographic studies, and developed a technique called cultural probes.
These probes consisted of a wallet containing eight to ten postcards, about seven maps, a dis-
posable camera, a photo album, and a media diary. Recipients were asked to answer questions
associated with certain items in the wallet and then return them directly to the researchers.
For example, on a map of the world, they were asked to mark places where they had been.
Participants were also asked to use the camera to take photos of their home, what they were
wearing today, the first person they saw that day, something desirable, and something boring.
Inspired by this original cultural probe idea, different forms of probes have been adapted
and adopted for a range of purposes (Boehner et al., 2007). For example, design probes
are objects whose form relates specifically to a particular question and context. They are
intended to gently encourage users to engage with and answer the question in their own
context. Figure 11.4 illustrates a Top Trumps probe; the participant was given six cards and
asked to describe objects which were powerful to them and to rate the object’s powers using
numerical values out of 100 (Wallace et al., 2013).
BOX 11.2 (Continued)
Studying Documentation, Evaluating Other Systems, User Observations, and Group Interviews
Nicole Costa et al. (2017) describe their ethnographic study of the design team for the user
interface of a ship’s maneuvering system (called a conning display). The design team started by
studying accident and incident reports to identify requirements for things to avoid, such as mixing up the turn-rate meter with the rudder indicator. They used Nielsen's heuristics to evaluate other existing systems, looking specifically at how to represent the vessel on the display. Once a suitable set of requirements had been discovered, sketching, prototyping, and evaluating with the help of users were used to produce the final design.
Ethnographic Study, Interviews, Usability Tests, and User Participation
Orit Shaer et al. (2012) report on the design of a multitouch tabletop user interface for col-
laborative exploration of genomic data. In-depth interviews were conducted with 38 molecu-
lar and computational biologists to understand the current work practices, needs, and
workflow of small research groups. A small team of nine researchers investigating gene inter-
action in tuberculosis was studied for eight weeks using an ethnographic approach, and other
labs were also observed. Because the application area was specialized, the design team needed
to be comfortable with the domain concepts. To achieve this, biologists were integrated into
the development team, and other members of the design team regularly visited biology
research group partners, attended courses to teach them relevant domain concepts, and con-
ducted frequent usability tests with users.
Other types of probes include technology probes (Hutchinson et al., 2003) and
provocative probes (Sethu-Jones et al., 2017). Examples of technology probes include tool-
kits, such as the SenseBoard for developing IoT applications (Richards and Woodthorpe,
2009), mobile phone apps such as Pocketsong, a mobile music listening app (Kirk et al.,
2016), and M-Kulinda, a device that uses sensors to monitor movement that was deployed in
rural Kenya (Chidziwisano and Wyche, 2018). The last of these, M-Kulinda, worked with
participants' mobile phones to alert them to unexpected movement in their homes. By
doing this, the researchers hoped to provide insights into how sensor-based technology may
be used in rural households and to learn about domestic security measures in rural Kenya.
Provocative probes are technology probes designed to challenge existing norms and atti-
tudes in order to provoke discussion. For example, Dimitrios Raptis et al. (2017) designed a
provocation called “The Box” to challenge domestic laundry practices. The intention was to
learn about users’ laundry practices and also to provoke users across three dimensions: con-
ceptually, functionally, and aesthetically. Conceptual provocation challenged the assumption
that electricity is always available and that its source is not relevant. Functional provocation
was provided through an emergency override button that could be pressed if the electricity
had been cut off, but its size and color implied that doing so was somehow wrong. Aesthetic
provocation was achieved by designing a separate physical box, rather than a mobile phone
app, and by designing it to be bulky and utilitarian (see Figure 11.5). This kind of provoca-
tion was found to increase participants’ engagement.
Figure 11.4 Top Trumps probe
Source: Wallace et al. (2013), Figure 6. Reproduced with permission of ACM Publications
(Labels in Figure 11.5: (a) electricity status – 12-hour forecast, (b) savings account, (c) override button presses, (d) override button, and (e) electricity status at the moment)
Figure 11.5 The Box provocative probe
Source: Raptis et al. (2017). Reproduced with permission of ACM Publications
11.4.2 Contextual Inquiry
Contextual inquiry was originally developed in the 1990s (Holtzblatt and Jones, 1993) and
has been adapted over time to suit different technologies and the different ways in which
technology fits into daily life. Contextual inquiry is the core field research process for Con-
textual Design (Holtzblatt and Beyer, 2017), which is a user-centered design approach that
explicitly defines how to gather, interpret, and model data about how people live in order
to drive design ideation. However, contextual inquiry is also used on its own to discover
requirements. For example, Hyunyoung Kim et al. (2018) used contextual inquiry to learn
about unresolved usability problems related to devices for continuous parameter controls,
such as the knobs and sliders used by sound engineers or aircraft pilots. From their study,
they identified needs including fast interaction, precise interaction, eyes-free interaction, mobile interaction, and retro-compatibility (the need to use their existing expertise with interfaces).
One-on-one field interviews (called contextual interviews) are undertaken by every mem-
ber of the design team, each lasting about one-and-a-half to two hours. These interviews
focus on matters of daily life (work and home) that are relevant for the project scope. Con-
textual inquiry uses a model of master/apprentice to structure data gathering, based on the
idea that the interviewer (apprentice) is immersed in the world of the user (master), creating
an attitude of sharing and learning on either side. This approach shifts the perceived “power”
relationship that can exist in a more traditional interviewer–interviewee relationship. Users
talk as they “do,” and the apprentice learns by being part of the activity while also observ-
ing it, which has all of the advantages of observation and ethnography. Hidden and specific
details that people don’t make explicit, and don’t necessarily realize themselves, emerge this
way and can be shared and learned. While observing and learning, the apprentice focuses on
why, not just what.
Four principles guide the contextual interview, each of which defines an aspect of the
interaction and enhances the basic apprenticeship model. These principles are context, part-
nership, interpretation, and focus.
The context principle emphasizes the importance of going to the user, wherever they are,
and seeing what they do as they do it. The benefits of this are exposure to ongoing experience
instead of summary data, concrete details rather than abstract data, and experienced motives
rather than reports. The partnership principle creates a collaborative context in which the
user and interviewer can explore the user’s life together, on an equal footing. In a traditional
interview or workshop situation, the interviewer or workshop leader is in control, but in
contextual inquiry, the spirit of partnership means that understanding is developed together.
Interpretation turns the observations into a form that can be the basis of a design
hypothesis or idea. These interpretations are developed collaboratively by the user and the
design team member to make sure that they are sound. For example, imagine that during a
contextual interview for an exercise monitor, the user repeatedly checks the data, specifically
looking at the heart rate display. One interpretation of this is that the user is very worried
about their heart rate. Another interpretation is that the user is concerned that the device is
not measuring the heart rate effectively. Yet another interpretation might be that the device
has failed to upload data recently, and the user wants to make sure that the data is saved
regularly. The only way to make sure that the chosen interpretation is correct is to ask the
user and see their reaction. It may be that, in fact, they don’t realize that they are doing this
and that it has simply become a distracting habit.
The fourth principle, focus, is established to guide the interview setup and tells the inter-
viewer what they need to pay attention to among all of the detail that will be unearthed.
While the apprenticeship model means that the master (user) will choose what to share or
teach, it is also the apprentice’s responsibility to capture information relevant to the project.
In addition, the interviewer will have their own interests and perspectives, and this allows
different aspects of the activity to surface when all members of the team conduct interviews
around the project focus. This leads to a richer collection of data.
Together with the principles that shape how the session will run, the contextual interview
is also guided by a set of “cool concepts.” Cool concepts are an addition to the original con-
textual inquiry idea, and they are derived from a field study that investigated what it is about
technologies that users find “cool” (Holtzblatt and Beyer, 2017, p. 10). Seven cool concepts
emerged from this study, and they are divided into two groups: four concepts that enhance
the joy of life and three concepts that enhance the joy of use.
The joy of life concepts capture how products make our lives richer and more fulfilling.
These concepts are accomplish (empower users), connection (enhance real relationships),
identity (support users’ sense of self), and sensation (pleasurable moments).
The joy of use concepts describe the impact of using the product itself; they are direct
in action (provide fulfillment of intent), the hassle factor (remove all glitches and inconven-
iences), and the learning delta (reduce the time to learn). During a contextual interview, cool
concepts are identified as the user does their activity, although often they only emerge retro-
spectively when reflecting on the session.
The contextual interview has four parts: getting an overview, the transition, main inter-
view, and wrap-up. The first part can be performed like a traditional interview, introducing
each other and setting the context of the project. The second part is where the interaction
changes as both parties get to know each other, and the nature of the contextual interview
engagement is set up. The third part is the core data gathering session when the user contin-
ues with their activities and the interviewer observes them and learns. Finally, the wrap-up
involves sharing some of the patterns and observations the interviewer has made.
During the interview, data is collected in the form of notes and initial Contextual Design
models and perhaps audio and video recordings. Following each contextual interview, the
team holds an interpretation session that allows the whole team to talk about the user and
hence establish a shared understanding based on the data. During this session, specific con-
textual design models are also generated or consolidated. There are 10 models suggested
by Contextual Design, and the team can choose the most relevant for the project. Five of
these models are linked to the cool concepts: the day-in-the-life model (representing accom-
plishment), the relationship and collaboration models (representing connection), the identity
model, and the sensation board. Five others provide a complete view of the users’ tasks, but
they are used only for some projects: the flow model, the decision point model, the physi-
cal model, the sequence model, and the artifact model. The affinity diagram, described in
Chapter 2, is produced after several interpretation sessions have taken place. The contextual
design method follows this up with an immersion exercise called the Wall Walk, in which all
of the generated models are hung up on the walls of a large conference room for stakeholders
to read and suggest design ideas. For more detail about these models and how to generate
them, see Holtzblatt and Beyer (2017).
11.4.3 Brainstorming for Innovation
Requirements may emerge directly from the data gathered, but they may also involve inno-
vation. Brainstorming is a generic technique used to generate, refine, and develop ideas. It is
widely used in interaction design specifically for generating alternative designs or for suggest-
ing new and better ideas to support users.
Various rules have been suggested for making a brainstorming session successful, some
of which are listed next. In the context of the requirements activity, two key success factors
are that the participants know the users and that no ideas are criticized or debated. Other
suggestions for successful requirements brainstorming sessions are as follows (Robertson and
Robertson, 2013; Kelley with Littman, 2016):
1. Include participants from a wide range of disciplines with a broad range of experience.
2. Don’t ban silly stuff. Wild ideas often turn into really useful requirements.
3. Use catalysts for further inspiration. Build one idea on top of another, jump back to
an earlier idea, or consider alternative interpretations when energy levels start to flag.
If you get stuck, use a word pulled randomly from a dictionary to prompt ideas related
to the product.
4. Keep records. Capture every idea, without censoring. One suggestion is to number ideas
so that they can be referred to more easily at a later stage. Cover the walls and tables
in paper, and encourage participants to sketch, mind-map, and diagram ideas, keeping the flow of ideas visible; spatial memory is very strong, and this can facilitate recall. Sticky notes, each holding one idea, are useful for rearranging and grouping ideas.
ACTIVITY 11.2
How does contextual inquiry compare with the data gathering techniques introduced in
Chapter 8, specifically ethnography and interviews?
Comment
Ethnography involves observing a situation without any a priori structure or framework;
it may include other data gathering techniques such as interviews. Contextual inquiry also
involves observation with interviews, but it provides more structure, support, and guidance
in the form of the apprenticeship model, the principles to which one must adhere, the cool
concepts to look out for, and a set of models to shape and present the data. Contextual inquiry
also explicitly states that it is a team effort, and that all members of the design team conduct
contextual interviews.
Structured, unstructured, and semi-structured interviews were introduced in Chapter 8.
Contextual inquiry could be viewed as a form of unstructured interview with props, but it has
other characteristics as discussed earlier, which gives it added structure and focus. A contex-
tual inquiry interview requires the interviewee to be going about their daily activities, which
may also mean that the interview moves around—something very unusual for a standard
interview.
Suggestions for successful brainstorming (continued):
5. Sharpen the focus. Start the brainstorm with a well-honed problem. This will get the
brainstorm off to a good start, and it makes it easier to pull people back to the main topic
if the session wanders.
6. Use warm-up exercises and make the session fun. The group will require warming up if
they haven’t worked together before, most of the group doesn’t brainstorm regularly, or
the group is distracted by other pressures. Warm-up exercises might take the form of word
games or the exploration of physical items related or unrelated to the problem at hand,
such as the TechBox in Chapter 2.
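Rule 4's suggestion to number every idea so that it can be referred to later, and to group ideas afterward in the style of sticky-note sorting, can be supported by a very small tool. The sketch below is illustrative only; the ideas and theme names are made up.

```python
class IdeaLog:
    """Numbered, uncensored record of brainstormed ideas."""

    def __init__(self):
        self.ideas = []          # position in the list + 1 is the idea's number

    def add(self, text):
        # Capture every idea without censoring, and hand back its number
        self.ideas.append(text)
        return len(self.ideas)

    def group(self, assignments):
        """Turn {theme: [idea numbers]} into {theme: [idea texts]}."""
        return {theme: [self.ideas[n - 1] for n in numbers]
                for theme, numbers in assignments.items()}

log = IdeaLog()
n1 = log.add("Voice-guided routes through the center")
n2 = log.add("Let friends share their live location")
n3 = log.add("Haptic nudges at each turn")

# Grouping after the session, like rearranging sticky notes into clusters
themes = log.group({"navigation": [n1, n3], "social": [n2]})
print(themes["navigation"])
```

Numbering first and grouping later preserves the "don't ban silly stuff" rule: every idea enters the log before any judgment about where (or whether) it fits.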
11.5 Bringing Requirements to Life: Personas
and Scenarios
Using a format such as those shown in Figure 11.1 or user stories captures the essence of a
requirement, but neither of them is sufficient on their own to express and communicate the
product’s purpose and vision. Both can be augmented with prototypes, working systems,
screenshots, conversations, acceptance criteria, diagrams, documentation, and so on. Which
of these augmentations is required, and how much, will be determined by the kind of system
under development. In some cases, capturing different aspects of the intended product in
more formal or structured representations is appropriate. For example, when developing
safety-critical devices, the functionality, user interface, and interaction of the system need
to be specified unambiguously and precisely. Sapna Jaidka et al. (2017) suggest using the Z
formal notation (a mathematically based specification language) and petri nets (a notation
for modeling distributed systems based on directed graphs) to model the interaction behavior
of medical infusion pumps. Harold Thimbleby (2015) points out that using a formal expres-
sion of requirements for number entry user interfaces such as calculators, spreadsheets, and
medical devices could avoid bugs and inconsistencies.
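Thimbleby's point about number-entry interfaces can be made concrete: a strict parser should reject an ill-formed keystroke sequence (for example, one containing two decimal points) rather than silently guess at a value. The rules below are a simplified illustration of that idea, not Thimbleby's actual specification.

```python
import re

# Accept only digits with at most one decimal point, e.g. "12", "0.5", "12.75".
# Deliberately strict: entries such as ".5", "1.2.5", or "" are rejected.
NUMBER_RE = re.compile(r"^\d+(\.\d+)?$")

def parse_dose(keys: str) -> float:
    """Parse a number-entry string strictly; raise instead of guessing."""
    if not NUMBER_RE.fullmatch(keys):
        raise ValueError(f"ill-formed number entry: {keys!r}")
    return float(keys)

print(parse_dose("12.75"))   # 12.75

for bad in ["1.2.5", "12..5", ".5", ""]:
    try:
        parse_dose(bad)
    except ValueError:
        pass  # rejected, as a safety-critical interface should do
```

The design choice here is to fail loudly: in a medical device, a rejected entry that the user must retype is far safer than a misread dose.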
Two techniques that are commonly used to augment the basic requirements information
and to bring requirements to life are personas and scenarios. Often used together, they com-
plement each other in order to bring realistic detail that allows the developer to explore the
user’s current activities, future use of new products, and futuristic visions of new technolo-
gies. They can also guide development throughout the product lifecycle.
11.5.1 Personas
Personas (Cooper, 1999) are rich descriptions of typical users of the product under develop-
ment on which the designers can focus and for which they can design products. They don’t
describe specific people, but rather they are realistic, and not idealized. Any one persona
represents a synthesis of a number of real users who have been involved in data gathering,
and it is based on a set of user profiles. Each persona is characterized by a unique set of goals
relating to the particular product under development, rather than a job description or a sim-
ple demographic. This is because goals often differ among people within the same job role or
the same demographic.
In addition to their goals, a persona will include a description of the user’s behav-
ior, attitudes, activities, and environment. These items are all specified in some detail. For
instance, instead of describing someone simply as a competent sailor, the persona includes
that they have completed a Day Skipper qualification, have more than 100 hours of sailing
experience in and around European waters, and get irritated by other sailors who don’t fol-
low the navigation rules. Each persona has a name, often a photograph, and some personal
details such as what they do as a hobby. It is the addition of precise, credible details that
helps designers to see the personas as real potential users, and hence as people for whom
they can design.
A product will usually require a small set of personas rather than just one. It may be
helpful to choose a few primary personas, or maybe only one, to represent a large section
of the intended user group. Personas are used widely, and they have proved to be a powerful
way to communicate user characteristics and goals to designers and developers.
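Because a persona is characterized by goals plus behavior, attitudes, activities, and environment, teams sometimes keep their personas in a shared structured format so that every persona covers the same fields. The sketch below is a hypothetical format, not one the book prescribes; the field names are assumptions, and the details echo the sailor example above.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    goals: list                               # goals relating to the product,
                                              # not a job description
    behaviors: list = field(default_factory=list)
    attitudes: list = field(default_factory=list)
    environment: str = ""
    quote: str = ""                           # a line in the persona's own voice

# Hypothetical persona echoing the competent-sailor example
sailor = Persona(
    name="Bill",
    goals=["Plan coastal passages quickly",
           "Stay clear of navigation-rule violators"],
    behaviors=["Completed a Day Skipper qualification",
               "100+ hours of sailing in and around European waters"],
    attitudes=["Irritated by sailors who don't follow the navigation rules"],
    environment="Small-boat cockpit; often wet, with bright sunlight on the screen",
    quote="If the chart app needs three taps, that's two too many.",
)
print(sailor.name, "-", sailor.goals[0])
```

A fixed structure like this also makes gaps visible: a persona with goals but no behaviors or environment is a reminder that more precise, credible detail is still needed.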
A good persona helps the designer understand whether a particular design decision will
help or hinder their users. To this end, a persona has two goals (Caddick and Cable, 2011):
• To help the designer make design decisions
• To remind the team that real people will be using the product
A good persona is one that supports the kind of reasoning that says, “What would Bill
(persona 1) do in this situation with the product?” and “How would Clara (persona 2)
respond if the product behaved this way?” But good personas can be challenging to develop.
The kind of information they include needs to be pertinent to the product being developed.
For example, personas for a shared travel organizer would focus on travel-related behav-
ior and attitudes rather than the newspapers the personas read or where they buy their
clothes. On the other hand, personas for a shopping center navigation system might consider
these aspects.
Steven Kerr et al. (2014) conducted a series of studies to identify user needs and goals in
the kitchen as a way to improve the design of technology to assist with cooking. They con-
ducted observations and interviews with three members each of three user groups, identified
through background research: beginners, older experts, and families (specifically parents).
The interview focused on topics such as cooking experience, meal planning, and grocery
shopping. Two researchers attended each home visit. Notes, video, audio, and photographs
were used to capture the data, including a think-aloud session during some of the activities.
Personas were developed following both inductive and deductive analysis (see Chapter 9,
“Data Analysis, Interpretation, and Presentation”), looking for patterns in the data and com-
monalities that could be grouped into one persona. Three primary and three secondary per-
sonas were developed from the data. Figure 11.6 shows one primary (beginner) persona
and one secondary (older expert) persona that they derived from this work. Note that the
two types (primary and secondary) in this case have different formats, with Ben being more
detailed than Olive.
They also conducted a survey online to validate the personas and to create a list of
requirements for new technology to support cooking.
The style of personas varies widely, but commonly they include a name and photograph,
plus key goals, user quotes, behaviors, and some background information. The examples in
Figure 11.7(a) illustrate the persona format developed and used by the company in Box 11.3,
which has been tailored for their purposes. Figure 11.7(b) shows the user journey associated
with their personas.
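The common persona ingredients just listed (name and photograph, key goals, user quotes, behaviors, and background) can be thought of as a simple structured record. Here is a minimal, hypothetical sketch in Python; the field names and example values are illustrative only, not a format prescribed by the book:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch: one way to record the common persona ingredients.
@dataclass
class Persona:
    name: str
    photo: str                       # path or URL to a representative photo
    goals: List[str] = field(default_factory=list)
    quotes: List[str] = field(default_factory=list)
    behaviors: List[str] = field(default_factory=list)
    background: str = ""

# Example values loosely based on Ben, the beginner cook persona.
ben = Persona(
    name="Ben",
    photo="ben.jpg",
    goals=["cook simple dishes with few steps"],
    quotes=["I have not cooked for a while."],
    behaviors=["searches online for recipes with appealing pictures"],
    background="Beginner cook living in Singapore",
)
print(ben.name, len(ben.goals))
```

Keeping personas in a structured form like this makes it easy to tailor the format to a team's needs, as the company in Box 11.3 did, while ensuring every persona covers the same ingredients.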
11.5 Bringing Requirements to Life: Personas and Scenarios
Figure 11.6 (a) One primary (beginner) persona and (b) one secondary (older expert) persona for
cooking in Singapore
Source: Kerr et al. (2014). Used courtesy of Elsevier
BOX 11.3
Persona-Driven Development in London
Caplin Systems is based in the City of London and provides a framework that enables investment banks to build or enhance their single-dealer offering (a platform that integrates services and information for trading in capital markets) quickly, or to create such a platform for the first time.
The company was drawn to use personas to increase the customer focus of their prod-
ucts by better understanding for whom they were developing their system. Personas were
seen as a way to provide a unified view of their users and to start building more customer-
focused products.
The first step was to run a workshop for the whole company, introduce personas, show how
other companies were using them, and have employees experience the benefits of using personas
firsthand through some simple team exercises. The following proposition was then put forward:
Should we adopt personas and persona-driven development?
The response was a resounding "yes!" This was a good thing to do. Gaining this "buy-in" was fundamentally important in ensuring that everyone was behind the use of personas and committed to the change.
Everyone got excited, and work began to define the way forward. Further workshops were run to refine the first persona, though in hindsight the Caplin team believes that too much time was spent trying to get the first persona perfect. Now they are much more agile about persona creation.
Eighteen months after the persona breakthrough workshop, the main persona for Caplin
Trader, Jack, and his “pain points” were the focus of development, design decisions, and team
discussions. Ongoing persona development focused on end users of the software built with
Caplin’s technology, and Narrative Journey Maps captured their interactions and helped to
define goals/motivations and pain points (see Figure 11.7b).
11 Discovering Requirements
ACTIVITY 11.3
Develop two personas for a group travel organizer app that supports a group of people,
perhaps a family, who are exploring vacation possibilities together. Use the common persona
structure of a photo, name, plus key goals, user quotes, behaviors, and some background
information. Personas are based on real people, so choose friends or relatives that you know
well to construct them.
These can be drawn by hand, or they can be developed in PowerPoint, for example. There
are also several tailorable persona templates available on the Internet that can be used instead.
Comment
The personas shown in Figure 11.8 were developed for a father and his daughter using templates from https://xtensio.com/templates/.
Figure 11.7 (a) Example personas, (b) the narrative journey maps—sad faces show pain
points for the persona
Source: Caplin Systems
Figure 11.8 Two personas for the group travel organizer
11.5.2 Scenarios
A scenario is an “informal narrative description” (Carroll, 2000). It describes human
activities or tasks in a story that allows exploration and discussion of contexts, needs,
and requirements. It does not necessarily describe the use of software or other techno-
logical support used to achieve a goal. Using the vocabulary and phrasing of users means
that scenarios can be understood by stakeholders, and they are able to participate fully in
development.
Imagine that you have been asked to investigate how a design team working on a
large building project shares information. This kind of team includes several roles, such
as an architect, mechanical engineer, client, quantity surveyor, and electrical engineer. On
arrival, you are greeted by Daniel, the architect, who starts by saying something like the
following:
Every member of the design team needs to understand the overall purpose, but we
each take a different perspective on the design decisions that have to be made. For
example, the quantity surveyor will keep an eye on how much things cost, the mechan-
ical engineer will want to make sure that the design accounts for ventilation systems,
and so on. When the architect presents a design concept, such as a spiral staircase,
each of us will view that concept from our own discipline and assess whether it will
work as envisioned in the given location. This means that we need to share informa-
tion about the project goals, the reason for decisions, and the overall budget, as well
as drawing on our own discipline expertise to advise the client on options and conse-
quences.
Telling stories is a natural way for people to explain what they are doing, and stakehold-
ers can easily relate to them. The focus of such stories is also naturally likely to be about
what the users are trying to achieve, that is, their goals. Understanding why people do things
as they do and what they are trying to achieve in the process focuses the study on human
activity rather than interaction with technology. Starting with current behavior allows the
designer to identify the stakeholders and artifacts involved in an activity. Repeated reference
to a particular app, drawing, behavior, or location indicates that it is somehow central to
the activity being performed and that it deserves close attention to uncover the role it plays.
Understanding current behavior also allows the designer to explore the constraints, contexts,
irritations, facilitators, and so on, under which people operate. Steven Kerr et al. (2014), who
devised the cooking persona Ben introduced in the previous section, also produced a scenario
for Ben, which is shown in Figure 11.9. Reflect on the persona and read the scenario along-
side to see how the two complement each other to give a fuller picture of the activities related
to cooking in which a user such as Ben would engage.
This article by Jared Spool explains why personas on their own are not enough
and why scenarios also need to be developed:
https://medium.com/user-interface-22/when-it-comes-to-personas-the-real-value-is-in-the-scenarios-4405722dd55c
The previous scenario describes existing behavior, but scenarios can also be used to
describe behavior with a potential new technology. For example, a scenario that might
be generated by potential users of a new navigation app for a large shopping center is
given here:
Charlie wants to take his elderly mother, Freia, to his favorite home products store,
ComfortAtHome. He knows that the store has moved within the shopping center, but
he doesn’t know where. He also needs to find a route that is suitable for his mother
who uses a walker but doesn’t like elevators. He opens the navigation app on his
smartphone and enters the name of the store in the search feature. Two different
branches of the store are listed, and Charlie asks for directions to the one nearest to
their current location. A map of the shopping center is displayed, showing their current
location, the location of the nearest store, and the suggested route. This route, however,
includes a series of steps that are unsuitable for his mother. So, he asks for an alterna-
tive route that uses only ramps, which the app displays. They set off, following the new
route provided.
Ben is out with friends, catching up over a meal. A friend asks if he has ever made the dish
they are eating. This reminds Ben that he has not cooked for a while.
Later in the week, Ben sees a TV programme on cooking and again he is reminded that he has not cooked for a while. Thus he decides to cook later that week. Ben goes online and googles "mee soto". He looks through the various sites for pictures that not only look good, but also have few steps and a short cooking time. He spends some time browsing through the sites for other ideas to use in the future.
Next day at work, he remembers to go to the supermarket on his way home. As he is only
getting a few ingredients (for a simple recipe), he just remembers what he needs to get. He
buys ingredients in whatever size is readily available and is not too concerned about the
freshness. Whilst in the shop, he also makes some spontaneous purchases.
On the day he is cooking, Ben checks out a YouTube video on preparing chicken pieces.
When he gets home he prints out a recipe and asks his mum for some last-minute advice. He had asked her a similar question before, but it was so long ago that he has forgotten her advice. His mum looks at the recipe, suggests some alterations to it, and writes notes on the recipe.
Ben places the recipe near the stove and follows it closely in terms of steps (one process at a time, no advance preparation), though he haphazardly gauges the amount of ingredients to put in. He tries to gauge one portion's worth of ingredients (the recipe is for three people), using only some of the chicken and putting the rest back in the fridge (he thinks his mum might use it later). He estimates the time the mee soto needs for cooking, tasting it when he thinks it is ready. He is initially worried about hot oil splashing on his face, so he is hesitant when handling the hot pan.
When it is ready, he serves the dish and finds that he has made a bit too much. Depending on how much is left over, he will either put it in the fridge for later or it will simply be wasted.
Afterwards he uploads pictures of the dish to Facebook. He is greatly encouraged when people 'like' his link or leave a comment, and it makes him feel good.
Figure 11.9 Scenario developed for Ben, the Beginner persona, for cooking in Singapore
Source: Kerr et al. (2014), Table 1. Used courtesy of Elsevier
Note the following in this limited scenario: the provision of a search facility to find the store's location (but what if the user doesn't know the store name or wants to locate all home products shops?), the facility to display a map, and the importance of different options for the
navigation routes to accommodate different users. These are all indicators of potential design
choices for the new system. The scenario also describes one use of the app—to find a route
to the nearest branch of a specific store.
During the requirements activity, scenarios emphasize the context, the usability and user
experience goals, and the activities in which the user is engaged. Scenarios are often generated
during workshop, interview, or brainstorming sessions to help explain or discuss some aspect
of the user’s goals. They capture only one perspective, perhaps a single use of the product, or
one example of how a goal might be achieved.
The following scenario for the group travel organizer introduced in Activity 11.3 describes
how one function of the system might work—to identify potential vacation options. This
scenario incorporates information about the two personas shown in Figure 11.8. This is the
kind of story that you might glean from a requirements interview:
The Thomson family enjoys outdoor activities and wants to try their hand at sailing
this year. There are four family members: Sky (8 years old), Eamonn (15), Claire (32),
and Will (35).
One evening after dinner, they decide to start exploring the possibilities. They want to
discuss the options together, but Claire has to visit her elderly mother so she will be
joining the conversation from her mother’s house down the road. As a starting point,
Will raises an idea they had been discussing over dinner—a sailing trip for four novices
in the Mediterranean.
The system allows users to log in from different locations using different devices so that
all members of the family can interact easily and comfortably with it wherever they are.
The system’s initial suggestion is a flotilla, where several crews (with various levels of
experience) sail together on separate boats.
Sky and Eamonn aren’t very happy at the idea of going on vacation with a group of
other people, even though the Thomsons would have their own boat. The travel organizer
shows them descriptions of flotilla experiences from other children their ages, and they
are all very positive, so eventually, everyone agrees to explore flotilla opportunities.
Will confirms this recommendation and asks for detailed options. As it’s getting late, he
asks for the details to be saved so that everyone can consider them tomorrow. The travel
organizer emails them a summary of the different options available.
Developing this type of scenario, which focuses on how a new product may be used,
helps to uncover implicit assumptions, expectations, and situations in which the users might
find themselves, such as the need to plan travel when participants are situated in different
locations. These in turn translate into requirements, in this case an environment requirement, which may be expressed in user story form, for example:

As a family member, I want to be able to join the vacation discussion from a different location so that I can take part in planning wherever I am.
Futuristic scenarios describe an envisioned situation in the future, perhaps with a new
technology and new world view. Different kinds of future visions were discussed in Chapter 3,
“Conceptualizing Interaction,” and one approach that is an extension of the scenario idea
and that can be used to discover requirements is design fiction (see Box 11.4).
BOX 11.4
Design Fiction
Design fiction is a way to communicate a vision about the world in which a future technol-
ogy is situated. It has become popular in interaction design as a way to explore envisioned
technologies and their uses without having to grapple with pragmatic challenges. In a fictional
world, ethics, emotions, and context can be explored in detail and in depth without worrying
about concrete constraints or implementations. The term was first coined by Bruce Sterling in
2005, and its use has gradually increased as different ways of using it have emerged.
For example, Richmond Wong et al. (2017) took a design fiction approach to engage in
issues of privacy and surveillance around futuristic sensing technologies. Their design fictions
were inspired by a near-future science-fiction novel, The Circle by Dave Eggers (2013). Their
design fictions are visual, and they take the form of a workbook containing conceptual designs.
They draw on three technologies in the novel, such as SeeChange, a small camera about the size
of a lollipop that wirelessly records and broadcasts live video. They also include a new proto-
typed technology designed to detect a user’s breathing pattern and heart rate (Adib et al., 2015).
The design fictions go through three rounds. The first round adapted the technologies
from the novel, for example, by adding concrete interfaces. As there were no photos in The
Circle, the authors were able to design interfaces for these technologies based only on the
textual descriptions. The second round considered privacy concerns and placed the technolo-
gies in an extended world from the novel. The third round considered privacy concerns in
situations that went beyond the novel and the designs they had created up to that point
in time. They suggested that their design fictions could help broaden the design space for
people designing sensing technologies or be used as interview probes in further research. They
also reflect that an existing fictional world is a good starting point from which to develop
design fictions and this helps to explore futures that might otherwise go unnoticed.
Other examples of design fiction include Eric Baumer et al.’s (2018) consideration of
how design fiction can support the exploration of ethics and Steve North’s (2017) approach
to embrace the perspective of a nonhuman user—in this case a horse.
What’s the difference between scenarios and design fiction? Mark Blythe (2017) uses the
“basic plots” of literature to suggest that scenarios employ the plot of “overcoming the
monster,” where the monster is some problem to be solved, while design fiction more fre-
quently takes the form of a “voyage and return” or a “quest.”
ACTIVITY 11.4
This activity illustrates how a scenario of an existing activity can help identify requirements
for a future application to support the same user goal.
Write a scenario of how you would go about choosing a new hybrid car. This should be a new car, not a secondhand one. Having written it, think about the important aspects of the task, and about your priorities and preferences. Then imagine a new interactive product that supports this goal and takes account of these issues. Write a futuristic scenario showing how this product would support you.
(Continued)
Comment
The following example is a fairly generic view of this process. Your scenario will be different,
but you may have identified similar concerns and priorities.
The first thing I would do is to observe cars on the road, specifically hybrid
ones, and identify those whose looks I find appealing. This may take several
weeks. I would also try to identify any consumer reports that include an
assessment of hybrid cars’ performance. Hopefully, these initial activities will
result in identifying a likely car for me to buy.
The next stage will be to visit a car showroom and see at first hand what the
car looks like and how comfortable it is to sit in. If I still feel positive about
the car, then I'll ask for a test drive. Even a short test drive helps me to understand how well the car handles, whether the engine is noisy, how smooth the gear changes are, and so on. Once I've driven the car myself, I can usually tell whether
I would like to own it or not.
From this scenario, it seems that there are broadly two stages involved in the task:
researching the different cars available and gaining firsthand experience of potential pur-
chases. In the former, observing cars on the road and getting expert evaluations of them are
highlighted. In the latter, the test drive has quite a lot of significance.
For many people who are in the process of buying a new car, the smell and touch of the
car’s exterior and interior and the driving experience itself are the most influential factors in
choosing a particular model. Other attributes such as fuel consumption, interior roominess,
colors available, and price may rule out certain makes and models, but at the end of the day,
cars are often chosen according to how easy they are to handle and how comfortable they are inside. This makes the test drive a vital part of the process of choosing a new car.
Taking these comments into account, we’ve come up with the following scenario describ-
ing how an innovative “one-stop shop” for new cars might operate. This product makes use
of immersive virtual reality technology that is already in use by other applications, such as
designing buildings and training bomb disposal experts.
I want to buy a hybrid car, so I go down the street to the local “one-stop car
shop.” The shop has a number of booths in it, and when I enter, I’m directed to
an empty booth. Inside, there’s a large seat that reminds me of a racing car
seat, and in front of that there’s a large display screen.
As I sit down, the display screen jumps to life. It offers me the option of browsing
through video clips of new cars that have been released in the last two years, or
of searching through video clips of cars by make, model, or year. I can choose as
many of these as I like. I also have the option of searching through consumer
reports that have been produced about the cars in which I’m interested.
I spend about an hour looking through materials and deciding that I’d like to
experience the up-to-date models of a couple of vehicles that look promising.
Of course, I can go away and come back later, but I’d like to have a go right
now at some of the cars I’ve found. By flicking a switch in my armrest, I can
call up the options for virtual reality simulations for any of the cars in which
I'm interested. These are really great, as they allow me to take the car for a test drive, simulating everything about the driving experience in this car—from road holding to windshield display and foot pedal pressure to dashboard layout. It even re-creates the atmosphere of being inside the car.

Note that the product includes support for the two research activities mentioned in the original scenario, as well as the important test-drive facility. This would be only a first-cut scenario, which would then be refined through discussion and further investigation.

Scenarios may be constructed as textual descriptions, like those shown previously, but they can also use audio or video. For example, Alen Keirnan et al. (2015) used animated scenarios to present early user research findings about wearable emergency alarms for older people. They found that the animated scenarios helped participants to describe problems associated with the use of emergency alarm technology and to discuss and evaluate key emotions and themes (see Figure 11.10 and the following links to videos).
Figure 11.10 Screen captures of the animated scenarios used to explore early user research insights
regarding emergency alarm technology for elderly people: I forgot, Dress Code, and Cow Bell
Source: Keirnan et al. (2015), Figure 1. Reproduced with permission of ACM Publications
Here are links to three animated scenarios: “I forgot,” “Dresscode,” and
“Cowbell.”
BOX 11.5
Scenarios and Personas
Writing personas and scenarios can be difficult at first, leading to a set of narratives that
conflate details of the person with details of the scenario. A scenario describes one use of a
product or one example of achieving a goal, while a persona characterizes a typical user of the
product. Figure 11.11 captures this difference graphically.
This image also shows that the scenarios and personas are tightly linked. Each scenario
represents a single experience of using the product from the perspective of one persona. Note
that this figure introduces the notion of a scenario goal. Thinking about the persona’s goal for
the scenario helps to scope the scenario to one use of the product.
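The one-persona-to-many-scenarios relationship shown in Figure 11.11 can also be sketched as data. This is a hypothetical illustration; the class names and example strings are invented, not taken from the figure:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Persona:
    name: str                # a typical user of the product

@dataclass
class Scenario:
    persona: Persona         # told from the perspective of one persona
    goal: str                # the scenario goal scopes it to one use
    narrative: str           # the informal story itself

ben = Persona(name="Ben")
scenarios: List[Scenario] = [
    Scenario(ben, "cook a simple dish", "Ben finds a recipe online and ..."),
    Scenario(ben, "buy the ingredients", "On his way home from work, Ben ..."),
]
# One persona can anchor many scenarios; each scenario has exactly one
# persona and exactly one scenario goal.
print(len(scenarios))
```

Structuring them this way makes the distinction concrete: persona details live in one place, and each scenario adds only a goal and a narrative.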
Figure 11.11 The relationship between a scenario and its associated persona
Source: http://www.smashingmagazine.com/2014/08/06/a-closer-look-at-personas-part-1/. Repro-
duced with permission of Smashing Magazine
11.6 Capturing Interaction with Use Cases
Use cases focus on functional requirements and capture interaction. Because they focus
on the interaction between a user and the product, they may be used in design to think about
the new interaction being designed, but they may also be used to capture requirements—to
think through details about what the user needs to see, to know about, or to react to. Use
cases define a specific process because they are a step-by-step description. This is in contrast
to a user story, which focuses on outcomes and user goals. Nonetheless, capturing the detail
of this interaction in terms of steps is useful as a way to enhance the basic requirement state-
ment. The style of use cases varies. Two styles are shown in this section.
The first style focuses on the division of tasks between the product and the user. For
example, Figure 11.12 illustrates an example of this kind of use case, focusing on the visa
requirements element of the group travel application. Note that nothing is said about how
the user and product might interact, but instead it focuses on user intentions and product
responsibilities. For example, the second user intention simply states that the user supplies
the required information, which could be achieved in a variety of ways including scanning a
passport, accessing a database of personal information based on fingerprint recognition, and
so on. This style of use case is known as an essential use case; the technique was developed by Constantine and Lockwood (1999).
The second style of use cases is more detailed, and it captures the user’s goal when inter-
acting with the product. In this technique, the main use case describes the normal course, that
is, the set of actions most commonly performed. Other possible sequences, called alternative
courses, are then captured at the bottom of the use case. A use case for retrieving the visa
requirements for the group travel organizer with the normal course being that information
about the visa requirements is available, might be as follows:
1. The product asks for the name of the destination country.
2. The user provides the country’s name.
3. The product checks that the country is valid.
4. The product asks the user for their nationality.
5. The user provides their nationality.
retrieveVisa

USER INTENTION                                SYSTEM RESPONSIBILITY
find visa requirements
                                              request destination and nationality
supply required information
                                              obtain appropriate visa information
obtain a personal copy of visa information
                                              offer information in different formats
choose suitable format
                                              provide information in chosen format

Figure 11.12 An essential use case, "retrieveVisa," that focuses on how the task is split between the product and the user
6. The product checks the visa requirements of that country for a passport holder of the
user’s nationality.
7. The product provides the visa requirements.
8. The product asks whether the user wants to share the visa requirements on social media.
9. The user provides appropriate social media information.
Alternative Courses
4. If the country name is invalid:
4.1 The product provides an error message.
4.2 The product returns to step 1.
6. If the nationality is invalid:
6.1 The product provides an error message.
6.2 The product returns to step 4.
7. If no information about visa requirements is found:
7.1 The product provides a suitable message.
7.2 The product returns to step 1.
Note that the number associated with the alternative course indicates the step in the nor-
mal course that is replaced by this action or set of actions. Also note how specific the use case
is about how the user and the product will interact compared with the first style.
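The normal and alternative courses above can be read as a single control flow. The following Python sketch models them; the lookup tables, helper names, and return convention are invented for illustration, and steps 8 and 9 (sharing on social media) are omitted for brevity:

```python
# Hypothetical data standing in for the product's country and visa databases.
VALID_COUNTRIES = {"France", "Japan"}
VALID_NATIONALITIES = {"UK", "US"}
VISA_RULES = {("France", "US"): "No visa required for stays under 90 days"}

def retrieve_visa(destination, nationality):
    """Walk the normal course; divert to an alternative course on failure."""
    # Steps 1-3: obtain the destination country and check that it is valid.
    if destination not in VALID_COUNTRIES:
        # Alternative course 4: error message, then return to step 1.
        return ("error", "Unknown destination country")
    # Steps 4-5: obtain the user's nationality.
    if nationality not in VALID_NATIONALITIES:
        # Alternative course 6: error message, then return to step 4.
        return ("error", "Unknown nationality")
    # Step 6: check the visa requirements for this pairing.
    requirements = VISA_RULES.get((destination, nationality))
    if requirements is None:
        # Alternative course 7: no information found, return to step 1.
        return ("error", "No visa information found")
    # Step 7: provide the visa requirements.
    return ("ok", requirements)
```

Writing the flow out like this makes the correspondence explicit: each `if` branch is an alternative course that replaces a numbered step in the normal course.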
In-Depth Activity
This activity is the first of five assignments that together go through the complete development
lifecycle for an interactive product.
The goal is to design and evaluate an interactive product for booking tickets for events
such as concerts, music festivals, plays, and sporting events. Most venues and events have
booking websites or apps already, and there are many ticket agencies that also provide reduced-price tickets and exclusive options, so there are plenty of existing products to research first. Carry
out the following activities to discover requirements for this product:
1. Identify and capture some user requirements for this product. This could be done in a
number of ways. For example, observing friends or family using ticket agents, thinking
about your own experience of purchasing tickets, studying websites for booking tickets,
interviewing friends and family about their experiences, and so on.
2. Based on the information you glean about potential users, choose two different user pro-
files and produce one persona and one main scenario for each, capturing how the user is
expected to interact with the product.
3. Using the data gathered in part 1 and your subsequent analysis, identify different kinds of
requirements for the product, according to the headings introduced in section 11.3. Write
up the requirements using a format similar to the atomic requirements shell shown in
Figure 11.1 or in the style of user stories.
Further Reading
HOLTZBLATT, K. and BEYER, H. (2017) Contextual Design: Design for Life (2nd edn). Morgan Kaufmann. This book provides a comprehensive treatment of contextual design—design for life, cool concepts, and all of the models, techniques, principles, and underpinnings for the complete contextual design method.
COHN, M. (2004) User Stories Applied. Addison-Wesley. This is a practical guide to writing
good user stories.
PRUITT, J. and ADLIN, T. (2006) The Persona Lifecycle: Keeping people in mind throughout
product design. Morgan Kaufmann. This book explains how to use personas in practice—
how to integrate them into a product lifecycle, stories from the field, and bright ideas, as
well as many example personas. It also includes five guest chapters that place personas in the
context of other product design concerns. See also Adlin and Pruitt (2010).
ROBERTSON, S. and ROBERTSON, J. (2013) Mastering the Requirements Process (3rd edn).
Pearson Education. In this book, Suzanne Robertson and James Robertson provide a very
practical and useful framework for software requirements work.
Summary
This chapter examined the requirements activity in greater detail. The data gathering tech-
niques introduced in Chapter 8 are used here in various combinations in the requirements activ-
ity. In addition, contextual inquiry, studying documentation, and researching similar products
are commonly used techniques. Personas and scenarios help to bring data and requirements
to life, and in combination they can be used to explore the user experience and product func-
tionality. Use cases and essential use cases are helpful techniques for documenting the findings
from data gathering sessions.
Key Points
• A requirement is a statement about an intended product that specifies what it is expected
to do or how it will perform.
• Articulating requirements and defining what needs to be built avoids miscommunication, supports technical developers, and allows users to contribute more effectively.
• There are different kinds of requirements: functional, data, environmental (context of use),
user characteristics, usability goals, and user experience goals.
• Scenarios provide a story-based narrative to explore existing behavior, potential use of new
products under development, and futuristic visions of technology use.
• Personas capture characteristics of users that are relevant to the product under develop-
ment, synthesized from data gathering sessions.
• Scenarios and personas together can be used throughout the product lifecycle.
• Use cases capture details about an existing or imagined interaction between users and
the product.
INTERVIEW with
Ellen Gottesdiener
Ellen is an Agile Product Coach and CEO of
EBG Consulting focused on helping prod-
uct and development communities produce
valuable outcomes through product agility.
She is the author of three acclaimed books
on product discovery and requirements in-
cluding Discover to Deliver: Agile Product
Planning and Analysis. Ellen is a frequent
speaker and works with clients globally.
She is Producer of Boston’s Agile Product
Open and Director of Agile Alliance’s Agile
Product Management initiative. Visit these
websites for resources: www.ebgconsulting.com and www.DiscoverToDeliver.com.
What are requirements?
Product requirements are needs that must
be satisfied to achieve a goal, solve a prob-
lem, or take advantage of an opportunity.
The word “requirement” literally means
something that is absolutely, positively,
without question, necessary. Product
requirements need to be defined in suf-
ficient detail for planning and develop-
ment. But before going to that effort and
expense, are you sure they are not only
must-haves but also the right and relevant
requirements?
To arrive at this level of certainty, stake-
holders ideally start by exploring the prod-
uct’s options. An option represents a po-
tential characteristic, facet, or quality of
the product. Stakeholders, who I like to
refer to as product partners, use expansive
thinking to surface a range of options that
could fulfill the vision. Then they collab-
oratively analyze the options and collec-
tively select options, based on value.
Every product has multiple dimensions,
seven in fact. Discovering options for each
of the seven product dimensions yields a
comprehensive, realistic view of the prod-
uct (see Figure 11.2).
You want to engage diverse stakeholders
across the full product lifecycle, from birth
to retirement and demise.
So how do you know who the stakeholders
are?
Successful teams work hand in hand with
their stakeholders as product partners,
defining value and then actively discover-
ing—and delivering—high-value solutions.
This goes beyond feature requests and
requirements documents—beyond user
stories and product backlogs—beyond
the push-pull of competing interests. It’s a
partnership where the ideas, perspectives,
and experiences of three different stake-
holder groups converge. The result? Prod-
uct partners who collaborate to discover
and deliver value.
A product partnership includes people
from three realms: customer, business, and
technology. Each offers a unique perspective
and has its own ideas of what is valuable.
The customer partners represent users,
buyers, and advisers—people or systems
that interface with the product, choose
to buy it, or influence others to buy it.
They tend to value improved productivity,
heightened efficiency, greater speed, enter-
tainment, and similar benefits.
Business partners represent the people in
your organization who authorize, champi-
on, or support the product or who provide
subject-matter expertise. They find value
in improving market position, complying
with regulations, achieving a business case,
reducing overhead costs, enhancing inter-
nal performance, and so on.
Technology partners (your delivery
team, internal or third parties) design,
deliver, test, and support the product or
advise those who do. They may value
building a high-quality product, offering
smooth, continual delivery, adopting a sta-
ble architecture, and the like.
This mix of partners and perspectives is
essential, no matter what kind of delivery
method you adopt (agile, traditional, hy-
brid, or another approach). For the part-
nership to work, these three disparate
groups must collaborate to reach their
shared goal: discover and deliver value.
How do you go about identifying
requirements?
Requirements discovery is highly pro-
active, interactive, and, well, sometimes
hyperactive! You are engaged in eliciting,
analyzing, specifying, prototyping, and
testing. And in the best practices we’ve
been involved in, you are constantly dis-
covering (aka identifying) product needs.
It’s not a “one-and-done” activity.
Elicitation includes interviews, existing
documentation study, exploratory proto-
types, facilitated workshops, focus groups,
observation (including apprenticing, con-
textual inquiry, and ethnography), surveys
(and other research-based techniques),
and user task analysis (including story-
boarding and scenario analysis). There
are a number of specific techniques within
each of these general categories, and some
techniques overlap. Analyzing involves
using lightweight models, often combined
with specifications, which are often in the
form of acceptance tests or prototypes or
both.
It’s not enough to get the right people
together and ask the right questions. To
communicate efficiently and effectively
about how to deliver, product partners
need a focused way to communicate and
make decisions together.
What we’ve found in our work is that
the most efficient and effective discovery
mechanism is a collaborative approach
called the “structured conversation.” In a
structured conversation, the product part-
ners first explore possible requirements
(options) for their next increment. They do
this within and across the seven product
dimensions. This enables product partners
to collaboratively and creatively explore
a range of possibilities. This expansive
thinking opens up product innovation, ex-
perimentation, and mutual learning.
They then evaluate these many options
in terms of value. This means having
shared understanding of what value real-
ly means at that point in time. Once they
have narrowed the list of options through
the evaluation process, they confirm how
they will verify and validate these can-
didate solutions with unambiguous ac-
ceptance criteria. The validation includes
how to test that they delivered the right
requirements, and that they achieved the
anticipated value from each delivery.
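The structured conversation described here has three moves: explore options, evaluate them against a shared view of value, and confirm the survivors with acceptance criteria. The sketch below pictures that flow in Python; the options, scoring function, and threshold are all invented for illustration and are not Gottesdiener's notation:

```python
def structured_conversation(options, value_of, threshold):
    """Explore -> evaluate -> confirm, as a toy pipeline.
    options: candidate requirements surfaced while exploring the product dimensions
    value_of: the partners' shared scoring function at this point in time
    threshold: minimum value an option needs to enter the next increment"""
    evaluated = [(opt, value_of(opt)) for opt in options]       # evaluate by value
    selected = [opt for opt, v in evaluated if v >= threshold]  # narrow the list
    # Confirm: each surviving option still needs unambiguous acceptance criteria.
    return {opt: "acceptance criteria to be agreed" for opt in selected}

# Invented example: three candidate options scored by the product partners
scores = {"export report": 8, "dark mode": 3, "audit log": 9}
plan = structured_conversation(list(scores), value_of=scores.get, threshold=5)
print(sorted(plan))  # ['audit log', 'export report']
```

The point of the sketch is the ordering: expansive exploration first, a value judgment second, and verification criteria only for what survives.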
How do you know when you have
collected enough requirements to go on to
the next step?
I often get asked by clients how I know
when I have a complete set of requirements.
I think it’s more important to ask whether
you are going after the right requirements.
I characterize a “right requirement” as
one that is:
1. Just in time, just enough. It is essential for
achieving the business objectives in this time
period.
2. Realistic. It is capable of being delivered
with the available resources.
3. Clearly and unambiguously defined.
Acceptance criteria exist that all partners
understand and will use to verify and
validate the product.
4. Valuable. It is indispensable for achieving
the anticipated outcomes for the next deliv-
ery cycle.
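These four characteristics work well as a checklist that product partners can apply to each candidate requirement. A small sketch, with invented names, of how such a check might be encoded:

```python
# The four characteristics of a "right requirement", as flags a team can assess.
CRITERIA = (
    "just_in_time",  # essential for this period's business objectives
    "realistic",     # deliverable with the available resources
    "unambiguous",   # acceptance criteria exist that all partners understand
    "valuable",      # indispensable for the next delivery cycle's outcomes
)

def failing_criteria(assessment):
    """Return the criteria a candidate requirement does not yet satisfy."""
    return [c for c in CRITERIA if not assessment.get(c, False)]

# Invented example: an option that is still ambiguously defined
option = {"just_in_time": True, "realistic": True,
          "unambiguous": False, "valuable": True}
print(failing_criteria(option))  # ['unambiguous']
```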
What’s the hardest thing about establish-
ing requirements?
People. Seriously. We humans are nonlinear
creatures. We are unpredictable, fickle, and
(as adults) often inflexible. As requirements
seekers, we swim in a stew of complex,
ever-evolving human systems that interop-
erate as we do our requirements work.
To top that off, most products’ require-
ments are fraught with complexity and
interdependency; there are truly wicked
problems, whereby the problem space
overlaps with solution space. As Frederick
Brooks said [in his essay No Silver Bullet],
“the hardest single part of building a soft-
ware system is deciding precisely what to
build.”
You can’t make those decisions without
trust. And trust is not an easy thing to build.
Do you have any other tips for establishing
requirements?
Employ small, tightly wound cycles of
requirements-build-release. Use iterative
and incremental (aka agile) practices to get
feedback early and often on the smallest
viable releases.
For successful requirements discovery,
you need to keep the focus on value—the
why behind the product and the value
considerations of the product partners.
During discovery work, some people view
a specific option as a “requirement” for
the next delivery cycle, whereas others
consider it a “wish list” item for a future
release.
Such was the case in a recent release
planning workshop. The team wrestled
with a particular option, questioning
if it could deliver enough value to jus-
tify the cost to develop it. The product
champion explained why the option was
a requirement—without it the organiza-
tion was at risk for regulatory noncom-
pliance. Once the others understood this,
they all agreed it would be included in
the release.
In the end, requirements work is human-
centric and central to successful product de-
livery. At the same time, the subject matter
and content of product requirements is
complex. Thus, requirements work is the
hardest part of software and will always be.
To be successful with requirements, engi-
neer collaboration into requirements work.
Personally, I’m excited and grateful for
the growing recognition of the value of
collaboration and the explosion of inter-
est in collaborative practices in the product
and software development community—
because collaboration works!
Chapter 12
Design, Prototyping, and Construction
12.1 Introduction
12.2 Prototyping
12.3 Conceptual Design
12.4 Concrete Design
12.5 Generating Prototypes
12.6 Construction
Objectives
The main goals of this chapter are to accomplish the following:
• Describe prototyping and the different types of prototyping activities.
• Enable you to produce simple prototypes from the models developed during the
requirements activity.
• Enable you to produce a conceptual model for a product and justify your choices.
• Explain the use of scenarios and prototypes in design.
• Introduce both physical computing kits and software development kits and their role
in construction.
12.1 Introduction
Design, prototyping, and construction fall within the Develop phase of the double diamond
of design, introduced in Chapter 2, “The Process of Interaction Design,” in which solutions
or concepts are created, prototyped, tested, and iterated. The final product emerges iteratively
through repeated design-evaluation-redesign cycles involving users, and prototypes facilitate
this process. There are two aspects to design: the conceptual part, which focuses on the
idea of a product, and the concrete aspect, which focuses on the details of the design. The
former involves developing a conceptual model that captures what the product will do and
how it will behave, while the latter is concerned with the details of the design, such as menu
types, haptic feedback, physical widgets, and graphics. The two are intertwined: concrete
design issues require some consideration in order to prototype ideas, and prototyping
ideas leads to an evolution of the concept.
For users to evaluate the design of an interactive product effectively, designers prototype
their ideas. In the early stages of development, these prototypes may be made of paper and
cardboard, or ready-made components pulled together to allow evaluation, while as the
design progresses, they become more polished, tailored, and robust so that they resemble
the final product.
This chapter presents the activities involved in progressing a set of requirements through
the cycles of prototyping and construction. The next section explains the role and techniques
of prototyping and then explores how prototypes may be used in the design process. The
chapter ends by discussing physical computing and software development kits (SDKs), which
provide a basis for construction.
12.2 Prototyping
It is often said that users can’t tell you what they want, but when they see something and get
to use it, they soon know what they don’t want. Prototyping provides a concrete manifesta-
tion of an idea—whether it is a new product or a modification of an existing one—which
allows designers to communicate their ideas and users to try them out.
12.2.1 What Is a Prototype?
A prototype is one manifestation of a design that allows stakeholders to interact with it and
to explore its suitability. It is limited in that a prototype will usually emphasize one set of
product characteristics and de-emphasize others (see Box 12.1). Prototypes take many forms,
for example, a scale model of a building or a bridge, or a piece of software that crashes every
few minutes. A prototype can also be a paper-based outline of a display, a collection of wires
and ready-made components, a digital picture, a video simulation, a complex piece of soft-
ware and hardware, or a three-dimensional mockup of a workstation.
In fact, a prototype can be anything from a paper-based storyboard to a complex piece of
software, and from a cardboard mockup to a molded or pressed piece of metal. For example,
when the idea for the PalmPilot (a line of palmtop computers introduced in 1996) was being
developed, Jeff Hawkins (founder of the company) carved a piece of wood about the size
and shape of the device he had imagined (see Figure 12.1).
Jeff Hawkins used to carry this piece of wood around with him and pretend to enter
information into it, just to see what it would be like to own such a device (Bergman and
Haitani, 2000). This is an example of a simple (some might even say bizarre) prototype, but
it served its purpose of simulating scenarios of use. Advances in 3D printer technologies,
coupled with reduced prices, have increased their use in design. It is now common practice
to take a 3D model from a software package and print a prototype. Soft toys, chocolate,
dresses, and whole houses may be “printed” in this way (see Figure 12.2 and the follow-
ing links).
Figure 12.1 The PalmPilot wooden prototype
Source: https://www.computerhistory.org/revolution/mobile-computing/18/321/1648. © Mark Richards
Figure 12.2 Examples of 3D printing (a) model jet engine, (b) Spider Dress 2.0 by Anouk Wipprecht:
embedded with sensors, the arms of the ‘spider’ will extend to defend the wearer if her breath
becomes heavier, and (c) a teddy bear “printed” from a wireframe design
Source: (a) https://www.thingiverse.com/thing:392115. Licensed under CC-BY-3.0, (b) http://www.arch2o.com,
and (c) Used courtesy of Scott Hudson
12.2.2 Why Prototype?
Prototypes are useful when discussing or evaluating ideas with stakeholders; they are a com-
munication device among team members and an effective way for designers to explore design
ideas. The activity of building prototypes encourages reflection in design, as described by
Donald Schön (1983), and it is recognized by designers from many disciplines as an impor-
tant aspect of design.
Prototypes answer questions and support designers in choosing between alternatives.
Hence, they serve a variety of purposes, for example, to test the technical feasibility of an
idea, to clarify some vague requirements, to do some user testing and evaluation, or to check
that a certain design direction is compatible with the rest of product development. The pur-
pose of a prototype will influence the kind of prototype that is appropriate to build. So, for
example, to clarify how users might perform a set of tasks and whether the proposed design
would support them in doing this, a paper-based mockup might be produced. Figure 12.3
shows a paper-based prototype of a handheld device to help an autistic child communicate.
This prototype shows the intended functions and buttons, their positioning and labeling,
and the overall shape of the device, but none of the buttons actually works. This kind of
prototype is sufficient to investigate scenarios of use and to decide, for example, whether
the button images and labels are appropriate and the functions sufficient, but not to test
whether the speech is loud enough or the response fast enough. Another example is the
development of Halo, a new air quality monitor that can detect 10 different allergens and
connects to an air purifier that will remove them (see the following reference). The design of
Halo used a range of prototypes including many sketches (paper-based and electronic) and
working prototypes.
Dan Saffer (2010) distinguishes between a product prototype and a service prototype,
where the latter involves role-playing and people as an integral part of the prototype
as well as the product itself. Service prototypes are sometimes captured as video sce-
narios and used in a similar way to the scenarios introduced in Chapter 11, “Discovering
Requirements.”
A video that shows how soft interactive objects can be printed is available at
https://www.youtube.com/watch?v=8jErWRddFYs.
To see how 3D printing is facilitating fashion and interactive wearables, see the
following articles:
https://interestingengineering.com/high-fashion-meets-3d-printing-9-3d-printed-dresses-for-the-future
https://medium.com/@scientiffic/designing-interactive-3d-printed-things-with-tinkercad-circuit-assemblies-518ee516adb6
[Figure 12.3 annotations: the device, about 8 inches by 4 inches, is labeled "Communication
Mate" and shows example photosymbol buttons (BISCUIT, CAKE, DINNER, DRINK, TOILET) and a
battery indicator. Communication keys are sensitive touch-panel buttons; on being triggered,
a recorded message related to that key is output from the speaker. In addition, symbols and
photos familiar to the user can be used on the keypads to make the device immediately usable
for some individuals. The battery indicator shows the amount of battery left before
recharging is required. A ring attachment enables the device to hang from a person's
trousers or belt in a similar way to a key ring. An amplified speaker provides excellent
output. The durable case's tough plastic exterior protects the device if dropped, and the
rubberized outer casing lessens the impact of shocks; the exterior is also lightweight,
making the design ideal for use in virtually any environment.]
Figure 12.3 A paper-based prototype of a handheld device to support an autistic child
Source: Used courtesy of Sigil Khwaja
To see more about the development of the Wynd Halo and Wynd Home Purifier,
see this website:
https://www.kickstarter.com/projects/882633450/wynd-halo-home-purifier-keep-your-homes-air-health?ref=discovery
12.2.3 Low-Fidelity Prototyping
A low-fidelity prototype does not look very much like the final product, nor does it provide
the same functionality. For example, it may use very different materials, such as paper and
cardboard rather than electronic screens and metal, it may perform only a limited set of func-
tions, or it may only represent the functions and not perform any of them. The lump of wood
used to prototype the PalmPilot described earlier is a low-fidelity prototype.
Low-fidelity prototypes are useful because they tend to be simple, cheap, and quick to
produce. This also means that they are simple, cheap, and quick to modify so that they sup-
port the exploration of alternative designs and ideas. This is particularly important in the
early stages of development, during conceptual design for example, because prototypes that
are used for exploring ideas should be flexible and encourage exploration and modification.
Low-fidelity prototypes are not meant to be kept and integrated into the final product.
Low-fidelity prototyping has other uses, for example, in education. Seokbin Kang et al.
(2018) use low-fidelity prototyping to help children represent creative ideas when designing
and experimenting with complex systems. Their system, called Rainbow, is composed of a
collection of low-fidelity materials such as paper, scissors, and markers that can be used to
create prototypes, a top-down camera that can recognize the prototype, and a monitor
to display augmented reality visualizations.
Storyboarding
Storyboarding is one example of low-fidelity prototyping that is often used in conjunction
with scenarios, as described in Chapter 11, “Discovering Requirements.” A storyboard con-
sists of a series of sketches showing how a user might progress through a task using the prod-
uct under development. It can be a series of screen sketches or a series of scenes showing how a
user can perform a task using an interactive device. When used in conjunction with a scenario,
the storyboard provides more detail and offers stakeholders a chance to role-play with a pro-
totype, interacting with it by stepping through the scenario. The example storyboard shown
in Figure 12.4 depicts a person (Christina) using a new mobile device for exploring historical
Figure 12.4 An example storyboard for a mobile device to explore ancient sites such as The Acropolis
sites. This example shows the context of use for this device and how it might support Christina
in her quest for information about the pottery trade at The Acropolis in Ancient Greece.
Sketching
Low-fidelity prototyping often relies on hand-drawn sketches. Many people find it dif-
ficult to engage in sketching because they are inhibited by the quality of their drawing.
As Saul Greenberg et al. (2012) put it, however, “Sketching is not about drawing. Rather,
it is about design” (p. 7). You can get over any inhibition by devising your own symbols
and icons and practicing them—referred to by Saul Greenberg et al. as a sketching vocab-
ulary (p. 85). They don’t have to be anything more than simple boxes, stick figures, and
stars. Elements that might be required in a storyboard sketch, for example, include digi-
tal devices, people, emotions, tables, books, and so forth, and actions such as give, find,
transfer, and write. If you are sketching an interface design, then you might need to draw
various icons, dialog boxes, and so on. Some simple examples are shown in Figure 12.5.
The next activity requires other sketching symbols, but they can still be drawn quite
simply. Mark Baskinger (2008) provides further tips for those new to sketching.
Prototyping with Index Cards
Using index cards (small pieces of cardboard about 3 × 5 inches) is a successful and simple
way to prototype an interaction, and it is used for developing a range of interactive products
including websites and smartphone apps. Each card represents one element of the interac-
tion, perhaps a screen or just an icon, menu, or dialog exchange. In user evaluations, the user
can step through the cards, pretending to perform the task while interacting with the cards.
A more detailed example of this kind of prototyping is provided in Section 12.5.2.
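Because each card maps to one element of the interaction, a deck of index cards behaves like a small state machine. The sketch below, with an invented set of screens, shows how a team might log a user's walkthrough digitally; reaching a choice that no card supports exposes a gap in the design:

```python
# Each "card" is one element of the interaction: a screen plus the
# options it offers, mirroring a paper index card. All names invented.
cards = {
    "home": {"prompt": "Search or browse?",
             "next": {"search": "results", "browse": "catalog"}},
    "results": {"prompt": "Pick an item", "next": {"pick": "detail"}},
    "catalog": {"prompt": "Pick an item", "next": {"pick": "detail"}},
    "detail": {"prompt": "Done", "next": {}},
}

def walkthrough(start, choices):
    """Step through the cards as a user would, returning the path taken."""
    path = [start]
    card = cards[start]
    for choice in choices:
        nxt = card["next"].get(choice)
        if nxt is None:
            break  # this choice has no card: a design gap to discuss
        path.append(nxt)
        card = cards[nxt]
    return path

print(walkthrough("home", ["search", "pick"]))  # ['home', 'results', 'detail']
```

In a paper session the "path" is simply the order in which cards are laid on the table; the digital log just makes the gaps easier to compare across participants.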
Figure 12.5 Some simple sketches for low-fidelity prototyping
Wizard of Oz
Another low-fidelity prototyping method called Wizard of Oz assumes that you have a software-
based prototype. With this technique, the user interacts with the software as though interacting
with the product. In fact, however, a human operator simulates the software’s response to the
user. The method takes its name from the classic story of the little girl who is swept away in a
storm and finds herself in the Land of Oz (Baum and Denslow, 1900). The Wizard of Oz is a
small shy man who operates a large artificial image of himself from behind a screen where no
one can see him. The Wizard of Oz style of prototyping has been used successfully for various
applications, including analyzing gestural behavior (Henschke et al., 2015) and when studying
dialogues between children and virtual agents (Fialho and Coheur, 2015). The Wizard of Oz
technique is often used in human-robot interaction studies. One such example is Marionette, a
Wizard of Oz system for performing studies on the road with autonomous vehicles (Wang et al.,
2017). Prototyping AI systems also draws on this style of prototyping, where the designer sketches
the AI for themselves, and as the design matures, implementations of the AI can take its place
(van Allen, 2018).
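The essential plumbing of a Wizard of Oz setup is simple: every user utterance is routed to a hidden human operator, whose reply is shown to the participant as the system's output. The sketch below is a toy version of that loop; the function, canned replies, and protocol are invented, and a real study would typically route messages between two machines:

```python
def wizard_of_oz_session(user_inputs, wizard):
    """Run a simulated session: for each user utterance, the hidden
    'wizard' (a human operator in a real study; any callable here)
    supplies the response the participant sees as the system's output."""
    transcript = []
    for utterance in user_inputs:
        response = wizard(utterance)  # the operator types this live
        transcript.append((utterance, response))
    return transcript

# In a live study, `wizard` would read from the operator's keyboard,
# e.g. lambda u: input(f"[wizard] reply to {u!r}: ")
canned = {"hello": "Hi! How can I help?", "book a room": "Which date?"}
log = wizard_of_oz_session(["hello", "book a room"],
                           wizard=lambda u: canned.get(u, "Sorry?"))
print(log[0][1])  # Hi! How can I help?
```

Separating the session loop from the wizard makes it easy to replace the human with a real implementation later, which is exactly the progression described for AI prototyping above.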
ACTIVITY 12.1
Produce a storyboard that depicts how to fill a car with fuel.
Comment
Our attempt is shown in Figure 12.6.
Figure 12.6 A storyboard depicting how to fill a car with fuel

12.2.4 High-Fidelity Prototyping
A high-fidelity prototype looks more like the final product and usually provides more
functionality than a low-fidelity prototype. For example, a prototype of a software system
developed in Python or another executable language is higher fidelity than a paper-based
mockup; a molded piece of plastic with a dummy keyboard would be a higher-fidelity prototype of the
PalmPilot than the lump of wood. There is a continuum between low- and high-fidelity, and
prototypes used “in the wild,” for example, will have enough fidelity to be able to answer their
design questions and to learn about interaction or technological constraints or contextual
factors. It is common for prototypes to evolve through various stages of fidelity, within the
design-evaluate-redesign cycles. Boban Blazevski and Jean Haslwanter (2017) describe their
successful trials of two fully working prototypes for an assembly line mobile worker assis-
tance system. They developed a smartphone app and a tablet-based app, both of which were
integrated into the production system so that suitable instructions could be provided. Workers
used the two versions for five days, and this allowed them to be evaluated in situ. The paper
concludes that although producing a working prototype takes more effort, being able to try
the prototype in real contexts provided valuable feedback for this kind of environment.
High-fidelity prototypes can be developed by modifying and integrating existing
components—both hardware and software—which are widely available through various
developer kits and open source software, for example. In robotics, this approach has been
called tinkering (Hendriks-Jansen, 1996), while in software development it has been referred
to as Opportunistic System Development (Ncube et al., 2008). For example, Ali Al-Humairi
et al. (2018) used existing hardware (Arduino) and open source software to build a prototype
to test their idea of playing musical instruments robotically, controlled from a
mobile phone.
12.2.5 Compromises in Prototyping
By their very nature, prototypes involve compromises: the intention is to produce something
quickly to test an aspect of the product. Youn-Kyung Lim et al. (2008) suggest an anatomy of
prototyping that structures the different aspects of a prototype and what it aims to achieve.
Their ideas are expanded in Box 12.1. The kind of questions that any one prototype can
answer is limited, and the prototype must be built with the key issues in mind. In low-
fidelity prototyping, it is fairly clear that compromises have been made. For example, with a
paper-based prototype, an obvious compromise is that the device doesn’t actually work. For
physical prototypes or software prototypes, some of the compromises will still be fairly clear.
For example, the casing may not be very robust, the response speed may be slow, the look
and feel may not be finalized, or only a limited amount of functionality may be available.
Box 12.2 discusses when to use low- or high-fidelity prototyping.
BOX 12.1
The Anatomy of Prototyping: Filters and Manifestations
Youn-Kyung Lim et al. (2008) propose a view of prototypes that focuses on their role as fil-
ters, for example to emphasize specific aspects of a product being explored by the prototype,
and as manifestations of designs, for instance, as tools to help designers develop their design
ideas through external representations.
They suggest three key principles in their view of the anatomy of prototypes:
1. Fundamental prototyping principle: Prototyping is an activity with the purpose of creating
a manifestation that, in its simplest form, filters the qualities in which designers are inter-
ested without distorting the understanding of the whole.
2. Economic principle of prototyping: The best prototype is one that, in the simplest and the most
efficient way, makes the possibilities and limitations of a design idea visible and measurable.
3. Anatomy of prototypes: Prototypes are filters that traverse a design space and are manifes-
tations of design ideas that concretize and externalize conceptual ideas.
Youn-Kyung Lim et al. identify several dimensions of filtering and of manifestation that
may be considered when developing a prototype, although they point out that these dimen-
sions are not complete but provide a useful starting point for consideration of prototype
development. These are shown in Table 12.1 and Table 12.2.
Filtering dimension: example variables
Appearance: size; color; shape; margin; form; weight; texture; proportion; hardness;
transparency; gradation; haptic; sound
Data: data size; data type (for example, number, string, media); data use; privacy type;
hierarchy; organization
Functionality: system function; users' functionality needs
Interactivity: input behavior; output behavior; feedback behavior; information behavior
Spatial structure: arrangement of interface or information elements; relationship among
interface or information elements, which can be either two- or three-dimensional,
intangible or tangible, or mixed
Table 12.1 Example variables of each filtering dimension
Manifestation dimension: definition and example variables
Material: the medium (either visible or invisible) used to form a prototype. Examples:
physical media, for example, paper, wood, and plastic; tools for manipulating physical
matter, such as a knife, scissors, pen, and sandpaper; computational prototyping tools,
for instance, Python; physical computing tools, such as Phidgets and Basic Stamps;
available existing artifacts, such as a beeper to simulate a heart attack
Resolution: the level of detail or sophistication of what is manifested (corresponding to
fidelity). Examples: accuracy of performance, for instance, feedback time responding to an
input by a user (giving user feedback in a paper prototype is slower than in a
computer-based one); appearance details; interactivity details; realistic versus faked data
Scope: the range of what is covered to be manifested. Examples: level of contextualization,
for example, website color scheme testing with only color scheme charts or with color
schemes placed in a website layout structure; book search navigation usability testing
with only the book search related interface or with the whole navigation interface
Table 12.2 The definition and variables of each manifestation dimension
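One way to put these dimensions to work is as a planning record: for each prototype, state which filtering dimensions it emphasizes and how it is manifested. The structure below is a hypothetical sketch, not a notation proposed by Lim et al.:

```python
from dataclasses import dataclass

# The five filtering dimensions from Table 12.1
FILTERING = ("appearance", "data", "functionality", "interactivity",
             "spatial structure")

@dataclass
class PrototypePlan:
    name: str
    filters: tuple   # which filtering dimensions this prototype emphasizes
    material: str    # manifestation: medium used to form the prototype
    resolution: str  # manifestation: level of detail (fidelity)
    scope: str       # manifestation: range of what is covered

# Invented example, loosely inspired by the paper prototype in Figure 12.3
plan = PrototypePlan(
    name="paper mockup of communication aid",
    filters=("appearance", "functionality"),
    material="paper and pen",
    resolution="low: labels and layout only, no working buttons",
    scope="main screen and five communication keys",
)
unfiltered = [d for d in FILTERING if d not in plan.filters]
print(unfiltered)  # ['data', 'interactivity', 'spatial structure']
```

Listing the dimensions a prototype deliberately does not filter makes the compromises discussed in Section 12.2.5 explicit rather than accidental.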
BOX 12.2
High-Fidelity and Low-Fidelity Prototypes
Table 12.3 summarizes proclaimed advantages and disadvantages of high- and low-fidelity
prototyping.
Component kits and pattern libraries for interface components (see Section 12.6 and
Chapter 13, "Interaction Design in Practice") make it quite easy to develop polished func-
tional prototypes quickly, but there is a strong case for the value of low-fidelity prototypes,
such as paper-based sketches, sticky note designs, and storyboarding to explore initial ideas.
Paper prototyping, for example, is used in game design (Gibson, 2014), website development,
and product design (Case Study 12.1). Both high- and low-fidelity prototypes can provide
useful feedback during evaluation and design iterations, but how do you know which to
choose? The advantages and disadvantages listed earlier will help, but there are other factors
too. Beant Dhillon et al. (2011) found that a low-fidelity video prototype elicited user
feedback comparable to that from a high-fidelity one, but it was quicker and cheaper to produce. Gavin Sim
et al. (2016) investigated the effect of using a paper booklet versus iPads with children who
were rating a game concept. They found that the children’s rating of the game was unaffected
by the form factor but that they rated the paper version significantly higher for aesthetics.
Low-fidelity prototype
Advantages:
• Quick revision possible
• More time can be spent on improving the design before starting development
• Evaluates multiple design concepts
• Useful communication device
• Proof of concept
Disadvantages:
• Limited error checking
• Poor detailed specification for development
• Facilitator-driven
• Limited usefulness for usability tests
• Navigational and flow limitations

High-fidelity prototype
Advantages:
• (Almost) complete functionality
• Fully interactive
• User-driven
• Clearly defines navigational scheme
• Use for exploration and test
• Look and feel of intended product
• Serves as a “living” or evolving specification
• Marketing and sales tool
Disadvantages:
• More resource-intensive to develop
• Time-consuming to modify
• Inefficient for proof-of-concept designs
• Potential of being mistaken for the final product
• Potential of setting inappropriate expectations

Table 12.3 Advantages and disadvantages of low- and high-fidelity prototypes
This article covers the benefits of high- and low-fidelity prototyping and how to
produce them:
https://www.nngroup.com/articles/ux-prototype-hi-lo-fidelity/?lm=aesthetic-usability-effect&pt=article
12 DESIGN, PROTOTYPING, AND CONSTRUCTION 432
CASE STUDY 12.1
Paper Prototyping as a Core Tool in the Design of Telephone
User Interfaces
Paper prototyping is being used by telephone and tablet companies as a core part of their
design process (see Figure 12.7). Mobile devices are feature-rich. They include megapixel
cameras, music players, media galleries, downloaded applications, and more. This requires
designing interactions that are complex yet easy to learn and use. Paper prototyping offers a rapid way to work through every detail of the interaction design across multiple
applications.
Mobile device projects involve a range of disciplines—all with their own perspective on
what the product should be. A typical project may include programmers, project managers,
marketing experts, commercial managers, handset manufacturers, user experience specialists,
visual designers, content managers, and network specialists. Paper prototyping provides a
vehicle for everyone involved to be part of the design process—considering the design from
multiple angles in a collaborative way.
The case study on the id-book.com website describes the benefits of using paper proto-
typing from the designer’s viewpoint, while considering the bigger picture of its impact across
the entire project lifecycle. It starts by explaining the problem space and how paper prototyp-
ing is used as an integrated part of user interface design projects for European and U.S.-based
mobile operator companies. The case study uses project examples to illustrate the approach
and explains step by step how the method can be used to include a range of stakeholders in
the design process—regardless of their skill set or background. The case study offers exercises
so that you can experiment with the approach yourself.
Figure 12.7 Prototype developed for cell phone user interface
http://id-book.com
Two common properties that are often traded off against each other are breadth of
functionality versus depth. These two kinds of prototyping are called horizontal proto-
typing (providing a wide range of functions but with little detail) and vertical prototyp-
ing (providing a lot of detail for only a few functions). Another common compromise is
level of robustness versus degree of changeability. Making a prototype robust may lead
to it being harder to change. This compromise may not be visible to a user of the product
until something goes wrong. For example, the internal structure of a piece of code may
not have been carefully designed, or the connections between electronic components
may be delicate.
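Although the book presents the breadth-versus-depth trade-off purely in prose, it can be sketched in a few lines of code. The following is an illustrative sketch only, not from the text: the function names and faked data for a hypothetical group travel organizer are invented. A horizontal prototype stubs out many functions with placeholder screens, while a vertical prototype implements a single function (here, visa lookup) in detail.

```python
# Horizontal prototype: wide coverage, little depth. Every menu option
# exists, but each one is only a placeholder stub.
def horizontal_prototype(choice: str) -> str:
    stubs = {
        "search": "[screen: search results would appear here]",
        "visas": "[screen: visa rules would appear here]",
        "book": "[screen: booking form would appear here]",
    }
    return stubs.get(choice, "[unknown option]")

# Vertical prototype: one function implemented with real logic, the rest
# of the product absent. The data is faked, as is common in prototypes.
VISA_RULES = {
    ("UK", "USA"): "ESTA required",
    ("UK", "France"): "No visa needed",
}

def vertical_prototype_visa(passport: str, destination: str) -> str:
    rule = VISA_RULES.get((passport, destination))
    return rule if rule else "No rule on file -- consult an embassy"
```

In the horizontal version a user can touch every part of the interface but learn nothing about how any one task really behaves; in the vertical version only the visa task can be exercised, but it can be evaluated in depth.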
One of the consequences of high-fidelity prototypes is that the prototype can appear to
be good enough to be the final product, and users may be less prepared to critique something
if they perceive it as a finished product. Another consequence is that fewer alternatives are
considered because the prototype works and users like it.
Although prototypes will have undergone extensive user evaluation, they will not neces-
sarily have been built with good engineering principles or subjected to rigorous quality test-
ing for other characteristics such as security and error-free operation. Building a product to
be used by thousands or millions of people running on various platforms and under a wide
range of circumstances requires a different construction and testing regime than producing a
quick prototype to answer specific questions.
The next “Dilemma” box discusses two different development philosophies. In evolu-
tionary prototyping, a prototype evolves into the final product and is built with these engi-
neering principles in mind. Throwaway prototyping uses the prototypes as stepping stones
toward the final design. In this case, the prototypes are thrown away, and the final product is
built from scratch. In an evolutionary prototyping approach, each stage will be subjected to
rigorous testing; for throwaway prototyping, such testing is not necessary.
Source: Reproduced with permission of Penwil Cartoons

DILEMMA
Prototyping vs. Engineering
The compromises made when developing low-fidelity prototypes are evident, but compromises in high-fidelity prototypes are not so obvious. When a project is under pressure, it can become tempting to integrate a set of existing high-fidelity prototypes together to form the final product. Many hours will have been spent developing them, and evaluation with users has gone well. So, why throw it all away? Generating the final product this way will simply store up testing and maintenance problems for later (see Box 13.1 on technical debt). In short, this is likely to compromise the quality of the product, unless the prototypes have been built with sound engineering principles from the start.
On the other hand, if the device is an innovation, then being first to market with a “good enough” product may be more important for securing market position than having a very high-quality product that reaches the market two months after a competitor’s product.
The dilemma arises in deciding how to treat high-fidelity prototypes: engineer them from the start, or accept that they will be thrown away.

12.3 Conceptual Design
Conceptual design is concerned with developing a conceptual model; conceptual models were introduced in Chapter 3, “Conceptualizing Interaction.” The idea of a conceptual model can be difficult to grasp because these models take many different forms and there isn’t a definitive detailed characterization of one. Instead, conceptual design is best understood by exploring and experiencing different approaches to it, and the purpose of this section is to provide some concrete suggestions about how to go about doing this.
A conceptual model is an outline of what people can do with a product and which concepts are needed for the user to understand how to interact with it. The former will emerge from an understanding of the problem space and the current functional requirements. Which concepts are needed to understand how to interact with the product depends on a variety of issues, such as who the user will be, what kind of interaction will be used, what kind of interface will be used, terminology, metaphors, and the application domain. The first step in developing a conceptual model is to steep yourself in the data about the users and their goals and try to empathize with them.
There are different ways to achieve empathy with users. For example, Karen Holtzblatt and Hugh Beyer’s (2017) contextual interviews, interpretation sessions, and the Wall Walk, introduced in Chapter 11, support this. These three activities together ensure that different people’s perspectives on the data and what they observed are captured, help to deepen understanding and to expose the whole team to different aspects of the problem space, and immerse the team in the users’ world. This stimulates ideas based on an extended understanding of the users and their context. Once captured, ideas are tested against other data and scenarios, discussed with other design team members, and prototyped for testing with users. Contextual Design describes
further activities that cover the whole design process. Trying to empathize with users may not
be the right approach, however, as discussed in the following “Dilemma” box.
Using different creativity and brainstorming techniques to explore ideas with other mem-
bers of the team can help build a picture of the users and their goals. Gradually, a picture of
the desired users’ experience will emerge and become more concrete. This process is helped
by considering the issues in this section and by using scenarios and prototypes to capture and
experiment with ideas. The availability of ready-made components increases the ease with
which ideas can be prototyped, which also helps to explore different conceptual models and
design ideas. Mood boards (traditionally used in fashion and interior design) may be used
to capture the desired feel of a new product (see Figure 12.8). This is informed by any data
gathering or evaluation activities and considered in the context of technological feasibility.
Developing a range of scenarios, as described in Chapter 11, can also help with concep-
tual design (Bødker, 2000) and to think through the consequences of different ideas. Suzanne
Bødker (2000) also proposes the notion of plus and minus scenarios. These attempt to cap-
ture the most positive and the most negative consequences of a particular proposed design
solution, thereby helping designers to gain a more comprehensive view of the proposal.
Figure 12.8 An example mood board developed for a personal safety product called Guard Llama
Source: http://johnnyhuang.design/guardllama.html
Read about how to create mood boards for UX projects here:
https://uxplanet.org/creating-better-moodboards-for-ux-projects-381d4d6daf70

Invision offers a tool to help with this. See the following web page:
https://www.invisionapp.com/inside-design/boards-share-design-inspiration-assets/
This idea has been extended by Mancini et al. (2010), who used positive and negative video scenarios to explore futuristic technology. Their approach used video to represent the positive and negative consequences of a new diet and well-being product, and the scenarios were designed to explore privacy concerns and attitudes. The two videos (each with six scenes)
focus on Peter, an overweight businessman who has been advised by his doctor to use a new
product DietMon to help him lose weight. The product consists of glasses with a hidden
camera, a microchip in the wrist, a central data store, and a text messaging system to send
messages to Peter’s mobile phone telling him the calorific value of the food he is looking at
and warning him when he is close to his daily limit (Price et al., 2010). Figure 12.9 shows the
content of two scenes from the videos and the positive and negative reactions; Figure 12.10
is a snapshot from the negative video.
Scene 2: breakfast at home

Positive version: Peter starts preparing his breakfast with his new glasses on. His wife notices them and he keenly gives her a demonstration of what they are and how they work, and tells her about the microchip. She seems impressed and leaves the room to get ready for work. Peter opens the fridge to put away the butter and sees a pastry. He looks at it and gets a DietMon message telling him the calorie content of the pastry. He shows that to his wife, who is entering the kitchen and looks at him with a smile.

Negative version: Peter prepares breakfast with his new glasses on. His wife notices them. While looking at his toast, he gets a text. His wife enquires what that is. He says it’s nothing and he does not feel like having toast after all. When she questions why, he becomes tense and reluctantly tells her about DietMon. Skeptical, she leaves the room with a sarcastic comment. Peter opens the fridge and sees a pastry. As he gives in and takes a bite, he is caught by his wife, who is entering the kitchen and looks at him with a grin.

Scene 3: birthday party at the office

Positive version: Peter is working away at his desk when some colleagues invite him to a small birthday celebration. He tries to refuse, but they insist. As he joins them, wearing his glasses, he greets the birthday lady. His colleague Chris serves him a slice of cake. Peter looks at it and takes out his mobile. He gets a text, checks it, says the slice is too big, and asks Chris to cut it in half. Chris is intrigued and asks for an explanation, so Peter gives his colleagues a keen demonstration of how the technology works. His audience, gathered around him, is impressed.

Negative version: Peter is working away at his desk when some colleagues invite him to a small birthday celebration. He tries to refuse, but they insist. As he joins them, wearing his glasses, his colleague Chris gives him a slice of cake. He takes the plate and greets the birthday lady. He gets a text and, pretending it’s an important phone call, moves away from the others with the cake. Turned away from them, he throws the cake in a bin and goes back, pretending to have already finished it. Chris comments on how fast he ate. Peter excuses himself, saying he has a deadline to meet, and leaves.

Figure 12.9 How two scenes from the videos differ in terms of positive and negative reactions to the system. The positive version is shown first and the negative version second
Source: Mancini et al. (2010)
Figure 12.10 Peter being caught eating the pastry out of the fridge at breakfast (scene 2, negative
reaction)
Source: Price et al. (2010)
DILEMMA
Is Trying to Empathize with Users the Right Approach?
Empathizing with users who live in a very different context than the designers is not easy, no
matter how much data is collected. Interaction designers have tried several ways to under-
stand situations that are outside their experience, two of which are experience prototyping
and the Third Age suit.
Experience prototyping was introduced by Marion Buchenau and Jane Fulton Suri
(2000) who describe a team designing a chest-implanted automatic defibrillator. A defibril-
lator is used with victims of cardiac arrest to deliver a strong electric shock to the heart that
is intended to restore the heart muscle to its regular rhythm. This kind of event is completely
outside most people’s experience. To simulate some critical aspects of the experience, one of
which is the random occurrence of a defibrillating shock, each team member was sent text
messages at random times over one weekend. Each message simulated the occurrence of a
defibrillating shock, and team members were asked to record where they were, who they were
with, what they were doing, and what they thought and felt knowing that this represented a
shock. Example insights ranged from anxiety around everyday happenings, such as holding
a child and operating power tools, to being in social situations and at a loss how to commu-
nicate to onlookers what was happening. This first-hand experience brought new insights to
the design effort.
The second example is the Third Age suit, designed so that car designers can experi-
ence what it is like for people with some loss of mobility or declining sensory perception to
drive their cars. The suit restricts movement in the neck, arms, legs, and ankles. Originally
developed by Ford Motor Company and Loughborough University (see Figure 12.11), it has
been used to raise awareness within groups of car designers, architects, and other product
designers.
Using these techniques appears to have brought new insights to the design process, but
how deep are those insights, and how accurate are they? According to Michelle Nario-Redmond et al. (2017), who conducted experiments to investigate the impact of disability simulations, such simulations have unexpected negative consequences. They found that these simulations can result in feelings of fear, apprehension, and pity toward those with disabilities, rather than any sense of empathy. In addition, experiencing the disability for only a short time does not take into account the various coping strategies and innovative techniques that individuals develop. They suggest that a better way to design for these circumstances is to involve people with those disabilities and to understand their experiences more holistically.

Figure 12.11 The Third Age suit helps designers experience the loss of mobility and sensory perception.
Source: Ford Motor Co.

To see the Third Age suit in action, watch this video:
https://www.youtube.com/watch?v=Yb0aqr0rzrs

To read an overview of the disability simulation experiment results, see this article:
https://blog.prototypr.io/why-i-wont-try-on-disability-to-build-empathy-in-the-design-process-and-you-should-think-twice-7086ed6202aa

12.3.1 Developing an Initial Conceptual Model
The core components of the conceptual model are metaphors and analogies, the concepts to which users are exposed, the relationships between those concepts, and the mappings between the concepts and the user experience being supported (Chapter 3). Some of these will derive from the product’s requirements, such as the concepts involved in a task and their relationships, captured for example through scenarios and use cases. Others, such as suitable metaphors and analogies, will be informed by immersion in the data and by attempting to understand the users’ perspectives.
This section introduces approaches that help to produce an initial conceptual model. In particular, it considers the following:
• How to choose interface metaphors that will help users understand the product?
• Which interaction type(s) would best support the users’ activities?
• Do different interface types suggest alternative design insights or options?
All of these approaches provide different ways of thinking about the product and help generate potential conceptual models.

Interface Metaphors
Interface metaphors combine familiar knowledge with new knowledge in a way that will help users understand the product. Choosing suitable metaphors and combining new and familiar concepts requires a balance between utility and relevance, and it is based on an understanding of the users and their context. For example, consider an educational system to teach 6-year-olds mathematics. One possible metaphor is a classroom with a teacher standing at the front. But considering the users of the product and what is likely to engage them,
a metaphor that reminds them of something enjoyable is more likely to keep them engaged,
such as a ball game, the circus, a playroom, and so on.
Different approaches to identifying and choosing an interface metaphor have been tried.
For example, Dietrich Kammer et al. (2013) combined creativity methods to explore every-
day objects, paper prototypes, and toolkits to support groups of students designing novel
interface metaphors and gestures for mobile devices. They found that developing metaphors
for both tablets and smartphones resulted in flexible metaphors. On the other hand, Marco
Speicher et al. (2018) decided on an apartment metaphor for a VR online shopping experi-
ence by considering the limitations of systems that try to mimic physical stores.
Tom Erickson (1990) suggests a three-step process for choosing a good interface meta-
phor. Although this work is quite old, the approach is remarkably useful with current tech-
nologies. The first step is to understand what the system will do, that is, to identify functional
requirements. Developing partial conceptual models and trying them may be part of the
process. The second step is to understand which bits of the product are likely to cause users
problems, that is, which tasks or subtasks cause problems, are complicated, or are critical.
A metaphor is only a partial mapping between the product and the real thing upon which the
metaphor is based. Understanding areas in which users are likely to have difficulties means
that the metaphor can be chosen to support those aspects. The third step is to generate meta-
phors. Looking for metaphors in the users’ description of relevant activities, or identifying
metaphors used in the application domain, is a good starting point.
When suitable metaphors have been generated, they need to be evaluated. Erickson
(1990) suggests five questions to ask:
• How much structure does the metaphor provide? A good metaphor will provide struc-
ture—preferably familiar structure.
• How much of the metaphor is relevant to the problem? One of the difficulties of using
metaphors is that users may think they understand more than they do and start apply-
ing inappropriate elements of the metaphor to the product, leading to confusion or false
expectations.
• Is the interface metaphor easy to represent? A good metaphor will be associated with par-
ticular physical, visual, and audio elements, as well as words.
• Will your audience understand the metaphor?
• How extensible is the metaphor? Does it have extra aspects that may be useful later?
For the group travel organizer introduced in Chapter 11, one potential metaphor that
was prompted by the quote from Sky in her persona is a family restaurant. This seems appro-
priate because the family is all together, and each can choose what they want. Evaluating this
metaphor using the five questions listed previously prompted the following thoughts:
• Does it supply structure? Yes, it supplies structure from the users’ point of view, based on
the familiar restaurant environment. Restaurants can be very different in their interior
and the food they offer, but the structure includes having tables and a menu and people to
serve the food. The experience of going to a restaurant involves arriving, sitting at a table,
ordering food, being served the food, eating it, and then paying before leaving. From a
different point of view, there is also structure around food preparation and how the kitch-
ens are run.
• How much of the metaphor is relevant? Choosing a vacation involves seeing what is being
offered and deciding what is most attractive, based on the preferences of everyone in the
group. This is similar to choosing a meal in a restaurant. For example, a restaurant will
have a menu, and visitors to the restaurant will sit together and choose individual meals,
but they all sit in the same restaurant and enjoy the environment. For a group vacation, it
may be that some members of the group want to do different activities and come together
for some of the time, so this is similar. Information about the food, such as allergens, is available from the server or in the menu; reviews of restaurants are available; and photos or models of the dishes on offer are common. All of these characteristics are relevant to the
group travel organizer. One of the characteristics of a restaurant that is not so applicable
to a vacation is the need to pay at the end of the meal rather than before you get there.
• Is the metaphor easy to represent? There are several options in this regard, but the basic
structure of a restaurant can be represented. The key aspect of this conceptual model will
be to identify potential vacations that suit everyone and choose one. In a restaurant this
process involves looking at menus, talking to the server, and ordering the food. Vacation
information including photos and videos could be presented in a menu—maybe as one
menu for adults and one for children. So, the main elements of the metaphor seem straight-
forward to represent.
• Will your audience understand the metaphor? For this example, the user group has not yet
been investigated in detail, but eating in a restaurant is common.
• How extensible is the metaphor? There are several different types of restaurant experi-
ences—à la carte, fixed menu, serve yourself, all you can eat, and food courts, for example.
Elements from these different types of restaurants may be used to expand initial ideas.
ACTIVITY 12.2
One of the disadvantages of the restaurant metaphor is the need to have a shared experience
when members of the group are in different locations. Another possible interface metaphor for
the group travel organizer is the travel consultant. A travel consultant discusses the require-
ments with the traveler(s) and tailors the vacation accordingly, offering maybe two or three
alternatives, but making most of the decisions on the travelers’ behalf. Ask the earlier five
questions about this metaphor.
Comment
1. Does the travel consultant metaphor supply structure?
Yes. The key characteristic of this metaphor is that the travelers specify what they want,
and the consultant researches the options. It relies on the travelers giving the consultant
sufficient information to search within a suitable range rather than leaving them to make
key decisions.
2. How much of the metaphor is relevant?
The idea of handing over responsibility to someone else to search for suitable vacations
may be appealing to some users, but it might feel uncomfortable to others. The level of
responsibility given to the consultant can be adjusted, though, depending on user preferences. It is common for individuals to put together vacations themselves based on web searches, but this can be time-consuming and diminish the excitement of planning a vacation. It would be attractive to some users if the initial searching and sifting were done for them.
3. Is the metaphor easy to represent?
Yes, it could be represented by a software agent or by having a sophisticated database entry and search facility. But the question is, would users like this approach?
4. Will your audience understand the metaphor?
Yes.
5. How extensible is the metaphor?
The wonderful thing about people is that they are flexible; hence, the metaphor of the travel consultant is also pretty flexible. For example, the consultant could be asked to refine their vacation recommendations according to as many different criteria as the travelers require.

Interaction Types
Chapter 3 introduced five different types of interaction: instructing, conversing, manipulating, exploring, and responding. Which type of interaction is best suited to the current design depends on the application domain and the kind of product being developed. For example, a computer game is most likely to suit a manipulating style, while a software application for drawing or drafting has aspects of instructing and conversing.
Most conceptual models will include a combination of interaction types, and different parts of the interaction will be associated with different types. For example, in the group travel organizer, one of the user tasks is to find out the visa regulations for a particular destination. This will require an instructing approach to interaction, as no dialog is necessary for the system to show the regulations. The user simply has to enter a predefined set of information, for instance, the country issuing the passport and the destination. On the other hand, trying to identify a vacation for a group of people may be conducted more like a conversation. For example, the user may begin by selecting some characteristics of the destination and some time constraints and preferences. Then the organizer will respond with several options, the user will provide more information or preferences, and so on. Alternatively, users who don’t yet have any clear requirements might prefer to explore availability before asking for specific options. Responding could be used when the user chooses an option that has additional restrictions and the system asks whether the user meets them.

Interface Types
Considering different interfaces at this stage may seem premature, but it has both a design and a practical purpose. When thinking about the conceptual model for a product, it is important not to be unduly influenced by a predetermined interface type. Different interface types prompt and support different perspectives on potential user experiences and possible behaviors, hence prompting alternative design ideas.
In practical terms, prototyping the product will require an interface type, or at least candidate alternative interface types. Which ones to choose depends on the product constraints that arise from the requirements. For example, input and output modes will be influenced
by user and environmental requirements. Therefore, considering interfaces at this point also
takes one step toward producing practical prototypes.
To illustrate this, we consider a subset of the interfaces introduced in Chapter 7, “Inter-
faces,” and the different perspectives they bring to the group travel organizer.
• Shareable Interface The travel organizer has to be shareable, as it is intended to be used
by a group of people and it should be exciting and fun. The design issues for shareable
interfaces, which were introduced in Chapter 7, will need to be considered for this system.
For example, how best (or whether) to use the individuals’ own devices, such as smartphones, in conjunction with a shared interface. Allowing group members to interact at a distance
suggests the need for multiple devices, so a combination of form factors is required.
• Tangible Interface Tangible interfaces are a kind of sensor-based interaction, where
blocks or other physical objects are moved around. Thinking about a travel organizer in
this way conjures up an interesting image of people collaborating, maybe with the physical
objects representing themselves while traveling, but there are practical problems of having
this kind of interface, as the objects may be lost or damaged.
• Virtual Reality The travel organizer seems to be an ideal product for making use of a vir-
tual reality interface, as it would allow individuals to experience the destination and maybe
some of the activities available. Virtual reality would not be needed for the whole product,
just for the elements where users want to experience the destination.
ACTIVITY 12.3
Consider the new navigation app for a large shopping center introduced in Chapter 11.
1. Identify tasks associated with this product that would best be supported by each of the
interaction types instructing, conversing, manipulating, exploring, and responding.
2. Pick out two interface types from Chapter 7 that might provide different perspectives on
the design.
Comment
1. Here are some suggestions. You may have identified others.
• Instructing The user wants to see the location of a specific shop.
• Conversing The user wants to find one particular branch out of several; the app might
ask them to pick one from a list. Or, the user might want to find a particular kind of
shop, and the app will display a list from which to choose.
• Manipulating The chosen route could be modified by dragging the path to encompass
other shops or specific walkways.
• Exploring The user might be able to walk around the shopping center virtually to see
what shops are available.
• Responding The app asks whether the user wants to visit their favorite snack bar on
the way to the chosen shop.
(Continued)
12 Design, Prototyping, and Construction
12.3.2 Expanding the Initial Conceptual Model
The previous elements represent the core of the conceptual model. For prototyping or testing
with users, these ideas need some expansion. Examples include which functions the product
will perform and which the user will perform, how those functions are related, and what
information is required to support them. Some of this will have been considered during the
requirements activity and will evolve after prototyping and evaluation.
What Functions Will the Product Perform?
This question is about whether the product or the user takes responsibility for different parts
of the overall goal. For example, the travel organizer is intended to suggest specific vacation
options for a group of people, but is that all it should do? What if it automatically reserved
the bookings? Or should it wait until it is given a preferred choice? In the case of visa
requirements, will the travel organizer simply provide the information, or link to visa
services? Deciding what the system will do and what the user will do is sometimes called task
allocation. This trade-off has cognitive implications (see Chapter 4, “Cognitive Aspects”) and
affects social aspects of collaboration (see Chapter 5, “Social Interaction”). If the cognitive
load is too high for the user, then the device may be too stressful to use. On the other hand, if
the product has too much control and is too inflexible, then it may not be used at all.
Another decision is which functions to hard-wire into the product and which to leave
under software control, thereby indirectly in the control of the human user.
How Are the Functions Related to Each Other?
Functions may be related temporally; for example, one must be performed before another,
or two can be performed in parallel. They may also be related through any number of pos-
sible categorizations, for instance, all functions relating to privacy on a smartphone or all
options for viewing photographs on a social networking site. The relationships between tasks
may constrain use or may indicate suitable task structures within the product. For example,
if one task depends on another, the order in which tasks can be completed may need to be
restricted. If use cases or other detailed analysis of the tasks have been generated, these will
help. Different styles of requirements (for example, stories or atomic requirements shell)
provide different levels of detail, so some of this information will be available, and some will
evolve as the design team explores and discusses the product.
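The temporal relationships between functions can be captured in a lightweight dependency model, from which a valid ordering of tasks falls out. Here is a minimal sketch in Python; the function names for the travel organizer are our own illustrative assumptions, not taken from the book.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical functions of the group travel organizer. Each key maps to the
# set of functions that must be completed before it can run.
dependencies = {
    "suggest_options": {"collect_preferences"},
    "choose_destination": {"suggest_options"},
    "check_visa_requirements": {"choose_destination"},
    "reserve_booking": {"choose_destination", "check_visa_requirements"},
}

# static_order() yields one valid sequence; functions with no mutual
# dependency could equally run in parallel.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

Stepping through a model like this makes ordering constraints explicit: for example, a booking cannot be reserved before a destination is chosen, whatever the interface looks like.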
2. (Activity 12.3, continued) Navigation apps tend to be smartphone-based, so it is worth
exploring other styles to see what insights they may bring. We had the following thoughts,
but you may have had others. The navigation app needs to be mobile so that the user can
move around to find the relevant shop. Using voice or gesture interfaces is one option,
but these could still be delivered through a mobile device. Thinking more broadly, perhaps
a haptic interface that guides the user to the required location might suffice. A smart
interface built into the environment is an alternative, but privacy issues may arise if an
individual’s data is displayed for all to see.
What Information Is Needed?
What data is required to perform a task? How is this data to be transformed by the system?
Data is one of the categories of requirements identified and captured through the require-
ments activity. During conceptual design, these requirements are considered to ensure that
the model provides the information needed to perform the task. Detailed issues of structure
and display, such as whether to use an analog display or a digital display, will more likely be
dealt with during the concrete design activity, but implications arising from the type of data
to be displayed may impact conceptual design issues.
For example, identifying potential vacations for a group of people using the travel organ-
izer requires the following: what kind of vacation is required; available budget; preferred
destinations (if any); preferred dates and duration (if any); how many people it is for; and are
there any special requirements (such as disability) within the group? To perform the function,
the system needs this information and must have access to detailed vacation and destination
descriptions, booking availability, facilities, restrictions, and so on.
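The information listed above can be gathered into a simple record during conceptual design, making explicit which items are required and which are optional. A minimal sketch follows; the field names are illustrative assumptions, not taken from the book.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VacationQuery:
    """Information the travel organizer needs before suggesting vacations."""
    kind: str                 # e.g. "sailing", "city break"
    budget: float             # available budget for the whole group
    party_size: int           # how many people the vacation is for
    # Optional items: preferences the group may or may not have expressed.
    destinations: list = field(default_factory=list)   # preferred, may be empty
    dates: Optional[tuple] = None                      # preferred (start, end), if any
    special_requirements: list = field(default_factory=list)  # e.g. accessibility needs

query = VacationQuery(kind="sailing", budget=4000.0, party_size=4,
                      destinations=["Mediterranean"])
print(query.party_size, query.destinations)
```

Writing the query down like this forces the same decisions the text describes: which data the system must have before it can perform its function, and which it can work without.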
Initial conceptual models may be captured in wireframes—a set of documents that show
structure, content, and controls. Wireframes may be constructed at varying levels of abstrac-
tion, and they may show part of the product or a complete overview. Chapter 13, “Interac-
tion Design in Practice,” includes more information and some examples.
12.4 Concrete Design
Conceptual design and concrete design are closely related. The difference between them is
rather a matter of changing emphasis: during design, conceptual issues will sometimes be
highlighted, and at other times, concrete detail will be stressed. Producing a prototype inevi-
tably means making some concrete decisions, albeit tentatively, and since interaction design
is iterative, some detailed issues will come up during conceptual design, and vice versa.
Designers need to balance the range of environmental, user, data, usability, and user
experience requirements with functional requirements. These are sometimes in conflict. For
example, the functionality of a wearable interactive product will be restricted by the activities
the user wants to perform while wearing it; a computer game may need to be learnable but
also challenging.
There are many aspects to the concrete design of interactive products: visual appearance
such as color and graphics, icon design, button design, interface layout, choice of interaction
devices, and so on. Chapter 7 introduces several interface types, together with their associ-
ated design considerations, guidelines, principles, and rules, which help designers ensure that
their products meet usability and user experience goals. These represent the kinds of decision
that are made during concrete design.
Concrete design also deals with issues related to user characteristics and context. Two
aspects that have drawn particular attention for concrete design are discussed in this section:
accessibility and inclusiveness; and designing for different cultures. Accessibility and inclu-
siveness were introduced in Section 1.6. Accessibility refers to the extent to which a product
is accessible to as many people as possible, while inclusiveness means being fair, open,
and equal to everyone. The aim of inclusive design is to empower users in their everyday and
working lives (Rogers and Marsden, 2013).
A range of input and output modes is available for interaction design. Apart from
standard keyboard, mouse, and touchscreen, there are also different pointing devices and
keyboards, screen readers, refreshable Braille, and eye-tracking, among others. Regardless
of which alternate input or output modalities are used, interactive interfaces must be flex-
ible enough to work with these various devices. This is particularly important for users with
disabilities, who may be unable to use pointing devices or standard keyboards and may
instead interact using a head or mouth stick, voice recognition, video with captions, audio
transcripts, and so on.
Making an interface accessible involves engaging users with disabilities in the development
process to better understand their needs and applying the Web Content Accessibility
Guidelines (WCAG), which apply to all interfaces, not just web-based ones (see
Box 16.2). When interfaces are designed to be accessible, they not only work for people
with disabilities, but they also provide flexibility to other users without disabilities who
are faced with temporary or situational impairments, for example, a driver who is unable
to look at a display screen or a train passenger watching a video without disturbing other
passengers.
Interfaces that are not accessible can lead to various forms of discrimination. For
instance, air fares are often lower if purchased through a website. If that website is inac-
cessible for blind consumers, for example, and those consumers have to use the call center,
they may unknowingly be charged higher fares for the same flight (Lazar et al., 2010).
Many companies use online job applications, but if the online site is inaccessible, job appli-
cants may be forced to identify themselves as having a disability before applying for a job.
Similarly, when job aggregator websites (with information about jobs at many different
employers) are inaccessible and individuals who are blind are told to call on the phone as
an accommodation, they frequently aren’t told about many or even any of the jobs avail-
able (Lazar et al., 2015).
There are resources available to help design for inclusivity, accessibility, and flexibility,
such as Microsoft’s inclusive design toolkit.
Aspects of cross-cultural design include use of appropriate language(s), colors, icons
and images, navigation, and information architecture (Rau et al., 2013). These are all
important for concrete design; however, tensions between local cultures and HCI prin-
ciples have been highlighted (Winschiers-Theophilus and Bidwell, 2013) together with
a desire to reframe human-computer interaction (HCI) through local and indigenous
perspectives (Abdelnour-Nocera et al., 2013). These concerns not only impact concrete
design but also wider issues such as what to design and how to design in order to be
accepted by the target user group. For example, Gary Marsden et al. (2008) warn of the
problems in seeing a user’s need and attempting to meet that need without first asking
the community if they too recognize that need. For one approach on how to address this
concern, see Case Study 12.2.
Microsoft’s inclusive design toolkit has some useful and interesting resources.
You can find out more at https://www.microsoft.com/design/inclusive
12.5 Generating Prototypes
This section illustrates how prototypes may be used in design, and it demonstrates how
prototypes may be generated from the output of the requirements activity—producing a
storyboard from a scenario and an index card-based prototype from a use case. Both of these
are low-fidelity prototypes.
12.5.1 Generating Storyboards
A storyboard represents a sequence of actions or events that the user and the product go
through to achieve a goal. A scenario is one story about how a product may be used to
achieve that goal. A storyboard can be generated from a scenario by breaking the scenario
into a series of steps that focus on interaction and creating one scene in the storyboard for
each step. The purpose for doing this is twofold: first to produce a storyboard that can be
used to get feedback from users and colleagues and second to prompt the design team to con-
sider the scenario and the product’s use in more detail. For example, consider the scenario for
the travel organizer developed in Chapter 11. This can be broken down into six main steps.
1. Will, Sky, and Eamonn gather around the organizer, but Claire is at her mother’s house.
2. Will tells the organizer their initial idea of a sailing trip in the Mediterranean.
3. The system’s initial suggestion is that they consider a flotilla trip, but Sky and Eamonn
aren’t happy.
4. The travel organizer shows them some descriptions written by young people about flo-
tilla trips.
5. Will confirms this recommendation and asks for details.
6. The travel organizer emails details of the different options.
Notice that the first step sets the context, and later steps focus more on the goal. Break-
ing the scenario into steps can be achieved in different ways. The purpose of working from
the scenario is for the design team to think through the product and its use, and so the steps
are not as important as the thinking that happens through the process. Also, notice that some
of these events are focused solely on the travel organizer’s interface, and some are concerned
with the environment. For example, the first one talks about the family gathering around the
organizer, while the fourth and sixth are focused on the travel organizer. Storyboards can
focus on the screens or on the environment, or a mixture of both. Either way, sketching out
the storyboard will prompt the design team to think about design issues.
For example, the scenario says nothing about the kinds of input and output devices that
the system might use, but drawing the organizer forces the designer to think about these
things. There is some information in the scenario about the environment within which the
system will operate, but drawing the scene requires specifics about where the organizer will
be located and how interaction will continue. When focusing on the screens, the designer
is prompted to consider issues including what information needs to be available and what
information needs to be output. All of this helps to explore design decisions and alternatives,
and the act of drawing makes those decisions more explicit.
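One lightweight way of keeping track of such a breakdown is to record, for each storyboard scene, its focus (environment or screen) and the design questions it raises. A minimal sketch follows; the scene summaries paraphrase the scenario steps above, and the questions are illustrative.

```python
# Each storyboard scene pairs a scenario step with its focus and the
# open design questions that sketching it prompted.
scenes = [
    {"step": "Family gathers around the organizer; Claire joins remotely",
     "focus": "environment",
     "questions": ["How do remote participants interact?"]},
    {"step": "Will tells the organizer the idea of a sailing trip",
     "focus": "screen",
     "questions": ["Voice input only, or a keyboard option too?"]},
    {"step": "Organizer shows flotilla-trip descriptions by young people",
     "focus": "screen",
     "questions": ["What information must be displayed?"]},
]

# Collect every open question raised while sketching the storyboard.
open_questions = [q for scene in scenes for q in scene["questions"]]
print(len(scenes), len(open_questions))
```

As the text notes, the questions generated this way are as valuable as the storyboard itself: they become an agenda for the design team's next discussion.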
The storyboard in Figure 12.12 includes elements of the environment and some of the
screens. While drawing this, various questions came to mind, such as: How can the interaction
be designed for all of the family? Will they sit or stand? How are remote participants handled?
What kind of help needs to be available? What physical components does the travel organizer
need? How can all of the family interact with the system (notice that the first scene uses
voice input while other scenes have a keyboard option as well)? And so on. In this exercise,
the questions the storyboard prompts are just as important as the end product.
Figure 12.12 The storyboard for the travel organizer
ACTIVITY 12.4
Activity 11.4 in Chapter 11 developed a futuristic scenario for the one-stop car shop.
Using this scenario, develop a storyboard that focuses on the environment of the user. As you
draw this storyboard, write down the design issues that it prompts.
Comment
The following is based on the scenario in the comment for Activity 11.4. This scenario breaks
down into five main steps.
1. The user arrives at the one-stop car shop.
2. The user is directed into an empty booth.
3. The user sits down in the racing car seat, and the display comes alive.
4. The user can view reports.
5. The user can take a virtual reality drive in their chosen car.
The storyboard is shown in Figure 12.13. Issues that arose while drawing this storyboard
included how to display the reports, what kind of virtual reality equipment is needed, what
input devices are needed—a keyboard or touchscreen, a steering wheel, clutch, accelerator,
and brake pedals? How much like actual car controls do the input devices need to be? You
may have thought of other issues.
12.5.2 Generating Card-Based Prototypes
Card-based prototypes are commonly used to capture and explore elements of an interac-
tion, such as dialog exchanges between the user and the product. The value of this kind of
prototype lies in the fact that the interaction elements can be manipulated and moved around
in order to simulate interaction with a user or to explore the user’s end-to-end experience.
This may be done as part of the evaluation or in conversations within the design team. If the
storyboard focuses on the screens, this can be translated almost directly into a card-based
prototype and used in this way. Another way to produce a card-based prototype is to gener-
ate one from a use case output from the requirements activity.
For example, consider the use cases for the visa requirements aspect of the group travel
organizer presented in Section 11.6. The first, less-detailed use case provides an overview of
the interaction, while the second one is more detailed.
This second use case can be translated into cards as follows. For each step in the use
case, the travel organizer will need to have an interaction component to deal with it, for
example, input via a button, menu option, or voice, and output via a display or sound.
By stepping through the use case, a card-based prototype can be developed that covers the
required behavior, and different designs can be considered. For example, Figure 12.14 shows
six dialog elements on six separate cards. The set on the left has been written in friendlier
language, while the set on the right is more official. These cover steps 1, 2, 3, 4, and 5.
The alternative courses, for example those dealing with error messages, would also each
have a card, and the tone and information contained in the error message could be evaluated
with users. For example, step 7.1 might translate into a simple “No visa information is avail-
able,” or a more helpful, “I am not able to find visa information for you to visit your chosen
destination. Please contact the
These cards can be shown to potential users of the system or fellow designers to get infor-
mal feedback. In this case, we showed these cards to a colleague, and through discussion of
the application and the cards, concluded that although the cards represent one interpretation
Figure 12.13 The storyboard generated from the one-stop car shop scenario in Activity 11.4
of the use case, they focus too much on an interaction model that assumes a WIMP/GUI
interface. Our discussion was informed by several things including the storyboard and the
scenario. One alternative would be to have a map of the world, and users can indicate their
destination and nationality by choosing one of the countries on the map; another might be
based around national flags. These alternatives could be prototyped using cards and further
feedback obtained. Cards can also be used to elaborate other aspects of the concrete design,
such as icons and other interface elements.
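Alternative wording sets like those in Figure 12.14 can also be mocked up as data, which makes it easy to switch tone mid-walkthrough during an evaluation session. A small illustrative sketch follows; the dialog text is invented for illustration, not taken from the figure.

```python
# Two tone variants for each dialog card in the visa-requirements interaction.
cards = {
    1: {"friendly": "Hi! Where would you like to go?",
        "official": "Enter your destination country."},
    2: {"friendly": "And what passport do you hold?",
        "official": "Enter your nationality."},
    3: {"friendly": "Here's what we found about visas for your trip.",
        "official": "Visa requirements are displayed below."},
}

def walkthrough(tone):
    """Return the dialog for one end-to-end pass through the cards."""
    return [variants[tone] for _, variants in sorted(cards.items())]

print(walkthrough("friendly"))
```

Stepping through `walkthrough("friendly")` and `walkthrough("official")` with a user or colleague mirrors the paper exercise: same interaction structure, different tone, with feedback gathered on each.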
ACTIVITY 12.5
Look at the storyboard in Figure 12.4. This storyboard shows Christina exploring the Acrop-
olis in search of information about the pottery trade. In the second scene in the top row,
Christina “adjusts the preferences to find information about the pottery trade in Ancient
Greece.” Many interaction icons have become standardized, but there isn’t a standard one for
“pottery trade.” Suggest two alternative icons to represent this and draw them on separate
cards. Using the storyboard in Figure 12.4 and the two cards, try out the different icons with
a friend or colleague to see what they understand by your two icons.
Comment
The two cards we drew are shown in Figure 12.15. The first is simply an Ancient Greek pot,
while the second attempts to capture the idea of a pottery seller in the market. When we
stepped through the storyboard with a colleague and showed them these alternatives, both
were found to require improvement. The pot on its own did not capture the pottery trade, and
it wasn’t clear what the market seller represented, but there was a preference for the latter,
and the users’ feedback was useful.
Figure 12.15 Two icons to represent “pottery trade” for the new mobile device for exploring
historic sites depicted in the storyboard of Figure 12.4
Figure 12.14 Cards 1–3 of a card-based prototype for the travel organizer
A set of card-based prototypes that cover a scenario from beginning to end may be the
basis of a more detailed prototype, such as an interface or screen sketch, or it may be used
in conjunction with personas to explore the user’s end-to-end experience. This latter purpose
is also achieved by creating a visual representation of the user’s experience. These representa-
tions are variably called a design map (Adlin and Pruitt, 2010), a customer journey map
Figure 12.16 (a) An experience map using a wheel representation and (b) an example timeline
design map illustrating how to capture different issues
Source: (a) LEGO (b) Adlin and Pruitt (2010), p. 134. Used courtesy of Morgan Kaufmann
(Ratcliffe and McNeill, 2012), or an experience map. They illustrate a user’s path or journey
through the product or service and are usually created for a particular persona and based
on a particular scenario, hence giving the journey sufficient context and detail to bring the
discussions to life. They support designers in considering the user’s overall experience when
achieving a particular goal and are used to explore and question the designed experience and
to identify issues that have not been considered so far. They may be used to analyze existing
products and to collate design issues, or as part of the design process.
There are many different types of representation, of varying complexity. Two main ones are
the wheel and the timeline. The wheel representation is used when an interaction phase is more
important than an interaction point, such as for a flight (see Figure 12.16(a) for an example). The
timeline is used where a service is being provided that has a recognizable beginning and end point,
such as purchasing an item through a website (an example of a timeline representation is shown
in Figure 11.7(b)—look for the smiley faces). Figure 12.16(b) illustrates the structure of a timeline
and how different kinds of issues may be captured, such as questions, comments, and ideas.
To generate one of these representations, take one persona and two or three scenarios.
Draw a timeline for the scenario and identify the interaction points for the user. Then use this
as a discussion tool with colleagues to identify any issues or questions that may arise. Some
people consider the user’s mood and identify pain points; sometimes the focus will be on
technical issues; and at other times the map can be used to identify missing functionality or
areas of under-designed interaction.
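A timeline representation can also be drafted as data before it is drawn, so that questions, comments, ideas, and pain points stay attached to the right step. A minimal sketch follows; the steps and annotations are invented examples, not taken from the book's figures.

```python
# Each step on the timeline carries the kinds of annotation the text
# describes (comments, questions, ideas) plus the persona's mood.
journey = [
    {"step": "Book tickets online", "mood": "neutral",
     "notes": {"question": "Is payment a single click?"}},
    {"step": "Security and passport checks", "mood": "negative",
     "notes": {"comment": "Long queue for most travelers"}},
    {"step": "Board and settle into seat", "mood": "positive",
     "notes": {"idea": "Confirm the return flight from the seat display"}},
]

# Pain points are the steps where the persona's mood dips.
pain_points = [s["step"] for s in journey if s["mood"] == "negative"]
print(pain_points)
```

Walking colleagues through the list plays the same role as walking them through the drawn map: the annotations become discussion points, and the mood column surfaces the pain points to fix first.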
This video illustrates the benefits of experience mapping using a timeline:
http://youtu.be/eLT_Q8sRpyI.
To read about the main elements of a customer journey map, when you need
them, and how to construct them, see this article:
https://www.nngroup.com/articles/customer-journey-mapping/.
BOX 12.3
Involving Users in Design: Participatory Design
Participatory design (PD) emerged in Scandinavia in the late 1960s and early 1970s. There were
two influences on this early work: the desire to be able to communicate information about
complex systems and the labor union movement pushing for workers to have democratic con-
trol over changes in their work. In the 1970s, new laws gave workers the right to have a say in
how their working environment was changed, and such laws are still in force today.
(Continued)
The idea that those who use information technology will play a critical role in its design,
and in particular that they will engage in active and genuine participation with the design itself,
is still central to participatory design (Simonsen and Robertson, 2012). But the approach has
evolved considerably in response to political, social, and technological changes (Bannon et al.,
2018). In addition, many approaches to technology design include participation with users, so
what makes participatory design different?
In a review of research in the PD conference 2002–2012, Kim Halskov et al. (2015)
wanted to understand the different definitions of ‘participation’ as evidenced by those papers.
They identified three approaches.
• Implicit, meaning that the paper wasn’t clear about the details of participation
• Users as full participants in the design process, which goes beyond simple involvement
of users, but extends to understanding the user’s point of view, and regarding what users
know as being important
• Mutual learning between users and designer
One of the key questions for participatory design today is how to handle scale: How
to ensure participation by a community when that community includes hundreds or even
thousands of people? How to ensure participation when the data collected can be from many
different sources and without explicit agreement? How to ensure participation when users
are spread across several countries?
Daniel Gooch et al. (2018) designed an approach to facilitate citizen engagement in a
smart city project. They used an integrated approach of online and offline activities that was
tailored to local contexts and showed how it is possible to engage citizens in a way that
addresses citizens’ current concerns. They also identified four key challenges to utilizing par-
ticipatory design on an urban scale.
• Balancing scale with the personal. In particular, the need to engage face-to-face with
potential participants.
• Who has control of the process? If participants are to have a meaningful say in what
is designed, they need to have some of the power, but sometimes regulations militate
against this.
• Who is participating? In a city, there are many diverse stakeholders, yet it is important to
include all sections of society to avoid bias.
• Integrating citizen-led work with local authorities. Regulations set by local authorities can
be an obstacle to innovation within a city context.
Case Study 12.2 describes an extension to participatory design, called community-based
design, developed for the local situation in South Africa.
CASE STUDY 12.2
Deaf Telephony
This case study by Edwin Blake, William Tucker, Meryl Glaser, and Adinda Freudenthal dis-
cusses their experiences of community-based design in South Africa. The process of community-
based co-design is one that explores various solution configurations in a multidimensional
design space whose axes are the different dimensions of requirements and the various dimen-
sions of designer skills and technological capabilities. The bits of this space that one can “see”
are determined by one’s knowledge of the user needs and one’s own skills. Co-design is a way
of exploring that space in a way that alleviates the myopia of one’s own viewpoint and bias.
As this space is traversed, a trajectory is traced according to one’s skills and learning and
according to the users’ expressed requirements and their learning.
The project team set out to assist South African deaf people to communicate with
each other, with hearing people, and with public services. The team has been working
for many years with a deaf community that has been disadvantaged due both to poverty
and hearing impairment. The story of this wide-ranging design has been one of continual
fertile (and on occasion frustrating) co-design with this community. The team’s long-
term involvement has meant that they have transformed aspects of the community and
that they have themselves been changed in what they view as important and in how they
approach design. Figure 12.17 illustrates one participant’s view of communication, cap-
tured during a community engagement event, and Figure 12.18 shows two participants
discussing design using sign language.
(Continued)
Figure 12.17 One participant’s view of communication
Source: Edwin Blake
Deaf users in this community started out knowing essentially nothing about computers.
Their first language is South African Sign Language (SASL), and this use of SASL is a proud
sign of their identity as a people. Many are also illiterate or semi-literate. There are a large
number of deaf people using SASL; in fact, it has more users than some of the smaller
official languages. Since the advent of democracy in 1994, deaf people have been increasingly
empowered, and SASL has been accepted as a distinct language in its own right.
In this case study on id-book.com, a brief historical overview of the project and the vari-
ous prototypes that formed nodes in a design trajectory are presented. The methodology of
Action Research and its cyclical approach to homing in on an effective implementation is
reviewed. An important aspect of the method is how it facilitates learning by both the research-
ers and the user community so that together they can form an effective design team. Lastly,
such a long-term intimate involvement with a community raises important ethical issues,
which are fundamentally concerns of reciprocity.
Figure 12.18 Participants discussing design in sign language
Source: Helen Sharp
ACTIVITY 12.6
Design thinking has been described as an approach to problem-solving and innovative design
that focuses on understanding what people want and what technology can deliver. It is derived
from professional design practice, and it is often viewed as having five stages that together
evolve a solution: empathize, define, ideate, prototype, and test. A slightly different view of
design thinking, according to IDEO (https://www.ideou.com/pages/design-thinking), empha-
sizes human needs, empathy, and collaboration by looking at the situation through three
lenses: desirability, feasibility, and viability.
Design thinking has become very popular, but some have questioned its benefits and
implications. This activity invites you to decide for yourself.
Click the following links, and do some investigation yourself around the idea of design
thinking. Based on what you find, do you think the turn toward design thinking overall is
beneficial or damaging to interaction design?

Comment

Design thinking is similar to the approaches espoused by user-centered design, and the notion
of design thinking has been embraced by many designers and organizations. Nevertheless, the
way in which it has been popularized has resulted in some heavy criticism too. In her
presentation, Natasha Jen criticizes the simple five-stage process and invites proponents to
share the evidence of its success and its outcomes so that it can be improved.

Jon Kolko (2018) believes that this surge of interest in design thinking “will leave behind
two benefits: validation of the design profession as real, intellectual, and valuable—and a very
large need for designers who can make things.” However, he also points out that it has been
popularized at a simplistic level of detail.

At the end of the day, what this suggests is that design is a creative activity supported by
techniques, tools, and processes, but it cannot be boiled down into a particular process or set
of techniques—design involves “the habit of continually doing things in new ways in order to
make a difference,” as stated by Dan Nessler.

Jon Kolko’s (2018) article:
http://interactions.acm.org/archive/view/may-june-2018/the-divisiveness-of-design-thinking

Natasha Jen’s (2017) presentation:

Dan Nessler’s (2016) article:
https://medium.com/digital-experience-design/how-to-apply-a-design-thinking-hcd-ux-or-any-creative-process-from-scratch-b8786efbf812

12.6 Construction

As prototyping and building alternatives progress, development focuses increasingly on putting
together components and developing the final product. This may take the form of a physical
product, such as a set of alarms, sensors, and lights; a piece of software; or both. Whatever
the final form, it is unlikely that anything will need to be developed from scratch, as there are
many useful (in some cases essential) resources to support development. Here we introduce
two kinds of resources: physical computing kits and software development kits (SDKs).
12.6.1 Physical Computing
Physical computing is concerned with how to build and code prototypes and devices using
electronics. Specifically, it is the activity of “creating physical artifacts and giving them behav-
iors through a combination of building with physical materials, computer programming, and
circuit building” (Gubbels and Froehlich, 2014). Typically, it involves designing things using
a printed circuit board (PCB), sensors (for instance, push buttons, accelerometers, infrared, or
temperature sensors) to detect states, and output devices (such as displays, motors, or buzzers)
that cause some effect. An example is a “friend or foe” cat detector that senses, via an
accelerometer, any cat (or anything else for that matter) that tries to push through a family’s
cat door. The movement triggers an actuator to take a photo of what came through the cat
door using a webcam positioned on the back door. The photo is uploaded to a website that
alerts the owner if the image does not match that of their own cat.
A number of physical computing toolkits have been developed for educational and
prototyping purposes. One of the earliest is Arduino (see Banzi, 2009). The goal was to enable
artists and designers to learn how to make and code physical prototypes using electronics
within a couple of days of attending a workshop. The toolkit is composed of two parts: the
Arduino board (see Figure 12.19), which is the piece of hardware that is used to build objects,
and the Arduino integrated development environment (IDE), which is a piece of software that
makes it easy to program and upload a sketch (Arduino’s name for a unit of code) to the board.
A sketch, for example, might turn on an LED when a sensor detects a change in the light level.
The Arduino board is a small circuit that contains a tiny chip (the microcontroller). It has two
rows of small electrical “sockets” that let the user connect sensors and actuators to its input
and output pins. Sketches are written in the IDE using a simple language derived from
Processing and are then translated into the C programming language and uploaded to the board.
Other toolkits have been developed based on the basic Arduino kit. The
best known is the LilyPad, which was co-developed by Leah Buechley (see Figure 12.20
and her interview at the end of Chapter 7). It is a set of sewable electronic components for
building fashionable clothing and other textiles. The Engduino is a teaching tool based on
the Arduino LilyPad; it has 16 multicolor LEDs and a button, which can be used to provide
Figure 12.19 The Arduino board
Source: Used courtesy of Dr Nicolai Marquardt
visual feedback and simple user input. It also has a thermistor (that senses temperature), a 3D
accelerometer (that measures accelerations), and an infrared transmitter/receiver that can be
used to transmit messages from one Engduino to another.
Other kinds of easy-to-use and quick-to-get-started physical toolkits, intended to pro-
vide new opportunities for people to be inventive and creative, are Senseboard (Richards and
Woodthorpe, 2009), Raspberry Pi (https://www.raspberrypi.org/), .NET Gadgeteer (Villar
et al., 2012), and MaKey MaKey (Silver and Rosenbaum, 2012). The MaKey MaKey toolkit
is composed of a printed circuit board with an Arduino microcontroller, alligator clips, and
a USB cable (see Figure 12.21). It communicates with a computer to send key presses, mouse
clicks, and mouse movements. There are six inputs (the four arrow keys, the space bar, and
a mouse click) positioned on the front of the board onto which alligator clips are clipped
in order to connect with a computer via the USB cable. The other ends of the clips can be
attached to any noninsulating object, such as a vegetable or piece of fruit. Thus, instead of
using the computer keyboard buttons to interact with the computer, external objects such as
bananas are used. The computer thinks MaKey MaKey is just like a keyboard or mouse. An
example is to play a digital piano app using bananas as keys rather than keys on the com-
puter keyboard. When they are touched, they make a connection to the board and MaKey
MaKey sends the computer a keyboard message.
Figure 12.20 The LilyPad Arduino kit
Source: Used courtesy of Leah Buechley
Watch this video introducing Magic Cubes, a novel toolkit that is assembled
from six sides slotted together to become an interactive cube that lights
up in different colors depending on how vigorously it is shaken. It is intended
to encourage children to learn, share, and fire their imaginations by coming up
with new games and other uses. See it in action at
https://uclmagiccube.weebly.com/video.html.
One of the most recent physical computing systems is the BBC micro:bit
(https://microbit.org; see Figure 12.22). Like Arduino, the micro:bit system consists of a physical computing
device that is used in conjunction with an IDE. However, unlike Arduino, the micro:bit device
contains a number of built-in sensors and a small display so that it is possible to create simple
physical computing systems without attaching any components or wires. If desired, external
components can still be added, but rather than the small electrical sockets of the Arduino, the
micro:bit has an “edge connector” for this purpose. This is formed from a row of connection
points that run along one edge of the device and allow it to be “plugged into” a range of acces-
sories including larger displays, Xbox-style game controllers, and small robots. The micro:bit
IDE, which runs in a web browser with no installation or setup process, supports a graphical
programming experience based on visual “blocks” of code alongside text-based editing using
a variant of JavaScript. This means that the micro:bit provides a great experience for young
students and other beginner programmers, while also supporting more sophisticated program-
ming. As a result, micro:bit has been widely adopted in schools around the world.
Figure 12.21 The MaKey MaKey toolkit
Source: Helen Sharp
Figure 12.22 The BBC micro:bit. The front has 25 LED lights and 2 buttons; the back has the
processor, compass, accelerometer, radio and Bluetooth antenna, USB connector, reset button,
battery socket, and edge connector for accessories
Source: https://microbit.org/guide/features. Used courtesy of Micro:bit Foundation
So far, physical toolkits have been aimed at children or designers to enable them to
start programming through rapid creation of small electronic gadgets and digital tools (for
example, Hodges et al., 2013, Sentance et al., 2017). However, Yvonne Rogers et al. (2014)
demonstrated how retired people were equally able to be creative using the kit, turning
“everyday objects into touchpads.” They ran a series of workshops where small groups of
retired friends, aged between their early 60s and late 80s, assembled and played with the
MaKey MaKey toolkit (see Figure 12.23). After playing music using fruit and vegetables as
input, they saw many new possibilities for innovative design. Making and playing together,
however childlike it might seem at first, can be a catalyst for imagining, free thinking, and
exploring. People are sometimes cautious about volunteering their ideas, fearing that they will
be easily squashed, but in a positive environment they can flourish. The right kind of shared
experience can create a positive and relaxed atmosphere in which people from all walks of life
can freely bounce ideas off each other.
Figure 12.23 A group of retired friends playing with a MaKey MaKey toolkit
Source: Helen Sharp
BOX 12.4
The Rise of the Maker Movement
The maker movement emerged in the mid-2000s. Following in the footsteps of the personal
computer revolution and the Internet, some viewed it as the next big transformation that
would modernize manufacturing and production (Hatch, 2014). Whereas the explosion of the
Web was all about what it could do for us virtually, with a proliferation of apps, social media,
and services, the maker movement is transforming how we make, buy, consume, and recycle
physical things, from houses to clothes and food to bicycles. At its core is DIY—crafting physi-
cal things using a diversity of machines, tools, and methods collaboratively in workshops and
makerspaces. In a nutshell, it is about inventing the future through connecting technologies,
the Internet, and physical things.
While there have always been hobbyists tinkering away making radios, clocks, and other
devices, the world of DIY making has been opened up to many more people. Affordable,
powerful, and easy-to-use tools, coupled with a renewed focus on locally sourced products
and community-based activities and a desire for sustainable, authentic, and ethically
produced products, have led to a groundswell of interest in “making.” Fablabs (fabrication laboratories)
first started appearing in cities throughout the world, offering a large physical space contain-
ing electronics and manufacturing equipment, including 3D printers, CNC milling machines,
and laser cutters. Individuals bring their digital files to print and make things such as large
3D models, furniture, and installations—something that would have been impossible for
them to do previously. Then smaller makerspaces started appearing in the thousands across
the world, from Shanghai to rural India, again sharing production facilities for all to use and
make. While some are small, for example sharing the use of a 3D printer, others are much
larger and well resourced, offering an array of manufacturing machines, tools, and work-
spaces to make in.
Another development has been to build and program e-textiles using sewing machines
and electronic thread. E-textiles comprise fabrics that are embedded with electronics, such as
sensors, LEDs, and motors that are stitched together using conductive thread and conductive
fabrics (Buechley and Qiu, 2014). An early example is the turn-signal biking jacket (devel-
oped by Leah Buechley and illustrated in Figure 1.4). Other e-textiles include interactive soft
toys, wallpaper that sings when touched, and fashion clothing that reacts to the environment
or events.
A central part of the maker movement involves tinkering (as discussed in section 12.2.4)
and the sharing of knowledge, skills, know-how, and what you have made. The
Instructables.com website is for anyone to explore, document, and share their DIY creations. Go to
the Instructables site and take a look at a few of the projects that have been uploaded
by makers. How many of them are a combination of electronics, physical materials, and
pure invention? Are they fun, useful, or gadgety? How are they presented? Do they inspire
you to make?
Another site, Etsy.com, is an online marketplace where people who make things can sell their
crafts and other handmade items; it has grown in popularity over the past few years. It is
designed to be easy for makers to use and to set up their store to sell to family, friends, and
strangers across the world. Unlike corporate online sites, such as Amazon or eBay, Etsy is a
place for craft makers to reach out to others and to show off their wares in ways that they feel
best fit what they have made. This transition from “making” to “manufacturing,” albeit on the
limited scale of craft production, is an interesting phenomenon. Some authors believe that
the trend will continue and that increasingly new products and new businesses will emerge
from activities rooted in maker culture (Hodges et al., 2014).
In essence, the maker movement is about taking the DIY movement online to make it
public, and in doing so, massively increase who can take part and how it is shared (Anderson,
2013). In his interview at the end of this chapter, Jon Froehlich explains more about the maker
movement.
12.6.2 SDKs: Software Development Kits
A software development kit (SDK) is a package of programming tools and components
that supports the development of applications for a specific platform, for example, for iOS
on iPhone and iPad and for Android on mobile phone and tablet apps. Typically, an SDK
includes an integrated development environment, documentation, drivers, and sample pro-
gramming code to illustrate how to use the SDK components. Some also include icons and
buttons that can easily be incorporated into the design. While it is possible to develop
applications without using an SDK, such a powerful resource makes development much easier
and allows much more to be achieved.
For example, the availability of Microsoft’s Kinect SDK has made the device’s power-
ful gesture recognition and body motion tracking capabilities accessible. This has led to the
exploration of many applications including elderly care and stroke rehabilitation (Webster
and Celik, 2014), motion tracking in immersive games (Manuel et al., 2012), user identifica-
tion using body lengths (Hayashi et al., 2014), robot control (Wang et al., 2013), and virtual
reality (Liu et al., 2018).
An SDK will include a set of application programming interfaces (APIs) that allows
control of the components without the developer needing to know the intricacies of how
they work. In some cases, access to the API alone is sufficient to allow significant work to be
undertaken, for instance, Eiji Hayashi et al. (2014) only needed access to the APIs. The dif-
ference between APIs and SDKs is explained in Box 12.5.
See the following websites to learn about two different types of SDKs and
their use:
• Building voice-based services with Amazon’s Alexa Skills Kit:
https://developer.amazon.com/alexa-skills-kit.
• Constructing augmented reality experiences with Apple’s ARKit:
https://developer.apple.com/arkit/.
BOX 12.5
APIs and SDKs
SDKs consist of a set of programming tools and components, while an API is the set of inputs
and outputs, that is, the technical interface to those components. To explain this further, an
API allows different-shaped building blocks of a child’s puzzle to be joined together, while
an SDK provides a workshop where all of the development tools are available to create what-
ever size and shape blocks you desire, rather than using preshaped building blocks. An API
therefore allows the use of pre-existing building blocks, while an SDK removes this restriction
and allows new blocks to be created or even to build something without blocks at all. An SDK
for any platform will include all of the relevant APIs, but it adds programming tools,
documentation, and other development support as well.
In-Depth Activity
This in-depth activity builds upon the requirements activities related to the booking facility
introduced at the end of Chapter 11.
1. Based on the information gleaned from the activity in Chapter 11, suggest three different
conceptual models for this system. Consider each of the aspects of a conceptual model
discussed in this chapter: interface metaphor, interaction type, interface type, activities it
will support, functions, relationships between functions, and information requirements. Of
these conceptual models, decide which one seems most appropriate and articulate the
reasons why.
2. Using the scenarios generated for the online booking facility, produce a storyboard for the
task of booking a ticket for one of the conceptual models in step 1. Show it to two or three
potential users and record some informal feedback.
3. Considering the product’s concrete design, sketch out the application’s initial interface.
Consider the design issues introduced in Chapter 7 for the chosen interface type. Write one
or two sentences explaining your choices and consider whether the choice is a usability
consideration or a user experience consideration.
4. Sketch out an experience map for the product. Use the scenarios and personas you gener-
ated previously to explore the user’s experience. In particular, identify any new interaction
issues that had not been considered previously, and suggest what could be done to
address them.
5. How does the product differ from applications that typically might emerge from the maker
movement? Do software development kits have a role? If so, what is that role? If not,
why not?
Summary
This chapter explored the activities of design, prototyping, and construction. Prototyping and
scenarios are used throughout the design process to test ideas for feasibility and user accep-
tance. We have looked at different forms of prototyping, and the activities have encouraged
you to think about and apply prototyping techniques in the design process.
Key points
• Prototyping may be low fidelity (such as paper-based) or high fidelity (such as software-
based).
• High-fidelity prototypes may be vertical or horizontal.
• Low-fidelity prototypes are quick and easy to produce and modify, and they are used in the
early stages of design.
• Ready-made software and hardware components support the creation of prototypes.
• There are two aspects to the design activity: conceptual design and concrete design.
• Conceptual design develops an outline of what people can do with a product and what
concepts are needed to understand how to interact with it, while concrete design specifies the
details of the design such as layout and navigation.
• We have explored three approaches to help you develop an initial conceptual model:
interface metaphors, interaction styles, and interface styles.
• An initial conceptual model may be expanded by considering which functions the product
will perform (and which the user will perform), how those functions are related, and what
information is required to support them.
• Scenarios and prototypes can be used effectively in design to explore ideas.
• Physical computing kits and software development kits facilitate the transition from design
to construction.

Further Reading

BANZI, M. and SHILOH, M. (2014) Getting Started with Arduino (3rd ed.). Maker Media
Inc. This hands-on book provides an illustrated step-by-step guide to learning about Arduino,
with lots of ideas for projects to work on. It outlines what physical computing is in relation to
interaction design and covers the basics of electricity, electronics, and prototyping using the
Arduino hardware and software environment.

GREENBERG, S., CARPENDALE, S., MARQUARDT, N. and BUXTON, B. (2012) Sketching
User Experiences. Morgan Kaufmann. This is a practical introduction to sketching.
It explains why sketching is important, and it provides useful tips to get the reader into the
habit of sketching. It is a companion book to Buxton, B. (2007) Sketching User Experiences.
Morgan Kaufmann, San Francisco.

INTERACTIONS MAGAZINE (2018) Designing AI. ACM. This issue of Interactions
magazine is all about design and different aspects of it, including sketching, human-centered
design for children, collaborative art, design capabilities, and the special topic of
designing for AI.

LAZAR, J., GOLDSTEIN, D., and TAYLOR, A. (2015) Ensuring Digital Accessibility
Through Process and Policy. Waltham, MA: Elsevier/Morgan Kaufmann Publishers. This
book is about accessibility, bringing together knowledge in technology, law, and research.
It includes a range of standards, regulations, methods, and case studies.
INTERVIEW with
Jon Froehlich
Jon Froehlich is an Associate Professor
in the Paul G. Allen School of Computer
Science and Engineering at the University
of Washington (UW) where he directs the
Makeability Lab (http://makeabilitylab.io/), a cross-disciplinary research group
focused on applying computer science and
HCI to high-value social domains such as
environmental sustainability and STE(A)M
education. He has published more than
50 peer-reviewed publications; 11 have
been honored with awards, including Best
Papers at ACM CHI and ASSETS and a
10-Year Impact Award at UbiComp. Jon is
a father of two, and he is increasingly pas-
sionate about CS4All—both as an educa-
tor and a researcher.
Can you tell us a bit about your research,
what you do, and why you do it?
The goal of my research is to develop in-
teractive tools and techniques to address
pressing global challenges in areas such as
accessibility, STE(A)M education, and en-
vironmental sustainability. To succeed at
this work, I collaborate across disciplines
with a focus on identifying long-term,
ambitious research problems such as map-
ping the accessibility of the physical world
via crowdsourcing plus computer vision
that can also provide immediate, practi-
cal utility. Typically, my research involves
inventing or reappropriating methods to
sense physical or behavioral phenomena,
leveraging techniques in computer vision
(CV) and machine learning (ML) to inter-
pret and characterize this data, and then
building and evaluating interactive soft-
ware or hardware tools uniquely enabled
by these approaches. My research process
is iterative, consisting of formative studies,
which then inform the design and imple-
mentation of prototypes, followed by a
series of evaluations, first in the lab and
then eventually deployment studies of
refined prototypes in the field.
What is the maker movement, and why are
you so enthusiastic about it?
The maker movement emerged in the
mid-2000s as an informal collection of
hobbyists, engineers, artists, coders, and
craftspeople dedicated to playful creation,
self-learning, and material design. While
the movement builds on longstanding hob-
byist and do-it-yourself (DIY) culture—for
example, in woodworking and electron-
ics—the movement was galvanized and
accelerated by a series of socio-technical
developments, including new, low-cost
computational fabrication tools like CNC
mills and 3D printers, the emergence of in-
expensive and easy-to-use microcontroller
platforms like Arduino and Raspberry
Pi, online marketplaces like Adafruit and
Sparkfun that made it easy to find and
purchase parts, and social networks like
Instructables, YouTube, and Thingiverse,
which provided a forum for novices and
experts alike to share and critique ideas,
tutorials, and creations.
My enthusiasm for the maker movement
stems both from my intrinsic excitement
as a technologist in observing the creativ-
ity and creations of “makers” as well as
from my perspectives as an educator and
mentor in wondering how we can borrow
from and adapt elements of the movement
into formal education. While the maker
movement is a relatively new phenom-
enon, its historical roots in education and
learning science stretch back to pioneering
educational thinkers like Maria Montessori, Jean Piaget, Seymour
Papert, Lev Vygotsky, and others, all of whom emphasize the
importance of learning through creation
and experimentation, the role of peer men-
torship, and how sharing work and solicit-
ing feedback shapes thinking. For example,
Papert’s Constructionism learning theory
places a critical focus not just on learning
through making but on the social nature
of design—that is, that ideas are shaped
by the knowledge of an audience and the
feedback provided by others.
I have tried to inject this philoso-
phy into my undergrad and graduate
teaching. As one example, students in my
Tangible Interactive Computing course
explore the materiality of interactive
computing via design prompts such as
making a new input device for a com-
puter using lo-fi materials like conduc-
tive clay and fabric, breaking and remak-
ing an existing electronic technology to
reformulate its physical interaction, and
combining computer vision and video
cameras to create whole-body, gestural
input. Students share and critique each
other’s work but also design outwardly
beyond the confines of the classroom by
sharing their results and design processes
publicly (under pseudonyms, if preferred)
via videos on YouTube, step-by-step tuto-
rials on Instructables.com, and on the
course website. Student-written Instruc-
tables in Tangible Interactive Comput-
ing, for example, have won awards and
acquired more than 300,000 views and
1,900 favorites.
What are the advantages and challenges
of working with communities to design
products?
Much of my research involves design-
ing and evaluating technologies for users
who have different abilities, perspec-
tives, and/or experiences from me and
my research group—for example, early
elementary school learners, people who
use wheelchairs, or people with visual
impairments. Thus, a key facet of our re-
search and design process is employing
methods from participatory design (or
“co-design”), an approach to design that
attempts to actively involve and empower
target users throughout the design pro-
cess from ideation to lo-fi prototyping to
1 2 D E S I G N , P R O T O T Y P I N G , A N D C O N S T R U C T I O N468
summative evaluation. For example, in
the MakerWear project (Kazemitabaar
et al., 2017)—a wearable construction kit
for children—we worked with children
to gather design ideas and solicit critical
feedback, to test initial designs, and to
help co-design toolkit behavior and the
overall look and feel. Similarly, we also
involved professional STEM educators
to help us improve our designs and think
about corresponding learning activities.
Finally, we ran a series of pilot studies
followed by workshops in afterschool
programs and a children’s museum to ex-
amine what and how children make with
MakerWear, what challenges arise, and
how their designs differ from creations
made with other toolkits (for example, in
robotics).
This human-centered, participatory
design approach offers many advantages,
including ensuring that we are addressing
real user problems, helping ground our
design decisions through use and feed-
back from target stakeholders, and em-
powering our users to have a real voice
in shaping outcomes (from which our
participants of all ages seem to gain sat-
isfaction). There are trade-offs, however.
Soliciting ideas from target users in an
unstructured and unprincipled manner
may lead to poorly defined outcomes and
suboptimal designs. When working with
children, we often follow Druin’s Coop-
erative Inquiry methodology (Guha et al.,
2013), which provides a set of techniques
and guidelines for co-design with chil-
dren that helps to channel and focus their
creativity and ideas. A second challenge
is in recruiting and supporting co-design
sessions: this is a resource-intensive pro-
cess that requires time and effort from
both stakeholders and the research team.
To mitigate this challenge, we often work
on establishing and maintaining longi-
tudinal relationships with community
groups like local schools and museums.
Finally, not all projects are amenable to
these methods (such as when timelines are
particularly aggressive).
Have you encountered any big surprises in
your work?
The life of a researcher is full of sur-
prises—one must get comfortable with
ambiguity and ending a research jour-
ney at an unpredictable location. My
most significant surprises, however, have
come from people: from my students,
from my mentors, and from my collab-
orators. My research methods and ideas
have been profoundly influenced in un-
expected ways by colleagues like Profes-
sor Tamara Clegg who made me rethink
how we can personalize STEM learning
through opportunities in everyday life
(what she calls “scientizing” life) and
Professor Allison Druin who introduced
me to and immersed me in children-
oriented participatory design methods.
(I could hear the excited shouts and joy-
ful exclamations of Kidsteam from my
office, and I couldn’t resist finding out
more, which fundamentally changed how
I did research in STEM education.) My
students never cease to surprise me, from
3D-printing gears to fix an aerial drone
to developing an interactive sandbox that
traces human movement using electro-
mechanically controlled marbles to de-
signing an e-textile shirt that senses and
visualizes the wearer’s changing physiol-
ogy via integrated anatomical models.
What are your hopes for the future?
As a graduate student, I recall being asked,
“What are the biggest open questions in
HCI, and how does your research work
toward addressing them?” I found this
question both profoundly interesting and
profoundly startling because it forced me
to think about the most significant open
areas in my field and to (somewhat uncom-
fortably) confront the relationship be-
tween this answer and my research. At the
risk of sounding overly ambitious, I would
like to adapt this question, which serves as
a guiding principle for my research but
that I also hope will inspire others: “What
are the most significant societal challenges
across the world? What role can computer
science, HCI, and design play in address-
ing those challenges? And where does your
research/work fit?” As computation per-
vades nearly every aspect of our lives,
I believe it is our role as technologists,
designers, and practitioners to ask these
questions of ourselves and to think about
the political, economic, environmental,
and social implications of our work. As a
professor and educator, I am hopeful. This
larger world-view framing of CS seems to
resonate with younger generations and,
I hope, will soon become the norm.
INTERVIEW WITH JON FROEHLICH
Chapter 13
INTERACTION DESIGN IN PRACTICE
Objectives
The main goals of the chapter are to accomplish the following:
• Describe some of the key trends in practice related to interaction design.
• Enable you to discuss the place of UX design in agile development projects.
• Enable you to identify and critique interaction design patterns.
• Explain how open source and ready-made components can support interaction design.
• Explain how tools can support interaction design activities.
13.1 Introduction
As our interviewee at the end of Chapter 1, Harry Brignull, remarked, the field of inter-
action design changes rapidly. He says, “A good interaction designer has skills that work
like expanding foam.” In other words, the practice of interaction design is quite messy, and
keeping up with new techniques and developments is a constant goal. When placed within
the wider world of commerce and business, interaction designers face a range of pressures,
including restricted time and limited resources, and they need to work with people in a wide
range of other roles, as well as stakeholders. In addition, the principles, techniques, and
approaches introduced in other chapters of this book need to be translated into practice, that
is, into real situations with sets of real users, and this creates its own pressures.
Many different names may be given to a practitioner conducting interaction design activ-
ities, including interface designer, information architect, experience designer, usability engi-
neer, and user experience designer. In this chapter, we refer to user experience designer and
user experience design because these are most commonly found in industry to describe some-
one who performs the range of interaction design tasks such as interface design, user evalua-
tions, information architecture design, visual design, persona development, and prototyping.
13.1 Introduction
13.2 AgileUX
13.3 Design Patterns
13.4 Open Source Resources
13.5 Tools for Interaction Design
Other chapters of this book may have given the impression that designers create their
designs from scratch, with little or no help from anyone except users and immediate col-
leagues, but in practice, user experience (UX) designers draw on a range of support. Four
main areas of support that impact the job of UX designers are described in this chapter.
• Working with software and product development teams operating an agile model of devel-
opment (introduced in Chapter 2, “The Process of Interaction Design”) has led to tech-
nique and process adaptation, resulting in agileUX approaches.
• Reusing existing designs and concepts is valuable and time-saving. Interaction design and
UX design patterns provide the blueprint for successful designs, utilizing previous work
and saving time by avoiding “reinventing the wheel.”
• Reusable components—from screen widgets and source code libraries to full systems, and
from motors and sensors to complete robots—can be modified and integrated to gener-
ate prototypes or full products. Design patterns embody an interaction idea, but reusable
components provide implemented chunks of code or widgets.
• There is a wide range of tools and development environments available to support
designers in developing visual designs, wireframes, interface sketches, interactive proto-
types, and more.
This chapter introduces each of these four areas.
In this video, Kara Pernice suggests three challenges for UX in practice. It is
available at https://www.youtube.com/watch?v=qV5lLjmL278.
Here is a concrete view of what a UX designer does in practice:
https://www.interaction-design.org/literature/article/7-ux-deliverables-what-
will-i-be-making-as-a-ux-designer
BOX 13.1
Technical Debt in UX
Technical debt is a term commonly used in software development, coined originally by Ward
Cunningham in 1992, which refers to making technical compromises that are expedient in the
short term but that create a technical context that increases complexity and cost in the long
term. As with financial debt, technical debt is acceptable as a short-term approach to over-
coming an immediate shortfall, provided that the debt will be repaid quickly. Leaving a debt
for longer results in significant extra costs. Technical debt can be incurred unintentionally, but
pressures associated with time and complexity also lead to design trade-offs that may prove
to be expensive in the longer term.
UX debt is created much like technical debt, in the sense that trade-offs are made for the
needs of the project.
To address technical debt, a discipline of refactoring is needed, that is, correcting any
pragmatic trade-offs quickly after the immediate pressure has receded. Significant difficulties
arise if these trade-offs are not identified, understood, and corrected in a timely manner. Two
interrelated situations can lead to significant user experience debt that is then extremely costly
to correct.
• If an organization did not, in the past, understand the value of good user experience design,
products or software systems with poor user experiences persist. This can be particularly
prevalent for internal systems and products, where the drive for a good user experience
is less acute than for externally marketed products that face more competition from other
providers.
• If an organization has a large portfolio of products, each of which was developed independently.
This can be the result of acquisitions and mergers of companies, each with its own
UX brand, leading to a proliferation of designs.
In severe cases, UX debt can lead to the revamping of infrastructure and complete renewal of
products.
For an interesting take on UX debt, see this article: https://www.nngroup.com/articles/ux-debt.

13.2 AgileUX
Since the rise of agile software development during the 2000s, UX designers have been concerned
about the impact that it will have on their own work (Sharp et al., 2006), and the
debate is ongoing (McInerney, 2017). AgileUX is the collective label given to efforts that aim
to resolve these concerns by integrating techniques and processes from interaction design and
those from agile methods. While agile software development and UX design have some characteristics
in common, such as iteration, a focus on measurable completion criteria, and user
involvement, agileUX requires a reorganization and some rethinking of UX design activities
and products. A recent reflection on the evolution of agileUX concluded that integrating
agile and UX requires mutual team understanding across three dimensions, and those dimensions
are variably understood (Da Silva et al., 2018): the “process and practice” dimension is
understood; the “people and social” dimension is nearly understood; but the “technology and
artifact” dimension—that is, use of technology to coordinate teams’ activities and artifacts to
mediate teams’ communication—has yet to be properly understood. A key aspect is for agile
development teams to understand that user experience design is not a role but a discipline
and mind-set. This account makes it clear that using agileUX in practice is far from straightforward.
The key is to find a suitable balance that preserves both the research and reflection
needed for good UX design, as well as rapid iterations that incorporate user feedback and
allow technical alternatives to be tested.
In a plan-driven (waterfall) software development process, requirements are specified
as completely as possible before any implementation begins. In an agile software develop-
ment process, requirements are specified only in enough detail for implementation to begin.
Requirements are then elaborated as implementation proceeds, according to a set of priori-
ties that change on a regular basis in response to changing business needs.
To integrate UX design into an agile workflow, it also needs to progress in a similar fash-
ion. Reprioritization may happen as frequently as every two weeks, at the beginning of each
iterative cycle. The shift from developing complete requirements up front to “just-in-time”
or just enough requirements aims to reduce wasted effort, but it means that UX designers
(along with their software engineer colleagues) have had to rethink their approach. All of the
techniques and principles that UX designers use are just as relevant, but how much of each
activity needs to be completed at what point in the iterative cycle and how the results of those
activities feed into implementation need to be adjusted in an agile development context. This
can be unsettling for designers, as the design artifacts are their main deliverable and hence
may be viewed as finished, whereas for agile software engineers, they are consumables and
will need to change as implementation progresses and requirements become elaborated.
Consider the group travel organizer example introduced in Chapter 11, and assume that
it is being developed using agileUX. Four epics (large user stories) for the product are identi-
fied in Chapter 11, as follows:
1. As a …
2. As a … that …
3. As a …
4. As a …
At the beginning of the project, these epics will be prioritized, and the central goal of
the product (to identify potential vacations) will be the top priority. This will then initially
be the focus of development activities. To allow users to choose a vacation, epic 4, supporting
the travel agent to update travel details, will also need to be implemented (otherwise travel
details will be out of date), so this is also likely to be prioritized. Establishing the detailed
requirements and the design of the other two areas will be postponed until after a product
that allows users to choose a vacation has been delivered. Indeed, once this product is deliv-
ered, the customer may decide that offering help for vaccinations and visas does not result in
sufficient business value for it to be included at all. In this case, referring users to other, more
authoritative sources of information may be preferable.
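The prioritization described above can be sketched in a few lines of code. This is a minimal illustration, not a tool from the book: the business-value scores, the cycle capacity, and the fourth epic's wording are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Epic:
    """A large user story awaiting elaboration into detailed requirements."""
    title: str
    business_value: int  # agreed with the customer; revisited each cycle

def plan_next_cycle(backlog, capacity=2):
    """Select the highest-value epics to elaborate and design next.

    Lower-value epics stay in the backlog; after delivery, the customer may
    re-score them or drop them entirely (as with the vaccination and visa
    help in the travel organizer example).
    """
    ranked = sorted(backlog, key=lambda e: e.business_value, reverse=True)
    return ranked[:capacity]

backlog = [
    Epic("Identify potential vacations", business_value=10),
    Epic("Keep travel details up to date", business_value=8),
    Epic("Help with vaccinations and visas", business_value=3),
    Epic("Share plans with the group", business_value=5),
]

top = plan_next_cycle(backlog)
for epic in top:
    print(epic.title)
```

Re-running the selection at the start of each iteration, with freshly agreed scores, is what makes the backlog responsive to changing business needs.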
Conducting UX activities within an agile framework requires a flexible point of view
that focuses more on the end product as the deliverable than on the design artifacts as deliv-
erables. It also requires cross-functional teams where specialists from a range of disciplines,
including UX design and engineering, work closely together to evolve an understanding of
both the users and their context, as well as the technical capabilities and practicalities of
the technology. In particular, agileUX requires attention to three practices, each of which is
elaborated in the following sections.
• What user research to conduct, how much, and when
• How to align UX design and agile working practices
• What documentation to produce, how much, and when
13.2.1 User Research
The term user research refers to the data collection and analysis activities necessary to charac-
terize the users, their tasks, and the context of use before product development begins. Field
studies and ethnography are often used in these investigations, but agile development works
on short “timeboxes” of activity (up to four weeks in length, but often only two weeks in
length) and hence does not support long periods of user research. (Different names are given
by different agile methods to the iteration, or timeframe, the most common being sprint,
timebox, and cycle.) Even a month to develop a set of personas or to conduct a detailed
investigation into online purchasing habits (for example) is too long for some agile develop-
ment cycles. User-focused activities evaluating elements of the design, or interviews to clarify
requirements or task context, can be done alongside technical development (see the parallel
tracks approach discussed in a moment), but planning to conduct extensive user research
once iterative development starts will result in shallow user research, which is impossible to
react to, as there just isn’t enough time.
One way to address this is for user research to be conducted before the project begins,
or indeed before it is announced, as suggested by Don Norman (2006), who argues that it
is better to be on the team that decides which project will be done at all, hence avoiding the
constraints caused by limited timeboxes. This period is often called iteration zero (or Cycle
0, as you’ll see later in Figure 13.2), and it is used to achieve a range of up-front activities
including software architecture design as well as user research.
Another approach to conducting user research for each project is to have an ongoing
program of user research that revises and refines a company’s knowledge of their users
over a longer time span. For example, Microsoft actively recruits users of their software
to sign up and take part in user research that is used to inform future developments. In
this case, the specific data gathering and analysis needed for one project would be con-
ducted during iteration zero, but done in the context of a wider understanding of users
and their goals.
Lean UX (see Box 13.2) takes a different approach to user research by focusing on get-
ting products into the market and capturing user feedback on products that are in the mar-
ketplace. It specifically focuses on designing and developing innovative products.
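The build-measure-learn loop at the heart of Lean UX (see Box 13.2) can be captured as a simple experiment record. This is a hypothetical sketch for illustration only; the assumption, evidence threshold, and sign-up figure are invented.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One pass around the Lean UX build-measure-learn loop."""
    outcome: str       # the business result wanted, not the output built
    assumption: str    # belief that must hold for the outcome to happen
    mvp: str           # smallest thing that can be built to test the assumption
    threshold: int     # evidence needed to treat the assumption as supported
    measured: int = 0  # evidence actually collected from real use

    def learn(self):
        """Interpret the evidence gathered by releasing the MVP."""
        if self.measured >= self.threshold:
            return "assumption supported: iterate with a larger MVP"
        return "assumption refuted: revise the assumption or pivot"

newsletter = Experiment(
    outcome="more engagement through email",
    assumption="customers want a monthly newsletter",
    mvp="sign-up form on the website",
    threshold=100,
)
newsletter.measured = 142  # sign-ups observed after release
print(newsletter.learn())
```

The point of the structure is that the MVP exists only to produce the `measured` evidence; whether it ships in the final product is a separate decision.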
ACTIVITY 13.1
Consider the “one-stop car shop” introduced in Activity 11.4. What kind of user research
would be helpful to conduct before iterative development begins? Of these areas, which would
be useful to conduct in an ongoing program?
Comment
Characterizing car drivers and the hybrid driving experience would be appropriate user
research before iterative development begins. Although many people drive, the driving expe-
rience is different depending on the car itself and according to the individual’s capabilities
and experiences. Collecting and analyzing suitable data to inform the product’s development
is likely to take longer than the timebox constraints would allow. Such user research could
develop a set of personas (maybe one set for each class of car) and a deeper understanding of
the hybrid driving experience.
Car performance and handling is constantly evolving, however, and so an understanding
of the driving experience would benefit from ongoing user research.
BOX 13.2
Lean UX (Adapted from Gothelf and Seiden (2016))
Lean UX is designed to create and deploy innovative products quickly. It is linked to agileUX
because agile software development is one of its underlying philosophies and it champions the
importance of providing a good user experience. Lean UX builds upon UX design, design thinking,
agile software development, and the Lean Startup ideas (Ries, 2011). All four perspectives empha-
size iterative development, collaboration between all stakeholders, and cross-functional teams.
Lean UX is based on tight iterations of build-measure-learn, a concept central to the lean
startup idea, which in turn was inspired by the lean manufacturing process from Japan. The
lean UX process is illustrated in Figure 13.1. It emphasizes waste reduction, the importance of
experimentation to learn, and the need to articulate outcomes, assumptions, and hypotheses
about a planned product. Moving the focus from outputs (for example, a new smartphone
app) to outcomes (for example, more commercial activity through mobile channels) clarifies
the aims of the project and provides metrics for defining success. The importance of identi-
fying assumptions was discussed in Chapter 3, “Conceptualizing Interaction.” An example
assumption might be that young people would rather use a smartphone app to access local
event information than any other media. Assumptions can be expressed as hypotheses that can
be put to the test more easily through research or by building a minimum viable product (MVP)
that can be released to the user group.
Testing hypotheses, and hence assumptions, is done through experimentation, but before
undertaking an experiment, the evidence required to confirm or refute each assumption needs to
be characterized. An MVP is the smallest product that can be built that allows assumptions
to be tested by giving it to a user group and seeing what happens. Experimentation and the
evidence collected are therefore based on actual use of the product, and this allows the team
to learn something.
As an example, Gothelf and Seiden (2016, pp. 76-77) describe a company that wanted
to launch a monthly newsletter. Their assumption was that a monthly newsletter would be
attractive to their customers. To test this assumption, they spent half a day designing and
coding a sign-up form on their website and collected evidence in the form of the number of
sign-ups received. This form was an MVP that allowed them to collect evidence to support
or refute their assumption, that is, that a monthly newsletter would be attractive to their
customers. Having collected enough data, they planned to continue their experiments with
further MVPs that experimented with formats and content for the newsletter.
In this video, Laura Klein explains Lean UX, at http://youtu.be/7NkMm5WefBA.
The Lean UX cycle moves from articulating outcomes, assumptions, and hypotheses, to
designing, to creating an MVP, to research and learning, and then back around.
Figure 13.1 The Lean UX process
Source: Gothelf and Seiden (2016). Used courtesy of O’Reilly Media

13.2.2 Aligning Work Practices
If requirements are specified before implementation begins, there is a tendency for designers
to develop complete UX designs at the beginning of a project to ensure a coherent design
throughout. In agile terms, this is referred to as big design up front (BDUF), and this is
anathema to agile working. Agile development emphasizes regular delivery of working software
through evolutionary development and the elaboration of requirements as implementation
proceeds. In this context, BDUF leads to practical problems since the reprioritization
of requirements means that interaction elements (features, workflows, and options) may no
longer be needed or may require redesigning. To avoid unnecessary work on detailed design,
UX design activities need to be conducted alongside and around agile iterations. The chal-
lenge is how to organize this so that a good user experience is achieved and the product
vision is maintained (Kollman et al., 2009).
In response to this challenge, Miller (2006) and Sy (2007) proposed that UX design work be
done one iteration ahead of development work in parallel tracks (see Figure 13.2). The parallel
tracks approach to integrating UX design and agile processes originated at Alias—now part of
Autodesk. Note that in this diagram, iteration is referred to as Cycle. The principle of parallel
tracks development is quite simple: that design activity and user data collection for Cycle n+1 is
performed during Cycle n. This enables the design work to be completed just ahead of develop-
ment work, yet to be tightly coupled to it as the product evolves. Completing it much sooner
than this can result in wasted effort, as the product and understanding about its use evolves.
Cycle 0 and Cycle 1 are different from subsequent cycles because, before evolutionary
development can begin, the product vision needs to be created. This is handled in different
ways in different agile methods, but all agree that there needs to be some kind of work up
front to understand the product, its scope, and its overall design (both technical and UX).
Some general data about customers and their behavior may have been collected before Cycle 0,
but the vision and overall design is completed for the current project by the end of Cycle
0. The work required will depend on the nature of the product: whether it is a new version
of an existing product, a new product, or a completely new experience. Cycle 0 can also be
longer than other cycles to accommodate differing needs, but producing pixel-perfect designs
of the product before evolutionary development starts is not the aim for Cycle 0.
In the figure, the Interaction Designer Track uses Cycle 0 to plan and gather customer data;
in Cycle 1 the designers design for Cycle 2, gather customer data for Cycle 3, and test Cycle 1
code, with the same pattern shifted forward in Cycles 2 and 3. In the Developer Track, Cycle 1
implements high-dev-cost, low-UI-cost features, and later cycles implement the designs.
Figure 13.2 Cycle 0 and its relationship to later cycles
Source: Sy (2007)
One of the originators of the parallel tracks development idea, Desiree Sy (2007), explained
this in the context of two different products. The first product is SketchBook Pro v2.0, a
sophisticated sketching, annotating, and presentation tool to support digital artists. The sec-
ond is Autodesk’s Showcase which, though no longer available, was a real-time automotive
3D visualization product. For SketchBook Pro v2.0, the team conducted a survey of users who
had downloaded v1.0 (a free trial version) but had not purchased v2.0. The results of the sur-
vey helped the team to refine 100 features into five major work streams, and this information
informed development and prioritization throughout the development process. For Showcase,
during Cycle 0, the team interviewed potential purchasers who performed work that the tool
was going to be designed to support. This data formed the foundation for the design principles
of the product as well as prioritization and design decisions as development progressed.
Cycle 1 usually involves technical setup activities in the developer track, which allows
the UX designers to get started on the design and user activities for Cycle 2. For subsequent
cycles, the team gets into a rhythm of design and user activities in Cycle n–1 and correspond-
ing technical activity in Cycle n.
For example, imagine that development of a smartphone app to support attendance at
a music festival is in Cycle n, and that Cycle n is scheduled to work on capturing reviews of
the acts performing at the festival. During Cycle n–1, UX designers will have produced initial
designs for capturing reviews of the acts by designing detailed icons, buttons, or other graph-
ics, and prototyping different interaction types. During Cycle n, they will answer specific
queries about these concrete designs, and they will revise them if necessary based on imple-
mentation feedback. Cycle n design work will be to develop concrete designs for the next
cycle, which might be focusing on identifying and displaying reviews on demand. Also during
Cycle n, UX designers will evaluate the implementation coming out of Cycle n–1. So, in any
one cycle, UX designers are handling three different types of activity: evaluating implementa-
tions from the previous cycle, producing concrete designs for the next cycle, and answering
queries on the designs being implemented in the current cycle.
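The three concurrent activity types can be sketched as a small schedule generator. This is a hypothetical illustration of the parallel-tracks pattern, not tooling described in the book.

```python
def designer_activities(cycle):
    """UX designer activities during a given cycle under parallel-tracks agileUX.

    In any cycle n, designers evaluate what was built in cycle n-1,
    support what is being built in cycle n, and design ahead for cycle n+1.
    """
    return [
        f"evaluate the implementation delivered by cycle {cycle - 1}",
        f"answer queries on designs being implemented in cycle {cycle}",
        f"produce concrete designs and gather user data for cycle {cycle + 1}",
    ]

for activity in designer_activities(3):
    print(activity)
```

Note the one-cycle offset: design work done too far ahead of implementation risks being wasted, while design done in the same cycle as implementation leaves no room to react.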
The team at Alias found that the UX designers worked closely with the developers during
design and implementation to make sure that they designed something that could be
implemented, and also that what was implemented was what had actually been designed.
The interaction designers felt that there were three big advantages to this process. First, no
design time was wasted on features that would not be implemented. Second, usability testing
(for one set of features) and contextual inquiry (for the next set) could be done on the same
customer visit, thus saving time. Third, the interaction designers received timely feedback
from all sides—both users and developers. More importantly, they had time to react to that
feedback because of the agile way of working. For example, the schedule could be changed
if something was going to take longer to develop than first thought, or a feature could be
dropped if it became apparent from the users that something else had higher priority. In sum-
mary, “Agile user-centered design resulted in better-designed software than waterfall user-
centered design” (Sy, 2007, p. 130).
These advantages have been realized by others too, and this parallel tracks way of work-
ing has become a popular way to implement agileUX. Sometimes, the UX designers work two
iterations ahead, depending on the work to be done, the length of the iteration, and external
factors such as time required to obtain appropriate user input. Working in this way does not
diminish the need for UX designers and other team members to collaborate closely together,
and although the tracks are parallel, they should not be seen as separate processes. This does,
however, raise a dilemma, as discussed in the Dilemma box.
DILEMMA
To Co-locate or Not to Co-locate, That Is the Question
UX designers in most large organizations are not numerous enough to have one UX designer
for every team, so where should the UX designer be located? Agile development emphasizes
regular communication and the importance of being informed about the project as it evolves.
Hence, it would be good for the UX designer to be located with the rest of the team. But which
team? Maybe a different agile team every day? Or each team for one iteration? Some organi-
zations, however, believe that it is better for UX designers to sit together in order to provide
discipline coherence: “UX designers work best when they are separated from the issues of
software construction because these issues hamper creativity” (Ferreira et al., 2011). Indeed,
this view is shared by some UX designers. If you, as a UX designer, were part of several agile
teams, needing to engage with each of them, where would you prefer to be located? What
might be the advantages and disadvantages of each? Or would using a social awareness tool,
such as those introduced in Chapter 5, “Social Interaction,” be more appropriate?
This video describes some case studies on the UX techniques used by Android
within agile iterations: http://youtu.be/6MOeVNbh9cY.
ACTIVITY 13.2
Compare Lean UX, agileUX, and evolutionary prototyping (introduced in Chapter 12, “Design,
Prototyping, and Construction”). In what ways are they similar and how do they differ?
Comment
Lean UX produces an MVP to test assumptions by releasing it to users as a finished product
and collecting evidence of users’ reactions. This evidence is then used to evolve subsequent
(larger) products based on the results of this experimentation. In this sense, Lean UX is a form
of evolutionary development, and it has similarities with evolutionary prototyping. However,
not all the MVPs developed to test assumptions may be incorporated into the final product,
just the results of the experiment.
AgileUX is an umbrella term for all efforts that focus on integrating UX design with agile
development. Agile software development is an evolutionary approach to development, and
hence agileUX is also evolutionary. Additionally, agileUX projects can employ prototyping to
answer questions and test ideas as with any other approach, as described in Chapter 12.
13.2.3 Documentation
The most common way for UX designers to capture and communicate their design has
been through documentation, for instance, user research results and resulting personas,
detailed interface sketches, and wireframes. Because UX designers view the design as
their main deliverable, a key indicator that their work is ready for sign-off is the avail-
ability of comprehensive documentation to show that their goals have been achieved.
This may include other forms of design capture, such as prototypes and simulations, but
documentation is still common. Agile development encourages only minimal documen-
tation so that more time can be spent on design, thus producing value to the user via a
working product.
Minimal documentation does not mean “no documentation,” and some documentation is
desirable in most projects. However, a key principle in agileUX is that documentation should
not replace communication and collaboration. To help identify the right level of documenta-
tion, Lindsay Ratcliffe and Marc McNeill (2012, p. 29) suggest asking a set of questions of
any documentation process.
1. How much time do you spend on documentation? Try to decrease the amount of time
spent on documentation and increase design time.
2. Who uses the documentation?
3. What is the minimum that customers need from the documentation?
4. How efficient is your sign-off process? How much time is spent waiting for documenta-
tion to be approved? What impact does this have on the project?
5. What evidence is there of document duplication? Are different parts of the business docu-
menting the same things?
6. If documentation is only for the purpose of communication or development, how polished
does it need to be?
They also use the example in Figure 13.3 to illustrate these points. Both images capture
a user journey, that is, one path a user might take through the product. The sketch in Fig-
ure 13.3(a) is constructed with sticky notes and string, and it was generated by all of the
team members during a discussion. The sketch in Figure 13.3(b) took hours of designer time
to draw and polish. It looks good, but that time could have been used to design the product
rather than the user journey sketch.
The question of how much documentation is needed in an agile project is not limited to
agileUX. Scott Ambler (Ambler, 2002) provides a detailed description of best practices for
agile documentation. These support the production of “good enough” documentation in an
efficient way, and they are intended to determine what documentation is needed. He pro-
poses questions such as these:
• What is the purpose of the documentation?
• Who is the customer of the documentation?
• When should documents be updated?
The low-fidelity sketch (a) maps a business customer’s cross-country train journey through
booking, departure, onboard, and arrival stages, marking good and not-good experiences at
each stage (for example, easy-to-find prices and print-at-home tickets when booking, but
queues to check tickets and no help finding the coach or seat at departure, and poor-quality
food onboard). The high-fidelity version (b) presents the same TrainCo customer journey as
a polished diagram.
Figure 13.3 (a) A low-fidelity user journey, (b) a high-fidelity user journey
Source: Ratcliffe and McNeill (2012)
DILEMMA
Quick, Quick, Slow?
One of the challenges for UX practice is how best to integrate with software and product
development conducted using an agile approach. Taking an agile approach is seen as beneficial
for a range of reasons, including an emphasis on producing something of use, customer (and
user) collaboration, rapid feedback, and minimal documentation—only areas of the product
that are definitely going to be implemented are designed in detail. However, focusing on short
timeboxes can lead to an impression that everything is being rushed. Creating an appropriate
balance between short timeboxes and a reflective design process requires careful planning so
that important aspects of UX design are not rushed.
Slow design is part of the slow movement, which advocates a cultural shift toward slow-
ing down the pace of life (Grosse-Hering et al., 2013). The main intent of slow design is to
focus on promoting well-being for individuals, society, and the natural environment by pro-
moting the design of products that are long-lived and sustainable. Working more slowly does
not, per se, address the impression of rushing, but slow design also emphasizes the importance
of providing time to reflect and think, for the user to engage and create their own products,
and for products and their use to evolve over time.
The agile movement is here to stay, but the importance of taking time to reflect and think,
when necessary, and not rushing to make decisions remains. The dilemma here is finding the
right balance between rapid feedback to identify solutions that work and providing the time
to stop and reflect.
CASE STUDY 13.1
Integrating UX Design into a Dynamic Systems Development
Method Project
Challenges, Working Practices, and Lessons Learned
This case study presents a portion of one organization’s journey to integrate UX design
into one agile software development approach: the Dynamic Systems Development Method
(DSDM) framework (see https://www.agilebusiness.org/what-is-dsdm for more details). It
describes the difficulties they faced, the working practices adopted, and the lessons learned
from their experiences of integrating UX designers into their DSDM agile process.
LShift is a high-tech software development company that works across a broad range of
industries, languages, and platforms. The company faced four main challenges while integrat-
ing UX design into the DSDM framework.
• Communication between developers and UX designers: What is the relevant information
that needs to be communicated, how best to communicate it, how to keep communication
channels open, and how to keep the emerging design implementation visible for feedback.
Difficulties in these areas can cause frustration, problems with the technical feasibility of
design solutions, and mistaken expectations by the client.
• Level of precision in up-front design: Developers suggested five main reasons why "less is more" when it comes to design documentation ready for the start of developer involvement.
  • Prioritization and de-scoping can lead to a waste of pixel-perfect designs.
  • Some design issues will be found only once you start implementing.
  • Pixel-perfect designs may increase resistance to making design changes.
  • It is better to focus on functionality first and design as you go along.
  • The quality of designs can benefit from early input by developers.
• Design documentation: The amount and detail of documentation need to be discussed early on so that they meet both developers' and designers' requirements.
• User testing: User testing can be a challenge in a product development setting if the business does not yet have any customers. This can be addressed at least partially using personas and user representatives.
This case study describes the background to these challenges, provides more detail about them, and introduces some practices that the company used to address them. The case study is available in full at http://tinyurl.com/neehnbk.

13.3 Design Patterns
Design patterns capture design experience, but they have a different structure and a different philosophy from other forms of guidance or specific methods. One of the intentions of the patterns community is to create a vocabulary, based on the names of the patterns, that designers can use to communicate with one another and with users. Another is to produce literature in the field that documents experience in a compelling form.
The idea of patterns was first proposed by the architect Christopher Alexander, who described patterns in architecture (Alexander, 1979). His hope was to capture the "quality without a name" that is recognizable in something when you know it is good.
But what is a design pattern? One simple definition is that it is a solution to a problem in a context; that is, a pattern describes a problem, a solution, and where this solution has been found to work. Users of the pattern can therefore not only see the problem and solution but can also understand the circumstances under which the idea has worked before and access a rationale for why it worked. A key characteristic of design patterns is that they are generative; that is, they can be instantiated or implemented in many different ways. The application of patterns to interaction design has grown steadily since the late 1990s (for instance, Borchers, 2001; Tidwell, 2006; Crumlish and Malone, 2009), and patterns have continued to be actively developed (for example, Josh et al., 2017).
Patterns on their own are interesting, but they are not as powerful as a pattern language. A pattern language is a network of patterns that reference one another and work together to create a complete structure. Pattern languages are not common in interaction design, but there are several pattern collections, that is, sets of patterns that are independent of each other.
Patterns are attractive to designers because they are tried and tested solutions to common problems. It is common (although not obligatory) for pattern collections to be associated with software components that can be used with little modification, and as they are common solutions, many users are already familiar with them, which is a great advantage for a new app or product on the market. See Box 13.3 for an example pattern: Swiss Army Knife Navigation.
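The "solution to a problem in a context" structure can be captured as a simple record. The following sketch is purely illustrative (the class and field names are ours, not taken from any published pattern library); it also shows how a collection of independent patterns might be indexed by name, so that the names can serve as a shared design vocabulary:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DesignPattern:
    """A pattern: a solution to a problem in a context."""
    name: str
    problem: str   # the recurring difficulty the pattern addresses
    context: str   # where the solution has been found to work
    solution: str  # the generative idea, instantiated differently each time

swiss_army = DesignPattern(
    name="Swiss Army Knife Navigation",
    problem="Navigation controls distract from the main content",
    context="Small screens, such as mobile games",
    solution="Let controls fade or slide in and out of view on demand",
)

# A pattern collection: independent patterns, looked up by name.
collection = {p.name: p for p in [swiss_army]}
```

A pattern language would go further than this flat dictionary by recording how patterns reference one another.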
BOX 13.3
Swiss Army Knife Navigation: An Example Design Pattern for Mobile
Devices (Nudelman, 2013; used courtesy of John Wiley & Sons, Inc.)
The principle behind the Swiss Army Knife Navigation design pattern is to maximize produc-
tive use of the screen space and keep users engaged in the content of what they are doing. For
example, in a game design, the user does not want to be side-tracked by navigation bars and
menu pop-ups. Having a mechanism that allows the controls to fade in and out of view is a
much more engaging design.
This design pattern is commonly instantiated as “off-canvas” or “side drawer” navigation
(Neil, 2014), where the control bar slides in, overlaying the main screen contents. It is useful because
it is a “transient” navigation bar and takes up screen space only temporarily; that is, it can be swiped
in over the top of the main app screen and then swiped back once the user has finished the action. It
is good from an interaction design point of view because it supports the use of both text and icons
to represent actions. It is also good from the point of view of screen layout because it takes up space
only when the menu is needed. It can also be used for interactions other than navigation (Peatt,
2014). This is an example of a common design pattern that is also evolving in a range of different
directions. Exactly how this navigation bar is implemented varies with different platforms.
In Figure 13.4, in the left image the menu is represented as a list of lines at the top left of the screen, while in the right image the menu items have pushed the user view to the right.
Figure 13.4 Example of Swiss Army Knife Navigation pattern, instantiated as off-canvas navigation
Source: Used courtesy of Aidan Zealley
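The "transient" behavior that makes this pattern attractive, taking up screen space only while the drawer is shown, can be sketched as a minimal state model. The class name and the 280-pixel default below are illustrative assumptions, not part of Nudelman's pattern:

```python
class OffCanvasDrawer:
    """Transient navigation bar: occupies screen space only while open."""

    def __init__(self, width_px: int = 280):
        self.width_px = width_px
        self.is_open = False

    def swipe_in(self) -> None:
        # The drawer slides in, overlaying the main screen contents.
        self.is_open = True

    def swipe_out(self) -> None:
        # Swiped back once the user has finished the action.
        self.is_open = False

    def occupied_width(self) -> int:
        # Screen space is used only when the menu is needed.
        return self.width_px if self.is_open else 0

drawer = OffCanvasDrawer()
assert drawer.occupied_width() == 0    # closed: all space for content
drawer.swipe_in()
assert drawer.occupied_width() == 280  # open: menu temporarily takes space
drawer.swipe_out()
assert drawer.occupied_width() == 0
```

A real implementation would animate the slide and vary with the platform, but the essential state is just this open/closed toggle.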
Pattern collections, libraries, and galleries relevant to interaction design are commonly used in practice (for instance, Nudelman, 2013) and are often accompanied by code snippets available through open source repositories such as GitHub (https://github.com/) or through platform websites such as https://developer.apple.com/library/iOS/documentation/userexperience/conceptual/mobilehig/ for iOS on an iPhone.
Patterns are a “work in progress,” because they continue to evolve as more people use
them, experience increases, and users’ preferences change. Patterns can continue to evolve for
some time, but they can also be deprecated, that is, become outdated and no longer consid-
ered good interaction design. Reusing ideas that have proved to be successful in the past is a
good strategy in general, particularly as a starting point, but it should not be used blindly. In
custom applications, the design team may also create their own libraries. As with many areas
of design, there is disagreement about which patterns are current and which are outdated.
Examples of interaction design guidelines and pattern libraries plus downloadable
collections of screen elements are available at:
Windows: https://developer.microsoft.com/en-us/windows/apps/design
Mac: https://developer.apple.com/design/human-interface-guidelines/
General UI Design Patterns: https://www.interaction-design.org/literature/article/10-great-sites-for-ui-design-patterns
Google Material Design: https://design.google/
For a humorous discussion of the hamburger icon, see:
https://icons8.com/articles/most-hated-ui-ux-design-pattern/
For a discussion of the tab bar, see:
https://uxplanet.org/tab-bars-are-the-new-hamburger-menus-9138891e98f4
ACTIVITY 13.3
One design pattern for mobile devices that is deprecated by some and not others is the Car-
ousel navigation pattern, in which the user is presented with several images (of products, for
example) horizontally across the screen, or one at a time in the same screen location. Swiping
(or clicking) left or right displays other images, just like a carousel.
This design pattern has provoked different reactions by different designers. Search for
information on this design pattern using your favorite browser and read at least two articles
or blog posts about it: one arguing that it should be deprecated and one that explains how it
can be used successfully. Decide for yourself whether the Carousel pattern should be labeled
outdated or kept alive.
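The single-location form of the pattern, several images occupying the same screen position one at a time, amounts to a small amount of state: a current index that wraps around when the user swipes past either end. A sketch for illustration only, not taken from any particular implementation:

```python
class Carousel:
    """Several images shown one at a time in the same screen location."""

    def __init__(self, images: list[str]):
        self.images = list(images)
        self.index = 0  # the first image, often the only one users see

    def current(self) -> str:
        return self.images[self.index]

    def swipe_left(self) -> None:
        # Advance to the next image, wrapping around like a real carousel.
        self.index = (self.index + 1) % len(self.images)

    def swipe_right(self) -> None:
        self.index = (self.index - 1) % len(self.images)

photos = Carousel(["kitchen.jpg", "garden.jpg", "lounge.jpg"])
photos.swipe_right()
assert photos.current() == "lounge.jpg"  # wraps from first to last
photos.swipe_left()
photos.swipe_left()
assert photos.current() == "garden.jpg"
```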
Figure 13.5 Two example carousel navigation styles: (a) shows pictures of a house for sale; note the arrows to the left and right of the row of photos at the bottom. (b) shows a weather application for a mobile phone that can be swiped left and right for other locations; note the line of dashes in the bottom middle of the screen that indicates there are other screens.
Source: Helen Sharp
Comment
The Nielsen Norman Group has two articles about the carousel on its website:
www.nngroup.com/articles/designing-effective-carousels/
www.nngroup.com/articles/auto-forwarding/
One presents evidence from a usability trial with one user that shows carousels can fail, and the other presents a balanced view of how to design a good carousel. This second article focuses on the version of the carousel where several images are displayed at the same location on the screen, one at a time. It identifies the greatest advantages as the good use of screen space (because several elements occupy the same space) and the fact that having information at the top of the screen means that visitors are more likely to see it. Disadvantages include that users often navigate past the carousel and that, even if users do see the images, it is usually only the first one. The article does suggest using an alternative design, and it goes on to provide some useful examples and guidelines for good carousels.
There is a thread of posts and articles arguing that the carousel should not be used. These also point to evidence that users rarely use the carousel, and if they do, they focus only on the first image. Nevertheless, there seems to be no solid set of data to support or refute the usability of the carousel in all of its various forms.
On balance, it seems that some forms of carousel meet the product's goals more readily than others, for example, because only the first image in a series is viewed by most users. Assuming appropriate design and, maybe more importantly, an evaluation with potential users and your content, it seems plausible that the carousel navigation pattern is not yet ready to be deprecated.

Design patterns are a distillation of previous common practice, but one of the problems with common practice is that it is not necessarily good practice. Design approaches that represent poor practice are referred to as anti-patterns. The quality of interaction design and user experience in general has improved immensely since the first edition of this book in 2002, so why are anti-patterns still a problem? Basically, the technology keeps changing, and design solutions that work on one platform don't necessarily work on another. A common source of anti-patterns for mobile devices is websites or other software that have been migrated from a large screen, such as a laptop, to a smartphone. One example of this is the untappable phone number that displays in a smartphone pop-up (see Figure 13.6).
Another kind of pattern, introduced in Chapter 1 (see Figure 1.10), is the dark pattern. Dark patterns are not necessarily poor design, but they have been designed carefully to trick people, for instance, by championing stakeholder value over user value. Some apparent dark patterns are just mistakes, in which case they will be corrected relatively quickly once identified. However, when a UX designer's knowledge of human behavior is deliberately used to implement deceptive functionality that is not in the user's best interests, that is a dark pattern. Colin Gray et al. (2018) collated and analyzed a set of 118 dark pattern examples identified by practitioners and identified five strategies: nagging, obstruction, sneaking, interface interference, and forced action.
13.4 Open Source Resources
Open source software refers to source code for components, frameworks, or whole systems
that is available for reuse or modification free of charge. Open source development is a
community-driven endeavor in which individuals produce, maintain, and enhance code,
which is then given back to the community through an open source repository for further
development and use. The community of open source committers (that is, those who write
and maintain this software) are mostly software developers who give their time for free.
The components are available for (re)use under software licenses that allow anyone to
use and modify the software for their own requirements without the standard copyright
restrictions.
Many large pieces of software underlying our global digital infrastructure are pow-
ered by open source projects. For example, the operating system Linux, the develop-
ment environment Eclipse, and the NetBeans development tools are all examples of open
source software.
Perhaps more interesting for interaction designers is that there is a growing proportion
of open source software available for designing good user experiences. The design pattern
implementation libraries introduced in section 13.3 are but one example of how open source
software is affecting user experience design. Another example is the Bootstrap framework for
front-end web development, released as open source in August 2011 and actively updated on
a regular basis; see Figure 13.7 for an example of its use. This framework contains reusable
code snippets, a screen layout grid that supports multiple screen sizes, and pattern libraries
that include predefined sets of navigational patterns, typefaces, buttons, tabs, and so on. The
framework and documentation are available through the GitHub open source repository
(https://github.com/twbs/bootstrap#community).
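The layout grid at the heart of such frameworks divides a row into 12 columns and lets a design specify, per screen-size breakpoint, how many columns an element spans. The following is a simplified sketch of that idea, not Bootstrap's actual implementation (the breakpoint widths only approximate its defaults):

```python
# Minimum viewport widths in pixels, largest first (values approximate
# Bootstrap's sm/md/lg breakpoints; illustrative only).
BREAKPOINTS = [("lg", 992), ("md", 768), ("sm", 576)]

def resolve_span(spans: dict[str, int], viewport_px: int) -> int:
    """Return how many of the 12 grid columns an element spans.

    As in Bootstrap, a span set at one breakpoint applies at that width
    and above until a larger breakpoint overrides it; with no match,
    the element stacks to the full 12-column width.
    """
    for name, min_width in BREAKPOINTS:
        if viewport_px >= min_width and name in spans:
            return spans[name]
    return 12

# A sidebar that is a third of the row on tablets and up, full width
# on phones (in real Bootstrap this is expressed as a class, "col-md-4"):
sidebar = {"md": 4}
assert resolve_span(sidebar, 1200) == 4   # desktop
assert resolve_span(sidebar, 800) == 4    # tablet
assert resolve_span(sidebar, 400) == 12   # phone: stacks full width
```

The declarative class-based version of this in Bootstrap is what makes the same markup adapt to multiple screen sizes.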
Open source resources require a suitable repository, that is, somewhere for the source
code to be stored and made accessible to others. More than this, the repository needs to serve
a huge number of users (GitHub was reported to have 31 million users in 2018) who will
want to build, review, modify, and extend software products. Managing this level of activity also requires version control, that is, a mechanism that retains and can reinstate previous versions of the software. For example, GitHub is based on the version control system
called Git. Communities form around these repositories, and submitting code to a repository
requires an account. For example, each developer on GitHub can set up a profile that will
keep track of their activity for others to see and comment upon.
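The essential service a version control system provides, retaining every committed version and reinstating any of them on demand, can be sketched in a few lines. This is a toy model for illustration; real systems such as Git store snapshots and deltas far more efficiently:

```python
class VersionedFile:
    """Toy version control: keeps every committed version of one file."""

    def __init__(self) -> None:
        self.history: list[str] = []  # every version ever committed
        self.working = ""             # current working copy

    def commit(self, content: str) -> int:
        """Record a new version and return its version number."""
        self.history.append(content)
        self.working = content
        return len(self.history) - 1

    def reinstate(self, version: int) -> None:
        """Roll the working copy back to a previously committed version."""
        self.working = self.history[version]

f = VersionedFile()
first = f.commit("def greet(): return 'hi'")
f.commit("def greet(): return 'hello'")
f.reinstate(first)  # the earlier version was retained, so it can come back
assert f.working == "def greet(): return 'hi'"
```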
Figure 13.6 An untappable phone number for help when smartphone installation goes wrong
Most repositories support both public and private spaces. Submitting code to a pub-
lic space means that anyone in the community can see and download the code, but in a
private space the source will be “closed.” One of the advantages of putting code on an open
source repository is that many eyes can see, use, and modify your work—spotting security
vulnerabilities or inefficient coding practices as well as contributing to, extending, or improv-
ing its functionality. Other popular open source repositories are BitBucket, Team Foundation
Server, and GitLab.
The GitHub repository itself may look a little daunting for those who first come across
it, but there is a community of developers behind it who are happy to help and support
newcomers.
Figure 13.7 An example website built using the Bootstrap framework
Source: Didier Garcia/Larson Associates.
Some advantages of using GitHub over other source repositories are discussed here:
https://www.thebalancecareers.com/what-is-github-and-why-should-i-use-it-2071946
An introduction to using GitHub is available here:
https://product.hubspot.com/blog/git-and-github-tutorial-for-beginners
13.5 Tools for Interaction Design
Many types of digital tools are used in practice by UX designers, and the tooling landscape
changes all the time (Putnam et al., 2018). These tools support creative thinking, design sketch-
ing, simulation, video capture, automatic consistency checking, brainstorming, library search,
and mind mapping. In fact, any aspect of the design process will have at least one associated
support tool. For example, Microsoft Visio and OmniGraffle support the creation of a wide
range of drawings and screen layouts, while FreeMind is an open source, mind-mapping tool.
In and of themselves, these tools provide significant support for UX design, but they can also
work together to speed up the process of creating prototypes of various levels of fidelity.
Elsewhere in this book, we have emphasized the value of low-fidelity prototyping and its
use in getting user feedback. As with any prototype, however, paper-based prototypes have their
limitations, and they do not support user-driven interaction (see, for example, Lim et al., 2006). In
recognition of this, developing interactive, low-fidelity prototypes has been investigated through
research for many years (see Lin et al., 2000, or Segura et al., 2012). In recent years, tools to sup-
port the creation of interactive prototypes from static graphical elements have become available
commercially. For example, overflow.io supports the production of playable user flow diagrams.
Commercial packages that support the quick and easy development of interactive wire-
frames, or mock-ups, are widely used in practice for demonstration and evaluation. Some
commonly used tools are Balsamiq® (https://balsamiq.com/), Axure RP (https://www.axure.com/), and Sketch (https://sketchapp.com/). Activity 13.4 invites you to try one or more of the tools available to create a simple prototype.
Having created an interactive wireframe using one of these tools, it is then possible to
generate a higher-fidelity prototype by implementing the next prototype using a ready-made
pattern library or framework, like those introduced in section 13.3 and section 13.4, to pro-
vide a coherent look and feel. This means going from a low-fidelity mockup to a working,
styled prototype in one step. Other open source resources can also be used to provide a wider
choice of interface elements or design components with which to create the product.
Paper-based prototypes are also not very good if technical performance issues such as component interfaces need to be prototyped—software-based prototypes are better. For example, Gosper et al. (2011) describe how, at SAP, employees often use a drawing or graphics package to mock up key use cases and their interfaces, interactions, and task flows and then output that to PowerPoint. This creates a set of slides that can be viewed to give an overall sense of a user session. However, when they developed a business intelligence tool with key performance and "backend" implications, this form of prototyping was not sufficient for them to assess their product goals. Instead, the UX designer worked with a developer who prototyped some of the elements in Java.

Tools available for UX designers, many of which have free trial versions and tutorials, are available at the following links:
https://support.balsamiq.com/tutorials/
www.axure.com/learn/core/getting-started
https://www.digitalartsonline.co.uk/features/interactive-design/16-best-ux-tools-for-designers/
https://blog.prototypr.io/meet-overflow-9b2d926b6093
ACTIVITY 13.4
Choose one of the commercially available tools that supports the generation of interactive
wireframes or low-fidelity prototypes and generate a wireframe for a simple app, for instance,
one that allows visitors to a local music festival to review the acts. Explore the different fea-
tures offered by the tool, and note those that were particularly useful. Unless your employer
or university has a license for these tools, you may not have access to all of their features.
Comment
We used the moqups web demo at https://app.moqups.com/edit/page/ad64222d5 to create the
design in Figure 13.8. Two features of note for this tool are (1) that it appears to be similar to
other generic graphics packages, and hence it was easy to get started and produce an initial
wireframe, and (2) the ruler settings automatically allowed precise positioning of interface
elements in relation to each other.
Figure 13.8 A screenshot of our interactive wireframe for the event review app, gener-
ated using moqups
Source: https://moqups.com
In-Depth Activity
This in-depth activity continues the work begun on the booking facility introduced at the end
of Chapter 11.
1. Assume that you will produce the online booking facility using an agile approach.
a. Suggest the type of user research to conduct before iteration cycles begin.
b. Prioritize requirements for the product according to business value, in particular, which
requirements are likely to provide the greatest business benefit, and sketch out the UX
design work you would expect to undertake during the first four iteration cycles, that
is, Cycle 0 and Cycles 1 to 3.
2. Using one of the mock-up tools introduced, generate a mock-up of the product’s initial
interface, as developed in the assignment for Chapter 12.
3. Using one of the patterns websites listed previously, identify suitable interaction patterns
for elements of the product and develop a software-based prototype that incorporates all
of the feedback and the results of the user experience mapping achieved at the end of
Chapter 12. If you do not have experience in using any of these, create a few HTML web
pages to represent the basic structure of the product.
Summary
This chapter explored some of the issues faced when interaction design is carried out in prac-
tice. The move toward agile development has led to a rethinking of how UX design tech-
niques and methods may be integrated into and around agile’s tight iterations. The existence
of pattern and code libraries, together with open source components and automated tools,
means that interactive prototypes with a coherent and consistent design can be generated
quickly and easily, ready for demonstration and evaluation.
Key Points
• AgileUX refers to approaches that integrate UX design activities with an agile approach to
product development.
• A move to agileUX requires a change in mind-set because of repeated reprioritization of
requirements and short timeboxed implementation, which seeks to avoid wasted effort.
• AgileUX requires a rethinking of UX design activities: when to perform them, how much
detail to undertake and when, and how to feed back results into implementation cycles.
• Design patterns present a solution to a problem in a context, and there are many UX design
pattern libraries available.
• Dark patterns are designed to trick users into making choices that have unexpected conse-
quences, for instance, by automatically signing them up for marketing newsletters.
• Open source resources, such as those on GitHub, make the development of standard appli-
cations and libraries with consistent interfaces easier and quicker.
• A variety of digital tools to support interaction design in practice are available.
Further Reading
GOTHELF, J., and SEIDEN, J. (2016) Lean UX: Designing Great Products with Agile Teams
(2nd ed.), O’Reilly. This book focuses on the lean UX approach to development (see Box
13.2), but it also includes a wide range of practical examples and experiences from read-
ers of the first edition of the book as to how agile development and UX design can work
well together.
KRUCHTEN, P., NORD, R.L. and OZKAYA, I. (2012) "Technical Debt: From Metaphor to Theory and Practice," IEEE Software, November/December. This paper is the edi-
tors’ introduction to a special issue on technical debt. This topic has been largely discussed
and written about in the context of software development, with very little mention of inter-
action design or UX debt. However, these issues are relevant to interaction design practice
today, and this paper provides an accessible starting point for anyone wanting to investigate
this area further.
PUTNAM, C., BUNGUM, M., SPINNER, D., PARELKAR, A.N., VIPPARTI, S. and CASS, P.
(2018), “How User Experience Is Practiced: Two Case Studies from the Field,” Proceedings
of CHI 2018. This short paper provides some useful insights into UX practice based on two
case studies from consumer-facing companies in Illinois.
RAYMOND, E.S. (2001) The Cathedral and the Bazaar. O’Reilly. This seminal book is a set
of essays introducing the open source movement.
SANDERS, L. and STAPPERS, P. J. (2014) “From Designing to Co-Designing to Collective
Dreaming: Three Slices in Time,” interactions, Nov–Dec, p. 25–33. This provides a fascinat-
ing account of the changes in design practice over the last 30 years, a reflection on what
design practice is like in 2014, and then a projection into the future to see what design prac-
tice may be like 30 years from now. It considers the role of the customer and the designer and
how the object being designed emerges from the design process.
SY, D. (2007) “Adapting Usability Investigations for Development,” Journal of Usability
Studies 2(3), May, 112–130. This short paper is a good introduction to some of the key
issues faced when trying to perform UX design alongside an agile project. It describes the
well-established dual-track process model for agileUX, shown in Figure 13.2.
Chapter 14
INTRODUCING EVALUATION
14.1 Introduction
14.2 The Why, What, Where, and When of Evaluation
14.3 Types of Evaluation
14.4 Evaluation Case Studies
14.5 What Did We Learn from the Case Studies?
14.6 Other Issues to Consider When Doing Evaluation
Objectives
The main goals of this chapter are to accomplish the following:
• Explain the key concepts and terms used in evaluation.
• Introduce a range of different types of evaluation methods.
• Show how different evaluation methods are used for different purposes at different
stages of the design process and in different contexts of use.
• Show how evaluation methods are mixed and modified to meet the demands of evalu-
ating novel systems.
• Discuss some of the practical challenges of doing evaluation.
• Illustrate through short case studies how methods discussed in more depth in Chap-
ters 8, 9, and 10 are used in evaluation and describe some methods that are specific to
evaluation.
• Provide an overview of methods that are discussed in detail in the next two chapters.
14.1 Introduction
Imagine that you designed an app for teenagers to share music, gossip, and photos. You pro-
totyped your first design and implemented the core functionality. How would you find out
whether it would appeal to them and whether they will use it? You would need to evaluate
it—but how? This chapter presents an introduction to the main types of evaluation and the
methods that you can use to evaluate design prototypes and design concepts.
Evaluation is integral to the design process. It involves collecting and analyzing data
about users’ or potential users’ experiences when interacting with a design artifact such as
a screen sketch, prototype, app, computer system, or component of a computer system. A
central goal of evaluation is to improve the artifact’s design. Evaluation focuses on both the
usability of the system (that is, how easy it is to learn and to use) and on the users’ experi-
ences when interacting with it (for example, how satisfying, enjoyable, or motivating the
interaction is).
Devices such as smartphones, iPads, and e-readers, together with the pervasiveness of
mobile apps and the emergence of IoT devices, have heightened awareness about usability
and interaction design. However, many designers still assume that if they and their colleagues
can use a product and find it attractive, others will too. The problem with this assumption
is that designers may then design only for themselves. Evaluation enables them to check that
their design is appropriate and acceptable for the target user population.
There are many different evaluation methods. Which to use depends on the goals of the
evaluation. Evaluations can occur in a range of places such as in labs, people’s homes, out-
doors, and work settings. Evaluations usually involve observing participants and measuring
their performance during usability testing, experiments, or field studies in order to evaluate
the design or design concept. There are other methods, however, that do not involve par-
ticipants directly, such as modeling users' behavior and analytics. Modeling users' behavior
provides an approximation of what users might do when interacting with an interface; such
models are often used as a quick way of assessing the potential of different interface con-
figurations. Analytics provide a way of examining the performance of an already existing
product, such as a website, so that it can be improved. The level of control over what is evalu-
ated varies: sometimes there is none, as in in-the-wild studies, while in other cases, such as
experiments, there is considerable control over which tasks are performed and in what context.
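One well-known way of modeling users' behavior in this quick, approximate spirit is the Keystroke-Level Model (KLM), which predicts expert task time by summing standard operator times. The sketch below is illustrative only: the task sequences are hypothetical, and the operator times are the commonly cited KLM estimates.

```python
# A minimal Keystroke-Level Model (KLM) sketch for comparing two designs.
OPERATOR_TIMES = {  # seconds; commonly cited KLM estimates
    "K": 0.2,   # keystroke (skilled typist)
    "P": 1.1,   # point at a target with the mouse
    "B": 0.1,   # mouse button press or release
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence):
    """Predict task time for a string of KLM operators, e.g. 'MHPB'."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Hypothetical task: issue the same command via menus or via a shortcut.
menu_design = "MHPB" + "MPB"      # move to mouse, point/click two menus
shortcut_design = "M" + "KK"      # keyboard shortcut (two keystrokes)
print(klm_estimate(menu_design))      # slower: pointing dominates
print(klm_estimate(shortcut_design))
```

Such back-of-the-envelope estimates cannot replace testing with real users, but they let a designer rank candidate interface configurations before anything is built.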
In this chapter, we discuss why evaluation is important, what needs to be evaluated, where
evaluation should take place, and when in the product lifecycle evaluation is needed. Some
examples of different types of evaluation studies are then illustrated by short case studies.
14.2 The Why, What, Where, and When of Evaluation
Conducting evaluations involves understanding not only why evaluation is important but
also what aspects to evaluate, where evaluation should take place, and when to evaluate.
14.2.1 Why Evaluate?
User experience involves all aspects of the user's interaction with the product. Nowadays,
users expect much more than just a usable system; they also look for a pleasing and engag-
ing experience. Simplicity and elegance are valued so that the product is
a joy to own and use.
From a business and marketing perspective, well-designed products sell. Hence, there
are good reasons for companies to invest in evaluating the design of products. Evaluation
enables designers to focus on real problems and the needs of different user groups, and to
make informed design decisions rather than debating what they themselves like or dislike.
It also enables problems to be fixed before the product goes on sale.
14.2.2 What to Evaluate
What to evaluate ranges from low-tech prototypes to complete systems, from a particular
screen function to the whole workflow, and from aesthetic design to safety features. Devel-
opers of a new web browser may want to know whether users find items faster with their
product. Developers of an ambient display may be interested in whether it changes people's
behavior. Game app developers will want to know how engaging and fun their games are
compared with those of their competitors and how long users will play them. Government
authorities may ask if a computerized system for controlling traffic lights results in fewer
accidents or if a website complies with the standards required for users with disabilities.
Makers of a toy may ask whether 6-year-olds can manipulate the controls, whether they are
engaged by its furry cover, and whether the toy is safe for children. A company that develops
personal digital music players may want to know whether people from different age groups
and living in different countries like the size, color, and shape of the casing. A software
company may want to assess market reaction to its new home page design. A developer of
smartphone apps for promoting environmental sustainability in the home may want to know
whether their designs are enticing and whether users continue to use the app. Different types
of evaluations will be needed depending on the type of product, the prototype or design
concept, and the value of the evaluation to the designers, developers, and users. In the end,
the main criterion is whether the design does what the users need and want it to do; that is,
will they use it?
ACTIVITY 14.1
Identify two adults and two teenagers prepared to talk with you about their Facebook usage
(these may be family members or friends). Ask them questions such as these: How often do
you look at Facebook each day? How many photos do you post? What kind of photos do you
have in your albums? What kind of photo do you have as your profile picture? How often do
you change it? How many Facebook friends do you have? What books and music do you list?
Are you a member of any groups?
Comment
As you may know, some teenagers are leaving Facebook, while some adults, often parents and
grandparents, continue to be avid users, so you may have had to approach several teenagers
before finding two who would be worth talking to. Having found people who use Facebook,
you probably heard about different patterns of use between the adults and the teenagers.
Teenagers are more likely to upload selfies and photos of places they have just visited to sites
such as Instagram, or to send them to friends on WhatsApp. Adults tend to spend time dis-
cussing family issues, the latest trends in fashion, news, and politics. They frequently post
pictures and descriptions for family members and friends about where they went on vacation
or of their children and grandchildren. After doing this activity, you should be aware that dif-
ferent kinds of users may use the same software in different ways. It is therefore important to
include a range of different types of users in your evaluations.
14.2.3 Where to Evaluate
Where evaluation takes place depends on what is being evaluated. Some characteristics,
such as web accessibility, are generally evaluated in a lab because it provides the control
necessary to investigate systematically whether all of the requirements are met. This is also
true for design choices, such as choosing the size and layout of keys for a small handheld
device for playing games. User experience aspects, such as whether children enjoy playing
with a new toy and for how long before they get bored, can be evaluated more effectively
in natural settings, which are often referred to as in-the-wild studies. Unlike a lab study,
seeing children play in a natural setting will reveal when the children get bored and stop
playing with the toy. In a lab study, the children are told what to do, so the UX research-
ers cannot easily see how the children naturally engage with the toy and when they get
bored. Of course, the UX researchers can ask the children whether they like it or not, but
sometimes children will not say what they really think because they are afraid of caus-
ing offense.
ACTIVITY 14.2
What aspects would you want to evaluate for the following systems:
1. A personal music service?
2. A website for selling clothes?
Comment
1. You would need to discover how well users can select tracks from potentially thousands of
tunes and whether they can easily add and store new music.
2. Navigation would be a core concern for both examples. Users of a personal music service
will want to find tracks to select quickly. Users wanting to buy clothes will want to move
quickly among pages displaying clothes, comparing them, and purchasing them. In addi-
tion, do the clothes look attractive enough to buy? Other core aspects include how trust-
worthy and how secure the procedure is for taking customers' credit card details.
Remote studies of online behavior, such as social networking, can be conducted to evalu-
ate natural interactions of participants in the context of their interaction, for example, in
their own homes or place of work. Living labs (see Box 14.1) have also been built that are a
compromise between the artificial, controlled context of a lab and the natural, uncontrolled
nature of in-the-wild studies. They provide the setting of a particular type of environment,
such as the home, a workplace, or a gym, while also giving the ability to control, measure,
and record activities through embedding technology in them.
14.2.4 When to Evaluate
The stage in the product lifecycle when evaluation takes place depends on the type of prod-
uct and the development process being followed. For example, the product being developed
could be a new concept, or it could be an upgrade to an existing product. It could also
be a product in a rapidly changing market that needs to be evaluated to see how well the
design meets current and predicted market needs. If the product is new, then considerable
time is usually invested in market research and discovering user requirements. Once these
requirements have been established, they are used to create initial sketches, a storyboard, a
series of screens, or a prototype of the design ideas. These are then evaluated to see whether
the designers have interpreted the users’ requirements correctly and embodied them in their
designs appropriately. The designs will be modified according to the evaluation feedback and
new prototypes developed and subsequently evaluated.
When evaluations are conducted during design to check that a product continues to meet
users’ needs, they are known as formative evaluations. Formative evaluations cover a broad
range of design processes, from the development of early sketches and prototypes through to
tweaking and perfecting a nearly finished design.
ACTIVITY 14.3
A company is developing a new car seat to monitor whether a person is starting to fall asleep
while driving and to provide a wake-up call using olfactory and haptic feedback. Where
would you evaluate it?
Comment
It would be initially important to conduct lab-based experiments using a car simulator to see
the effectiveness of the new type of feedback—in a safe setting, of course! You would also
need to find a way to try to get the participants to fall asleep at the wheel. Once established as
an effective mechanism, you would then need to test it in a more natural setting, such as a race
track, airfield, or safe training circuit for new drivers, which can be controlled by the experi-
menter using a dual-control car.
Evaluations that are carried out to assess the success of a finished product are known as
summative evaluations. If the product is being upgraded, then the evaluation may not focus on
discovering new requirements but may instead evaluate the existing product to ascertain what
needs improving. Features are then often added, which can result in new usability problems. At
other times, attention is focused on improving specific aspects, such as enhanced navigation.
As discussed in earlier chapters, rapid iterations of product development that embed
evaluations into short cycles of design, build, and test (evaluate) are common. In these cases,
the evaluation effort may be almost continuous across the product’s development and deploy-
ment lifetime. For example, this approach is sometimes adopted for government websites
that provide information about Social Security, pensions, and citizens’ voting rights.
Many agencies, such as the National Institute of Standards and Technology (NIST) in
the United States, the International Organization for Standardization (ISO), and the British
Standards Institution (BSI) set standards by which particular types of products, such as aircraft navigation
systems and consumer products that have safety implications for users, have to be evaluated.
The Web Content Accessibility Guidelines (WCAG) 2.1 describe how to design websites so
that they are accessible. WCAG 2.1 is discussed in more detail in Box 16.2.
14.3 Types of Evaluation
We classify evaluations into three broad categories, depending on the setting, user involve-
ment, and level of control. These are as follows:
• Controlled settings directly involving users (examples are usability labs and research labs):
Users’ activities are controlled to test hypotheses and measure or observe certain behav-
iors. The main methods are usability testing and experiments.
• Natural settings involving users (examples are online communities and products that
are used in public places): There is little or no control of users’ activities to determine
how the product would be used in the real world. The main method used is field studies
(for example in-the-wild studies).
• Any settings not directly involving users: Consultants and researchers critique, predict, and
model aspects of the interface to identify the most obvious usability problems. The range
of methods includes inspections, heuristics, walk-throughs, models, and analytics.
There are pros and cons of each evaluation category. For example, lab-based studies are
good at revealing usability problems, but they are poor at capturing context of use; field stud-
ies are good at demonstrating how people use technologies in their intended setting, but they
are often time-consuming and more difficult to conduct (Rogers et al., 2013); and modeling
and predicting approaches are relatively quick to perform, but they can miss unpredictable
usability problems and subtle aspects of the user experience. Similarly, analytics are good for
tracking the use of a website but are not good for finding out how users feel about a new
color scheme or why they behave as they do.
Deciding on which evaluation approach to use is determined by the goals of the project and
on how much control is needed to find out whether an interface or device meets those goals
and can be used effectively. For example, in the case of the music service mentioned earlier, this
includes finding out how users use it, whether they like it, and what problems they experience
with the functions. In turn, this requires determining how they carry out various tasks using
the interface operations. A degree of control is needed when designing the evaluation study to
ensure participants try all of the tasks and operations for which the service is designed.
14.3.1 Controlled Settings Involving Users
Experiments and user tests are designed to control what users do, when they do it, and for
how long. They are designed to reduce outside influences and distractions that might affect
the results, such as people talking in the background. The approach has been extensively
and successfully used to evaluate software applications running on laptops and other devices
where participants can be seated in front of them to perform a set of tasks.
Usability Testing
This approach to evaluating user interfaces involves collecting data using a combination
of methods in a controlled setting, for example, experiments that follow basic experimen-
tal design, observation, interviews, and questionnaires. Often, usability testing is conducted
in labs, although increasingly interviews and other forms of data collection are being done
remotely via phone and digital communication (for instance, through Skype or Zoom) or in
natural settings. The primary goal is to determine whether an interface is usable by the intended
user population to carry out the tasks for which it was designed. This involves investigating
how typical users perform on typical tasks. By typical, we mean the users for whom the system
is designed (for example, teenagers, adults, and so on) and the activities that it is designed for
them to be able to do (such as, purchasing the latest fashions). It often involves comparing the
number and kinds of errors that users make between versions and recording the time that it
takes them to complete the task. As users perform the tasks, they may be recorded on video.
Their interactions with the software may also be recorded, usually by logging software. User
satisfaction questionnaires and interviews can also be used to elicit users’ opinions about how
they liked the experience of using the system. This data can be supplemented by observation
at product sites to collect evidence about how the product is being used in the workplace or
in other environments. Observing users’ reactions to an interactive product has helped devel-
opers reach an understanding of usability issues, which would be difficult for them to glean
simply by reading reports or listening to presentations. The qualitative and quantitative data
that is collected using these different techniques are used in conjunction with each other to
form conclusions about how well a product meets the needs of its users.
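As a rough illustration of how the quantitative side of such data might be summarized, the following sketch computes mean completion time and total errors for two hypothetical interface versions; all numbers are invented for illustration.

```python
# Hypothetical usability-test results: task completion times (seconds) and
# error counts per participant, for two versions of an interface.
from statistics import mean, stdev

results = {
    "version_A": {"times": [48, 55, 61, 43, 52], "errors": [2, 1, 3, 0, 2]},
    "version_B": {"times": [39, 41, 46, 37, 44], "errors": [1, 0, 1, 1, 0]},
}

def summarize(condition):
    """Mean and spread of task times, plus total errors, for one version."""
    data = results[condition]
    return {
        "mean_time": mean(data["times"]),
        "sd_time": round(stdev(data["times"]), 1),
        "total_errors": sum(data["errors"]),
    }

for name in results:
    print(name, summarize(name))
```

Summaries like these feed into the usability specification described next, where optimal and minimally acceptable performance levels are recorded and tracked across versions.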
Usability testing is a fundamental, essential HCI process. For many years, it has been a
staple of companies developing standard products that go through many generations, such
as word processing systems, databases, and spreadsheets
(Johnson, 2014; Krug, 2014; Redish, 2012). The findings from usability testing are often
summarized in a usability specification that enables developers to test future prototypes or
versions of the product against it. Optimal performance levels and minimal levels of accept-
ance are generally specified, and current levels are noted. Changes in the design can then be
implemented, such as navigation structure, use of terms, and how the system responds to
users. These changes can then be tracked.
While usability testing is well established in UX design, it has also started to gain more
prominence in other fields such as healthcare, particularly as mobile devices take an increas-
ingly central role (Schnall et al., 2018) in hospitals and for monitoring one’s own health (for
instance, Fitbit and the Polar series, Apple Watch, and so forth). Another trend reported by
Kathryn Whitenton and Sarah Gibbons (2018) of the Nielsen Norman Group (NN/g), a
usability consulting firm, is that while usability guidelines have tended to be stable over time,
audience expectations change. For example, Whitenton and Gibbons report that since the last
major redesign of the NN/g home page a few years ago, both the content and the audience’s
expectations about the attractiveness of the visual design have evolved. However, they stress
that even though visual design has assumed a bigger and more important role in design, it
should never replace or compromise basic usability. Users still need to be able to carry out
their tasks effectively and efficiently.
ACTIVITY 14.4
Look at Figure 14.1, which shows two similarly priced devices for recording activity and
measuring heart rate: (a) Fitbit Charge and (b) Polar A370. Assume that you are considering
buying one of these devices. What usability issues would you want to know about, and what
aesthetic design issues would be important to you when deciding which one to purchase?
Comment
There are several usability issues to consider. Some that you might be particularly interested in
finding out about include how comfortable the device is to wear, how clearly the information
is presented, what other information is presented (for example, time), how long the battery
lasts before it needs to be recharged, and so forth. Most important of all might be how accu-
rate the device is, particularly for recording heart rate if that is a concern for you.
Since these devices are worn on your wrist, they can be considered to be fashion items.
Therefore, you might want to know whether they are available in different colors, whether
they are bulky and likely to rub on clothes and cause damage, and whether they are discreet
or clearly noticeable.
Figure 14.1 Devices for monitoring activity and heart rate: (a) Fitbit Charge and (b) Polar A370
Source: (a) Fitbit; (b) Polar
Experiments
Experiments are typically conducted in research labs at universities or commercial labs
to test hypotheses about a design. These are the most controlled evaluation settings, where research-
ers try to remove any extraneous variables that may interfere with the participant’s per-
formance. The reason for this is so that they can reliably say that the findings arising from
the experiment are due to the particular interface feature being measured. For example,
in an experiment comparing which is the best way for users to enter text when using an
iPad or other type of tablet interface, researchers would control all other aspects of the
setting to ensure that they do not affect the user’s performance. These include providing
the same instructions to all of the participants, using the same tablet interface, and asking the
participants to do the same tasks. Depending on the available functionality, conditions
that might be compared could be typing using a virtual keyboard, typing using a physical
keyboard, and swiping using a virtual keyboard. The goal of the experiment would be to
test whether one type of text input is better than the others in terms of speed of typing
and number of errors. A number of participants would be brought into the lab separately
to carry out the predefined set of text entry tasks, and their performance would be meas-
ured in terms of time consumed and how many errors are made, for example, selecting the
wrong letter. The data collected would then be analyzed to determine whether the scores
for each condition were significantly different. If the performance measures obtained for
the virtual keyboard were significantly faster than those for the other two and had the
least number of errors, one could say that this method of text entry is the best. Testing in
a laboratory may also be done when it is too disruptive to evaluate a design in a natural
setting, such as in a military conflict.
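A minimal sketch of how such performance data might be compared is shown below. The words-per-minute scores are invented, and Welch's t statistic stands in for the fuller statistical test (for instance, an ANOVA across all three conditions) that a real experiment would use.

```python
# Hypothetical words-per-minute scores for three text-entry conditions.
from math import sqrt
from statistics import mean, variance

wpm = {
    "virtual_keyboard": [34, 38, 31, 36, 35, 33],
    "physical_keyboard": [42, 45, 40, 44, 41, 43],
    "swipe_input": [37, 39, 35, 40, 36, 38],
}

def welch_t(a, b):
    """Welch's t statistic for two independent samples. A real analysis
    would obtain a p-value from a statistics package; this only computes
    the statistic itself."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / sqrt(variance(a) / na + variance(b) / nb)

t = welch_t(wpm["physical_keyboard"], wpm["virtual_keyboard"])
print(round(t, 2))  # a large |t| suggests a real difference between conditions
```

The point of the controlled setting is that, because instructions, device, and tasks were held constant, a significant difference in such a statistic can be attributed to the text-entry method itself rather than to extraneous variables.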
BOX 14.1
Living Labs
Living labs have been developed to evaluate people’s everyday lives, which would be simply
too difficult to assess in usability labs, for example, to investigate people’s habits and rou-
tines over a period of several months. An early example of a living lab was the Aware Home
(Abowd et al., 2000) in which the house was embedded with a complex network of sensors and
audio/video recording devices that recorded the occupants’ movements throughout the house
and their use of technology. This enabled their behavior to be monitored and analyzed, for example,
their routines and deviations. A primary motivation was to evaluate how real families would
respond and adapt to such a setup over a period of several months. However, it proved difficult
to get families to agree to leave their own homes and live in a living lab home for that long.
Ambient-assisted homes have also been developed where a network of sensors is embed-
ded throughout someone’s home rather than in a special, customized building. One rationale
is to enable disabled and elderly people to lead safe and independent lives by providing a non-
intrusive system that can remotely monitor and provide alerts to caregivers in the event of an
accident, illness, or unusual activities (see Fernández-Luque et al., 2009). The term living lab
is also used to describe innovation networks in which people gather in person and virtually
to explore and form commercial research and development collaborations (Ley et al., 2017).
Nowadays, many living labs have become more like commercial enterprises, which offer
facilities, infrastructure, and access to participating communities, bringing together users,
developers, researchers, and other stakeholders. Living labs are being developed that form an
integral part of a smart building that can be adapted for different conditions in order to inves-
tigate the effects of different configurations of lighting, heating, and other building features
on the inhabitant's comfort, work productivity, stress levels, and well-being. The Smart Living
Lab in Switzerland, for example, is developing an urban block, including office buildings,
apartments, and a school, to provide an infrastructure that offers opportunities for researchers
to investigate different kinds of human experiences within built environments (Verma et al.,
2017). Some of these spaces are large and may house hundreds and even thousands of people.
People themselves are also being fitted with mobile and wearable devices that measure heart
rate, activity levels, and so on, which can then be aggregated to assess the health of a popula-
tion (for example, students at a university) over long periods of time. Hofte et al. (2009) call
this a mobile living lab approach, noting how it enables more people to be studied for longer
periods and at the times and locations where observation by researchers is difficult.
Citizen science, in which volunteers work with scientists to collect data on a scientific
research issue, such as biodiversity (for instance, iNaturalist.org), monitoring the flowering
times of plants over tens or hundreds of years (Primack, 2014), or identifying galaxies online
(for example, https://www.zooniverse.org/projects/zookeeper/galaxy-zoo/), can also be thought
of as a type of living lab, especially when the behavior of the participants, their use of technol-
ogy, and the design of that technology are also being studied. Lab in the Wild (http://www
.LabintheWild.org), for example, is an online site that hosts volunteers who participate in a
range of projects. Researchers analyzed more than 8,000 comments from volunteers involved
in four experiments and concluded that such online sites have potential as online research
labs (Oliveira et al., 2017) that can be studied over time and hence form a type of living lab.
DILEMMA
Is a Living Lab Really a Lab?
The concept of a living lab differs from a traditional view of a lab insofar as it tries to be
both natural and experimental: the goal is to bring the lab into the home (or other natural
setting) or online. The dilemma is how artificial to make the more natural setting. Where does
the balance lie in setting it up with the right level of control to conduct research and
evaluation without losing the sense of it being natural?
14.3.2 Natural Settings Involving Users
The goal of field studies is to evaluate products with users in their natural settings. Field studies
are used primarily to
• Help identify opportunities for new technology
• Establish the requirements for a new design
• Facilitate the introduction of technology or inform deployment of existing technology in
new contexts
Methods that are typically used are observation, interviews, and interaction logging (see
Chapters 8 and 9). The data takes the form of events and conversations that are recorded
by the researchers as notes, or through audio or video recording, or by the participants as
diaries and notes. The goal is to be unobtrusive and not to affect what people do during the
evaluation. However, it is inevitable that some methods will influence how people behave.
For example, diary studies require people to document their activities or feelings at certain
times, and this can make them reflect on and possibly change their behavior.
During the last 15 years, there has been a trend toward conducting in-the-wild studies.
These are essentially user studies that look at how new technologies or prototypes have been
deployed and used by people in various settings, such as outdoors, in public places, and in
homes. Sometimes, a prototype that is deployed is called a disruptive technology, where the aim
is to determine how it displaces an existing technology or practice. In moving into the wild,
researchers inevitably have to give up control of what is being evaluated in order to observe
how people approach and use—or don’t use—technologies in their everyday lives. For exam-
ple, a researcher might be interested in observing how a new mobile navigation device will
be used in urban environments. To conduct the study, they would need to recruit people who
are willing to use the device for a few weeks or months in their natural surroundings. They
might then tell the participants what they can do with the device. Other than that, it is up
to the participants to decide how to use it and when, as they move among work or school,
home, and other places.
The downside of handing over control is that it makes it difficult to anticipate what is
going to happen and to be present when something interesting does happen. This is in con-
trast to usability testing where there is always an investigator or camera at hand to record
events. Instead, the researcher has to rely on the participants recording and reflecting on how
they use the product, by writing up their experiences in diaries, filling in online forms, and/or
taking part in intermittent interviews.
Field studies can also be virtual, where observations take place in multiuser games such
as World of Warcraft, online communities, chat rooms, and so on. A goal of this kind of field
study is to examine the kinds of social processes that occur in them, such as collaboration,
confrontation, and cooperation. The researcher typically becomes a participant and does not
control the interactions (see Chapters 8 and 9). Virtual field studies have also become popu-
lar in the geological and biological sciences because they can supplement studies in the field.
Increasingly, online is partnered with a real-world experience so that researchers and students
get the best of both situations (Cliffe, 2017).
14.3.3 Any Settings Not Involving Users
Evaluations that take place without involving users are conducted in settings where the
researcher has to imagine or model how an interface is likely to be used. Inspection methods
are commonly employed to predict user behavior and to identify usability problems based on
knowledge of usability, users’ behavior, the contexts in which the system will be used, and the
kinds of activities that users undertake. Examples include heuristic evaluation that applies
knowledge of typical users guided by rules of thumb and walkthroughs that involve stepping
through a scenario or answering a set of questions for a detailed prototype. Other techniques
include analytics and models.
The original heuristics used in heuristic evaluation were for screen-based applications
(Nielsen and Mack, 1994; Nielsen and Tahir, 2002). These have been adapted to develop
new sets of heuristics for evaluating web-based products, mobile systems, collaborative
technologies, computerized toys, information visualizations (Forsell and Johansson, 2010),
and other new types of systems. One of the problems with using heuristics is that designers
can sometimes be led astray by findings that are not as accurate as they appeared to be at first
(Tomlin, 2010). This problem can arise from different sources, such as a lack of experience
and the biases of UX researchers who conduct the heuristic evaluations.
Cognitive walk-throughs involve simulating a user’s problem-solving process at each step
in the human-computer dialogue and checking to see how users progress from step to step in
these interactions (see Wharton et al., 1994 in Nielsen and Mack, 1994). During the last 15
years, cognitive walk-throughs have been used to evaluate smartphones (Jadhav et al., 2013),
large displays, and other applications, such as public displays (Parker et al., 2017). A key fea-
ture of cognitive walk-throughs is that they focus on evaluating designs for ease of learning.
Analytics is a technique for logging and analyzing data either at a customer’s site or
remotely. Web analytics is the measurement, collection, analysis, and reporting of Internet
data to understand and optimize web usage. Examples of web analytics include the num-
ber of visitors to a website home page over a particular time period, the average time users
spend on the home page, which other pages they visit, or whether they leave after visiting the
homepage. For example, Google provides a commonly used approach for collecting analytics
data that is a particularly useful method for evaluating design features of a website (https://
marketingplatform.google.com/about/analytics/). As part of the massive open online courses
(MOOCs) and open educational resources (OERs) movement, learning analytics has evolved
and gained prominence for assessing the learning that takes place in these environments. The
Open University in the United Kingdom, along with others, has published widely on this
topic, describing how learning analytics are useful for guiding course and program design
and for evaluating the impact of pedagogical decision-making (Toetenel and Bart, 2016).
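The kinds of metrics described above can be sketched with a few lines of code. The page-view log below is entirely hypothetical (real analytics services derive such records from click and timestamp data collected in the browser), but it shows how counts of visitors, average time on a page, and "bounces" fall out of the raw data:

```python
# Hypothetical page-view log: (visitor_id, page, seconds_on_page)
page_views = [
    ("u1", "home", 30), ("u1", "products", 45),
    ("u2", "home", 12), ("u3", "home", 60),
    ("u3", "home", 20), ("u2", "checkout", 90),
]

home_views = [v for v in page_views if v[1] == "home"]

# Number of distinct visitors who viewed the home page
unique_home_visitors = len({visitor for visitor, _, _ in home_views})

# Average time spent on the home page, per view
avg_time_on_home = sum(secs for _, _, secs in home_views) / len(home_views)

# Visitors who left after viewing only the home page ("bounced")
pages_per_visitor = {}
for visitor, page, _ in page_views:
    pages_per_visitor.setdefault(visitor, set()).add(page)
bounced = {v for v, pages in pages_per_visitor.items() if pages == {"home"}}
```

Production analytics pipelines differ mainly in scale, not in kind: they compute these same aggregates over millions of logged events.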
Models have been used primarily for comparing the efficacy of different interfaces for
the same application, for example, the optimal arrangement and location of features. A well-
known approach uses Fitts’ law to predict the time it takes to reach a target using a pointing
device (MacKenzie, 1995) or using the keys on a mobile device or game controller (Ram-
charitar and Teather, 2017).
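Fitts' law predicts movement time as MT = a + b log2(D/W + 1), where D is the distance to the target, W is the target's width, and a and b are constants fitted empirically for a particular device and user population. A minimal sketch follows; the coefficient values are illustrative placeholders, not fitted constants:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predict the time (in seconds) to reach a target using Fitts' law
    (Shannon formulation): MT = a + b * log2(D/W + 1).

    a and b are device-specific constants normally fitted from user
    data; the defaults here are illustrative placeholders only.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A distant, small target is predicted to take longer to reach
# than a nearby, large one.
near_large = fitts_movement_time(distance=100, width=50)
far_small = fitts_movement_time(distance=800, width=10)
```

This is what makes the model useful for comparing interface layouts: moving a frequently used button closer, or making it larger, lowers the index of difficulty and hence the predicted movement time.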
14.3.4 Selecting and Combining Methods
The three broad categories identified previously provide a general framework to guide the
selection of evaluation methods. Often, combinations of methods are used across the catego-
ries to obtain a richer understanding. For example, sometimes usability testing conducted
in labs is combined with observations in natural settings to identify the range of usability
problems and find out how users typically use a product.
This web page provides information about learning analytics and learning design:
https://iet.open.ac.uk/themes/learning-analytics-and-learning-design
There are both pros and cons for controlled and uncontrolled settings. The benefits of
controlled settings include being able to test hypotheses about specific features of the inter-
face where the results can be generalized to the wider population. A benefit of uncontrolled
settings is that unexpected data can be obtained that provides quite different insights into
people’s perceptions and their experiences of using, interacting, or communicating through
the new technologies in the context of their everyday and working lives.
14.3.5 Opportunistic Evaluations
Evaluations may be detailed, planned studies, or opportunistic explorations. The latter explo-
rations are generally done early in the design process to provide designers with feedback
quickly about a design idea. Getting this kind of feedback early in the design process is
important because it confirms whether it is worth proceeding to develop an idea into a pro-
totype. Typically, these early evaluations are informal and do not require many resources. For
example, the designers may recruit a few local users and ask their opinions. This kind of
early feedback arrives when it is still easy to make changes to an evolv-
ing design. Opportunistic evaluations with users can also be conducted to hone the target
audience so that subsequent evaluation studies can be more focused. Opportunistic evalua-
tions can also be conducted in addition to more formal evaluations.
14.4 Evaluation Case Studies
Two contrasting case studies are described in this section to illustrate how evaluations can
take place in different settings with different amounts of control over users’ activities. The
first case study (section 14.4.1) describes a classic experiment that tested whether it was
more exciting playing against a computer versus playing against a friend in a collaborative
computer game (Mandryk and Inkpen, 2004). Though published more than 15 years ago,
we are keeping this case study in this edition of the book because it provides a concise and
clear description about a variety of measures that were used in the experiment. The second
case study (section 14.4.2) describes an ethnographic field study in which a bot, known as
Ethnobot, was developed to prompt participants to answer questions about their experiences
while walking around a large outdoor show (Tallyn et al., 2018).
14.4.1 Case Study 1: An Experiment Investigating a Computer Game
For games to be successful, they must engage and challenge users. Criteria for evaluating
these aspects of the user experience are therefore needed. In this case study, physiological
responses were used to evaluate users’ experiences when playing against a friend and when
playing alone against the computer (Mandryk and Inkpen, 2004). Regan Mandryk and Kori
Inkpen conjectured that physiological indicators could be an effective way of measuring a
player’s experience. Specifically, they designed an experiment to evaluate the participants’
engagement while playing an online ice-hockey game.
Ten participants, who were experienced game players, took part in the experiment. Dur-
ing the experiment, sensors were placed on the participants to collect physiological data.
The data collected included measurements of the moisture produced by sweat glands of
their hands and feet and changes in heart and breathing rates. In addition, they videoed the
participants and asked them to complete user satisfaction questionnaires at the end of the
experiment. To reduce the effects of learning, half of the participants played first against a
friend and then against the computer, and the other half played against the computer first.
Figure 14.2 shows the setup for recording data while the participants were playing the game.
Results from the user satisfaction questionnaire revealed that the mean ratings on a 1–5 scale
for each item indicated that playing against a friend was the favored experience (Table 14.1). Data
recorded from the physiological responses was compared for the two conditions and in general
revealed higher levels of excitement when participants played against a friend than when they
played against the computer. The physiological recordings were also compared across participants
and, in general, indicated the same trend. Figure 14.3 shows a comparison for two participants.
Figure 14.2 The display shows the physiological data (top right), two participants, and a screen of
the game they played.
Source: Mandryk and Inkpen (2004). Physiological Indicators for the Evaluation of Co-located Collaborative Play,
CSCW’2004, pp. 102–111. Reproduced with permission of ACM Publications
             Playing Against Computer    Playing Against Friend
             Mean      St. Dev.          Mean      St. Dev.
Boring       2.3       0.949             1.7       0.949
Challenging  3.6       1.08              3.9       0.994
Easy         2.7       0.823             2.5       0.850
Engaging     3.8       0.422             4.3       0.675
Exciting     3.5       0.527             4.1       0.568
Frustrating  2.8       1.14              2.5       0.850
Fun          3.9       0.738             4.6       0.699
Table 14.1 Mean subjective ratings given on a user satisfaction questionnaire using a five-point
scale, in which 1 is lowest and 5 is highest, for the 10 players
Identifying strongly with an experience state is indicated by a higher mean. The standard
deviation indicates the spread of the results around the mean. Low values indicate little vari-
ation in participants’ responses; high values indicate more variation.
Because of individual differences in physiological data, it was not possible to compare
directly the means of the two sets of data collected: subjective questionnaires and physiologi-
cal measures. However, by normalizing the results, it was possible to correlate the results
across individuals. This indicated that the physiological data gathering and analysis methods
were effective for evaluating levels of challenge and engagement. Although not perfect, these
two kinds of measures offer a way of going beyond traditional usability testing in an experi-
mental setting to get a deeper understanding of user experience goals.
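The normalization step described above can be illustrated with standard z-scores, which rescale each participant's readings to mean 0 and standard deviation 1 so that scores become comparable across individuals. The readings below are made-up values for illustration, not data from the study:

```python
import statistics

def z_normalize(values):
    """Rescale one participant's readings to mean 0, std dev 1."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def pearson_r(xs, ys):
    """Sample correlation between two equal-length series of readings."""
    zx, zy = z_normalize(xs), z_normalize(ys)
    return sum(a * b for a, b in zip(zx, zy)) / (len(xs) - 1)

# Made-up skin-conductance readings: the raw baselines differ between
# participants, but after normalization the profiles can be correlated.
p1_raw = [0.40, 0.55, 0.70, 0.90]
p2_raw = [2.10, 2.30, 2.60, 2.90]
r = pearson_r(p1_raw, p2_raw)
```

The key point is that the absolute physiological levels are discarded; only the shape of each participant's response over time is compared.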
Figure 14.3 (a) A participant’s skin response when scoring a goal against a friend versus against the
computer, and (b) another participant’s response when engaging in a hockey fight against a friend
versus against the computer
Source: Mandryk and Inkpen (2004). Physiological Indicators for the Evaluation of Co-located Collaborative Play,
CSCW’2004, pp. 102–111. Reproduced with permission of ACM Publications
ACTIVITY 14.5
1. What kind of setting was used in this experiment?
2. How much control did the researchers exert?
3. Which types of data were collected?
Comment
1. The experiment took place in a research lab, which is a controlled setting.
2. The evaluation was strongly controlled by the evaluators. They specified which of the two
gaming conditions was assigned to each participant. The participants also had sensors
placed on them to collect physiological data as they played the game, for example to moni-
tor changes in heart rate and breathing.
3. Physiological measures of the participants while playing the game were collected together
with data collected afterward using a user satisfaction questionnaire that asked questions
about how satisfied they were with the game and how much they enjoyed it.
14.4.2 Case Study 2: Gathering Ethnographic Data at the Royal
Highland Show
Field observations, including in-the-wild and ethnographic studies, provide data about how
users interact with technology in their natural environments. Such studies often provide
insights not available in lab settings. However, it can be difficult to collect participants’
thoughts, feelings, and opinions as they move about in their everyday lives. Usually, it
involves observations and asking them to reflect after an event, for example through inter-
views and diaries. In this case study, a novel evaluation approach—a live chatbot—was used
to address this gap by collecting data about people’s experiences, impressions, and feelings
as they visited and moved around the Royal Highland Show (RHS) (Tallyn et al., 2018).
The RHS is a large agricultural show that runs every June in Scotland. The chatbot, known
as Ethnobot, was designed as an app that runs on a smartphone. In particular, Ethnobot
was programmed to ask participants pre-established questions as they wandered around the
show and to prompt them to expand on their answers and take photos. It also directed them
to particular parts of the show that the researchers thought would interest the participants.
This strategy also allowed the researchers to collect data from all of the participants in the
same place. Interviews were also conducted by human researchers to supplement the data
collected online by the Ethnobot.
The overall purpose of the study was to find out about participants’ experiences of, and
feelings about, the show and of using Ethnobot. The researchers also wanted to compare the
data collected by the Ethnobot with the interview data collected by the human researchers.
The study consisted of four data collection sessions using the Ethnobot over two days
and involved 13 participants, who ranged in age and came from diverse backgrounds. One
session occurred in the early afternoon and the other in the late afternoon on each day of
the study. Each session lasted several hours. To participate in the study, each participant was
given a smartphone and shown how to use the Ethnobot app (Figure 14.4), which they could
experience on their own or in groups as they wished.
Two main types of data were collected.
• The participants’ online responses to a short list of pre-established questions that they
answered by selecting from a list of prewritten comments (for example, “I enjoyed some-
thing” or “I learned something”) presented by the Ethnobot in the form of buttons called
experience buttons, and the participants’ additional open-ended, online comments and
photos that they offered in response to prompts for more information from Ethnobot. The
participants could contribute this data at any time during the session.
• The participants’ responses to researchers’ in-person interview questions. These questions
focused on the participants’ experiences that were not recorded by the Ethnobot, and their
reactions to using the Ethnobot.
A lot of data was collected that had to be analyzed. The pre-established comments col-
lected in the Ethnobot chatlogs were analyzed quantitatively by counting the responses. The
in-person interviews were audio-recorded and transcribed for analysis, and that involved
coding them, which was done by two researchers who cross-checked each other’s analysis
for consistency. The open-ended online comments were analyzed in a similar way to the in-
person interview data.
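Counting the pre-established responses in the chatlogs amounts to a simple frequency analysis. The miniature chatlog below is invented for illustration and is not the study's data:

```python
from collections import Counter

# Invented sample of "experience button" selections from a chatlog
chatlog = [
    "I learned something", "I tried something", "I learned something",
    "I enjoyed something", "I tried something", "I learned something",
    "I bought something",
]

counts = Counter(chatlog)
ranked = counts.most_common()  # categories ordered by frequency
```

This kind of tally is what produces charts like Figure 14.5, where each pre-established response category is plotted against its count.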
Overall, the analyses revealed that participants spent an average of 120 minutes with the
Ethnobot in each session and recorded an average of 71 responses, while submitting an aver-
age of 12 photos. In general, participants responded well to prompting by the Ethnobot and
were eager to add more information. For example, P9 said, “I really enjoyed going around
and taking pictures and [to the question] ‘have you got something to add’ [said] yeah! I
have, I always say ‘yes’ . . .” A total of 435 pre-established responses were collected, includ-
ing 70 that were about what the participants did or experienced (see Figure 14.5). The most
frequent response was “I learned something” followed by “I tried something” and “I enjoyed
something.” Some participants also supplied photos to illustrate their experiences.
When the researchers asked the participants about their reactions to selecting prewritten
comments, eight participants remarked that they were rather restrictive and that they would
like more flexibility to answer the questions. For example, P12 said, “maybe there should
have been more options, in terms of your reactions to the different parts of the show.” How-
ever, in general participants enjoyed their experience of the RHS and of using Ethnobot.
Figure 14.4 The Ethnobot used at the Royal Highland Show in Scotland. Notice that the Ethnobot
directed participant Billy to a particular place (that is, Aberdeenshire Village). Next, Ethnobot asks
“. . . What’s going on?” and the screen shows five of the experience buttons from which Billy needs
to select a response
Source: Tallyn et al. (2018). Reproduced with permission of ACM Publications
When the researchers compared the data collected by Ethnobot with that from the
interviews collected by the human researchers, they found that the participants provided
more detail about their experiences and feelings in response to the in-person interview
questions than to those presented by Ethnobot. Based on the findings of this study, the
researchers concluded that while there are some challenges to using a bot to collect in-
the-wild evaluation data, there are also advantages, particularly when researchers cannot
be present or when the study involves collecting data from participants on the move or in
places that are hard for researchers to access. Collecting data with a bot and supplementing
it with data collected by human researchers appears to offer a good solution in circum-
stances such as these.
[Chart data: I tried something, 17; I didn’t like something, 9; I experienced something, 0;
I learned something, 20; I enjoyed something, 16; I bought something, 8.]
Figure 14.5 The number of prewritten experience responses submitted by participants to the pre-
established questions that Ethnobot asked them about their experiences
Source: Tallyn et al. (2018). Reproduced with permission of ACM Publications
ACTIVITY 14.6
1. What kind of setting was used in this evaluation?
2. How much control did the researchers exert?
3. Which types of data were collected?
Comment
1. The evaluation took place in a natural outdoor setting at the RHS.
2. The researchers imposed less control on the participants than in the previous case study,
but the Ethnobot was programmed to ask specific questions, and a range of responses was
provided from which participants selected. The Ethnobot was also programmed to request
additional information and photos. In addition, the Ethnobot was programmed to guide
the participants to particular areas of the show, although some participants ignored this
guidance and went where they pleased.
3. The Ethnobot collected answers to a specific set of predetermined questions (closed ques-
tions) and prompted participants for additional information and photographs. In addition,
participants were interviewed by the researchers using semi-structured, open-ended inter-
views. The data collected was qualitative, but counts of the response categories produced
quantitative data (see Figure 14.5). Some demographic data was also quantitative (for
instance, participants’ ages, gender, and so forth), which is provided in the full paper (Tal-
lyn et al., 2018).
BOX 14.2
Crowdsourcing
The Internet makes it possible to gain access to hundreds of thousands of participants who will
perform tasks or provide feedback on a design or experimental task almost immediately.
Mechanical Turk is a service hosted by Amazon that has thousands of people registered
(known as Turkers), who have volunteered to take part by performing various activities
online, known as human intelligence tasks (HITs), for a very small reward. HITs are submit-
ted by researchers or companies that pay a few cents for simple tasks (such as tagging pic-
tures) to a few dollars (for taking part in an experiment). Advantages of using crowdsourcing
in HCI is that it is more flexible, relatively inexpensive, and often much quicker to enroll
participants than with traditional lab studies. Another benefit is that many more participants
can be recruited.
Early in the history of online crowdsourcing, Jeff Heer and Michael Bostock (2010) used
it to determine how reliable it was to ask random people over the Internet to take part in an
experiment. Using Mechanical Turk, they asked the Turkers to perform a series of perception
tasks using different visual display techniques. A large number agreed, enabling them to ana-
lyze their results statistically and generalize from their findings. They also developed a short
set of test questions that generated 2,880 responses. They then compared their findings from
using crowdsourcing with those reported in published lab-based experiments. They found
that while the results from their study using Turkers showed wider variance than in the lab
study, the overall results across the studies were the same. They also found that the total cost
of their experiment with Turkers was one-sixth the cost of a typical lab study involving the
same number of people. While these results are important, online crowdsourcing studies have
also raised ethical questions about whether Turkers are being appropriately rewarded and
acknowledged—an important question that continues to be discussed (see, for example, the
Brookings Institution article by Vanessa Williamson, 2016).
Since Jeff Heer’s and Michael Bostock’s 2010 study, crowdsourcing has become increas-
ingly popular and has been used in a wide range of applications including collecting design
ideas, such as ideation, for developing a citizen science app (Maher et al., 2014); managing
volunteers for disaster relief (Ludwig et al., 2016); and delivering packages (Kim, 2015). Both
the number and diversity of useful contributions and ideas generated make crowdsourcing
particularly attractive for getting timely feedback from the public. For example, in a study to
collect and improve the design of a street intersection, a system called CommunityCrit was
used to collect opinions from members of the community and to draw on their skills and
availability (Mahyar et al., 2018). Those who contributed were empowered by getting to see
the planning process.
Citizen science is also a form of crowdsourcing. Dating back to the days of Aristotle and
Darwin, it originally relied on data collected by humans, who were sometimes referred to as
sensors. During the last 10 years, the volume of data collected has increased substantially, lev-
eraged by technology, particularly smartphones, and a range of other digital devices (Preece,
2017). For example, iSpotNature.org, iNaturalist.com, and eBird.com are apps that are used
across the world for collecting biodiversity data and data about bird behavior.
These examples illustrate how crowdsourcing can be a powerful tool for improving,
enhancing, and scaling up a wide range of tasks. Crowdsourcing makes it possible to recruit
participants to generate a large pool of potential ideas, collect data, and make other useful
inputs that would be difficult to achieve in other ways. Several companies, including Google,
Facebook, and IDEO, use crowdsourcing to try out ideas and to gather evaluation feedback
about designs.
14.5 What Did We Learn from the Case Studies?
The case studies along with Box 14.1 and Box 14.2 provide examples of how different
evaluation methods are used in different physical settings that involve users in different ways
to answer various kinds of questions. They demonstrate how researchers exercise different
levels of control in different settings. The case studies also show how it is necessary to be
creative when working with innovative systems and when dealing with constraints created by
the evaluation setting (for example, online, distributed, or outdoors where people are on the
move) and the technology being evaluated. In addition, the case studies and boxes discussed
illustrate how to do the following:
• Observe users in the lab and in natural settings
• Develop different data collection and analysis techniques to evaluate user experience goals,
such as challenge and engagement, including for people on the move
• Run experiments on the Internet using crowdsourcing, thereby reaching many more par-
ticipants while being straightforward to run
• Recruit a large number of participants who contribute to a wide range of projects with
different goals using crowdsourcing
BOX 14.3
The Language of Evaluation
Sometimes terms describing evaluation are used interchangeably and have different meanings.
To avoid this confusion, we define some of these terms here in alphabetical order. (You may
find that other books use different terms.)
Analytics Data analytics refers to examining large volumes of raw data with the purpose of
drawing inferences about a situation or a design. Web analytics is commonly used to meas-
ure website traffic through analyzing users’ click data.
Analytical evaluation This type of evaluation models and predicts user behavior. This term
has been used to refer to heuristic evaluation, walk-throughs, modeling, and analytics.
Bias Bias occurs when the results of an evaluation are distorted. This can happen for several
reasons, for example, by selecting a population of users who have already had experience
with the new system and describing their performance as if they were new users.
Controlled experiment This is a study that is conducted to test hypotheses about some
aspect of an interface or other dimension. Aspects that are controlled typically include the
task that participants are asked to perform, the amount of time available to complete
the tasks, and the environment in which the evaluation study occurs.
Crowdsourcing This can be done in person (as was typical in citizen science for decades) or
online via the web and mobile apps. Crowdsourcing provides the opportunity for hundreds,
thousands, or even millions of people to evaluate a product or take part in an experiment.
The crowd may be asked to perform a particular evaluation task using a new product or to
rate or comment on the product.
Ecological validity This is a particular kind of validity that concerns how the environment
in which an evaluation is conducted influences or even distorts the results.
Expert review or crit This is an evaluation method in which someone (or several people)
with usability expertise and knowledge of the user population reviews a product looking
for potential problems.
Field study This type of evaluation study is done in a natural environment such as in a per-
son’s home or in a work or leisure place.
Formative evaluation This type of evaluation is done during design to check that the product
fulfills requirements and continues to meet users’ needs.
Heuristic evaluation This is an evaluation method in which knowledge of typical users is
applied, often guided by heuristics, to identify usability problems.
Informed consent form This form describes what a participant in an evaluation study will be
asked to do, what will happen to the data collected about them, and their rights while
involved in the study.
In-the-wild study This is a type of field study in which users are observed using products or
prototypes within their everyday context.
Living lab This place is configured to measure and record people’s everyday activities in a
natural setting, such as in the home.
Predictive evaluation This type of evaluation is where theoretically based models are used to
predict user performance.
Reliability The reliability or consistency of a method is how well it produces the same results
on separate occasions under the same circumstances.
Scope This refers to how much the findings from an evaluation can be generalized.
Summative evaluation This evaluation is done when the design is complete.
Usability lab This lab is specially designed for usability testing.
Usability testing This involves measuring how well a design supports users’ performance on
various tasks.
User studies This generic term covers a range of evaluations involving users, including field
studies and experiments.
Users or participants In this context, these terms are used interchangeably to refer to the
people who take part in evaluation studies.
Validity Validity is concerned with whether the evaluation method measures what it is
intended to measure.
14.6 Other Issues to Consider When Doing Evaluation
Reading the case studies may have raised other issues, such as the importance of asking good
questions. A good question is important because it helps to focus the evaluation and to
decide on the best approach and methods to use. Another issue is how to find
suitable participants and, having found them, how to approach them. Can you just ask chil-
dren in a café to participate, or do you need permission from their parents? What do you have
to tell participants, and what if they decide part way through the study that they don’t want
to continue to the end? Can they stop, or do they have to continue? Two central issues are:
• Informing participants about their rights
• Making sure you take into account biases and other influences that impact how you
describe your evaluation findings
14.6.1 Informing Participants About Their Rights and Getting
Their Consent
Most professional societies, universities, government, and other research offices require
researchers and those performing evaluation studies to provide information about activities
in which human participants will be involved. They do this to protect participants by ensur-
ing that they are not endangered physically or emotionally and that their right to privacy
is protected, particularly the details about how participants’ data is collected and will be
treated. Drawing up such an agreement is mandatory in many universities and major organi-
zations. Indeed, special review boards generally prescribe the format required, and many
provide a detailed form that must be completed. Once the details are accepted, the review
board checks periodically to oversee compliance. In American universities, these are known
as institutional review boards (IRBs).
Institutions in other countries use different names, forms, and processes to protect users,
and some countries have different laws that govern areas such as users’ privacy, mentioned in
Chapter 8, “Data Gathering.” For example, the General Data Protection Regulation (GDPR)
was introduced in 2018 to strengthen data protection and privacy for all individuals liv-
ing within the European Union. Such laws influence not just the countries directly involved
but also people in other countries who collaborate with EU countries on research projects or
commercial software development.
Over the years, IRB forms have become increasingly detailed, particularly now that
much research involves the Internet and people’s interaction via social media and other com-
munications technologies. IRB reviews are especially stringent when a research or evaluation
study involves people who could be considered vulnerable (such as children, older adults, and
people with disabilities).
Several lawsuits at prominent universities have heightened attention to IRB and similar
compliance laws and standards to the extent that it sometimes takes several months and
multiple amendments to get IRB acceptance. Not only are IRB reviewers interested in the
more obvious issues of how participants will be treated and what they will be asked to
do; they also want to know how the data will be analyzed and stored. For example, data
about participants must be coded and stored to prevent linking participants’ names with
that data.
Participants must be told what they will be asked to do, the conditions under which data
will be collected, and what will happen to their data when they finish the task. Participants
must also be told their rights, for instance, that they may withdraw from the study at any
time if they want. This information is usually presented to participants on a form, often
referred to as a consent form, that each participant reads and signs before the study starts.
When new laws come into existence, such as the EU’s GDPR mentioned earlier, it is particu-
larly important to be aware of how such laws will be enacted and their potential impact on
research and evaluation studies.
Some companies have “boilerplate” templates that UX researchers and designers can use
that describe how participants will be treated and how the data collected will be used so that
new documents do not have to be created for each evaluation study. Many companies also
ask the evaluation participants to sign a nondisclosure agreement, which requires that they
do not talk about the product and their experience of evaluating it with anyone after com-
pleting the evaluation. Companies require this because they do not want their competitors
and the public to know about the product before it is launched or modified.
14.6.2 Issues That Influence the Choice of Method and How the Data Is Interpreted
Decisions have to be made about what data is needed to answer the study questions, how
the data will be analyzed, and how the findings will be presented (see Chapters 8 and 9).
To a great extent, the method used determines the type of data collected, but there are still
some choices. For example, should the data be treated statistically? Ideally, this question is
addressed before the data is collected, but if unexpected data arises, for instance from an
in-the-wild study, this question may need to be considered afterward. For example, in-the-wild
studies sometimes generate demographic data or counts (categorical data) that can be
analyzed and presented using descriptive statistics (for example, the percentage of people in
different age groups). Some general questions also need to be asked. Is the method reliable?
Has the method produced the kind of data intended? Is the evaluation study ecologically
valid, or is the fundamental nature of the process being changed by studying it? Are biases
creeping in that will distort the results? Will the results be generalizable; that is, what is
their scope?

DILEMMA
When is a person considered vulnerable, and how might this affect them?
Who is vulnerable? The answer is all of us at various times and stages in our lives. At any
particular time, however, some people are more vulnerable than others (for example, children
and people with emotional and certain physical disabilities). Furthermore, definitions of people
who are vulnerable vary from country to country, state to state, and policy to policy, so
the following two scenarios are broad categories to get you thinking about this important
issue. At what age can children read and sign their own consent forms? Is it when they are
considered to be old enough to understand what they are being asked to do? This could be 12
years of age, or at other times and places 16 or even 18 or 21. It also depends on the kind of
study. In some parts of the world, a 17-year-old can get married but may need their parents
to sign a form saying that they can take part in an evaluation study to rate the realism of a
social robot’s expressions. What is the balance here between seeking reasonable consent and
respecting individuals’ rights to privacy for themselves and their families?
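The kind of descriptive statistic mentioned earlier, such as the percentage of participants falling into different age groups, is straightforward to compute. The following sketch uses made-up ages, and the age brackets are illustrative assumptions rather than a standard:

```python
from collections import Counter

def age_group_percentages(ages, bins=((0, 17), (18, 34), (35, 54), (55, 120))):
    """Bucket participant ages and report the percentage in each group,
    the kind of descriptive statistic an in-the-wild study might yield.
    The bin boundaries here are illustrative, not a standard scheme."""
    labels = [f"{lo}-{hi}" for lo, hi in bins]
    counts = Counter()
    for age in ages:
        for (lo, hi), label in zip(bins, labels):
            if lo <= age <= hi:
                counts[label] += 1
                break
    total = sum(counts.values())
    return {label: round(100 * counts[label] / total, 1)
            for label in labels if counts[label]}

print(age_group_percentages([22, 29, 35, 41, 67, 19, 52, 70]))
```

Reporting percentages rather than raw counts makes studies with different numbers of participants easier to compare.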
Reliability
The reliability or consistency of a method is how well it produces the same results on separate
occasions under the same circumstances. Another evaluator or researcher who follows the
same procedure should get similar results. Different evaluation methods have different degrees
of reliability. For example, a carefully controlled experiment will have high reliability, whereas
observing users in their natural setting will be variable. An unstructured interview will have
low reliability—it would be difficult if not impossible to repeat exactly the same discussion.
Validity
Validity is concerned with whether the evaluation method measures what it is intended to
measure. This encompasses both the method itself and the way it is implemented. If, for
example, the goal of an evaluation study is to find out how users use a new product in their
homes, then it is not appropriate to plan a lab experiment. An ethnographic study in users’
homes would be more appropriate. If the goal is to find average performance times for com-
pleting a task, then a method that only recorded the number of user errors would be invalid.
These examples are deliberately extreme, but subtler mistakes can be made, and it’s good to
consider these questions for each study.
Ecological Validity
Ecological validity is a particular kind of validity that concerns how the environment in
which an evaluation is conducted influences or even distorts the results. For example,
lab experiments are controlled, so what the participants do and how they behave is quite
different from what happens naturally in their workplace, at home, or in leisure environ-
ments. Lab experiments therefore have low ecological validity because the results are
unlikely to represent what happens in the real world. In contrast, ethnographic studies
do not impact the participants or the study location as much, so they have high ecologi-
cal validity.
Ecological validity is also affected when participants are aware of being studied. This is
sometimes called the Hawthorne effect after a series of experiments at the Western Electric
Company’s Hawthorne factory in the United States in the 1920s and 1930s. The stud-
ies investigated changes in length of working day, heating, lighting, and so on; however,
eventually it was discovered that the workers were reacting positively to being given spe-
cial treatment rather than just to the experimental conditions. Similar findings sometimes
occur in medical trials. Patients given the placebo dose (a false dose in which no drug is
administered) show improvement that is due to receiving extra attention that makes them
feel good.
Bias
Bias occurs when the results are distorted. For example, expert evaluators performing a
heuristic evaluation may be more sensitive to certain kinds of design flaws than others, and
this will be reflected in the results. When collecting observational data, researchers may con-
sistently fail to notice certain types of behavior because they do not deem them important.
Put another way, they may selectively gather data that they think is important. Interviewers
may subconsciously influence responses from interviewees by their tone of voice, their facial
expressions, or the way questions are phrased, so it is important to be sensitive to the pos-
sibility of biases.
Scope
The scope of an evaluation study refers to how much its findings can be generalized. For exam-
ple, some modeling methods, like Fitts’ Law (also discussed in Chapter 16, “Evaluation: Inspec-
tions, Analytics, and Models”), which is used to evaluate keypad design, have a narrow, precise
scope. (The problems of overstating results are discussed in Chapter 9, “Data Analysis”).
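Fitts’ law itself is a compact model, commonly written in its Shannon formulation as MT = a + b * log2(D/W + 1), where D is the distance to the target and W is its width. The sketch below uses purely illustrative values for the constants a and b; in practice they must be fitted from user data for a particular device:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predict movement time (seconds) to acquire a target using the
    Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).
    The intercept a and slope b are device-specific constants estimated
    from data; the defaults here are purely illustrative."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A small, distant key is predicted to be slower to hit than a large,
# nearby one, which is why the model is useful for keypad design.
print(round(fitts_movement_time(distance=200, width=10), 3))
print(round(fitts_movement_time(distance=40, width=20), 3))
```

The narrow scope of the model is visible in the code: it predicts pointing time only, and says nothing about learnability, errors, or satisfaction.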
In-Depth Activity
In this activity, think about the case studies and reflect on the evaluation methods used.
1. For the two case studies discussed in this chapter, think about the role of evaluation in the
design of the system and note the artifacts that were evaluated: when during the design
were they evaluated, which methods were used, and what was learned from the evalua-
tions? Note any issues of particular interest. You may find that constructing a table like the
one shown here is a helpful approach.
Name of the study or artifact evaluated | When during the design did the evaluation occur? | How controlled was the study, and what role did users have? | Which methods were used? | What kind of data was collected, and how was it analyzed? | What was learned from the study? | Notable issues
2. What were the main constraints that influenced the evaluations?
3. How did the use of different methods build on and complement each other to give a
broader picture of the evaluations?
4. Which parts of the evaluations were directed at usability goals and which at user experi-
ence goals?
Further Reading
KRUG, S. (2014) Don’t Make Me Think: A Common Sense Approach to Web Usability (3rd
ed.). New Riders. This book provides many useful practical examples of usability issues and
how best to avoid them.
LAZAR, J., FENG, J. H. and HOCHHEISER, H. (2017) Research Methods in Human–
Computer Interaction (2nd ed.). Cambridge, MA: Elsevier/Morgan Kaufmann Publishers.
This book provides a useful overview of qualitative and quantitative methods. Chapter 15,
“Working with Human Subjects,” discusses ethical issues of working with human partici-
pants. PowerPoint slides are also available at:
https://www.elsevier.com/books-and-journals/book-companion/9780128053904
Summary
The goal of this chapter was to introduce the main approaches to evaluation and the methods
typically used. These will be revisited in greater depth in the next two chapters. This chapter
stressed how evaluation is done throughout design by collecting information about users’ or
potential users’ experiences when interacting with a prototype, a computer system, a compo-
nent of a computer system, or a design artifact (such as a screen sketch) to improve its design.
The pros and cons of running lab-based evaluations versus field studies were outlined in
terms of participant reach, cost, effort, constraints, and the types of results that can be elicited.
Choosing which approach to use will depend on the goals of the evaluation, the researcher’s
or evaluator’s expectations, and the resources available to them.
Crowdsourcing was presented as a creative way of involving a wide range of people with
different ideas and skills. Finally, we briefly mentioned the ethical issues relating to how evalu-
ation participants are treated and their rights to privacy. We also raised questions about data
interpretation including the need to be aware of biases, reliability, validity, ecological validity,
and the scope of the study.
Key Points
• Evaluation and design are closely integrated.
• Some of the same data gathering methods are used in evaluation as for discovering require-
ments and identifying users’ needs, for instance, observation, interviews, and questionnaires.
• Evaluations can be done in controlled settings such as labs, less-controlled field settings, or
where users are not present.
• Usability testing and experiments involve a high level of control over both what users do
and what is tested, whereas field and in-the-wild evaluations typically impose little or no
control on participants.
• Different methods are usually combined to provide different perspectives within a study.
• Participants need to be made aware of their rights. This is often done through informed
consent forms.
• It is important not to over-generalize findings from an evaluation.
Further Reading
ROGERS, Y., YUILL, N. and MARSHALL, P. (2013) “Contrasting Lab-Based and In-
the-Wild Studies for Evaluating Multi-User Technologies.” In S. Price, C. Jewitt and B. Brown (eds) (2013)
The SAGE Handbook of Digital Technology Research. SAGE Publications: 359–373. This chapter
explores the pros and cons of lab-based and in-the-wild evaluation studies with reference to
different types of technology platforms including tabletops and large wall displays.
SHNEIDERMAN, B., PLAISANT, C., COHEN, M., JACOBS, S., ELMQVIST, N. and
DIAKOPOULOS, N. (2016) Designing the User Interface: Strategies for Effective Human-
Computer Interaction (6th ed.). Addison-Wesley, Pearson. Chapter 5 provides an alternative
way of categorizing evaluation methods and offers a useful overview.
TULLIS, T. and ALBERT, B. (2013) Measuring the User Experience (2nd ed.). Morgan Kauf-
mann. This book provides a more general treatment of usability testing. It focuses more
strongly on evaluating the user experience and UX design.
Chapter 15
EVALUATION STUDIES: FROM CONTROLLED TO NATURAL SETTINGS
15.1 Introduction
15.2 Usability Testing
15.3 Conducting Experiments
15.4 Field Studies
Objectives
The main goals of the chapter are to accomplish the following:
• Explain how to do usability testing.
• Outline the basics of experimental design.
• Describe how to do field studies.
15.1 Introduction
Imagine that you have designed a new app to allow school children ages 9 or 10 and their
parents to share caring for the class hamster over the school holidays. The app will schedule
which children are responsible for the hamster and when, and it will also record when it is
fed. The app will also provide detailed instructions about when the hamster is scheduled to
go to another family and the arrangements about when and where it will be handed over. In
addition, both teachers and parents will be able to access the schedule and send and leave
messages for each other. How would you find out whether the children, their teacher, and
their parents can use the app effectively and whether it is satisfying to use? What evaluation
methods would you employ?
In this chapter, we describe evaluation studies that take place in a spectrum of settings,
from controlled laboratories to natural settings. Within this range we focus on the following:
• Usability testing, which takes place in usability labs and other controlled lab-like settings
• Experiments, which take place in research labs
• Field studies, which take place in natural settings, such as people’s homes, schools, work,
and leisure environments
15.2 Usability Testing
The usability of products has traditionally been tested in controlled laboratory settings.
This approach emphasizes how usable a product is. Initially, it was most commonly used to
evaluate desktop applications, such as websites, word processors, and search tools. It is also
important now, however, to test the usability of apps and other digital products. Perform-
ing usability testing in a laboratory, or in a temporarily assigned controlled environment,
enables designers to control what users do and allows them to control the environmental
and social influences that might impact the user’s performance. The goal is to test whether
the product being developed is usable by the intended user in order to achieve the tasks
for which it was designed and whether users are satisfied with their experience. For some
products, such as games, designers will also want to know whether their product is enjoy-
able and fun to use. (Chapter 1, “What is Interaction Design,” discusses usability and user
experience goals.)
15.2.1 Methods, Tasks, and Users
Collecting data about users’ performance on predefined tasks is a central component of
usability testing. As mentioned in Chapter 14, “Introducing Evaluation,” a combination of
methods is often used to collect data. The data includes video recordings of the users, includ-
ing their facial expressions, logged keystrokes, and mouse and other movements, such as
swiping and dragging objects. Sometimes, participants are asked to describe what they are
thinking and doing out loud (the “think aloud” technique) while carrying out tasks as a way
of revealing what they are thinking and planning. In addition, a user satisfaction question-
naire is used to find out how users actually feel about using the product by asking them
to rate it using a number of scales after they interact with it. Structured or semistructured
interviews may also be conducted with users to collect additional information about what
they liked and did not like about the product. Sometimes, designers also collect data about
how the product is used in the field.
Examples of the tasks that are typically given to users include searching for information,
reading different typefaces (for example, Helvetica and Times), navigating through differ-
ent menus, and uploading apps. Performance times and the number of the different types of
actions carried out by users are the two main performance measures. Obtaining these two
measures involves recording the time it takes typical users to complete a task, such as finding
a website, and the number of errors that users make, such as selecting incorrect menu options
when creating a visual display. The following quantitative performance measures, which were
identified in the late 1990s, are still used as a baseline for collecting user performance data
(Wixon and Wilson, 1997):
• Number of users completing a task successfully
• Time to complete a task
• Time to complete a task after a specified time away from the product
• Number and type of errors per task
• Number of errors per unit of time
• Number of navigations to online help or manuals
• Number of users making a particular error
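Given logged sessions, several of these measures reduce to simple aggregation. The following sketch computes a few of them from hypothetical task logs; the session records and field names are invented for illustration:

```python
from statistics import mean

# Hypothetical per-session logs from a usability test: one record per
# participant attempt at a single predefined task.
sessions = [
    {"user": "P1", "completed": True,  "seconds": 74,  "errors": 2},
    {"user": "P2", "completed": True,  "seconds": 58,  "errors": 0},
    {"user": "P3", "completed": False, "seconds": 120, "errors": 5},
    {"user": "P4", "completed": True,  "seconds": 66,  "errors": 1},
]

def summarize(sessions):
    """Derive a few of the classic performance measures in the spirit of
    Wixon and Wilson (1997) from raw session records."""
    completed = [s for s in sessions if s["completed"]]
    return {
        "completion_rate": len(completed) / len(sessions),
        "mean_time_when_completed": mean(s["seconds"] for s in completed),
        "total_errors": sum(s["errors"] for s in sessions),
        "users_with_errors": sum(1 for s in sessions if s["errors"] > 0),
    }

print(summarize(sessions))
```

Mean completion time is computed only over successful attempts here; whether to include failed attempts is a per-study analysis decision.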
A key concern when doing usability testing is the number of users that should be involved:
early research suggests that 5 to 12 is an acceptable number (Dumas and Redish, 1999),
though more is often regarded as being better because the results represent a larger and often
broader selection of the target user population. However, sometimes it is reasonable to involve
fewer users when there are budget and schedule constraints. For instance, quick feedback
about a design idea, such as the initial placement of a logo on a website, can be obtained from
only two or three users reporting on how quickly they spot the logo and whether they like its
design. Sometimes, more users can be involved early on by distributing an initial questionnaire
online to collect information about users’ concerns. The main concerns can then be examined
in more detail in a follow-up lab-based study with a small number of typical users.
15.2.2 Labs and Equipment
Many large companies, such as Microsoft, Google, and Apple, test their products in custom-
built usability labs that consist of a main testing lab with recording equipment and an obser-
vation room where the designers can watch what is going on and how the data collected
is being analyzed. There may also be a reception area where users can wait, a storage area,
and a viewing room for observers. These lab spaces can be arranged to mimic superficially
features of the real world. For example, when testing an office product or for use in a hotel
reception area, the lab can be set up to resemble those environments. Soundproofing is provided,
and windows, co-workers, and other workplace and social distractions are eliminated so that
the users can concentrate on the tasks that have been set up for them to perform. While
controlled environments like these enable researchers to capture data about users’ uninterrupted
performance, the impact that real-world interruptions can have on usability is not captured.

This link provides a practical introduction to usability testing and describes how it relates to
UX design: https://icons8.com/articles/usability-practical-definition-ux-design/
Typically, there are two to three wall-mounted video cameras that record the users’
behavior, such as hand movements, facial expressions, and general body language. Micro-
phones are placed near where the participants will be sitting to record their comments. Video
and other data is fed through to monitors in an observation room, which is usually separated
from the main lab or workroom by a one-way mirror so that designers can watch what par-
ticipants are doing but not be seen by the participants. The observation room can be a small
auditorium with rows of seats at different levels or, more simply, a small backroom consisting
of a row of chairs facing the monitors.
Figure 15.1 shows a typical arrangement in which designers in an observation room
are watching a usability test through a one-way mirror, as well as watching the data being
recorded on a video monitor.
Usability labs can be expensive and labor-intensive to run and maintain. Therefore, less
expensive and more versatile alternatives started to become popular in the early and mid-
1990s. The development of mobile and remote usability testing equipment also corresponded
with the need to do more testing in small companies and in other venues. Mobile usability
equipment typically includes video cameras, laptops, eye-tracking devices, and other measur-
ing equipment that can be set up temporarily in an office or other space, converting it into a
makeshift usability laboratory. An advantage of this approach is that the equipment can be
taken into work settings, enabling testing to be done on-site, which makes it less artificial and
more convenient for the participants.
An increasing number of products are specifically designed for performing mobile evalua-
tions. Some are referred to as lab-in-a-box or lab-in-a-suitcase because they pack away neatly
into a convenient carrying case. The portable lab equipment typically consists of off-the-shelf
components that plug into a laptop that can record video directly to hard disk, eye-trackers
(some of which take the form of glasses for recording the user’s gaze, as shown in Figure 15.2),
and facial recognition systems for recording changes in the user’s emotional responses.

Figure 15.1 A usability laboratory in which designers watch participants on a monitor and
through a one-way mirror (Source: Helen Sharp)
An example of a recent study in which eye-tracking glasses were used to record the eye-
gaze of people in a shopping mall is reported by Nick Dalton and his colleagues (Dalton
et al., 2015). The goal of this study was to find out whether shoppers pay attention to large-
format plasma screen displays when wandering around a large shopping mall in London.
The displays varied in size, and some contained information about directions to different
parts of the mall, while others contained advertisements. Twenty-two participants (10 males
and 12 females, aged 19 to 73 years old) took part in the study in which they were asked to
carry out a typical shopping task while wearing Tobii Glasses Mobile Eye Tracking glasses
(see Figure 15.2). These participants were told that the researchers were investigating what
people look at while shopping; no mention was made of the displays. Each participant was
paid £10 to participate in the study. They were also told that there would be a prize drawing
after the study and that participants who won would receive a gift of up to £100 in value. Their
task was to find one or more items that they would purchase if they won the prize drawing.
The researchers did this so that the study was an ecologically valid in-the-wild shopping task,
in which the participants focused on shopping for items that they wanted.
As the participants moved around the mall, their gaze was recorded and analyzed to
determine the percentage of time that they were looking at different things. This was done by
using software that converted eye-gaze movements so that they could be overlaid on a video
of the scene. The researchers then coded the participants’ gazes based on where they were
looking (for instance, at the architecture of the mall, products, people, signage, large text,
or displays). Several other quantitative and qualitative analyses were also performed. The
findings from these analyses revealed that participants looked at displays, particularly large
plasma screens, more than had been previously reported in earlier studies by other researchers.
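The dwell-time analysis described here, converting coded fixations into the percentage of total looking time per category, can be sketched as follows; the fixation durations and category names are invented for illustration:

```python
from collections import defaultdict

# Hypothetical coded gaze fixations: (category, duration in milliseconds),
# analogous to the coding step described for the shopping-mall study.
fixations = [
    ("products", 420), ("signage", 180), ("displays", 300),
    ("people", 250), ("products", 530), ("displays", 150),
    ("architecture", 170),
]

def gaze_share(fixations):
    """Percentage of total looking time spent on each coded category."""
    totals = defaultdict(int)
    for category, ms in fixations:
        totals[category] += ms
    grand_total = sum(totals.values())
    return {c: round(100 * t / grand_total, 1) for c, t in totals.items()}

print(gaze_share(fixations))
```

Weighting by fixation duration, rather than counting fixations, matters: a single long look at a display contributes more attention than several brief glances.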
Figure 15.2 The Tobii Glasses Mobile Eye-Tracking System (Source: Dalton et al. (2015),
p. 3891. Reproduced with permission of ACM Publications)

Another trend in usability testing is to conduct remote, unmoderated usability testing in
which users perform a set of tasks with a product in their own setting, and their interactions are
logged remotely (Madathil and Greenstein, 2011). An advantage of this approach is that many
users can be tested at the same time in real-world settings, and the logged data can be
automatically compiled for data analysis. For example, clicks can be tracked and counted per page when
users search for specific information on websites. This approach is particularly popular in large
companies such as Microsoft and Google and in companies specializing in user testing (for
example, Userzoom.com) that test products used across the world. With remote testing, large
numbers of participants can be recruited who are able to participate at convenient times within
their own time zones. As more and more products are designed for global markets, designers
and researchers appreciate this flexibility. Remote testing also allows individuals with disabili-
ties to be involved, as they can work from their own homes (Petrie et al., 2006).
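Aggregating remotely logged interactions, such as counting clicks per page, is a typical first analysis step. A minimal sketch over an invented event log:

```python
from collections import Counter

# Hypothetical remotely logged events from unmoderated sessions:
# (participant, page, event type). The data is invented for illustration.
events = [
    ("P1", "/home", "click"), ("P1", "/search", "click"),
    ("P1", "/search", "click"), ("P2", "/home", "click"),
    ("P2", "/results", "click"), ("P2", "/results", "scroll"),
    ("P3", "/search", "click"), ("P3", "/search", "click"),
]

def clicks_per_page(events):
    """Count click events per page across all remote participants."""
    return Counter(page for _, page, kind in events if kind == "click")

print(clicks_per_page(events))
```

Because the logging is automatic, the same aggregation scales unchanged from three participants to thousands, which is part of the appeal of remote testing.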
15.2.3 Case Study: Testing the Usability of the iPad
When Apple’s iPad first came onto the market, usability specialists Raluca Budiu and Jakob
Nielsen from the Nielsen Norman Group conducted user tests to evaluate participants’ inter-
actions with websites and apps specifically designed for the iPad (Budiu and Nielsen, 2010).
This classic study is presented here because it illustrates how usability tests are carried out and
the types of modifications that are made to accommodate real-world constraints, such as hav-
ing a limited amount of time to evaluate the iPad as it came onto the market. Completing the
study quickly was important because Raluca Budiu and Jakob Nielsen wanted to get feedback
to third-party developers, who were creating apps and websites for the iPad. These develop-
ers were designing products with little or no contact with the iPad developers at Apple, who
needed to keep details about the design of the iPad secret until it was launched. There was also
considerable “hype” among the general public and others before the launch, so many people
were eager to know if the iPad would really live up to expectations. Because of the need for
a quick first study, and to make the results public around the time of the iPad launch, a sec-
ond study was carried out in 2011, a year later, to examine some additional usability issues.
(Reports of both studies are available on the Nielsen Norman Group website, which suggests
reading the second study first. However, in this case study, the reports are discussed in chrono-
logical order: http://www.nngroup.com/reports/ipad-app-and-website-usability.)
15.2.3.1 iPad Usability: First Findings from User Testing
In the first study of iPad usability, Raluca Budiu and Jakob Nielsen (Budiu and Nielsen,
2010) used two usability evaluation methods: usability testing with think-aloud in which
users said what they were doing and thinking as they did it (discussed earlier in Chapter 8,
“Data Gathering”) and an expert review, which will be discussed in the next chapter. A key
question they asked was about whether user expectations were different for the iPad as
compared to the iPhone. They focused on this issue because a previous study of the iPhone
showed that people preferred using apps to browsing the web because the latter was slow
and cumbersome at that time. They wondered whether this would be the same for the iPad,
where the screen was larger and web pages were more similar to how they appeared on the
laptops or desktop computers that most people were accustomed to using at the time.
The usability testing was carried out in two cities in the United States: Fremont, Califor-
nia, and Chicago, Illinois. The test sessions were similar: the goal of both was to understand
the typical usability issues that users encounter when using applications and accessing web-
sites on the iPad. Seven participants were recruited. All were experienced iPhone users who
had owned their phones for at least three months and who had used a variety of apps.
One reason for selecting participants who used iPhones was because they would have
had previous experience in using apps and the web with a similar interaction style as the iPad.
The participants were considered to be typical users who represented the range of those
who might purchase an iPad. Two participants were in their 20s, three were in their 30s, one
was in their 50s, and one was in their 60s. Three were males, and four were females.
Before taking part, the participants were asked to read and sign an informed consent
form agreeing to the terms and conditions of the study. This form described the following:
• What the participant would be asked to do
• The length of time needed for the study
• The compensation that would be offered for participating in the study
• The participants’ right to withdraw from the study at any time
• A promise that the person’s identity would not be disclosed
• An agreement that the data collected from each participant would be confidential and
would not be made available to marketers or anyone other than the researchers
The Tests
The session started with participants being invited to explore any application they
found interesting on the iPad. They were asked to comment on what they were looking for or
reading, what they liked and disliked about a site, and what made it easy or difficult for them
to carry out a task. A moderator sat next to each participant, observed, and took notes. The
sessions were video-recorded, and they lasted about 90 minutes each. Participants worked on
their own.
After exploring the iPad, the participants were asked by the researchers to open specific
apps or websites, explore them, and then carry out one or more tasks as they would have if
they were on their own. Each participant was assigned the tasks in a random order. All of
the apps that were tested were designed specifically for the iPad, but for some tasks the users
were asked to do the same task on a website that was not specifically designed for the iPad.
For these tasks, the researchers took care to balance the presentation order so that the app
would be the first presented for some participants and the website would be first presented
for others. More than 60 tasks were chosen from more than 32 different sites. Examples are
shown in Table 15.1.
App or website | Task
iBook | Download a free copy of Alice’s Adventures in Wonderland and read through the first few pages.
Craigslist | Find some free mulch for your garden.
Time Magazine | Browse through the magazine, and find the best pictures of the week.
Epicurious | You want to make an apple pie tonight. Find a recipe and see what you need to buy in order to prepare it.
Kayak | You are planning a trip to Death Valley in May this year. Find a hotel located in the park or close to the park.
Table 15.1 Examples of some of the user tests used in the iPad evaluation (adapted from Budiu
and Nielsen, 2010)
Source: http://www.nngroup.com/reports/ipad-app-and-website-usability. Used courtesy of the
Nielsen Norman Group
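The random task ordering and balanced app-first/website-first presentation described for this study can be sketched as follows; the participant and task names echo the examples above, but the assignment logic is an illustrative assumption, not the researchers’ actual procedure:

```python
import random

def assign_conditions(participants, tasks, seed=7):
    """Give each participant an independent random task order and
    alternate whether the app or the website version is presented first.
    An illustrative counterbalancing sketch only."""
    rng = random.Random(seed)  # fixed seed makes the assignment repeatable
    plan = {}
    for i, participant in enumerate(participants):
        order = list(tasks)
        rng.shuffle(order)  # random task order per participant
        plan[participant] = {
            "task_order": order,
            "presented_first": "app" if i % 2 == 0 else "website",
        }
    return plan

plan = assign_conditions(["P1", "P2", "P3", "P4"],
                         ["iBook", "Craigslist", "Epicurious", "Kayak"])
for participant, details in plan.items():
    print(participant, details)
```

Alternating which version is presented first keeps any practice or fatigue effect from systematically favoring the app or the website in the aggregated results.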
The Equipment
The testing was done using a setup (see Figure 15.3) similar to the mobile
usability kit described earlier. A camera recorded the participant’s interactions and gestures
when using the iPad and streamed the recording to a laptop computer. A webcam was also
used to record the expressions on the participants’ faces and their think-aloud commentary.
The laptop ran software called Morae, which synchronized these two data streams. Up to
three observers (including the moderator sitting next to the participant) watched the video
streams (rather than observing the participants directly) on their laptops situated on the table
so that they did not invade the participants’ personal space.
Usability Problems The main findings from the study showed that the participants were able to
interact with websites on the iPad but that it was not optimal. For example, links on the pages
were often too small to tap on reliably, and the fonts were sometimes difficult to read. The vari-
ous usability problems identified in the study were classified according to a number of well-
known interaction design principles and concepts, including mental models, navigation, quality
of images, problems of using a touchscreen with small target areas, lack of affordances, getting
lost in the application, effects of changing orientations, working memory, and feedback received.
Getting lost in an application is an old but important problem for designers of digital
products, and some participants got lost because they tapped the iPad too much and could
not find a back button and could not get back to the home page. One participant said “. . . I
like having everything there [on the home page]. That’s just how my brain works” (Budiu and
Nielsen, 2010, p. 58). Other problems arose because applications appeared differently in the
two views possible on the iPad: portrait and landscape.

ACTIVITY 15.1
1. What was the main purpose of this study?
2. What aspects are considered to be important for good usability and user experience in this study?
3. How representative do you consider the tasks outlined in Table 15.1 to be for a typical iPad user?

Comment
1. The main purpose of the study was to find out how participants interacted with the iPad by examining how they interacted with the apps and websites that they used on the iPad. The findings were intended to help designers and developers determine whether specific websites need to be developed for the iPad.
2. The definition of usability in Chapter 1 suggests that the iPad should be efficient, effective, safe, easy to learn, easy to remember, and have good utility (that is, good usability). The definition of user experience suggests that it should also support creativity and be motivating, helpful, and satisfying to use (that is, offer a good user experience). The iPad is designed for the general public, so the range of users is broad in terms of age and experience with technology.
3. The tasks are a small sample of the total set prepared by the researchers. They cover shopping, reading, planning, and finding a recipe, which are common activities that people engage in during their everyday lives.
Interpreting and Presenting the Data Based on the findings of their study, Budiu and Nielsen
made a number of recommendations, including supporting standard navigation. The results
of the study were written up as a report that was made publicly available to app developers
and the general public. It provided a summary of key findings for the general public as well
as specific details of the problems the participants had with the iPad so that developers could
decide whether to make specific websites and apps for the iPad.
While revealing how usable websites and apps are on the iPad, this user testing did not
address how the iPad would be used in people’s everyday lives. This required a field study
where observations were made of how people use iPads in their own homes, at school, in the
gym, and when traveling, but this did not happen because of lack of time.
Figure 15.3 The setup used in the Chicago usability testing sessions
Source: http://www.nngroup.com/reports/ipad-app-and-website-usability. Used courtesy of the Nielsen Norman Group
ACTIVITY 15.2
1. Was the selection of participants for the iPad study appropriate? Justify your comments.
2. What might have been some of the problems with asking participants to think out loud as
they completed the tasks?
Comments
1. The researchers tried to get a representative set of participants across an age and gender
range with similar skill levels, that is, participants who had already used an iPhone. Ideally,
it would have been good to have had additional participants to see whether the findings
were more generalizable across the broad range of users for whom the iPad was designed.
However, it was important to do the study as quickly as possible and get the results out to
developers and to the general public.
15.2.3.2 iPad Usability: Year One
Having rushed to get their first report out when the iPad first came onto the market, Raluca
Budiu and Jakob Nielsen did more tests a year later in 2011. Even though many of their rec-
ommendations (for example, designing apps with back buttons, broader use of search, and
direct access to news articles by touching headlines on the front page) were implemented,
there were still some problems. For example, users accidentally touched something and
couldn’t find their way back to their starting point. There were also magazine apps that
required many steps to access a table of contents, and that led users to make mistakes when
navigating through the magazine.
Normally, a second usability study would not be done just a year after the first. How-
ever, the first study was done with participants who did not have direct experience with an
iPad. A year later, the researchers were able to recruit participants with at least two months’
experience of using an iPad. Another reason for doing a second study so close to the first one
was that many of the apps and websites with usability problems were developed without the
direct involvement of the Apple iPad development team due to the need for secrecy until the iPad
was officially launched onto the market.
This time, user testing was done with 16 iPad users, half men and half women. Fourteen were
between 25 and 50 years of age, and two were older than 50. The new findings included splash screens
that became boring after a while for regular users, too much information on the screen, fonts that
were too small, and swiping the wrong items when several options were presented on-screen.
The first set of tests in 2010 illustrates how the researchers had to adapt their testing
method to fit within a tight time period. Designers and researchers often have to modify how
they go about user testing for a number of reasons. For example, in a study in Namibia, the
researchers reported that questionnaires did not work well because the participants gave the
responses that they thought the researchers wanted to hear (Paterson et al., 2011). However,
“the interviews and observations, revealed that many participants were unable to solve all
tasks and that many struggled . . . Without the interviews and observations many issues
would not have been laid open during the usability evaluation” (Paterson et al., 2011, p.
245). This experience suggests that using multiple methods can reveal different usability
problems. Even more important, it illustrates the importance of not taking for granted that a
method used with one group of participants will work with another group, particularly when
working with people from different cultures.
2. If a person is concentrating hard on a task, it can be difficult to talk at the same time. This
can be overcome by asking participants to work in pairs so that they talk to each other
about the problems that they encounter.

For another example of usability testing, see the report entitled “Case Study:
Iterative Design and Prototype Testing of the NN/g Homepage” by Kathryn
Whitenton and Sarah Gibbons from the Nielsen Norman Group (August 26, 2018),
which describes how user testing with prototypes is integrated into the design
process. You can view this report at: https://www.nngroup.com/articles/case-study-iterative-design-prototyping/.
15.3 Conducting Experiments
In research contexts, specific hypotheses are tested that make a prediction about the way
users will perform with an interface. The benefits are more rigor and confidence that one
interface feature is easier to understand or faster to use than another. An example of a
hypothesis is that it is easier to select options from context menus (that is, menus that
provide options related to the context determined by the users’ previous choices) than
from cascading menus.
Hypotheses are often based on a theory, such as Fitts’ Law (see Chapter 16, “Evaluation:
Inspections, Analytics, and Models”), or previous research findings. Specific measurements
provide a way of testing the hypothesis. In the previous example, the accuracy of selecting
menu options could be compared by counting the number of errors made by participants
when selecting from each menu type.
15.3.1 Hypotheses Testing
Typically, a hypothesis involves examining a relationship between two things, called variables.
Variables can be independent or dependent. An independent variable is what the researcher
manipulates (that is, selects), and in the previous example, it is the different menu types. The
other variable is called the dependent variable, and in our example this is the time taken to
select an option. It is a measure of user performance and, if our hypothesis is correct, will
vary depending on the different types of menus.
When setting up a hypothesis to test the effect of the independent variable(s) on the
dependent variable, it is normal to derive a null hypothesis and an alternative one. The null
hypothesis in our example would state that there is no difference in the time it takes users to
find items (that is, the selection time) between context and cascading menus. The alternative
hypothesis would state that there is a difference between the two regarding selection time.
When a difference is specified but not what it will be, it is called a two-tailed hypothesis.
This is because it can be interpreted in two ways: either it is faster to select options from the
context menu or the cascading menu. Alternatively, the hypothesis can be stated in terms of
one effect. This is called a one-tailed hypothesis, and it would state that “it is faster to select
options from context menus,” or vice versa. A one-tailed hypothesis would be preferred if
there was a strong reason to believe it to be the case. A two-tailed hypothesis would be cho-
sen if there was no reason or theory that could be used to support the case that the predicted
effect would go one way or the other.
A video illustrates the usability problems that a woman had when navigating a
website to find the best deal for renting a car in her neighborhood. It illustrates
how usability testing can be done in person by a designer sitting with a partici-
pant. The video is called Rocket Surgery Made Easy by Steve Krug, and you can
view it here: https://www.youtube.com/watch?v=QckIzHC99xc.
You might ask why you need a null hypothesis, since it seems to be the opposite of what the
experimenter wants to find out. It is put forward so that the data can reject a statement without
necessarily supporting the opposite statement. If the experimental data shows a big difference
between selection times for the two menu types, then the null hypothesis that the menu type has
no effect on selection time can be rejected, which is different from saying that there is an effect.
Conversely, if there is no difference between the two, then the null hypothesis cannot be rejected
(that is, the claim that it is faster to select options from context menus is not supported).
To test a hypothesis, the researcher has to set up the conditions and find ways to keep
other variables constant to prevent them from influencing the findings. This is called the
experimental design. Examples of other variables that need to be kept constant for both types
of menus might include size and screen resolution. For example, if the text is in 10-point font
size in one condition and 14-point font size in the other, then it could be this difference that
causes the effect (that is, differences in selection speed are due to font size). More than one
condition can also be compared with the control, for example Condition 1 = Context menu;
Condition 2 = Cascading menu; and Condition 3 = Scrolling.
Sometimes, a researcher might want to investigate the relationship between two inde-
pendent variables, for example, age and educational background. A hypothesis might be that
young people are faster at searching the web than older people and that those with a scien-
tific background are more effective at searching the web. An experiment would be set up to
measure the time it takes to complete the task and the number of searches carried out. The
analysis of the data would focus on the effects of the main variables (age and background)
and also look for any interactions among them.
Hypothesis testing can also be extended to include even more variables, but it makes the
experimental design more complex. An example is testing the effects of age and educational
background on user performance for two methods of web searching: one using a search engine
and the other manually navigating through links on a website. Again, the goal is to test the effects
of the main variables (age, educational background, and web searching method) and to look for
any interactions among them. However, as the number of variables increases in an experimental
design, it makes it more difficult to work out what is causing the results from the data.
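The conditions of such a multi-variable (factorial) design can be enumerated programmatically. A minimal sketch in Python, using hypothetical level names for the three independent variables in the example above:

```python
from itertools import product

# Hypothetical levels for the three independent variables:
# age group, educational background, and web-searching method.
ages = ["young", "older"]
backgrounds = ["scientific", "non-scientific"]
methods = ["search engine", "manual link navigation"]

# Each combination of levels is one experimental condition (a 2x2x2 design).
conditions = list(product(ages, backgrounds, methods))
print(len(conditions))  # 2 * 2 * 2 = 8 conditions
```

Listing the cells this way makes clear why adding variables complicates the design: each new two-level variable doubles the number of conditions that must be staffed with participants.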
15.3.2 Experimental Design
A concern in experimental design is to determine which participants to involve for which
conditions in an experiment. The experience of participating in one condition will affect the
performance of those participants if asked to participate in another condition. For example,
having learned about the way the heart works using multimedia, if one group of participants
was exposed to the same learning material via another medium, for instance, virtual reality,
and another group of participants was not, the participants who had the additional exposure
to the material would have an unfair advantage. Furthermore, it would create bias if the par-
ticipants in one condition within the same experiment had seen the content and the others had
not. The reason for this is that those who had the additional exposure to the content would
have had more time to learn about the topic, and this would increase their chances of answer-
ing more questions correctly. In some experimental designs, however, it is possible to use the
same participants for all conditions without letting such training effects bias the results.
The names given for the different designs are different-participant design, same-participant
design, and matched-participant design. In different-participant design, a single group of participants
is allocated randomly to each of the experimental conditions so that different participants per-
form in different conditions. Another term used for this experimental design is between-subjects
design. An advantage is that there are no ordering or training effects caused by the influence
of participants’ experience on one set of tasks to their performance on the next set, as each
participant only ever performs under one condition. A disadvantage is that large numbers of
participants are needed so that the effect of any individual differences among participants, such
as differences in experience and expertise, is minimized. Randomly allocating the participants
and pretesting to identify any participants that differ strongly from the others can help.
In same-participant design (also called within-subjects design), all participants perform
in all conditions so that only half the number of participants is needed; the main reason for
this design is to lessen the impact of individual differences and to see how performance varies
across conditions for each participant. It is important to ensure that the order in which par-
ticipants perform tasks for this setup does not bias the results. For example, if there are two
tasks, A and B, half the participants should do task A followed by task B, and the other half
should do task B followed by task A. This is known as counterbalancing. Counterbalancing
neutralizes possible unfair effects of learning from the first task, known as the order effect.
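These allocation strategies can be sketched in code. The following Python fragment (participant IDs and condition names are illustrative, not from any actual study) shows random allocation for a between-subjects design and a simple A/B counterbalancing scheme for a within-subjects design:

```python
import random

def assign_between_subjects(participants, conditions, seed=1):
    """Different-participant (between-subjects) design: each participant
    is randomly allocated to exactly one condition."""
    rng = random.Random(seed)  # fixed seed for a reproducible allocation
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return {p: conditions[i % len(conditions)] for i, p in enumerate(shuffled)}

def counterbalance(participants):
    """Same-participant (within-subjects) design: half the participants do
    task A then task B, the other half B then A, to cancel order effects."""
    return {p: ["A", "B"] if i % 2 == 0 else ["B", "A"]
            for i, p in enumerate(participants)}

people = ["P1", "P2", "P3", "P4"]
print(assign_between_subjects(people, ["context menu", "cascading menu"]))
print(counterbalance(people))  # P1, P3 do A first; P2, P4 do B first
```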
In matched-participant design (also known as pair-wise design), participants are matched
in pairs based on certain user characteristics such as expertise and gender. Each pair is then ran-
domly allocated to each experimental condition. A problem with this arrangement is that other
important variables that have not been considered may influence the results. For example, experi-
ence in using the web could influence the results of tests to evaluate the navigability of a website.
Therefore, web expertise would be a good criterion for matching participants. The advantages
and disadvantages of using different experimental designs are summarized in Table 15.2.
The data collected to measure user performance on the tasks set in an experiment usually
includes response times for subtasks, total times to complete a task, and number of errors
per task. Analyzing the data involves comparing the performance data obtained across the
different conditions. The response times, errors, and so on, are averaged across conditions to
see whether there are any marked differences. Statistical tests are then used, such as t-tests
that statistically compare the differences between the conditions, to reveal if these are signifi-
cant. For example, a t-test will reveal whether it is faster to select options from context or
cascading menus.
Design: Different participants (between-subjects design)
Advantages: No order effects.
Disadvantages: Many participants are needed. Individual differences among participants are a problem, which can be offset to some extent by randomly assigning participants to groups.

Design: Same participants (within-subjects design)
Advantages: Eliminates individual differences between experimental conditions.
Disadvantages: Need to counterbalance to avoid ordering effects.

Design: Matched participants (pair-wise design)
Advantages: No order effects. The effects of individual differences are reduced.
Disadvantages: Can never be sure that subjects are matched across variables that might affect performance.

Table 15.2 The advantages and disadvantages of different allocations of participants to conditions
15.3.3 Statistics: t-tests
There are many types of statistics that can be used to test the probability of a result occur-
ring by chance, but t-tests are the most widely used statistical test in HCI and related fields,
such as psychology. The scores, for example, time taken for each participant to select items
from a menu in each condition (that is, context and cascading menus), are used to compute
the means (x̄) and standard deviations (SDs). The standard deviation is a statistical measure
of the spread or variability around the mean. The t-test uses a simple equation to test the
significance of the difference between the means for the two conditions. If they are signifi-
cantly different from each other, we can reject the null hypothesis and in so doing infer that
the alternative hypothesis holds. A typical t-test result that compared menu selection times
for two groups with 9 and 12 participants each might be as follows:

t = 4.53, p < 0.05, df = 19
The t-value of 4.53 is the score derived from applying the t-test; df stands for degrees of
freedom, which represents the number of values in the conditions that are free to vary. This
is a complex concept that we will not explain here other than to mention how it is derived
and that it is always written as part of the result of a t-test. The df value is calculated by
summing the number of participants in one condition minus 1 and the number of participants
in the other condition minus 1; that is, df = (Na − 1) + (Nb − 1), where Na is the number of
participants in one condition and Nb is the number of participants in the other condition. In
our example, df = (9 − 1) + (12 − 1) = 19. p is the probability that the effect found occurred
by chance. So, when p < 0.05, it means that the effect found is probably not due to chance
and that there is only a 5 percent possibility that it could be by chance. In other words, there
most likely is a difference between the two conditions. Typically, a value of p < 0.05 is
considered good enough to reject the null hypothesis, although lower levels of p are more
convincing, for instance, p < 0.01, where the effect found is even less likely to be due to
chance, there being only a 1 percent chance of that being the case.
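The calculation can be sketched in Python. This is a standard independent-samples (Student's) t-test with pooled variance, not code from the study itself, and the selection times below are made up for illustration:

```python
import math
from statistics import mean, variance

def independent_t_test(group_a, group_b):
    """Student's t-test for two independent samples.

    Returns (t, df), where df = (Na - 1) + (Nb - 1) as described in the text.
    """
    na, nb = len(group_a), len(group_b)
    df = (na - 1) + (nb - 1)
    # Pooled variance: the two sample variances combined, weighted by
    # each group's degrees of freedom.
    pooled = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / df
    standard_error = math.sqrt(pooled * (1 / na + 1 / nb))
    t = (mean(group_a) - mean(group_b)) / standard_error
    return t, df

# Hypothetical selection times (in seconds) for 9 and 12 participants.
context = [1.2, 1.4, 1.1, 1.3, 1.5, 1.2, 1.3, 1.4, 1.1]
cascading = [1.9, 2.1, 1.8, 2.0, 2.2, 1.9, 2.1, 2.0, 1.8, 2.3, 2.0, 1.9]

t, df = independent_t_test(context, cascading)
print(f"t = {t:.2f}, df = {df}")  # df = (9 - 1) + (12 - 1) = 19
```

The resulting t-value would then be compared against a critical value (or converted to a p-value using the t-distribution) to decide whether the null hypothesis can be rejected.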
15.4 Field Studies
Increasingly, more evaluation studies are being done in natural settings with either little or no
control imposed on participants’ activities. This change is largely a response to technologies
being developed for use outside office settings. For example, mobile, ambient, IoT, and other
technologies are now available for use in the home, outdoors, and in public places. Typically,
field studies are conducted to evaluate these user experiences.
As mentioned in Chapter 14, evaluations conducted in natural settings are very different
from those conducted in controlled environments, where tasks are set and completed in an
orderly way. In contrast, studies in natural settings tend to be messy in the sense that activities
often overlap and are constantly interrupted by events that are not predicted or controlled
such as phone calls, texts, rain if the study is outside, and people coming and going. This fol-
lows the way that people interact with products in their everyday messy worlds, which is gen-
erally different from how they perform on fixed tasks in a laboratory setting. Evaluating how
people think about, interact with, and integrate products within the settings in which they
will ultimately be used, gives a better sense of how successful the products will be in the real
world. The trade-off is that it is harder to test specific hypotheses about an interface because
many environmental factors that influence the interaction cannot be controlled. Therefore,
it is not possible to account, with the same degree of certainty, for how people react to or
use a product as can be done in controlled settings like laboratories. This makes it more dif-
ficult to determine what causes a particular type of behavior or what is problematic about
the usability of a product. Instead, qualitative accounts and descriptions of people’s behavior
and activities are obtained that reveal how they used the product and reacted to its design.
Field studies can range in time from just a few minutes to a period of several months or
even years. Data is collected primarily by observing and interviewing people, such as by col-
lecting video, audio, field notes, and photos to record what occurs in the chosen setting. In
addition, participants may be asked to fill out paper-based or electronic diaries, which run
on smartphones, tablets, or other handheld devices, at particular points during the day. The
kinds of reports that can be of interest include when participants are interrupted during an
ongoing activity, when they encounter a problem while interacting with a product, or when
they are in a particular location, as well as how, when, and whether they return to the task
that was interrupted.
This technique is based on the experience sampling method (ESM), discussed in Chapter 8,
which is often used in healthcare (Price et al., 2018). Data on the frequency and patterns of
certain daily activities, such as the monitoring of eating and drinking habits, or social inter-
actions like phone and face-to-face conversations, are often recorded. Software running on
the smartphones triggers messages to study participants at certain intervals, requesting them
to answer questions or fill out dynamic forms and checklists. These might include recording
what they are doing, what they are feeling like at a particular time, where they are, or how
many conversations they have had in the last hour.
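The fixed-interval triggering described above can be sketched as a simple scheduler. This is an illustrative fragment, not the software used in any particular ESM study:

```python
from datetime import datetime, timedelta

def esm_prompt_times(start, end, interval_hours=2):
    """Generate fixed-interval prompt times between start and end
    (real ESM tools often also randomize prompts within time windows)."""
    prompts = []
    t = start
    while t <= end:
        prompts.append(t)
        t += timedelta(hours=interval_hours)
    return prompts

day = esm_prompt_times(datetime(2024, 5, 1, 8, 0), datetime(2024, 5, 1, 20, 0))
print(len(day))  # 7 prompts: 08:00, 10:00, ..., 20:00
```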
As in any kind of evaluation, when conducting a field study, deciding whether to tell the
people being observed, or asked to record information, that they are being studied and how
long the study or session will last is more difficult than in a laboratory situation. For example,
when studying people’s interactions with an ambient display, or the displays in a shopping
mall described earlier (Dalton et al., 2016), telling them that they are part of a study will likely
change the way they behave. Similarly, if people are using an online street map while walking
in a city, their interactions may take only a few seconds, so informing them that they are being
studied would disrupt their behavior. It is also important to ensure the privacy of participants
in field studies. For example, participants in field studies that run over a period of weeks or
months should be informed about the study and asked to sign an informed consent form in
the usual way, as mentioned in Chapter 14. In studies that last for a long time, such as those in
people’s homes, the designers will need to work out and agree with the participants what part
of the activity is to be recorded and how. For example, if the designers want to set up cameras,
they need to be situated unobtrusively, and participants need to be informed in advance about
where the cameras will be and when they will be recording their activities. The designers will
also need to work out in advance what to do if the prototype or product breaks down. Can
the participants be instructed to fix the problem themselves, or will the designers need to be
called in? Security arrangements will also need to be made if expensive or precious equipment
is being evaluated in a public place. Other practical issues may also need to be considered
depending on the location, product being evaluated, and the participants in the study.
The study in which the Ethnobot (Tallyn et al., 2018) was used to collect information
about what users did and how they felt while walking around at the Royal Highland Show
in Scotland (discussed in Chapter 14) was an example of a field study. A wide range of other
studies have explored how new technologies have been used and adopted by people in their
own cultures and settings. By adopted, we mean how the participants use, integrate, and
adapt the technology to suit their needs, desires, and ways of living. The findings from studies
in natural settings are typically reported in the form of vignettes, excerpts, critical incidents,
patterns of behavior, and narratives to show how the products are being used, adopted, and
integrated into their surroundings.
15.4.1 In-the-Wild Studies
For several years now, it has become increasingly popular to conduct in-the-wild studies to
determine how people use and persist in using a range of new technologies or prototypes in
situ. The term in-the-wild reflects the context of the study, in which new technologies are
deployed and evaluated in natural settings (Rogers, 2011). Instead of developing solutions
that fit in with existing practices and settings, researchers often explore new technologi-
cal possibilities that can change and even disrupt participants’ behavior. Opportunities are
created, interventions are installed, and different ways of behaving are encouraged. A key
concern is how people react, change, and integrate the technology into their everyday lives.
The outcome of conducting in-the-wild studies for different periods and at different intervals
can be revealing, demonstrating quite different results from those arising out of lab studies.
Comparisons of findings from lab studies and in-the-wild studies have revealed that while
many usability issues can be uncovered in a lab study, the way the technology is actually used
can be difficult to discern. These aspects include how users approach the new technology, the
kinds of benefits that they can derive from it, how they use it in everyday contexts, and its
sustained use over time (Rogers et al., 2013; Kjeldskov and Skov, 2014; Harjuniemi and
Häkkilä, 2018). The next case study describes a field study in which the researchers evaluated a
pain-monitoring device with patients who had just had surgery.
CASE STUDY:
A field study of a pain-monitoring device
Monitoring patients’ pain and ensuring that the amount of pain experienced by them after
surgery is tolerable is an important part of helping patients to recover. However, accurate pain
monitoring is a known problem among physicians, nurses, and caregivers. Collecting sched-
uled pain readings takes time, and it can be difficult because patients may be asleep or may
not want to be bothered. Typically, pain is managed in hospitals by nurses asking patients to
rate their pain on a 1–10 scale, which is then recorded by the nurse in the patients’ records.
Before launching on the field study that is the focus of our case study, Blaine Price and
his colleagues (Price et al., 2018) had already spent a considerable amount of time observ-
ing patients in hospitals and talking with nurses. They had also carried out usability tests to
ensure that the design of Painpad, a pain-monitoring tangible device for patients to report
their pain levels, was functioning properly. For example, they checked the usability of the
display and appropriateness of the device covering for the hospital environment and whether
the LED display was working and was readable. In other words, they ensured that they had a
well-functioning prototype for the field study that they planned to carry out.
The goal of the field study was to evaluate the use of Painpad by patients recovering
from ambulatory surgery (total hip or knee replacement) in the natural environments of two
UK hospitals. Painpad (see Figure 15.4) enables patients to monitor their own pain levels by
pressing the keys on the pad to record their pain rating. The researchers were interested in
many aspects related to how patients interacted with Painpad, particularly on how robust and
easy it was to use in the hospital environments. They also wanted to see whether the patients
rated their pain every two hours as they should do and how the patients’ ratings using Pain-
pad compared with the ratings that the nurses collected. They also looked for insights about
the preferences and needs of the older patients who used Painpad and for design insights
around visibility, customizability, ease of operation, and the contextual factors that affected
its usability in hospital environments.
Data Collection and Participants
Two studies were conducted that involved 54 people (31 in one study and 23 in another). Data
screening excluded participants who did not provide data using Painpad or for whom the
nurses did not collect data that could be compared with the Painpad data. Because of the con-
fidential nature of the study, ethical considerations were carefully applied to ensure that the
data was stored securely and that the patients’ privacy was assured. Thirteen of the patients
were male, and 41 were female. They ranged in age from 32–88, with mean and median ages
of 64.6 and 64.5. The time they spent in the hospital ranged from 1–7 days, with an average
stay of 2–3 days.
After returning from surgery, the patients were each given a Painpad that stayed by the
side of their bed. Patients were encouraged to use it at their earliest convenience. The Painpad
was programmed to prompt the patients to report their pain levels every two hours. This two-
hour interval was based on the hospital’s desired clinical target for collecting pain data. Each
time a pain rating was due, alternating red and green lights flashed on the Painpad for up to
Figure 15.4 Painpad, a tangible device for inpatient self-logging of pain
Source: Price et al. (2018). Reproduced with permission of ACM Publications
five minutes, and an audio notification of a few seconds sounded. The patients’ pain rating
was automatically time-stamped by the Painpad and stored in a secure database. In addition
to the pain scores collected using Painpad, the nurses also collected verbal pain scores from
the patients every two hours. These scores were entered into the patients’ charts and later
entered into a database by a senior staff nurse and made available to the researchers for com-
parison with the Painpad data.
When the patients at the second of the two hospitals were ready to leave, they were given
a short questionnaire that asked whether Painpad was easy to use, how often they made
mistakes using it, and whether they noticed the flashing light and sound notifications. They
were also asked to rate how satisfied they were with Painpad on a 1–5 Likert rating scale
and to make any other comments that they wanted to share about their experience in a free
text field.
Data Analysis and Presentation
Three types of data analysis were used by the researchers. They examined how satisfied the
patients were with Painpad based on the questionnaire responses, how the patients complied
with the bi-hourly requests to rate their pain on Painpad, and how the data collected with
Painpad compared with the data collected by the nurses.
Nineteen fully completed satisfaction questionnaires were collected that indicated that
Painpad was well received and easy to use (mean rating 4.63 on a scale 1–5, where 5 was
the highest rating) and that it was easy to remember to use it. Sixteen of the respondents
commented that they never made an error entering their pain ratings, the aesthetics of Painpad
were rated as “good,” and participants were “mostly satisfied” with it. Responses to the
flashing lights to draw patients’ attention to Painpad were mixed. Most patients noticed
the lights most of the time, while others only noticed them sometimes, and three patients
said they did not notice them at all. The effectiveness of the sound alert received a middle
rating; some patients thought it was “too loud and annoying,” and others thought it was too
soft. More nuanced reactions and ideas were collected from the free-text response box on the
questionnaire. For example, one patient (P49) wrote, “I think it is useful for monitoring
the pattern of pain over the day which can be changeable.” Patient P52 commented, “A day-
to-day chart might be helpful.” Some patients, who had limited dexterity or other challenges,
reported how their ability to use Painpad was compromised because Painpad was sometimes
hard to reach or to hear.
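The satisfaction analysis described above amounts to summarizing Likert ratings. A minimal sketch in Python follows; the individual responses were not published, so the ratings below are hypothetical values chosen only to reproduce the reported mean of 4.63:

```python
from statistics import mean

# Hypothetical 1-5 Likert ratings for the 19 completed questionnaires
# (twelve 5s and seven 4s reproduce the reported mean of 4.63).
ratings = [5] * 12 + [4] * 7
print(round(mean(ratings), 2))  # → 4.63
```

The same two-line pattern extends to the other closed questions on the questionnaire; the free-text comments, by contrast, need qualitative coding rather than averaging.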
After removing duplicate entries, there were 824 pain scores provided by the patients
using Painpad compared with 645 scores collected by the nurses. This indicated that the
patients recorded more pain scores than would typically be collected in the hospital by nurses.
To examine how the patients complied with using Painpad every two hours compared with
the scores collected by the nurses, the researchers had to define acceptable time ranges of
compliance. For example, they accepted all of the time scores that were submitted 15 minutes
before and 15 minutes after the bi-hourly time schedule for reporting pain scores. This analy-
sis showed that the Painpad scores indicated stronger compliance with the two-hour schedule
than with scores collected by the nurses.
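The compliance rule just described (accept any score within 15 minutes of a scheduled two-hour slot) can be sketched in a few lines of Python. The timestamps below are hypothetical, not the study’s data:

```python
from datetime import datetime, timedelta

def compliant_scores(timestamps, start, interval=timedelta(hours=2),
                     tolerance=timedelta(minutes=15)):
    """Count scores falling within +/- tolerance of a scheduled
    bi-hourly reporting time, a sketch of the rule described above."""
    count = 0
    for t in timestamps:
        slots = round((t - start) / interval)   # nearest scheduled slot
        scheduled = start + slots * interval
        if abs(t - scheduled) <= tolerance:
            count += 1
    return count

start = datetime(2018, 5, 1, 8, 0)
scores = [datetime(2018, 5, 1, 8, 10),    # 10 min late  -> compliant
          datetime(2018, 5, 1, 9, 5),     # 55 min off   -> not compliant
          datetime(2018, 5, 1, 11, 50)]   # 10 min early -> compliant
print(compliant_scores(scores, start))    # → 2
```

Running the same count over the Painpad log and over the nurses’ records gives the two compliance figures that the researchers compared.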
Overall, the evaluation of Painpad indicated that it was a successful device for collecting
patients’ pain scores in hospitals. Of course, there are still more questions for Blaine Price
and his team to investigate. An obvious one is this: “Why did the patients give more pain
scores and adhere more strongly to the scheduled pain recording times with Painpad than
with the nurses?”
15.4.2 Other Perspectives
Field studies may also be conducted where a behavior of interest to the researchers reveals
itself only after using a particular type of software for a long time, such as a complex design
program or data visualization tool. For example, the expected changes in user problem-solving
strategies using a sophisticated visualization tool for knowledge discovery may emerge
only after days or weeks of active use because it takes time for users to become familiar,
ACTIVITY 15.3
1. Why do you think Painpad was evaluated in the field rather than in a controlled labora-
tory setting?
2. Two types of data were collected in the field study: pain ratings and user satisfaction ques-
tionnaires. What does each type contribute to our understanding of the design of Painpad?
Comment
1. The researchers wanted to find out how Painpad would be used by patients who had just
had ambulatory surgery. They wanted to know whether the patients liked using Painpad
and whether they liked its design and what problems they experienced when using it over
a period of several days within hospital settings. During the early development of Painpad,
the researchers carried out several usability evaluations to check that it was suitable
for testing in real hospital environments. It is not possible to do a similar evaluation in
a laboratory because it would be difficult, if not impossible, to create the realistic and often
unpredictable events that happen in hospitals (for example, visitors coming into the ward,
conversations with doctors and nurses, and so forth). Furthermore, the kind of pain that
patients experience after surgery does not occur, nor can it be simulated, in participants
in lab studies. The researchers had already evaluated Painpad’s usability, and now they
wanted to see how it was used in hospitals.
2. Two kinds of data were collected. Pain data was logged on Painpad and recorded indepen-
dently by the nurses every two hours. This data enabled the researchers to compare the
pain data recorded using Painpad with the data collected by the nurses. A user satisfaction
questionnaire was also given to some of the patients. The patients answered questions by
selecting a rating from a Likert scale. The patients were also invited to give comments and
suggestions in a free text box. These comments helped the researchers to get a more
nuanced view of the patients’ needs, likes, and dislikes. For example, they learned that
some patients were hampered from taking full advantage of Painpad because of other
problems, such as poor hearing and restricted movement.
confident, and competent with the tool (Shneiderman and Plaisant, 2006). To evaluate the
efficacy of such tools, users are best studied in realistic settings in their own workplaces so
they can deal with their own data and set their own agenda for extracting insights relevant
to their professional goals.
These long evaluations of how experts learn and interact with tools for complex tasks
typically start with an initial interview in which the researchers check that the participant
has a problem to work on, available data, and a schedule for completion. These are funda-
mental attributes that have to be present for the evaluation to proceed. Then the participant
will get an introductory training session with the tool, followed by 2–4 weeks of novice
usage, followed by 2–4 weeks of mature usage, leading to a semistructured exit interview.
Additional assistance may be provided by the researcher as needed, thereby reducing the tra-
ditional separation between researcher and participant, but this close connection enables the
researcher to develop a deeper understanding of the users’ struggles and successes with the
tools. More data, such as daily diaries, automated logs of usage, structured questionnaires,
and interviews can also be used to provide a multidimensional understanding of the weak-
nesses and strengths of the tool.
Sometimes, a particular conceptual or theoretical framework is adopted to guide how
an evaluation is performed or how the data collected from the evaluation is analyzed (see
Chapter 9, “Data Analysis”). This enables the data to be explained at a more general level in
terms of specific cognitive processes, social practices such as learning, or conversational or
linguistic interactions.
BOX 15.1
How Many Participants Are Needed When Carrying Out An
Evaluation Study?
The answer to this question depends on the goal of the study, the type of study (such as
usability, experiment, field, or another type), and the constraints encountered (for instance,
schedules, budgets, recruiting representative participants, and the facilities available). Chapter 8,
“Data Gathering,” discussed this question more broadly. The focus here is on the types
of evaluation studies discussed in this chapter: usability studies, experiments, and field studies.
Usability studies
Many professional usability consultants used to recommend 5–12 participants for studies
conducted in controlled or partially controlled settings. However, as the study of the iPad
illustrates, six participants generated a lot of useful data. While more participants might
have been preferable, Raluca Budiu and Jakob Nielsen (2010) were constrained in that they
needed to complete their study and release their results quickly. Since then, Raluca Budiu and
Jakob Nielsen (2012) have said, “If you want a single number, the answer is simple: test five
users in a usability study. Testing with five people lets you find almost as many usability
problems as you’d find using many more test participants.” Others say that as soon as the
same kinds of problems start being revealed and there is nothing new, it is time to stop.
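The “five users” heuristic rests on a simple problem-discovery model. A sketch follows, assuming the average per-user discovery rate of about 31 percent that Nielsen and Landauer reported; the rate varies considerably from project to project:

```python
def proportion_found(n_users, p=0.31):
    """Expected share of usability problems found by n users under the
    classic discovery model 1 - (1 - p)**n, where p is the chance that
    a single user exposes any given problem."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 10, 15):
    print(n, round(proportion_found(n), 2))
# With p = 0.31, five users already find about 84% of the problems.
```

The curve also explains the stopping rule mentioned above: each additional user uncovers a shrinking share of new problems, so sessions that reveal nothing new signal diminishing returns.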
Experiments
Knowing how many participants are needed in an experiment depends on the type of experi-
mental design, the number of dependent variables being examined, and the kinds of statisti-
cal tests that will be used. For example, if different participants are being used to test two
conditions, more participants will be needed than if the same participants test both condi-
tions. These kinds of differences in experimental design influence the type of statistics used
and the number of participants needed. Therefore, consulting with a statistician or referring
to books and articles such as those by Caine (2016) and Cairns (2019) is advisable. Fifteen
participants is suggested as the minimum for many experiments (Cairns, 2019).
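The dependence of sample size on experimental design can be illustrated with a short simulation. The sketch below is not taken from the sources cited; it uses a normal-approximation test and an assumed large effect (Cohen’s d = 0.8) to estimate the power of a between-subjects comparison:

```python
import random

def power_between(n_per_group, effect=0.8, sd=1.0, z_crit=1.96,
                  trials=2000, seed=1):
    """Monte Carlo estimate of statistical power for a between-subjects
    design: two independent groups, two-sided z-test on the difference
    of group means (a normal approximation, adequate for n >= ~15)."""
    rng = random.Random(seed)
    se = (2 * sd * sd / n_per_group) ** 0.5  # std. error of the difference
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, sd) for _ in range(n_per_group)]
        b = [rng.gauss(effect, sd) for _ in range(n_per_group)]
        diff = sum(b) / n_per_group - sum(a) / n_per_group
        if abs(diff) / se > z_crit:
            hits += 1
    return hits / trials

print(power_between(15))   # estimated power with 15 per condition
print(power_between(40))   # more participants -> higher power
```

Under these assumptions, 15 participants per condition falls well short of the conventional 0.8 power target, which is why within-subjects designs (where each participant serves as their own control) can get by with fewer people.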
Field studies
The number of participants in a field study will vary, depending on what is of interest: it may
be a family at home, a software team in an engineering firm, children in a playground, a
whole community in a living lab, or even tens of thousands of people online. Although field
studies may not be representative of how other groups would act, the detailed findings
gleaned from these studies about how participants learn to use a technology and adapt to it
over time can be very revealing.
In-Depth Activity
This in-depth activity continues work on the online booking facility introduced at the end of
Chapter 11 and continued in Chapter 12. Using any of the prototypes that you have devel-
oped to represent the basic structure of your product, follow these instructions to evaluate it:
1. Based on your knowledge of the requirements for this system, develop a standard task (for
instance, booking two seats for a particular performance).
2. Consider the relationship between yourself and your participants. Do you need to use an
informed consent form? If so, prepare a suitable informed consent form. Justify
your decision.
3. Select three typical users, who can be friends or colleagues, and ask them to do the task
using your prototype.
4. Note the problems that each user encounters. If possible, time their performance. (If you
happen to have a camera or a smartphone with a camera, you could film each participant.)
5. Since the system is not actually implemented, you cannot study it in typical settings of use.
However, imagine that you are planning a controlled usability study and a field study. How
would you do it? What kinds of things would you need to take into account? What sort of
data would you collect, and how would you analyze it?
6. What are the main benefits and problems in this case with doing a controlled study versus
studying the product in a natural setting?
Summary
This chapter described evaluation studies in different settings. It focused on controlled labora-
tory studies, experiments, and field studies in natural settings. A study of the iPad when it first
came out and a second study conducted a year later were presented as examples of usability
testing. Experimental design, which involves testing a hypothesis in a controlled research
lab, was then discussed. The chapter ended with a discussion of field studies in which participants
used prototypes and new technologies in natural settings. The Painpad example involved eval-
uating how patients in two hospitals, who were recovering from surgery, used a mobile device
designed to enable them to self-monitor their pain levels throughout the day.
Key differences between usability testing, experiments, and field studies include the loca-
tion of the study—usability lab or makeshift usability lab (and living lab or online as discussed
in Chapter 14), research lab, or natural environment—and how much control is imposed. At
one end of the spectrum are experiments and laboratory testing, and at the other are in-the-
wild field studies. Most studies use a combination of different methods, and designers often
have to adapt their methods to cope with unusual new circumstances created when evaluating
the new systems being developed.
Key points
• Usability testing usually takes place in usability labs or temporary makeshift labs. These
labs enable designers and researchers to control the test setting. Versions of usability testing
are also conducted remotely, online, and in living labs.
• Usability testing focuses on performance measures, such as how long users take and how
many errors they make when completing a set of predefined tasks. Direct and indirect observation (video
and keystroke logging) is conducted and supplemented by user satisfaction questionnaires and
interviews.
• Mobile and remote testing systems have been developed that are more portable and afford-
able than usability labs. Many contain mobile eye-tracking and face recognition systems
and other devices. Many companies continue to use usability labs because they provide a
venue for the whole team to come together to observe and discuss how users are responding
to the systems being developed.
• Experiments seek to test a hypothesis by manipulating certain variables while keeping
others constant.
• The researcher controls independent variable(s) to measure dependent variable(s).
• Field studies are carried out in natural settings. They seek to discover how people interact
with technology in the real world.
• Field studies that involve the deployment of prototypes or technologies in natural settings
may also be referred to as in-the-wild studies.
• Sometimes the findings of a field study are unexpected, especially for in-the-wild studies in
which the goal is typically to explore how novel technologies are used by participants
in their own homes, places of work, or outside.
Further Reading
KELLY CAINE (2016). Local Standards for Sample Size at CHI. In Proceedings of CHI 2016, May
7–12, 2016, San Jose, CA, USA. DOI: https://doi.org/10.1145/2858036.2858498. In this
paper, Kelly Caine points out that the CHI community is composed of researchers from a
wide range of disciplines (also mentioned in Chapter 1), who use a variety of methods. Fur-
thermore, CHI researchers often deal with constraints (for instance, access to participants
for an accessibility study). Therefore, the number of participants involved in a study may be
different from the number suggested in standard stats texts. The discussion in this paper is
based on an analysis of papers accepted at CHI, one of the premier conferences in the field.
PAUL CAIRNS (2019). Doing Better Statistics in Human-Computer Interaction, Cambridge
University Press. This practical book is primarily for HCI researchers when planning or com-
pleting the analysis of their data.
ANDY CRABTREE, ALAN CHAMBERLAIN, REBECCA GRINTER, MATT JONES, TOM
RODDEN, and YVONNE ROGERS (2013). Introduction to the special issue of “The Turn to
The Wild” ACM Transactions on Computer-Human Interaction (TOCHI), 20 (3). This col-
lection of articles provides in-depth case studies of projects that were conducted in the wild
over many years, from the widespread uptake of children’s storytelling mobile apps to the
adoption of online community technologies.
JONATHAN LAZAR, HEIDI J. FENG, and HARRY HOCHHEISER (2017). Research
Methods in Human–Computer Interaction. (2nd edition). Cambridge, MA: Elsevier/Morgan
Kaufmann Publishers. Chapters 2–4 describe how to design experiments and how to perform
basic statistical tests.
JAKOB NIELSEN and RALUCA BUDIU (2012). Mobile Usability. New Riders Press. This
book asks and attempts to answer the question of how we create usability and a satisfying
user experience on smartphones, tablets, and other mobile devices. There is also a wide range
of recent papers available on the NN/G website: nngroup.com.
COLIN ROBSON (1994, 2011). Experimental Design and Statistics in Psychology. Penguin
Psychology. Though now quite old, this book provides a useful introduction to experimental
design and basic statistics. Another useful book by the same author is Real World Research
(3rd ed.), published in 2011 by Blackwell Publishing.
INTERVIEW with danah boyd
danah boyd is a principal researcher at
Microsoft Research, the founder and presi-
dent of the Data & Society Research Insti-
tute, and a visiting professor at New York
University. In her research, danah examines
the intersection of technology and society
with an eye to limiting how technology can
be abused to reinforce inequity. danah wrote
It’s Complicated: The Social Lives of Net-
worked Teens (Yale University Press, 2014),
which examines teens’ engagement with social media. She blogs at www.zephoria.org/thoughts
and tweets at @zephoria.
danah, can you tell us a bit about your re-
search and what motivates you?
I am an ethnographer who examines the
interplay between technology and society.
For almost a decade, I researched different
aspects of social media, most notably how
American teens integrate social media into
their daily practices. Because of this, I’ve
followed the rise of many popular social
media services—MySpace, Facebook, You-
Tube, Twitter, Instagram, Snapchat, and so
on. I examined what teens do on these ser-
vices, but I also consider how these tech-
nologies fit into teens’ lives more generally.
Thus, I spent a lot of time driving around
the United States talking to teens and their
parents, educators and youth ministers,
law enforcement, and social workers, try-
ing to get a sense of what teens’ lives look
like and where technology fits in.
More recently, I’ve been focused on how
data-driven technologies are playing a cen-
tral role in many facets of society. Tech-
niques like machine learning and other
forms of artificial intelligence rely heav-
ily on data infrastructure. But what hap-
pens when data is manipulated, abused, or
biased? My goal is to examine sociotech-
nical vulnerabilities and imagine ways of
minimizing how technology can be used
to reinforce inequities or cause harm. As
Melvin Kranzberg once said, “Technology
is neither good nor bad; nor is it neutral.”
I’m trying to figure out how technological
decisions intersect with cultural practices,
who is affected and in what ways, and
what the right points of intervention are
to help construct a society that we want to
live in. To do this requires moving between
disciplines, sectors, and frames to get at the
complexity that we’ve created.
Fundamentally, I’m a social scientist in-
vested in understanding the social world.
Technology shapes social dynamics, pro-
viding a fascinating vantage point for un-
derstanding cultural practices.
How would you characterize good ethnog-
raphy? (Please include example(s) from
your own work.)
Ethnography is about mapping cultural
logics and practices. To do this successfully,
it’s important to dive deep into the every-
day practices of a particular community
and try to understand them on their own
terms. The next stage is to try to ground
what one observes in a broader discourse
of theory and ideas to provide a frame-
work for understanding cultural dynamics.
Many people ask me why I bothered
driving around the United States, talking
to teens when I can see everything that
they do online. What’s visible online is
only a small fraction of what people do,
and it’s easy to misinterpret why teens do
something simply by looking at the traces
of their actions. Getting into their lives,
understanding their logic, and seeing how
technology connects with daily practice
is critically important, especially because
teens don’t have distinct “online” versus
“offline” lives. It’s all intertwined, so it’s
necessary to see what’s going on from dif-
ferent angles.
Of course, this is just the data collec-
tion process. I’m also a firm believer that
analysis is iterative and that it’s important
to include other stakeholders in that pro-
cess. For over two decades, I’ve blogged
my in-process thinking in part to enable
a powerful feedback loop that I’ve deeply
relished.
I know you have encountered some sur-
prises—or maybe even a revelation—in
your work on Facebook and MySpace.
Would you tell us about it, please?
From 2006–2007, I was talking with teens
in different parts of the country, and I start-
ed noticing that some teens were talking
about MySpace, and some teens were
talking about Facebook. In Massachu-
setts, I met a young woman who uncom-
fortably told me that the black kids in her
school were on MySpace, while the white
kids were on Facebook. She described
MySpace as “like ghetto.” I didn’t enter
into this project expecting to analyze race
and class dynamics in the United States,
but, after her comments, I couldn’t avoid
them. I started diving into my data, real-
izing that race and class could explain the
difference between which teens preferred
which sites. Uncomfortable with this and
totally afar from my intellectual strengths,
I wrote a really awkward blog post about
what I was observing. For better or worse,
the BBC picked this up as a “formal report
from UC Berkeley,” and I received more
than 10,000 messages over the next week.
Some were hugely critical, with some
making assumptions about me and my
intentions. But the teens who wrote con-
sistently agreed. And then two teens start-
ed pointing out to me that it wasn’t just an
issue of choice but an issue of movement,
with some teens moving from MySpace to
Facebook because MySpace was less de-
sirable and Facebook was “safe.” Anyhow,
recognizing the racist and classist roots
of this, I spent a lot of time trying to un-
pack the different language that teens used
when talking about these sites in a paper
called “White Flight in Networked Pub-
lics? How Race and Class Shaped Ameri-
can Teen Engagement with MySpace and
Facebook.”
This might all seem antiquated these
days, but the patterns I witnessed in
MySpace and Facebook continue to repeat
themselves. The tensions between Snap-
chat and Instagram have similar patterns,
as does WhatsApp versus iMessage. More-
over, the network dynamics that underpin
all adoption and usage of social media are
increasingly being manipulated to rein-
force social divisions within society. I never
imagined that the teens that I watched try-
ing to hack the attention economy in 2004
would create a template that could be used
to undermine democratic conversations
around the world only a decade later.
I know you are doing a lot of work on big
data and that some of that is focused on
social media. What are you learning and
what are your concerns for the future?
To be honest, what concerns me the most
about social media and data analytics is
that these technologies operate within a
particular formation of financialized capi-
talism that prioritizes short-term profits
and cancerous levels of growth over other
social values, including democracy, climate
sustainability, and community cohesion.
Even when data-analytics projects start
from ideal places, it’s hard for those ideals
to stay intact as companies grow and face
different kinds of financial pressure. As a
result, the same technologies that could le-
verage data to empower communities are
quickly used for exploitative purposes. I
genuinely struggle to balance my love of
technology with my concern that these tools
will be used to magnify inequality, spread
disinformation, increase climate risks, and
polarize society for political purposes.
Chapter 16
EVALUATION: INSPECTIONS, ANALYTICS, AND MODELS
16.1 Introduction
16.2 Inspections: Heuristic Evaluation and Walk-Throughs
16.3 Analytics and A/B testing
16.4 Predictive Models
Objectives
The main goals of this chapter are to accomplish the following:
• Describe the key concepts associated with inspection methods.
• Explain how to do heuristic evaluation and walk-throughs.
• Explain the role of analytics in evaluation.
• Describe how A/B testing is used in evaluation.
• Describe how to use Fitts’ law—a predictive model.
16.1 Introduction
The evaluation methods described in this book so far have involved interaction with, or
direct observation of, users. In this chapter, we introduce methods that are based on under-
standing users through one of the following:
• Knowledge codified in heuristics
• Data collected remotely
• Models that predict users’ performance
None of these methods requires users to be present during the evaluation. Inspection
methods often involve a researcher, sometimes known as an expert, role-playing the users
for whom the product is designed, analyzing aspects of an interface, and identifying poten-
tial usability problems. The most well-known methods are heuristic evaluation and walk-
throughs. Analytics involves user interaction logging, and A/B testing is an experimental
method. Both analytics and A/B testing are usually carried out remotely. Predictive modeling
involves analyzing the various physical and mental operations that are needed to perform
particular tasks at the interface and operationalizing them as quantitative measures. One of
the most commonly used predictive models is Fitts’ law.
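Fitts’ law predicts movement time from the distance to a target and the target’s width. A minimal sketch using the Shannon formulation follows; the constants a and b are purely illustrative, since in practice they are fitted from measurements for each device and input technique:

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.1):
    """Predicted movement time (seconds) under Fitts' law, using the
    Shannon formulation MT = a + b * log2(D/W + 1). The constants a
    and b here are illustrative, not empirically fitted values."""
    return a + b * math.log2(distance / width + 1)

# A target twice as far away (same width) is predicted to take longer:
print(round(fitts_mt(128, 16), 3))  # → 0.517
print(round(fitts_mt(256, 16), 3))  # → 0.609
```

The logarithmic term, called the index of difficulty, is why doubling the distance adds only a modest amount of predicted time, and why making targets bigger is as effective as moving them closer.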
16.2 Inspections: Heuristic Evaluation and
Walk-Throughs
Sometimes, it is not practical to involve users in an evaluation because they are not avail-
able, there is insufficient time, or it is difficult to find people. In such circumstances, other
people, often referred to as experts or researchers, can provide feedback. These are people
who are knowledgeable about both interaction design and the needs and typical behavior
of users. Various inspection methods were developed as alternatives to usability testing in
the early 1990s, drawing on software engineering practice where code and other types of
inspections are commonly used. Inspection methods for interaction design include heuristic
evaluations and walk-throughs, in which researchers examine the interface of an interac-
tive product, often role-playing typical users, and suggest problems that users would likely
have when interacting with the product. One of the attractions of these methods is that
they can be used at any stage of a design project. They can also be used to complement
user testing.
16.2.1 Heuristic Evaluation
In heuristic evaluation, researchers, guided by a set of usability principles known as heuris-
tics, evaluate whether user-interface elements, such as dialog boxes, menus, navigation struc-
ture, online help, and so on, conform to tried-and-tested principles. These heuristics closely
resemble high-level design principles (such as making designs consistent, reducing memory
load, and using terms that users understand). Heuristic evaluation was developed by Jakob
Nielsen and his colleagues (Nielsen and Molich, 1990; Nielsen, 1994a) and later modified
by other researchers for evaluating the web and other types of systems (see Hollingshead and
Novick, 2007; Budd, 2007; Pinelle et al., 2009; Harley, 2018). In addition, many researchers
and practitioners have converted design guidelines into heuristics that are then applied in
heuristic evaluation.
The original set of heuristics for HCI evaluation was empirically derived from the analysis
of 249 usability problems (Nielsen, 1994b); a revised version of these heuristics follows
(Nielsen, 2014: useit.com):
Visibility of System Status The system should always keep users informed about what is
going on, through appropriate feedback and within reasonable time.
Match Between System and the Real World The system should speak the users’ language,
with words, phrases, and concepts familiar to the user, rather than system-oriented terms.
It should follow real-world conventions, making information appear in a natural and
logical order.
User Control and Freedom Users often choose system functions by mistake and will need a
clearly marked emergency exit to leave the unwanted state without having to go through
an extended dialog. The system should support undo and redo.
Consistency and Standards Users should not have to wonder whether different words, situ-
ations, or actions mean the same thing. The system should follow platform conventions.
Error Prevention Rather than just good error messages, the system should incorporate care-
ful design that prevents a problem from occurring in the first place. Either eliminate
error-prone conditions or check for them and present users with a confirmation option
before they commit to the action.
Recognition Rather Than Recall Minimize the user’s memory load by making objects,
actions, and options visible. The user should not have to remember information from
one part of the dialog to another. Instructions for use of the system should be visible or
easily retrievable whenever appropriate.
Flexibility and Efficiency of Use Accelerators—unseen by the novice user—may often speed
up the interaction for the expert user such that the system can cater to both inexperi-
enced and experienced users. Allow users to tailor frequent actions.
Aesthetic and Minimalist Design Dialogs should not contain information that is irrelevant
or rarely needed. Every extra unit of information in a dialog competes with the relevant
units of information and diminishes their relative visibility.
Help Users Recognize, Diagnose, and Recover from Errors Error messages should be
expressed in plain language (not codes), precisely indicate the problem, and construc-
tively suggest a solution.
Help and Documentation Even though it is better if the system can be used without docu-
mentation, it may be necessary to provide help and documentation. Any such informa-
tion should be easy to search, focused on the user’s task, list concrete steps to be carried
out, and not be too large.
More information about heuristic evaluation is provided at
www.nngroup.com/articles/ux-expert-reviews/
This site shows how a researcher, Wendy Bravo, used heuristics to evaluate two
travel websites, Travelocity and Expedia:
https://medium.com/@WendyBravo/heuristic-evaluation-of-two-travel-websites-13f830cf0111
This video, developed by David Lazarus and published on May 9, 2011, provides
insights into Jakob Nielsen’s 10 Usability Heuristics for Interface Design. The
video is still useful even though the heuristics have been updated slightly since
it was made.
16 Evaluation: Inspections, Analytics, and Models
Designers and researchers evaluate aspects of the interface against the appropriate heu-
ristics. For example, if a new social media system is being evaluated, the designer might con-
sider how users would add friends to their networks. Those doing the heuristic evaluation go
through the interface several times, inspecting the various interaction elements and compar-
ing them with the list of usability heuristics. During each iteration, usability problems will be
identified and ways of fixing them may be suggested.
Although many heuristics apply to most products (for example, be consistent and pro-
vide meaningful feedback, especially if an error occurs), some of the core heuristics are
too general for evaluating products that have come onto the market more recently, such
as mobile devices, digital toys, social media, ambient devices, web services, and IoT. Many
designers and researchers have therefore developed their own heuristics by tailoring Nielsen’s
heuristics with other design guidelines, market research, results from research studies, and
requirements documents. The Nielsen/Norman Group has also taken a more detailed look
at particular heuristics, such as the first heuristic listed above, “visibility of system status,”
(Harley, 2018a), which focuses on communication and transparency.
Exactly which heuristics are appropriate and how many are needed for different products
is debatable and depends on the goals of the evaluation. However, most sets have between 5
and 10 items. This number provides a good range of usability criteria by which to judge the
various aspects of a product’s design. More than 10 items become difficult for those doing
the evaluation to manage, while fewer than 5 items tend not to be sufficiently discriminating.
Another concern is the number of researchers needed to carry out a thorough heuristic
evaluation that identifies the majority of usability problems. Empirical tests suggest that three to five researchers can typically identify up to 75 percent of the total usability problems, as shown in Figure 16.1 (Nielsen, 1994a). However, employing several researchers can
be resource intensive. Therefore, the overall conclusion is that while more researchers might be
better, fewer can be used—especially if the researchers are experienced and knowledgeable
about the product and its intended users.
[Figure 16.1: y-axis shows Proportion of Usability Problems Found (0–100%); x-axis shows Number of Evaluators (0–15)]
Figure 16.1 Curve showing the proportion of usability problems in an interface found by heuristic
evaluation using various numbers of evaluators
Source: Nielsen and Mack (1994). Used courtesy of John Wiley & Sons, Inc.
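The curve in Figure 16.1 follows Nielsen and Landauer's model, in which each evaluator is assumed to find any given problem independently with some fixed probability. A minimal sketch, assuming a single-evaluator detection rate of 0.31 (the average value Nielsen reported; real rates vary with the product and the evaluators' expertise):

```python
# Proportion of usability problems found by i evaluators, after
# Nielsen and Landauer's model: found(i) = 1 - (1 - L)**i.
# L is the probability that one evaluator finds a given problem;
# 0.31 is an assumed average, not a constant of nature.
def proportion_found(i, single_evaluator_rate=0.31):
    return 1.0 - (1.0 - single_evaluator_rate) ** i

for i in (1, 3, 5, 10, 15):
    print(f"{i:2d} evaluators -> {proportion_found(i):.0%}")
```

With this rate, three evaluators find about 67 percent of the problems and five find about 84 percent, bracketing the 75 percent figure quoted above; the curve flattens quickly beyond ten evaluators.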
Heuristic Evaluation for Websites
A number of different heuristic sets for evaluating websites have been developed based
on Nielsen’s original 10 heuristics. One of these was developed by Andy Budd after
discovering that Nielsen’s heuristics did not address the problems of the continuously
evolving web. He also found that there was overlap between several of the guidelines and
that they varied widely in terms of their scope and specificity, which made them difficult
to use. An extract from these heuristics is shown in Box 16.1. Notice that a difference
between these and Nielsen’s original heuristics is that they place more emphasis on infor-
mation content.
BOX 16.1
Extract from the Heuristics Developed by Budd (2007) That
Emphasize Web Design Issues
Clarity
Make the system as clear, concise, and meaningful as possible for the intended audience.
• Write clear, concise copy.
• Only use technical language for a technical audience.
• Write clear and meaningful labels.
• Use meaningful icons.
Minimize Unnecessary Complexity and Cognitive Load
Make the system as simple as possible for users to accomplish their tasks.
• Remove unnecessary functionality, process steps, and visual clutter.
• Use progressive disclosure to hide advanced features.
• Break down complicated processes into multiple steps.
• Prioritize using size, shape, color, alignment, and proximity.
Provide Users with Context
Interfaces should provide users with a sense of context in time and space.
• Provide a clear site name and purpose.
• Highlight the current section in the navigation.
• Provide a breadcrumb trail (that is, show where the user has been in a website).
• Use appropriate feedback messages.
• Show the number of steps in a process.
• Reduce perception of latency by providing visual cues (for instance, a progress indicator) or by allowing users to complete other tasks while waiting.

Promote a Pleasurable and Positive User Experience
The user should be treated with respect, and the design should be aesthetically pleasing and promote a pleasurable and rewarding experience.
• Create a pleasurable and attractive design.
• Provide easily attainable goals.
• Provide rewards for usage and progression.

A similar approach to Budd's is also taken by Leigh Howells in her article entitled "A guide to heuristic website reviews" (Howells, 2011). In this article and in a more recent one by Toni Granollers (2018), techniques for making the results of heuristic evaluation more objective are proposed. This can be done either to show the occurrence of different heuristics from an evaluation or to compare the results of different researchers' evaluations, as shown in Figure 16.2. First, a calculation is done to estimate the percentage of usability problems identified by each researcher, which is then displayed around the diagram (in this case there were seven researchers). Then a single value representing the mean of all of the researchers' individual means is calculated and displayed in the center of the diagram. In addition to being able to compare the relative performance of different researchers and the overall usability of the design, a version of this procedure can be used to compare the usability of different prototypes or for comparisons with competitors' products.

Doing Heuristic Evaluations
Doing a heuristic evaluation can be broken down into three main stages (Nielsen and Mack, 1994; Muniz, 2016).
• A briefing session, in which the user researchers are briefed about the goal of the evaluation. If there is more than one researcher, a prepared script may be used to ensure that each person receives the same briefing.
• The evaluation period, in which the user researchers typically spend 1–2 hours independently inspecting the product, using the heuristics for guidance.
Typically, the researchers will take at least two passes through the interface. The first pass gives a feel for the flow of the interaction and the product's scope. The second pass allows them to focus on specific interface elements in the context of the whole product and to identify potential usability problems.
[Figure 16.2 data: Eval1 77.2%, Eval2 69.0%, Eval3 83.8%, Eval4 87.0%, Eval5 72.7%, Eval6 86.0%, Eval7 66.7%; mean 77.5%]
Figure 16.2 Radar diagram showing the mean number of problems identified by each of the seven
researchers and the overall mean shown in the center of the diagram
Source: Granollers (2018). Used courtesy of Springer Nature
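The value at the center of Figure 16.2 is simply the mean of the researchers' individual percentages. A minimal sketch (the variable names are ours; the values are read off the diagram):

```python
# Each value is the percentage of known usability problems one
# researcher found; the center of the radar chart in Figure 16.2
# is the mean of those individual means.
found_by_evaluator = {
    "Eval1": 77.2, "Eval2": 69.0, "Eval3": 83.8,
    "Eval4": 87.0, "Eval5": 72.7, "Eval6": 86.0, "Eval7": 66.7,
}

mean_of_means = sum(found_by_evaluator.values()) / len(found_by_evaluator)
print(f"{mean_of_means:.1f}%")  # matches the 77.5% at the center of the chart
```

The same few lines can be rerun for a second prototype or a competitor's product, which is how the comparisons mentioned above are made.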
ACTIVITY 16.1
1. Use some of Budd’s heuristics (Box 16.1) to evaluate a website that you visit regularly.
Do these heuristics help you to identify important usability and user experience issues?
If so, how?
2. How does being aware of the heuristics influence how you interact with the website?
3. Was it difficult to use these heuristics?
Comment
1. The heuristics focus on key usability criteria, such as whether the interface seems unneces-
sarily complex and how color is used. Budd’s heuristics also encourage consideration of
how the user feels about the experience of interacting with a website.
2. Being aware of the heuristics may lead to a stronger focus on the design and interac-
tion, and it can raise awareness of what the user is trying to do and how the website is
responding.
3. When applied at a high level, these guidelines can be tricky to use. For example, what
exactly does “clarity” mean in regard to a website? Although the detailed list (write clear,
concise copy; only use technical language for a technical audience, and so on) provides
some guidance, making the evaluation task a bit easier, it may still seem quite difficult,
particularly for those not used to doing heuristic evaluations.
If the evaluation is for a functioning product, the researchers will typically have some
specific user tasks in mind so that their exploration is focused. Suggesting tasks may be
helpful, but many UX researchers suggest their own tasks. However, this approach is more
difficult if the evaluation is done early in design when there are only screen mock-ups or a
specification. Therefore, the approach needs to be adapted for the evaluation circumstances.
While working through the interface, specification, or mock-ups, a second researcher may
record the problems identified, while the other researcher may think aloud, which can be
video recorded. Alternatively, each researcher may take notes.
• The debriefing session, in which the researchers come together to discuss their findings
with designers and to prioritize the problems they found and give suggestions for solutions.
The heuristics focus the researchers’ attention on particular issues, so selecting appro-
priate heuristics is critically important. Even so, sometimes there is disagreement among
researchers, as discussed in the next “Dilemma.”
DILEMMA
Classic Problems or False Alarms?
Some researchers and designers may have the impression that heuristic evaluation is a panacea
that can reveal all that is wrong with a design with little demand on a design team’s resources.
However, in addition to being quite difficult to use as just discussed, heuristic evaluation has
other problems, such as sometimes missing key problems that would likely be found by testing
the product with real users.
Shortly after heuristic evaluation was developed, several independent studies compared it
with other methods, particularly user testing. They found that the different approaches often iden-
tify different problems and that sometimes heuristic evaluation misses severe problems (Karat,
1994). In addition, its efficacy can be influenced both by the number of experts and by the nature
of the problems, as mentioned earlier (Cockton and Woolrych, 2001; Woolrych and Cockton,
2001). Heuristic evaluation, therefore, should not be viewed as a replacement for user testing.
Another issue concerns researchers reporting problems that don’t exist. In other words,
some of the researchers’ predictions are wrong. Bill Bailey (2001) cites analyses from three
published sources showing that about 33 percent of the problems reported were real usability
problems, some of which were serious, while others were trivial. However, the researchers
missed about 21 percent of users’ problems. Furthermore, about 43 percent of the problems
identified by the researchers were not problems at all; they were false alarms. He points out
that this means that only about half of the problems identified were true problems: “More
specifically, for every true usability problem identified, there will be a little over one false
alarm (1.2) and about one-half of one missed problem (0.6). If this analysis is true, the experts
tend to identify more false alarms and miss more problems than they have true hits.”
How can the number of false alarms or missed serious problems be reduced? Checking
that researchers really have the expertise that is required could help, particularly that they
have a good understanding of the target user population. But how can this be done? One way
to overcome these problems is to have several researchers. This helps to reduce the impact of
one person's experience or poor performance. Using heuristic evaluation along with user testing and other methods is also a good idea. Providing support for researchers and designers to use heuristics effectively is yet another way to reduce these shortcomings. For example, Bruce Tognazzini (2014) now includes short case studies to illustrate some of the principles that he advocates using as heuristics. Analyzing the meaning of each heuristic and developing a set of questions can also be helpful, as mentioned previously.

Another important issue when designing and evaluating web pages, mobile apps, and other types of products is their accessibility to a broad range of users, for example, people with sight, hearing, and mobility challenges. Many countries now have web content accessibility guidelines (WCAG) to which designers must pay attention, as discussed in Box 16.2.
BOX 16.2
Evaluating for Accessibility Using the Web Content Accessibility
Guidelines
Web Content Accessibility Guidelines (WCAG) are a detailed set of standards about how to
ensure that web page content is accessible for users with various disabilities (Lazar et al., 2015).
While heuristics such as Ben Shneiderman’s eight golden rules (Shneiderman et al., 2016) and
Nielsen and Molich's heuristic evaluation are well-known within the HCI community, the
WCAG is probably the best-known set of interface guidelines or standards outside of the
HCI community. Why? Because many countries around the world have laws that require that
government websites, and websites of public accommodations (such as hotels, libraries, and
retail stores), are accessible for people with disabilities. A majority of those laws, including
the Disability Discrimination Act in Australia, Stanca Act in Italy, Equality Act in the United
Kingdom, and Section 508 of the Rehabilitation Act in the United States, as well as policies
such as Canada’s Policy on Communications and Federal Identity and India’s Guidelines for
Indian Government Websites, use WCAG as the benchmark for web accessibility.
The concept of web accessibility is as old as the web itself. Tim Berners-Lee said, “The
power of the Web is in its universality. Access by everyone, regardless of disability, is an essential
aspect” (https://www.w3.org/Press/IPO-announce). To fulfill this mission, the WCAG were cre-
ated, approved, and released in 1999. The WCAG were created by committee members from
475 member organizations, including leading tech companies such as Microsoft, Google, and
Apple. The process for developing them was transparent and open, and all of the stakeholders,
including many members of the HCI community, were encouraged to contribute and comment.
For more information about the web accessibility guidelines, laws, and
policies, see https://www.w3.org/WAI/
WCAG 2.0 was released in 2008. WCAG 2.1 was released in 2018, with a modification to
improve accessibility further for low-vision users and for web content presented on mobile
devices. In addition, when designers follow these guidelines, there are often benefits for all users,
such as improved readability and search results that are presented in more meaningful ways.
The key concepts of web accessibility, according to WCAG, are summarized as POUR—
Perceivable, Operable, Understandable, and Robust.
1. Perceivable
1.1 Provide text alternatives for non-text content.
1.2 Provide captions and other alternatives for multimedia.
1.3 Create content that can be presented in different ways, including by assistive tech-
nologies, without losing meaning.
1.4 Make it easier for users to see and hear content.
2. Operable
2.1 Make all functionality available from a keyboard.
2.2 Give users enough time to read and use content.
2.3 Do not use content that causes seizures or physical reactions.
2.4 Help users navigate and find content.
2.5 Make it easy to use inputs other than keyboard.
3. Understandable
3.1 Make text readable and understandable.
3.2 Make content appear and operate in predictable ways.
3.3 Help users avoid and correct mistakes.
4. Robust
4.1 Maximize compatibility with current and future user tools.
Source: https://www.w3.org/WAI/standards-guidelines/wcag/glance/.
These guidelines can be used as heuristics to evaluate basic web page accessibility. For
example, they can be converted into specific questions such as: Is there ALT text on graphics?
Is the entire page usable if a pointing device cannot be used? Is there any flashing content that
will trigger seizures? Is there captioning on videos? While some of these issues can be addressed
directly by designers, captioning is typically contracted out to organizations that specialize in
developing and inserting captions. Governments and large organizations have to make their
websites accessible to avoid possible legal action in the United States and some other coun-
tries. However, tools and advice to enable small companies and individuals to develop appro-
priate captions help to make captioning more universal.
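For the machine-testable part of a question such as "Is there ALT text on graphics?", a small script can make a useful first pass. The sketch below uses Python's standard html.parser; the sample page snippet is hypothetical, and a human inspector still has to judge whether any alt text that is present is actually meaningful.

```python
# First-pass check for one WCAG question: flag <img> tags whose alt
# attribute is missing or empty. This covers only the machine-testable
# part of the guideline; it cannot tell good alt text from bad.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images without usable alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if not alt:  # attribute absent, or present but empty
                self.missing.append(dict(attrs).get("src", "<no src>"))

# Hypothetical page fragment for illustration:
page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
checker = AltTextChecker()
checker.feed(page)
print(checker.missing)  # → ['chart.png']
```

Checks like this complement, rather than replace, human inspection with the WCAG, for exactly the reason discussed below: many accessibility requirements are not machine-testable.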
While all of the various WCAG documents online would add up to
hundreds of printed pages, the key concepts and core requirements are
summarized in “WCAG 2.1 at a Glance,” (www.w3.org/WAI/standards-
guidelines/wcag/glance) a document that could be considered to be a
set of HCI heuristics.
Some researchers have created heuristics specifically to ensure that websites and other
products are accessible to users with disabilities. For example, Jenn Mankoff et al. (2005) dis-
covered that developers who did heuristic evaluation using a screen reader found 50 percent
of known usability problems. Although, admirably, much research focuses on accessibility for
people with sight problems, research to support other types of disabilities is also needed. An
example is the research by Alexandros Yeratziotis and Panayiotis Zaphiris (2018), who cre-
ated a method comprising 12 heuristics for evaluating deaf users’ experiences with websites.
While automated software testing tools have been developed in an attempt to apply
WCAG guidelines to web pages, this approach has had limited success because there are so many
accessibility requirements that are not currently machine-testable. Human inspection using
the WCAG, or user testing involving people with disabilities, are still the superior methods
for evaluating web compliance with WCAG 2.1 standards.
Turning Design Guidelines, Principles, and Golden Rules into Heuristics
An approach to developing heuristics for evaluating the many different types of digital technol-
ogies is to convert design guidelines into heuristics. Often this is done by just using guidelines
as though they are heuristics, so guidelines and heuristics are assumed to be interchangeable.
A more principled approach is for designers and researchers to translate the design guide-
lines into questions. For example, Kaisa Väänänen-Vainio-Mattila and Minna Wäljas (2009)
adopted this approach when developing heuristics for evaluating user experience with a web
service. They identified what they called hedonic heuristics, which directly addressed how
users feel about their interactions. These were based on design guidelines concerning whether
the user feels that the web service provides a lively place where it is enjoyable to spend time
and whether it satisfies a user’s curiosity by frequently offering interesting content. When
stated as questions, these become: Is the service a lively place where it is enjoyable to spend
time? Does the service satisfy users’ curiosity by frequently offering interesting content?
In a critique of Nielsen's heuristics (1994) and a similar set of heuristics proposed by Bruce Tognazzini, known as "First Principles of HCI Design and Usability" (Tognazzini, 2014), Toni Granollers points out the need for revising these heuristics. He claims that there is considerable overlap both within each of the two sets of heuristics and between them. Furthermore, he stresses the need for more guidance in using heuristics and advocates for developing questions as a way to provide this support. Granollers suggests first converting the heuristics into principles and then, as was suggested earlier, identifying pertinent questions to ground the principles so that they are useful. For example, consider the heuristic "visibility and system state," which is a composite of Nielsen's and Tognazzini's heuristics. Granollers suggests the following questions:
Does the application include a visible title page, section or site? Does the user always
know where they are located? Does the user always know what the system or applica-
tion is doing? Are the links clearly defined? Can all actions be visualized directly (i.e.,
no other actions are required)?
Granollers, 2018, p. 62
Each heuristic is therefore decomposed into a set of questions like these, which could be
further adapted for evaluating specific products.
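Granollers' question-per-heuristic approach lends itself to a simple checklist structure. The data structure and scoring helper below are our sketch, not part of the method as published; the questions are the ones quoted above.

```python
# Each heuristic decomposes into yes/no questions; one researcher's
# pass over a product is a list of answers per heuristic. The dict
# and scoring function are illustrative only.
heuristic_questions = {
    "visibility and system state": [
        "Does the application include a visible title page, section or site?",
        "Does the user always know where they are located?",
        "Does the user always know what the system or application is doing?",
        "Are the links clearly defined?",
        "Can all actions be visualized directly?",
    ],
}

def score(answers):
    """Fraction of a heuristic's questions answered 'yes'."""
    return sum(answers) / len(answers)

answers = [True, True, False, True, True]  # one researcher, one heuristic
print(f"{score(answers):.0%}")  # → 80%
```

Scores like this, averaged over researchers, are what feed the mean-of-means comparison shown in Figure 16.2.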
Heuristics (some of which may be guidelines or rules) have been created for designing
and evaluating a wide range of products including shared groupware (Baker et al., 2002),
video games (Pinelle et al., 2008), multiplayer games (Pinelle et al., 2009), online commu-
nities (Preece and Shneiderman, 2009), information visualization (Forsell and Johansson,
2010), captchas (Reynaga et al., 2015), and e-commerce sites (Harley, 2018b). David Travis
(2016), a consultant from Userfocus, has compiled 247 guidelines that are used in evalua-
tions. These include 20 guidelines for home page usability, 20 for search usability, 29 for
navigation and information architecture, 23 for trust and credibility, and more.
In the mid-1980s Ben Shneiderman also proposed design guidelines that are frequently
used as heuristics for evaluation. These are called the “eight golden rules.” They were slightly
revised recently (Shneiderman et al., 2016) and are now stated as follows:
1. Strive for consistency.
2. Seek universal usability.
3. Offer informative feedback.
4. Design dialogs to yield closure.
5. Prevent errors.
6. Permit easy reversal of actions.
7. Keep users in control.
8. Reduce short-term memory load.
To access more information about these guidelines, check out David Travis’s web-
site at https://www.userfocus.co.uk/resources/guidelines.html
ACTIVITY 16.2
Comparing Heuristics
1. Compare Nielsen’s usability heuristics with Shneiderman’s eight golden rules. Which are
similar, and which are different?
2. Then select another set of heuristics or guidelines for evaluating a system in which you are
particularly interested and add them to the comparison.
Comment
1. Only a few heuristics and golden rules nearly match, for instance, Nielsen’s guidelines for
“consistency and standards,” “error prevention,” and “user control and freedom” match
up with Shneiderman’s rules of “striving for consistency,” “prevent errors,” and “keep users
in control.” Looking deeper, Nielsen’s “help users recognize, diagnose and recover from
errors” and “help and documentation” map with Shneiderman’s “offer informative feed-
back.” It is harder to find heuristics and golden rules that are unique to each researcher’s
set; “aesthetic and minimalist design” appears only in Nielsen’s list, whereas “seek universal
usability” appears only in Shneiderman’s list. However, with even deeper analysis, it could
be argued that there is considerable overlap between the two sets. Without examining
and considering each heuristic and guideline in detail, making comparisons like this is not straightforward. It is therefore difficult to judge when faced with choosing between these and/or other heuristics. In the end, perhaps the best way forward is for researchers to select the set of heuristics that seem most appropriate for their own evaluation context.
2. We selected the web accessibility guidelines listed in Box 16.2. Unlike the Nielsen heuristics and Shneiderman's eight golden rules, these guidelines specifically target the accessibility of websites for users with disabilities, particularly those who are blind or have limited vision. The ones under "perceivable," "operable," and "robust" do not appear in either of the other two lists. The guidelines listed for "understandable" are more like those in Nielsen's and Shneiderman's lists. They focus on reminding designers to make content appear in consistent and predictable ways and to help users to avoid making mistakes.

16.2.2 Walk-Throughs
Walk-throughs offer an alternative approach to heuristic evaluation for predicting users' problems without doing user testing. As the name suggests, walk-throughs involve walking through a task with the product and noting problematic usability features. While most walk-through methods do not involve users, others, such as pluralistic walk-throughs, involve a team that may include users, as well as developers and usability specialists.
In this section, we consider cognitive and pluralistic walk-throughs. Both were originally developed for evaluating desktop systems, but, as with heuristic evaluation, they can be adapted for other kinds of interfaces.

Cognitive Walk-Throughs
Cognitive walk-throughs involve simulating how users go about problem-solving at each step in a human-computer interaction. A cognitive walk-through, as the name implies, takes a cognitive perspective in which the focus is on evaluating designs for ease of learning—a focus that is motivated by observations that users learn by exploration. This well-established method (Wharton et al., 1994) is now often integrated with a range of other evaluation and design processes. See, for example, the Jared Spool blog at https://medium.com/@jmspool (Spool, 2018).
The main steps involved in cognitive walk-throughs are as follows:
1. The characteristics of typical users are identified and documented, and sample tasks are developed that focus on the aspects of the design to be evaluated. A description, mock-up, or prototype of the interface to be developed is also produced, along with a clear sequence of the actions needed for the users to complete the task.
2. A designer and one or more UX researchers come together to do the analysis.
3. The UX researchers walk through the action sequences for each task, placing it within the context of a typical scenario. As they do this, they try to answer the following questions:
a. Will the correct action be sufficiently evident to the user?
(Will the user know what to do to achieve the task?)
b. Will the user notice that the correct action is available?
(Can users see the button or menu item that they should use for the next action? Is it apparent when it is needed?)
c. Will the user associate and interpret the response from the action correctly?
(Will users know from the feedback that they have made a correct or incorrect choice of action?)
In other words, will users know what to do, see how to do it, and understand from feed-
back whether the action was completed correctly or not?
4. As the walk-through is being done, a record of critical information is compiled.
a. The assumptions about what would cause problems and why are identified.
b. Notes about side issues and design changes are made.
c. A summary of the results is compiled.
5. The design is then revised to fix the problems presented. Before making the fix, insights
derived from the walk-through are often checked by testing them with real users.
When doing a cognitive walk-through, it is important to document the process, keeping
account of what works and what doesn’t. A standardized feedback form can be used in which
answers are recorded to each question. Any negative answers are carefully documented on
a separate form, along with details of the product, its version number, and the date of the
evaluation. It is also useful to document the severity of the problems. For example, how
likely a problem is to occur, and how serious it will be for users. The form can also be used
to record the process details outlined in steps 1–4.
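The standardized feedback form described above can be sketched as a simple record, one per negative answer. The field names here are illustrative, not taken from the text, but they capture the details the form is meant to hold: the product, its version number, the date of the evaluation, and the likelihood and severity of each problem.

```python
# A sketch of one row of a cognitive walk-through feedback form.
# Field names and the example values are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class WalkthroughProblem:
    product: str
    version: str
    evaluated_on: date
    task_step: str
    question: str    # which of the three questions drew a negative answer
    likelihood: str  # how likely the problem is to occur
    severity: str    # how serious it will be for users
    notes: str = ""  # side issues and suggested design changes

problem = WalkthroughProblem(
    product="Meeting planner app", version="2.3",
    evaluated_on=date(2019, 5, 1),
    task_step="Select a time for the meeting",
    question="Will the user notice that the correct action is available?",
    likelihood="high", severity="moderate",
    notes="Time-zone picker is hidden behind an unlabeled icon.",
)
print(problem.product, problem.severity)
```

Collecting negative answers in a uniform structure like this makes it straightforward to sort problems by severity before the design is revised in step 5.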
Brad Dalrymple (2017) describes doing a walk-through with himself as the user in three
steps. Notice that there are fewer steps and they are a bit different from those previously listed.
1. Identify the user goal you want to examine.
2. Identify the tasks you must complete to accomplish that goal.
3. Document the experience while completing the tasks.
Dalrymple provides an example of the actions that he needs to go through to create a
Spotify playlist (the task) of music for guests who will attend his dinner party (the goal).
Compared with heuristic evaluation, walk-throughs focus more closely on identifying
specific user problems at a detailed level. Another type of walk-through that takes a semiotic
engineering perspective is described in Box 16.3.
Check out this link for the Dalrymple cognitive walk-through to create a Spotify
playlist:
https://medium.com/user-research/cognitive-walk-throughs-b84c4f0a14d4
BOX 16.3
A Semiotic Engineering Inspection Technique
Humans use a variety of signs and symbols to communicate and encode information. These
include everyday things like road signs, written or spoken words, mathematical symbols, ges-
tures, and icons. The study of how signs and symbols are constituted, interpreted, and pro-
duced is known as semiotics.
UX designs use a variety of signs to communicate meanings to users. Some of these are
well established, such as the trashcan for deleting files, while others are created for or used
only in particular types of applications, such as a row of birds in a bird identification app (see
Figure 16.3). The goal of UX designers is that users of their designs understand what they
mean to communicate with familiar and unfamiliar signs alike.
An important aspect of UX design is how to get the designers’ message across to the
users by means of interaction signs alone. Knowledge of semiotic and engineering concepts—
brought together in the semiotic engineering of human interaction with and through digital
technologies (de Souza, 2005)—contributes to improving the communication of principles,
features, and values of UX design.
The primary method used to evaluate the quality of semiotic engineering is SigniFYIng
Message—an inspection procedure (de Souza et al., 2016) that focuses on the communicative
power of signs that UX designers can choose in order to communicate their message to users.
These are the very signs through which users, in turn, will be able to express what they want
to do, explore, or experience during interaction. The method is suitable for evaluating small
portions of a UX design in detail. When carrying out this kind of semiotic evaluation, inspec-
tors are guided by specific questions about three types of interaction signs.
• Static signs, which communicate what they mean instantly and do not require further in-
teraction for a user to make sense of them.
• Dynamic signs, which only communicate meaning over time and through interaction. In
other words, the user can only make sense of them if they engage in interaction.
• Metalinguistic signs, which can be static or dynamic. Their distinctive feature is that their
meaning is an explanation, a description, some information, warning, or commentary
about another interface sign.
Figure 16.4 shows examples of how these signs achieve communication within four
screens of a smartphone app for arranging meetings. To help users avoid time zone errors
when meeting participants who are in different time zones, UX designers may elect to com-
municate times using Greenwich mean time (GMT) and also expose their rationale to users.
Figure 16.3 (a) Icons for a trashcan and (b) bird symbols
Source: (a) University of Maryland; (b) Merlin Bird ID app, Cornell Lab of Ornithology
16 Evaluation: Inspections, Analytics, and Models 564
The outcome of a SigniFYIng Message inspection is an assessment of the quality of the mes-
sages and the strategies of communication that a piece of UX design offers to the users. Using
this information, designers may choose to modify the signs to clarify the communication.
Figure 16.4 Examples of static, dynamic, and metalinguistic signs used in UX design
sketches for a meeting arrangement app
Source: de Souza et al. (2016). Used courtesy of Springer Nature
ACTIVITY 16.3
Conduct a cognitive walk-through of typical users who want to buy a copy of this book as an
ebook at www.amazon.com or www.wiley.com. Follow the steps outlined earlier by Cathleen
Wharton (Wharton et al., 2009).
Comment
Step 1
Typical Users Students and professional designers who use the web regularly.
Task To buy an ebook version of this book from www.amazon.com or
www.wiley.com.
Step 2
You will play the role of the expert evaluator.
Step 3
(Note that the interface for www.amazon.com or www.wiley.com may have changed since the
authors did this evaluation.)
The first action will probably be to select the search box on the home page of the website
selected and then type in the title or names of the author(s) of the book.
Q: Will users know what to do?
A: Yes. They know that they must find books, and the search box is a good
place to start.
Q: Will users see how to do it?
A: Yes. They have seen a search box before, will type in the appropriate term, and
will click the Go or Search icon.
Q: Will users understand from the feedback provided whether the action was correct or not?
A: Yes. Their action should take them to a page that shows them the cover of this
book. They need to click this or a Buy icon next to the cover of the book.
Q: Will users understand from the feedback provided whether the action was correct or not?
A: Yes. They have probably done this before, and they will be able to continue to
purchase the book.
ACTIVITY 16.4
From your experience of reading about and trying a heuristic evaluation and cognitive walk-
through, how do you think they compare for evaluating a website in terms of the following?
1. The time typically needed to do each kind of evaluation
2. The suitability of each method for evaluating a whole website
Comment
1. A cognitive walk-through would typically take longer because it is a more detailed process
than a heuristic evaluation.
2. A cognitive walk-through would typically not be used to evaluate a whole website unless
it was a small one. A cognitive walk-through is a detailed process, whereas a heuristic
evaluation is more holistic.
A variation of a cognitive walk-through was developed by Rick Spencer (2000) to
overcome some problems that he encountered when using the original form of a cognitive
walk-through for a design team. The first problem was that answering the questions and
discussing the answers took too long. Second, the designers tended to be defensive, often
invoking long explanations of cognitive theory to justify their designs. This was particularly
difficult because it undermined the efficacy of the method and the social relationships
of team members. To cope with these problems, he adapted the method by asking fewer
detailed questions and curtailing discussion. This meant that the analysis was more coarse-grained
but could normally be completed in about 2.5 hours, depending on the task being
evaluated by the cognitive walk-through. He also identified a leader and set strong ground
rules for the session, including a ban on defending a design, debating cognitive theory, or
doing designs on the fly.
More recently, Valentina Grigoreanu and Manal Mohanna (2013) modified the cogni-
tive walk-through so that it could be used effectively within an agile design process in which
a quick turnaround in design-evaluate-design cycles is needed. Their method involves an
informal, simplified streamlined cognitive walk-through (SSCW) followed by an informal
pluralistic walk-through (discussed next). When compared to a traditional user study on the
same user interface, they found that approximately 80 percent of the findings from the user
study were also revealed by the SSCW.
Pluralistic Walk-Throughs
Pluralistic walk-throughs are another type of well-established walk-through in which users,
developers, and usability researchers work together to step through a task scenario. As they
do this, they discuss usability issues associated with dialog elements involved in the sce-
nario steps (Nielsen and Mack, 1994). In a pluralistic walk-through, each person is asked
to assume the role of a typical user. Scenarios of use, consisting of a few prototype screens,
are given to each person who writes down the sequence of actions that they would take to
move from one screen to another, without conferring with each other. Then they all discuss
the actions they each suggested before moving on to the next round of screens. This process
continues until all of the scenarios have been evaluated (Bias, 1994).
The benefits of pluralistic walk-throughs include a strong focus on users’ tasks at a
detailed level, that is, looking at the steps taken. This level of analysis can be invaluable for
certain kinds of systems, such as safety-critical ones, where a usability problem identified
for a single step could be critical to its safety or efficiency. The approach lends itself well to
participatory design practices, as discussed in Chapter 12, “Design, Prototyping, and Con-
struction,” by involving a multidisciplinary team in which users play a key role. Furthermore,
the researchers bring a variety of expertise and opinions for interpreting each stage of the
interaction. The limitations with this approach include having to get the researchers together
at one time and then proceed at the rate of the slowest. Furthermore, only a limited number
of scenarios, and hence paths through the interface, can usually be explored because of time
constraints.
A discussion of the value of the cognitive walk-through method for evaluating vari-
ous devices can be found at www.userfocus.co.uk/articles/cogwalk.html
16.3 Analytics and A/B Testing
A variety of users’ actions can be recorded by software automatically, including key presses,
mouse or other pointing device movements, time spent searching a web page, looking at help
systems, and task flow through software modules. A key advantage of logging activity auto-
matically is that it is unobtrusive provided the system’s performance is not affected, but it
also raises ethical concerns about observing participants if this is done without their knowl-
edge, as discussed in Chapter 10, “Data at Scale.” Another advantage is that large volumes
of data can be logged automatically and then explored and analyzed using visualization and
other tools.
16.3.1 Web Analytics
Web analytics is a form of interaction logging that was specifically created to analyze users’
activity on websites so that designers could modify their designs to attract and retain custom-
ers. For example, if a website promises users information about how to plant a wildflower
garden but the home page is unattractive and it only shows gardens in arid and tropical
regions, then users from more temperate zones will not look any further because the informa-
tion they see isn’t relevant to them. These users become one-time visitors and leave to look
for other websites that contain the information they need to create their gardens. If the web-
site is used by thousands of users and a small number of users do not return, this loss of users
may not be noticed by the web designers and web owners unless they track users’ activities.
Using web analytics, web designers and developers can trace the activity of the users who
visit their website. They can see how many people came to the site, how many stayed and for
how long, and which pages they visited. They can also find out about where the users came
from and much more. Web analytics is therefore a powerful evaluation tool for web designers
that can be used on its own or in conjunction with other types of evaluations, particularly
user testing. For instance, web analytics can provide a “big-picture” overview of user interac-
tion on a website, whereas user testing with a few typical users can reveal details about UX
design problems that need to be fixed.
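As a concrete illustration of the kind of overview such logging can produce, the sketch below derives session counts, average session time, and the share of single-page visits from a raw page-view log. The log format and function name are invented for illustration; real analytics services compute these metrics from much richer data:

```python
from collections import defaultdict

def summarize(page_views):
    """page_views: (session_id, page, seconds_on_page) tuples - a hypothetical log."""
    sessions = defaultdict(list)
    for session_id, page, seconds in page_views:
        sessions[session_id].append((page, seconds))
    n = len(sessions)
    single_page = sum(1 for views in sessions.values() if len(views) == 1)
    total_seconds = sum(sec for views in sessions.values() for _, sec in views)
    return {
        "sessions": n,
        "single_page_share": single_page / n,     # sessions that viewed only one page
        "avg_session_seconds": total_seconds / n,
    }

log = [("s1", "home", 10), ("s1", "about", 20), ("s2", "home", 5)]
print(summarize(log))  # {'sessions': 2, 'single_page_share': 0.5, 'avg_session_seconds': 17.5}
```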
Because the goal of using web analytics is to enable designers to optimize users’ usage
of the website, web analytics is especially valued by businesses and market research organi-
zations. For example, web analytics can be used to evaluate the effectiveness of a print or
media advertising campaign by showing how traffic to a website changes during and after
the campaign.
For an overview of walk-throughs and an example of a cognitive walk-through of
iTunes, see the following site:
http://team17-cs3240.blogspot.com/2012/03/cognitive-walkthrough-and-pluralistic.html
Note: The link to pluralistic walk-throughs may not work correctly on all browsers.
Web analytics are also used in evaluating non-transactional products such as informa-
tion and entertainment websites, including hobby, music, games, blogs, and personal websites
(refer to Sleeper et al., 2014), and for learning. When analytics are used in learning, they are
often referred to as learning analytics (for example, Oviatt et al., 2013; Educause, 2016).
Learning analytics play a strong role in evaluating learners’ activities in massive open online
courses (MOOCs) and with Open Education Resources (OERs). The designers of these sys-
tems are interested in questions such as at what point do learners tend to drop out and why?
Other types of specialist analytics have also been developed that can be used in evaluation
studies, such as visual analytics (discussed in Chapter 10, “Data at Scale”), in which thousands
and often millions of data points are displayed and can be manipulated visually, as in
social network analysis (Hansen et al., 2019).
Box 16.5 and Box 16.6 contain two short case examples of web analytics being used in
different evaluation contexts. The first is an early example designed to evaluate visitor traffic
to a website for Mountain Wines of California. The second shows the use of Google Analyt-
ics for evaluating the use of a community website for air monitoring.
Using Web Analytics
There are two types of web analytics: on-site and off-site analytics. On-site analytics are used
by website owners to measure visitor behavior. Off-site analytics measure a website’s visibil-
ity and potential to acquire an audience on the Internet regardless of who owns the website.
In recent years, however, the difference between off-site and on-site analytics has blurred but
some people still use these terms. Additional sources may also be used to augment the data
collected about a website, such as email, direct mail campaign data, sales, and history data,
which can be paired with web traffic data to provide further insights into users’ behavior.
Google Analytics
Even as early as 2012, Google Analytics was the most widely used on-site web analytics and
statistics service. More than 50 percent of the 10,000 most popular websites at that time
(Empson, 2012) used Google Analytics, and its popularity continues to soar.
Figure 16.5 shows parts of the Google Analytics dashboard for the accompanying website
for the previous edition of this book, id-book.com, for the week starting at the end of Novem-
ber 2018 until the beginning of December 2018. The first segment (a) shows information about
who accessed the site and how long they stayed, the second segment (b) shows the devices used
to view the website and the pages visited, and the third segment (c) shows the languages spoken
by the users.
A video of Simon Buckingham Shum’s 2014 keynote presentation at the EdMedia
2014 Conference can be found at http://people.kmi.open.ac.uk/sbs/2014/06/edmedia2014-keynote/. The video introduces learning analytics and how analytics are used to answer key questions in a world where people are dealing with large volumes of digital data.
ACTIVITY 16.5
Consider the three screenshot segments shown in Figure 16.5 from the Google Analytics for
id-book.com, and then answer the following questions.
1. How many people visited the site during this period?
2. What do you think someone might look at in 2 minutes, 37 seconds (the average time they
spent on the site)?
3. Bounce rate refers to the percentage of visitors who view just one page of your site. What
is the bounce rate for this book, and why do you think this might be a useful metric to
capture for any website?
4. Which devices are being used to access the site?
5. Which were the three largest language groups during the period, and what can you say
about the bounce rate for each of them?
Comment
1. 1,723 users visited the site over this period. Notice that some users must have had more
than one session since the number of users is not the same as the number of sessions, which
was 2,271.
2. The number of pages viewed per session on average is about 3.25 in 2 minutes, 37 seconds.
This suggests that a user probably won’t have played any of the videos on the site nor read
any of the case studies in any great detail. From part (b), it appears that they did check out
some of the chapters, resources, and slides.
3. The bounce rate is 58.30 percent. This is a useful metric because it represents a simple but
significant characteristic of user behavior, which is that after visiting the home page, they
did not go anywhere else on the site. Typical bounce rates are 40–60 percent, while greater
than 65 percent is high and less than 35 percent is low. If the bounce rate is high, it merits
further investigation to see whether there is a problem with the website.
4. 79.6 percent of users accessed the site using a desktop, 16 percent used a mobile device,
and 4.4 percent used a tablet. Compared to the previous week, the number of mobile users
increased by 3.2 percent.
5. American English speakers were the largest group (317, or 59.81 percent), followed by
British English speakers (44, or 8.3 percent), and then Chinese speakers (27, or 5.09 percent).
The bounce rate for the Chinese visitors was by far the highest at 82.86 percent,
compared with 55.5 percent for the Americans and 63.45 percent for the British visitors.
Figure 16.5 Segments of the Google Analytics dashboard for id-book.com in December 2018:
(a) audience overview, (b) the devices used to access the site, and (c) the languages of the users
Ian Lurie’s “Google Analytics Tutorial—Install” video explains how to install and
use Google Analytics on your website. This video can be found at
http://youtu.be/P_l4oc6tbYk
Scott Bradley’s Google Analytics Tutorial Step-by-Step video describes the statistics
included in Google Analytics, and it provides insight into how the analytics
may be used to improve user traffic. This video can be found at
http://youtu.be/mm78xlsADgc
For an overview of different dashboards that can be customized in Google Analytics,
see Ned Poulter’s website (2013) 6 Google Analytics Custom Dashboards
to Save You Time NOW! at http://www.stateofdigital.com/google-analytics-dashboards/
You can also study an online course, developed by FutureLearn, on data science
with Google Analytics at www.futurelearn.com/courses/data-science-google-analytics/
There are many sites on the web that provide lists of analytics tools. One of these,
which includes some tools in addition to those mentioned in Box 16.4, is the
following:
https://www.computerworlduk.com/galleries/data/best-web-analytics-tools-alternatives-google-analytics-3628473/
BOX 16.4
Other Analytics Tools
In addition to Google Analytics, other tools continue to emerge that provide additional layers
of information, good access control options, and raw and real-time data collection.
Moz Analytics Tracks search marketing, social media marketing, brand activity, links, and
content marketing, and it is particularly useful for link management and analysis:
www.moz.com
TrueSocialMetrics Tracks social media metrics, and it helps calculate social media marketing
return on investment: www.truesocialmetrics.com
Clicky Comprehensive and real-time analytics tool that shows individual visitors and the
actions they take, and it helps define what people from different demographics find
interesting: www.clicky.com
KISSmetrics Detailed analytics tool that displays what website visitors are doing on your
website before, during, and after they buy: www.kissmetrics.com
Crazy Egg Tracks visitor clicks based on where they are specifically clicking, and it creates
click heat maps useful for website design, usability, and conversion: www.crazyegg.com
ClickTale Records website visitor actions and uses meta-statistics to create visual heat map
reports on customer mouse movement, scrolling, and other visitor behaviors:
www.clicktale.com
BOX 16.5
Tracking Visitors to Mountain Wines Website
In this study, Mountain Wines of California hired VisiStat to do an early study of the traffic
to its website. Mountain Wines wanted to find ways to encourage more visitors to come to its
website with the hope of enticing them to visit the winery. The first step to achieving this goal
was to discover how many visitors were currently visiting the website, what they did there,
and where they came from. Obtaining analytics about the website enabled Mountain Wines to
start to understand what was happening and how to increase the number of visitors (VisiStat,
2010). Part of the results of this early analysis are shown in Figure 16.6, which provides an
overview of the number of page views provided by VisiStat. Figure 16.7 shows where some of
the IP addresses are located.
Using this and other data provided by VisiStat, the Mountain Wines founders could see
visitor totals, traffic averages, traffic sources, visitor activity, and more. They discovered the
importance of visibility for their top search words; they could pinpoint where their visitors
were going on their website; and they could see where their visitors were geographically
located.
Figure 16.6 A general view of the kind of data provided by VisiStat
Source: http://www.visistat.com/tracking/monthly-page-views.php
Figure 16.7 Where the 13 visitors to the website are located by the IP address
Source: http://www.visistat.com/tracking/monthly-page-views.php
BOX 16.6
Using Google Analytics for Air Quality Monitoring
Many parts of the world suffer from poor air quality caused by pollution from industry, traffic
congestion, and forest fires. More recently, fires in California, the northwest United States,
Canada, and parts of Europe have created severe air quality problems. Consequently, communities
are developing devices to crowdsource air quality readings for monitoring the quality
of the air that they breathe. In one of these community-empowered air quality monitoring
projects, Yen-Chia Hsu and her colleagues (2017) developed a website that integrates animated
smoke images, data from sensors, and crowdsourced smell reports and wind data.
16.3.2 A/B Testing
Another way to evaluate a website, part of a website, an application, or an app running on
a mobile device is by carrying out a large-scale experiment to evaluate how two groups of
users perform using two different designs—one of which acts as the control and the other as
the experimental condition, that is, the new design being tested. This approach is known
as A/B testing, and it is basically a controlled experiment but one that often involves hun-
dreds or thousands of participants. Like the experimental design discussed in Chapter 15,
“Evaluation Studies: From Controlled to Natural Settings,” A/B testing involves a “between
subjects” experimental design in which two similar groups of participants are randomly
selected from a single large user population (Kohavi and Longbotham, 2015), for instance,
from users of social media sites such as Twitter, Facebook, or Instagram. The main difference
between A/B testing and the experiments discussed in Chapter 15 is one of scale and that
typically A/B testing is done online.
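In practice, the random split into the two groups is often implemented by hashing a stable user identifier, so that each user is assigned once and consistently sees the same variant on every visit. A minimal sketch, in which the function and experiment names are hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "ad_design") -> str:
    # deterministic: the same user always falls into the same bucket
    # for a given experiment, with no assignment table to store
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

groups = [assign_variant(f"user{i}") for i in range(1000)]
# assignment is stable, and the two groups come out roughly equal in size
```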
To do A/B testing, a variable of interest is identified, such as the design of an advertise-
ment. Group A is served design A, the existing design, and group B is served design B, the
new design. A dependent measure is then identified, such as how many times participants in
each group, A and B, click the advertisement that they are presented over a particular period
of time, such as a day, week, or a month. Because this is a controlled experiment, the results
can be analyzed statistically to establish the probability that if a difference is observed, it is
because of the treatment (in this case, the design) and not because of chance.
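As a sketch of what that statistical analysis can look like, a standard two-proportion z-test compares the click-through rates of the two groups; an absolute z value above roughly 1.96 indicates significance at the 5 percent level. The counts below are invented purely for illustration:

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    # pooled click-through rate under the null hypothesis of no difference
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (clicks_b / n_b - clicks_a / n_a) / se

z = two_proportion_z(100, 1000, 150, 1000)  # design B: 15% clicks vs. 10% for A
# z is about 3.4, well above 1.96, so the difference is unlikely to be chance
```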
As Ron Kohavi (2012) mentions, A/B testing provides a valuable data-driven approach
for evaluating the impact of small or large differences in the designs of web and social media
sites. From front-end user-interface changes to backend algorithms, from search engines (such
as Google, Bing, and Yahoo!) to retailers (for example, Amazon, eBay, and Etsy) to social net-
working services (such as Facebook, LinkedIn, and Twitter) to travel services (for instance,
Expedia, Airbnb, and Booking.com) to many startups, online controlled experiments are now
utilized to make data-driven decisions at a wide range of companies (Deng et al., 2017).
Having enabled the community to monitor its own air quality and to collect reliable data
to advocate for change, these researchers were eager to track users’ activity on their website.
They carried out a Google Analytics evaluation of the website from August 2015 to July 2016,
which showed that there were 542 unique users who visited the website on 1,480 occasions
for an average of 3 minutes each.
This study was innovative because, like many other local communities, this community
was not technically savvy. Furthermore, developing information technology to democratize
scientific knowledge and support citizen empowerment is a challenging task. However, Google
Analytics, along with user testing, enabled these researchers to modify the design of the website
and the associated system so that it was easier for the community to use.
To get the most benefit from running an online A/B test, Ron Kohavi and Roger Longbotham
(2015) recommend first running an A/A test. This is a test in which both populations
of participants see the same design and should have the same experience. The results of the
A/A test are then examined, and they should show no statistically significant difference.
Following this procedure ensures that the two randomly selected populations are indeed random
and that the conditions under which the experiment is running are indeed similar. This is
important because the Internet is complex, and users’ interactions can be influenced in ways
that researchers do not expect (for example by bots or the way browsers refresh or redirect),
which could reduce the value of the A/B test, possibly even invalidating it.
Powerful though A/B testing may be, researchers are advised to check their plans in
detail to ensure that they are testing what they expect to test. For example, Ron Kohavi and
Roger Longbotham carried out an A/B test on two versions of a design for early versions of
the Microsoft Office 2007 home page. The idea was to test the effectiveness of a new and
more modern-looking home page with the primary objective of increasing the number of
download clicks. However, instead of the number of download clicks going up as expected,
it actually decreased by 64 percent. The researchers wondered what caused such an unex-
pected result. Upon closer examination of the two designs, they noticed the words in the
new design were “Buy now” with a $149.95 price, whereas the old design said, “Try 2007
for free” and “Buy now”. The impact of being asked to pay $149.95 distorted the experi-
ment, even though the new design might have actually been better. Microsoft Office has
gone through many revisions since testing the 2007 version, but this example is included
because it demonstrates the care that is needed when setting up A/B testing to ensure that it
is actually testing the intended design features. Other design features, particularly ones that
involve payments by users, can have powerful unexpected consequences that even expe-
rienced researchers like Ron Kohavi and Roger Longbotham may overlook when setting
up the test.
ACTIVITY 16.6
From your knowledge of web analytics and A/B testing:
1. What would you be able to find out by using each method to evaluate a website?
2. What skills would you need to use each successfully?
Comment
1. Analytics would most likely be used to get an overview of how users are using the web-
site. It would show who is using the website, when and for how long, where the users’
IP addresses are located, bounce rates, and more. In contrast, A/B testing is a controlled
experiment that enables researchers to evaluate and compare the impact of two or more
UX designs. Typically, A/B testing is used to look at one or two features rather than a
whole website.
2. There are many tools for evaluating websites using analytics. These tools are typically
fairly straightforward to use, and with just a little knowledge designers can embed prewrit-
ten code into their designs to obtain analytics. Alternatively, there are many consultancy
companies that can be hired to perform this service. In contrast, knowledge of experimen-
tal design and statistics is needed to do A/B testing.
16.4 Predictive Models
Like inspection methods and analytics, predictive models can be used to evaluate a product
without users being present. Rather than user researchers being involved in role-playing dur-
ing inspections, or tracking their behavior using analytics, predictive models use formulas to
derive various measures of user performance. Predictive modeling provides estimates of the
efficiency of different systems for various kinds of tasks. For example, a smartphone designer
may choose to use a predictive model because it enables them to determine accurately which
is the optimal sequence of keys for performing a particular operation.
16.4.1 Fitts’ Law
One kind of predictive model that has been influential in HCI and interaction design over the
years is Fitts’ law. Fitts’ law (Fitts, 1954) predicts the time it takes to reach a target using a point-
ing device. It was originally used in human factors research to model the relationship between
speed and accuracy when moving toward a target on a display. In interaction design, it has been
used to model the time it takes to point at a target (for example, an icon on a screen), based on
the size of the object and the distance to the object (MacKenzie, 1992). One of its main benefits
is that it can help designers decide where to locate physical or digital buttons, what size to make
them, and how close together to put them on a touch display or a physical device. In the early
days, it was most useful for designing physical laptop/PC keyboard layouts and the placement
of physical keys on mobile devices, such as smartphones, watches, and remote controls. It has
also been used for designing the layout of digital displays for input on touchscreen interfaces.
Fitts’ law states that:
T = k log2(D/S + 1.0)
where
T = time to move the pointer to a target
D = distance between the pointer and the target
S = size of the target
k is a constant of approximately 200 ms/bit.
In a nutshell, the bigger the target, the easier and quicker it is to reach it. This is why inter-
faces that have big buttons are easier to use than interfaces that present lots of tiny buttons
crammed together. Fitts’ law also predicts that the targets accessed most quickly on any com-
puter display are positioned at the four corners of the screen. This is because of their pinning
action; in other words, the sides of the display constrain the user from over-stepping the target.
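The formula is easy to explore numerically. The Python sketch below computes predicted movement times from T = k log₂(D/S + 1.0); the function name and the example distances and sizes are illustrative, not taken from any study:

```python
import math

def fitts_time(distance_mm, size_mm, k_ms_per_bit=200):
    """Predicted pointing time in ms: T = k * log2(D/S + 1.0)."""
    return k_ms_per_bit * math.log2(distance_mm / size_mm + 1.0)

# Same 80 mm reach to a small key versus a larger button:
small = fitts_time(80, 5)    # predicts ~817 ms
large = fitts_time(80, 20)   # predicts ~464 ms
print(f"small target: {small:.0f} ms, large target: {large:.0f} ms")
```

Quadrupling the target size cuts the predicted time by roughly 350 ms, which is the quantitative version of "the bigger the target, the quicker it is to reach it."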
Fitts’ law can be useful for evaluating systems where the time to locate an object physi-
cally is critical to the task at hand. In particular, it can help designers think about where
to locate objects on the screen in relation to each other. This is especially useful for mobile
devices, where there is limited space for placing icons and buttons on the screen. For exam-
ple, in an early study carried out by Nokia, Fitts’ law was used to predict text entry rates
for several input methods on a 12-key cell phone keypad (Silverberg et al., 2000). The study
helped the designers make decisions involving trade-offs about the size of keys, their posi-
tioning, and the sequences of keypresses to perform common tasks.
Scott MacKenzie and Robert Teather (2012) used Fitts’ law in several studies, including
one designed to evaluate tilt as an input method for devices with built-in accelerometers, such
as touchscreen phones and tablet computers. It was also used to examine the effect of the size
of the physical gap between displays and the proximity of targets in multiple-display envi-
ronments (Hutchings, 2012). In addition, Fitts’ law has been used to compare eye-tracking
input with manual input for visual targets (Vertegaal, 2008); different ways of mapping
Chinese characters to the keypad of cell phones (Liu and Räihä, 2010); and gestural, touch,
and mouse interaction (Sambrooks and Wilkinson, 2013). More recently, Fitts’ law has been
used for considering the effectiveness of new ways of input such as different game controllers
(Ramcharitar and Teather, 2017), cursor positions for 3D selections in VR (Li et al., 2018),
and gaze input on large displays with touch and mouse input (Rajanna and Hammond,
2018). Another creative use of Fitts’ law is to evaluate the efficacy of simulating users with
motor impairments interacting with a head-controlled mouse pointer system (Ritzvi et al.,
2018). This application of Fitts’ law was especially useful because it can be difficult to recruit
participants with motor impairments to take part in user tests.
ACTIVITY 16.7
Microsoft toolbars provide the user with the option of displaying a label below each tool.
Give a reason why labeled tools may be accessed more quickly. (Assume that the user knows
the tool and does not need the label to identify it.)
Comment
The label becomes part of the target and hence the target gets bigger. As mentioned earlier,
bigger targets can be accessed more quickly.
Furthermore, tool icons that don’t have labels are likely to be placed closer together so that
they are more crowded. Spreading the icons farther apart creates buffer zones of space around the
icons so that if users accidentally go past the target, they will be less likely to select the wrong icon.
When the icons are crowded together, the user is at greater risk of accidentally overshooting and
selecting the wrong icon. The same is true of menus where the items are closely bunched together.
In-Depth Activity
This in-depth activity continues the work you did on the new interactive product for booking
tickets at the end of Chapters 11, 12, and 15. The goal of this assignment is to evaluate the
prototypes produced in the assignment from Chapter 12 by using heuristic evaluation.
1. Decide on an appropriate set of heuristics and perform a heuristic evaluation of one of the
prototypes that you designed in Chapter 12.
2. Based on this evaluation, redesign the prototype to overcome the problems that you
encountered.
3. Compare the findings from this evaluation with those from the usability testing in the
previous chapter. What differences do you observe? Which evaluation approach do you
prefer and why?
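One lightweight way to record what step 1 turns up is a severity-rated log of findings, which makes the redesign in step 2 and the comparison in step 3 easier. This Python sketch is illustrative only; the heuristic names follow Nielsen’s widely used set, and the fields, examples, and 0–4 severity scale are common conventions rather than requirements:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str     # which heuristic the problem violates
    location: str      # screen or element where it appears
    description: str
    severity: int      # 0 (not a problem) .. 4 (usability catastrophe)

findings = [
    Finding("Visibility of system status", "payment screen",
            "No feedback while the booking is being confirmed", severity=3),
    Finding("Consistency and standards", "seat-selection screen",
            "Back button labeled differently from the rest of the site", severity=2),
]

# Triage: tackle the most severe problems first in the redesign.
for f in sorted(findings, key=lambda f: -f.severity):
    print(f.severity, f.heuristic, "-", f.description)
```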
Summary
This chapter presented inspection evaluation methods, focusing on heuristic evaluation and
walk-throughs, which are usually done by specialists (often referred to as experts), who
role-play users’ interactions with designs, prototypes, and specifications. They use their
knowledge of the kinds of problems that users typically encounter, and then they offer their
opinions. Heuristic evaluation and walk-throughs offer a structure to guide the evalua-
tion process.
Analytics, in which users’ interactions are logged, are often performed remotely and
without users being aware that their interactions are being tracked. Large volumes of data are
collected, anonymized, and statistically analyzed using specially developed software services,
such as Google Analytics. The analysis provides information about how a product is used, for
instance, how different versions of a website or prototype perform, or which parts of a website
are seldom used—possibly because of poor usability design or lack of appeal. Data are often
presented visually so that it is easier to see trends and interpret the results.
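The kind of aggregation such services perform can be sketched in a few lines; the log format and page names below are invented for illustration:

```python
from collections import Counter

# Each logged event: (anonymized visitor id, page visited) -- format is illustrative.
events = [
    ("v1", "/home"), ("v2", "/home"), ("v1", "/search"),
    ("v3", "/home"), ("v2", "/search"), ("v1", "/site-map"),
]

views = Counter(page for _, page in events)
total = sum(views.values())

# Pages drawing under 20% of traffic are candidates for a usability review:
seldom_used = [page for page, n in views.items() if n / total < 0.20]
print(views.most_common())   # most- to least-visited pages
print(seldom_used)
```

A real service such as Google Analytics adds anonymization, sessionization, and visual dashboards on top of essentially this counting step.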
A/B testing is another form of remote testing. Fundamentally, A/B testing is a controlled
experiment in which two or more design variants are compared using large numbers
of participants who are randomly allocated to the different experimental conditions. Small
differences in the UX design of a home page can, for example, be tested using A/B testing. For
sites with very large populations of users, such as popular social media sites, even small differ-
ences in design can strongly impact the number of users who use the application.
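The logic of such an experiment can be sketched as follows; the conversion counts are invented, and the two-proportion z-test shown is one common way (not the only way) to judge whether a difference between two variants is real:

```python
import math, random

def assign_condition(user_id, conditions=("A", "B")):
    # Seeding by user id gives each user a random but stable allocation.
    return random.Random(user_id).choice(conditions)

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical outcome: design A converts 230/10,000 users, design B 260/10,000.
z = two_proportion_z(230, 10_000, 260, 10_000)
print(f"z = {z:.2f}")  # |z| < 1.96, so not significant at the 5% level here
```

At 10,000 users per condition this 0.3-percentage-point difference is not detectable, but with millions of users it would be, which is why even small design differences matter for sites with very large populations.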
Fitts’ law is an example of an evaluation method that can be used to predict user perfor-
mance by determining whether a proposed interface design or keypad layout will be optimal.
Typically, Fitts’ law is used to compare different design layouts for virtual or physical objects,
such as buttons on a device or screen.
Designers and researchers often find that they have to modify these methods, as they do
for those described in the previous chapter, for use with the wide range of products that have
come onto the market since they were originally developed.
Key Points
• Inspections can be used for evaluating a range of representations including requirements,
mockups, prototypes, or products.
• User testing and heuristic evaluation often reveal different usability problems.
• Other types of inspections used in UX design include pluralistic and cognitive walk-throughs.
• Walk-throughs are fine-grained, focused methods that are suitable for evaluating small
parts of a product.
• Analytics involve collecting data about user interactions to identify how users use a website
or product and which parts are underused.
• When applied to websites, analytics are often referred to as web analytics. Similarly, when
applied to learning systems, they are referred to as learning analytics.
• Fitts’ law is a predictive model that has been used in HCI to evaluate keypress sequences
for handheld devices.
Further Reading
BUDIU, R. and NIELSEN, J. (2012) Mobile Usability. New Riders Press. This book dis-
cusses why designing for mobile devices is different than designing for other systems. It
describes how to evaluate these systems, including doing expert reviews, and it provides
many examples.
FUTURELEARN (2018) offers a course entitled Data Science with Google Analytics,
www.futurelearn.com/courses/data-science-google-analytics/. This course provides a good
introduction for those who are new to Google Analytics. The course is regularly updated,
and it is free. There is a small cost, however, if you want to buy the course materials.
GRANOLLERS, T. (2018) Usability Evaluation with Heuristics, Beyond Nielsen’s list. ACHI
2018: The Eighth International Conference on Advances in Human Computer Interaction.
60–65. This paper provides a detailed comparison of sets of heuristics, and it suggests ways
to improve heuristic evaluation.
KOHAVI, R. and LONGBOTHAM, R. (2015) Unexpected Results in Online Controlled
Experiments. SIGKDD Explorations Volume 12, Issue 2, 31–35. This paper describes some
of the things to look out for when doing A/B testing.
MACKENZIE, S.I. and SOUKOREFF, R. W. (2002) Text entry for mobile computing: models
and methods, theory and practice. Human-Computer Interaction, 17, 147–198. This paper
provides a useful survey of mobile text-entry techniques and discusses how Fitts’ law can
inform their design.
Abdelnour-Nocéra, J., Clemmensen, T., and Kurosu, M. (2013b) Reframing HCI Through
Local and Indigenous Perspectives. International Journal of Human-Computer Interaction,
29: 201–204.
Abdul, A., Vermeulen, J., Wang, D., Lim, B.Y., and Kankanhalli, M. (2018) Trends and Tra-
jectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. In
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI
’18). ACM, New York, NY, Paper 582, 18 pages.
Abelein, U., Sharp, H., and Paech, B. (2013) Does Involving Users in Software Development
Really Influence System Success?, IEEE Software, Nov/Dec 2013, 13–19.
Abowd, G., and Mynatt, E. (2000) Charting Past, Present, and Future Research in Ubiquitous
Computing. ACM Transactions on Computer–Human Interaction, 7(1), 29–58.
Abowd, G. D. (2012) What Next, Ubicomp?: Celebrating an Intellectual Disappearing Act. In
Proceedings of the 2012 ACM Conference on Ubiquitous Computing (UbiComp’12). ACM,
New York, NY, pp. 31–40.
Abowd, G. D., Atkeson, C. G., Bobick, A. F., Essa, I. A., MacIntyre, B., Mynatt, E. D., and
Starner, T. E. (2000) Living Laboratories: The Future Computing Environments Group at
the Georgia Institute of Technology. In Proceedings of the 33rd Annual ACM Conference on
Human Factors in Computing Systems (CHI ’00). ACM, New York, NY, pp. 215–216.
Adams, A., and Sasse, M.A. (1999) Users Are Not The Enemy. Communications of the
ACM, 42(12), 41–46.
Adib, F., Mao, H., Kabelac Z., Katabi, D., and Miller, R. C. (2015) Smart Homes That
Monitor Breathing and Heart Rate. In Proceedings of the 33rd Annual ACM Conference on
Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, pp. 837–846.
Adlin, T., and Pruitt, J. (2010) The Essential Persona Lifecycle: Your Guide to Building and
Using Personas. Morgan Kaufmann.
Alavi, H.S., Churchill, E.F., Wiberg, M., Lalanne, D., Dalsgaard, P., Schieck, A.F., and Rogers,
Y. (2019) Introduction to Human-Building Interaction (HBI)—Interfacing HCI with Archi-
tecture and Urban Design. To appear in ACM ToCHI.
Alexander I., and Robertson, S. (2004) Understanding Project Sociology by Modeling Stake-
holders, IEEE Software, 21(1), 23–27.
Alexander, C. (1979) A Pattern Language: Towns, Buildings, Construction. Oxford Univer-
sity Press.
Al-Humairi, A., Al-Kindi, O., and Jabeur, N. (2018) Automated Musical Instruments. In
Proceedings of ICMRE 2018 Proceedings of the 2018 4th International Conference on
Mechatronics and Robotics Engineering, pp. 163–169.
Ali, R., Arden-Close, E., and McAlaney, J. (2018) Digital Addiction: How Technology Keeps Us
Hooked. The Conversation. Downloaded from http://theconversation.com/digital-addiction-
how-technology-keeps-us-hooked-97499.
References
Allanwood, G., and Beare, P. (2014) User Experience Design. Fairchild Books.
Allison, D., Wills, B., Bowman, D., Wineman, J., and Hodges, L. (1997) The Virtual Reality
Gorilla Exhibit, IEEE Computer Graphics and Applications, 30–38.
Ambler, S. (2002) Agile modeling. John Wiley. Also at http://www.agilemodeling.com/essays/agileDocumentationBestPractices.htm.
Anderson, C. (2013) Makers. Random House Business Books.
Anderson, D.J. (2010) Kanban: Successful Evolutionary Change for Your Technology Business.
Blue Hole Press.
Antle, A. N., Corness, G., and Droumeva, M. (2009) Human–Computer-Intuition? Exploring
the Cognitive Basis for Intuition in Embodied Interaction. International Journal of Arts and
Technology, 2, 3, 235–254.
Ardito, C., Buono, P., Caivano, D., Costabile, M. F., and Lanzilotti, R. (2014) Investigating and
Promoting UX Practice in Industry: An Experimental Study. International Journal of Human-
Computer Studies, 72, 542–551.
Armitage, U. (2004) Navigation and Learning in Electronic Texts. PhD thesis, Centre for HCI
Design, City University London.
Aronson-Rath, R., Milward, J., Owen, T., and Pitt, F. (2016). Virtual Reality Journalism. New
York, NY: Columbia Journalism School.
Ayobi, A., Sonne, T., Marshall, P., and Cox, A. L. (2018) Flexible and Mindful Self-Tracking:
Design Implications from Paper Bullet Journals. In Proceedings of the 2018 CHI Confer-
ence on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, Paper
28, 14 pages.
Babich, N. (2016) Designing Card-Based User Interfaces. Downloaded from https://www.smashingmagazine.com/2016/10/designing-card-based-user-interfaces/.
Babich, N. (2018) The Do’s and Don’ts of Mobile UX Design. Downloaded from https://theblog.adobe.com/10-dos-donts-mobile-ux-design/.
Bachour, K., Kaplan, F., and Dillenbourg, P. (2008) Reflect: An Interactive Table for Regulat-
ing Face-to-Face Collaborative Learning. In Proceedings of the 3rd European Conference
on Technology Enhanced Learning: Times of Convergence: Technologies Across Learning
Contexts. In P. Dillenbourg and M. Specht (eds) Lecture Notes in Computer Science, 5192.
Springer-Verlag, Berlin, Heidelberg, pp. 39–48.
Bachour, K., Seiied Alavi, H., Kaplan, F., and Dillenbourg, P. (2010) Low-Resolution Ambi-
ent Awareness Tools for Educational Support. In Proceedings of CHI 2010 Workshop: The
Future of HCI and Education.
Bailey, B. (2000) How to Improve Design Decisions by Reducing Reliance on Superstition.
Let’s Start with Miller’s ‘Magic 7.’ Human Factors International, Inc. www.humanfactors.com
(accessed December 16, 2010).
Bailey, R. W. (2001) Insights from Human Factors International Inc. (HFI). Providing Con-
sulting and Training in Software Ergonomics. January (www.humanfactors.com/home).
Bainbridge, D. (2014) Information Technology and Intellectual Property Law (6th edn).
Bloomsbury Professional.
Baker, K., Greenberg, S., and Gutwin, C. (2002) Empirical Development of a Heuristic Evalu-
ation Methodology for Shared Workspace Groupware. In ACM Proceedings of CSCW’02
Conference.
Baker, M., Casey, R., Keyes, B., and Yanco, H. A. (2004) Improved Interfaces for Human–
Robot Interaction in Urban Search and Rescue. In Proceedings of the IEEE Conference on
Systems, Man and Cybernetics, October.
Balakrishnan, A. D., Kiesler, S., Cummings, J. N., and Zadeh, Reza. (2011) Research Team
Integration: What it is and Why it Matters. In Proceedings of the ACM 2011 Conference on
Computer Supported Cooperative Work, ACM Press, pp. 523–532.
Balestrini, M., Diez, T., Marshall, P., Gluhak, A., and Rogers, Y. (2015) IoT Community Technologies: Leaving Users to Their Own Devices or Orchestration of Engagement? In Endorsed
Transactions on Internet of Things, EAI, Vol. 15 (1).
Bano, M., and Zowghi, D. (2015) A Systematic Review on the Relationship Between User
Involvement and System Success, Information and Software Technology, 58, pp.148–169.
Bano, M., Zowghi, D., and Rimini, F. (2017) User Satisfaction and System Success: An Empir-
ical Exploration of User Involvement in Software Development. Empirical Software Engi-
neering, 22, pp. 2339–2372.
Banzi, M. (2009) Getting Started with Arduino. O’Reilly Media Inc.
Barnard, P. J., Hammond, N., Maclean, A., and Morton, J. (1982) Learning and Remembering Interactive Commands in a Text Editing Task, Behavior and Information Technology,
1, 347–358.
Baskinger, M. (2008) Pencils Before Pixels: A Primer in Hand-Generated Sketching, Interac-
tions, March–April, 28–36.
Bastos, J. A. D. M., Afonso, L. M., and de Souza, C.S. (2017) Metacommunication Between
Programmers Through an Application Programming Interface: A Semiotic Analysis of Date
and Time APIs, in IEEE Symposium on Visual Languages and Human-Centric Computing
(VL/HCC), 213–221.
Baum, F. L., and Denslow, W. (1900) The Wizard of Oz. Random House, New York.
Baumeister, R.F., Vohs, K.D., DeWall, C.N., and Zhang, L. (2007) How Emotion Shapes
Behavior: Feedback, Anticipation, and Reflection, Rather than Direct Causation. Personality
and Social Psychology Review, 11(2), 167–203.
Baumer, E. P.S., Berrill, T., Botwinick, S.C., Gonzales, J.L., Ho, K., Kundrik, A., Kwon, L.,
LaRowe, T., Nguyen, C.P., Ramirez, F., Schaedler, P., Ulrich, W., Wallace, A., Wan, Y., and Weinfeld, B. (2018) What Would You Do? Design Fiction and Ethics. In Proceedings of the 2018 ACM
Conference on Supporting Groupwork (GROUP ’18). ACM, New York, NY, pp. 244–256.
Baumer, E. P. S., and Thomlinson, B. (2011) Comparing Activity Theory with Distributed
Cognition for Video Analysis: Beyond Kicking the Tyres. In ACM Proceedings of CHI
’11, 133–142.
Bauwens, V., and Genoud, P. (2014) Lessons Learned: Online Ethnography. A Tool for Crea-
tive Dialogue Between State and Citizens, Interactions, 60–65.
Baxter, G., and Sommerville, I. (2011) Socio-Technical Systems: From Design Methods to
Systems Engineering, Interacting with Computers, 23, (1) 4–17.
Beck, K., and Andres, C. (2005) Extreme Programming Explained: Embrace Change (2nd
edn). Addison-Wesley.
Bell, G., Blythe, M., and Sengers, P. (2005) Making by Making Strange: Defamiliarization
and the Design of Domestic Technologies, ACM Transactions on Computer–Human Interac-
tion, 12(2), 149–173.
Bergman, E., and Haitani, R. (2000) Designing the PalmPilot: A Conversation with Rob
Haitani. In Information Appliances. Morgan Kaufmann, San Francisco.
Bergman, O., and Whittaker, S. (2016) The Science of Managing Our Digital Stuff. MIT Press.
Bernard, H. R. (2017) Direct and Indirect Observation. Chapter 14 in Research Methods in
Anthropology: Qualitative and Quantitative Approaches. 6th edition. Rowman & Littlefield
Publishers.
Beyer, H., and Holtzblatt, K. (1998) Contextual Design: Defining Customer-Centered Sys-
tems. Morgan Kaufmann, San Francisco.
Bias, R. G. (1994) The Pluralistic Usability Walk-Through – Coordinated Empathoes. In
J. Nielsen and R. L. Mack (eds) Usability Inspection Methods. John Wiley & Sons Inc., New York.
Bird, J., and Rogers, Y. (2010) The Pulse of Tidy Street: Measuring and Publicly Display-
ing Domestic Electricity Consumption. Workshop on Energy Awareness and Conservation
through Pervasive Applications, Pervasive 2010 Conference. Downloaded from http://www.changeproject.info/projects.html (retrieved September 2014).
Blandford, A., and Furniss, D. (2006) DiCoT: A Methodology for Applying Distributed Cog-
nition to the Design of Team Working Systems. In S. W. Gilroy and M. D. Harrison (eds)
Interactive Systems: 12th International Workshop, DSVIS 2005, Lecture Notes in Computer
Science, 3941 Springer-Verlag, Berlin, Heidelberg, pp. 26–38.
Blandford, A., Furniss, D., and Makri, S. (2017) Qualitative HCI Research: Going Behind the
Scenes. Morgan Claypool Publishers.
Blazevski, B., and Hallewell Haslwanter, J.D. (2017) User-Centered Development of a System
to Support Assembly Line Worker. In Proceedings of the 19th International Conference on
Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’17). ACM,
New York, NY, Article 57, 7 pages.
Blythe M., and Cairns, P. (2009) Critical Methods and User Generated Content: The iPhone
on YouTube. In Proceedings of CHI 2009. ACM, New York, NY, pp. 1467–1476.
Blythe, M. (2017) Research Fiction: Storytelling, Plot and Design. In Proceedings of Con-
ference on Human Factors in Computing Systems (CHI’17). ACM, New York, NY,
pp. 5400–5411.
Bødker, S. (2000) Scenarios in User-Centered Design—Setting the Stage for Reflection and
Action, Interacting with Computers, 13(1), 61–76.
Boehm, B., and Basili, V.R. (2001) Software Defect Reduction Top 10 List, IEEE Computer,
34(1), 135–137.
Boehner, K., Vertesi, J., Sengers, P., and Dourish, P. (2007) How HCI Interprets the Probes. In
Proceedings of Conference on Human Factors in Computing Systems (CHI’07). ACM, New
York, NY, pp. 1077–1086.
Borchers, J. (2001) A Pattern Approach to Interaction Design. Wiley.
Bornschein, J., and Weber, G. (2017) Digital Drawing Tools for Blind Users: A State-of-the-
Art and Requirement Analysis. In Proceedings of the 10th International Conference on Per-
vasive Technologies Related to Assistive Environments (PETRA’17), ACM, New York, NY,
pp. 21–28.
Bostock, M., Ogievetsky, V., and Heer, J. (2011) D3: Data-Driven Documents. IEEE Transac-
tions on Visualization and Computer Graphics, 17, 12, 2301–2309.
Bouchet, J., and Nigay, L. (2004) ICARE: A Component-Based Approach for the Design and
Development of Multimodal Interfaces. In Proceedings of CHI 2004. ACM, New York, NY,
pp. 1325–1328.
Bowman, L.L., Levine, L.E., Waite, B.M., and Gendron, M. (2010) Can Students Really Multitask? An
Experimental Study of Instant Messaging while Reading. Computers and Education 54, 4, 927–931.
Bowser, A., Shilton, K., Preece, J., and Warrick, E. (2017) Accounting for Privacy in Citizen
Science: Ethical Research in a Context of Openness. In Proceedings of CSCW’17, ACM, New
York, NY, pp. 2124–2136.
Boyd, D. (2014) It’s Complicated: The Social Lives of Networked Teens. Yale.
Braun, V., and Clarke, V. (2006) Using Thematic Analysis in Psychology. Qualitative Research
in Psychology, 3(2). pp. 77–101, ISSN1478-0887.
Brendon, G. (2017) Forget “Best” or “Sincerely,” This Email Closing Gets the Most Replies.
Downloaded from https://blog.boomerangapp.com/author/brendan/.
Brereton, M., and McGarry, B. (2000) An Observational Study of How Objects Support
Engineering Design Thinking and Communication: Implications for the Design of Tangible
Media. In Proceedings of CHI 2000. ACM, New York, NY, pp. 217–224.
Briggs, G. F., Hole, G. J., and Turner, J. A. J. (2018) The Impact of Attentional Set and Situ-
ational Awareness on Dual-task Driving Performance. Transportation Research Part F: Traf-
fic Psychology and Behaviour, 57, 36–47.
Brignull, H., and Rogers, Y. (2003) Enticing People to Interact with Large Public Displays in
Public Spaces. In Proceedings of INTERACT 2003, Zurich, pp. 17–24.
Brkan, M. (2017) AI-Supported Decision-Making Under the General Data Protection Regu-
lation. In Proceedings of the 16th Edition of the International Conference on Artificial Intel-
ligence and Law (ICAIL ’17). ACM, New York, NY, pp. 3–8.
Brudy, F., Houben, S., Marquardt, N., and Rogers, Y. (2016) CurationSpace: Cross-Device Con-
tent Curation Using Instrumental Interaction. In Proceedings of the 2016 ACM International
Conference on Interactive Surfaces and Spaces (ISS ’16). ACM, New York, NY, pp. 159–168.
Buchenau, M., and Suri, J. F. (2000) Experience Prototyping. In Proceedings of DIS 2000,
Design Interactive Systems: Processes, Practices, Methods, Techniques, pp. 17–19.
Budd, A. (2007) Web Heuristics. Available at: www.andybudd.com/archives/2007/01/heuristics_for_modern_web_application_development/ (accessed September 2010).
Budiu, R., and Nielsen, J. (2010) Usability of iPad Apps and Websites. First Research Find-
ings. Nielsen Norman Group. Downloaded from www.nngroup.com/reports/mobile/ipad/
(retrieved August 2010).
Budiu, R., and Nielsen, J. (2012) Mobile Usability. New Riders Press.
Buechley, L., and Qiu, K. (2014) Sew Electric. A Collection of DIY Projects that Combine
Fabric, Electronics, and Programming. HLT Press.
Buolamwini, J., and Gebru, T. (2018) Gender Shades: Intersectional Accuracy Disparities
in Commercial Gender Classification. In Proceedings of the 1st Conference on Fairness,
Accountability and Transparency, PMLR, 81, pp.77–91.
Burgess, P.W. (2015) Serial Versus Concurrent Multitasking: From Lab to Life. In J. M. Fawcett,
E.F. Risko and A. Kingstone (eds.) The Handbook of Attention. 443–462. Cambridge, MA:
The MIT Press.
Buxton, B. (2007) Sketching User Experiences. Morgan Kaufmann, San Francisco.
Caddick, R., and Cable, S. (2011) Communicating the User Experience: A Practical Guide
for Creating Useful UX Documentation. Wiley.
Caine, K. (2016) Local Standards for Sample Size at CHI. In Proceedings of Conference on
Human Factors in Computing Systems (CHI’16). ACM, New York, NY, pp. 981–992.
Caird, J.K., Simmons, S.M., Wiley, K., Johnston, K.A., and Horrey, W.J. (2018) Does Talk-
ing on a Cell Phone, With a Passenger, or Dialing Affect Driving Performance? An Updated
Systematic Review and Meta-Analysis of Experimental Studies. Human Factors: The Journal
of the Human Factors and Ergonomics Society, 60 (1).
Cairns, P. (2019) Doing Better Statistics in Human-Computer Interaction: Short Essays on
Some Common Questions. Oxford University Press: Oxford, UK.
Campbell, J. L., Brown, J. L., Graving, J. S., Richard, C. M., Lichty, M. G., Sanquist, T., . . . and
Morgan, J. L. (2016, December). Human Factors Design Guidance for Driver-Vehicle Inter-
faces. Report No. DOT HS 812 360. Washington.
Card, S. K., Mackinlay, J. D., and Shneiderman, B. (eds) (1999) Readings in Information
Visualization: Using Vision to Think. Morgan Kaufmann, San Francisco.
Card, S. K., Moran, T. P., and Newell, A. (1983) The Psychology of Human–Computer Interaction. Lawrence Erlbaum Associates, Hillsdale, NJ.
Carroll J. M. (2000) Introduction to the Special Issue on Scenario-Based Systems Develop-
ment, Interacting with Computers, 13(1), 41–42.
Carroll, J. M. (2004) Beyond Fun, Interactions, 11(5), 38–40.
Carroll, J. M. (ed.) (2003) HCI Models, Theories and Frameworks: Towards a Multidiscipli-
nary Science. Morgan Kaufmann, San Francisco.
Carter, S., and Mankoff, J. (2005) When Participants Do the Capturing: The Role of Media
in Diary Studies. In Proceedings of CHI 2005. ACM, New York, NY, pp. 899–908.
Chang, C., Hinze, A., Bowen, J., Gilbert, L., and Starkey, N. (2018) Mymemory: A Mobile
Memory Assistant for People with Traumatic Brain Injury. International Journal of Human-
Computer Studies, 117, 4–19.
Chang, T. (2004) The Results of Student Ratings: Paper vs Online, Journal of Taiwan Normal
University: Education, 49(1), 171–186.
Charmaz, K. (2014) Constructing Grounded Theory (2nd edn). SAGE Publications.
Chen, X., Zou, Q., Fan, B., Zheng, Z., and Luo, X. (2018) Recommending Software Features
for Mobile Applications Based on User Interface Comparison, Requirements Engineering
(online), 1–15. Available: http://link.springer.com/10.1007/s00766-018-0303-4
Chidziwisano, G., and Wyche, S. (2018) M-Kulinda: Using a Sensor-Based Technology Probe
to Explore Domestic Security in Rural Kenya. In Proceedings of the 2018 CHI Conference on
Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, Paper 10, 13 pages.
Chuang, L.L., and Pfeil, U. (2018). Transparency and Openness Promotion Guidelines for
HCI. In Extended Abstracts of the CHI Conference on Human Factors in Computing Sys-
tems (CHI EA ’18). ACM, New York, NY, Paper SIG04, 4 pages.
Churchill, E. (2018) Data, Design, and Ethnography, Interactions, 22–23.
Churchill, E., Bowser, A., and Preece, J. (2013) Teaching and Learning Human-Computer
Interaction. Interactions, 20, 2, 44–53.
Clegg, T., Preece, J., Warrick, E., Pauw, D., Boston, C., and Cameron, J. (2019). Community-
Driven Informal Adult Environmental Learning: Using Theory as a Lens to Identify Steps
Toward Concientización. To appear in Journal of Environmental Education.
Clemmensen, T., Hertzum, M., Hornbaek, K., Shi, Q., and Yammiyavar, P. (2008) Cultural
Cognition in the Thinking-Aloud Method for Usability Evaluation. In Proceedings of 29th
International Conference on Information Systems. Paris, 2008, Paper 189.
Cliffe, A. D. (2017) A Review of the Benefits and Drawbacks to Virtual Field Guides in
Today’s Geoscience Higher Education Environment. International Journal of Educational
Technology in Higher Education. 14, 28.
Cline, D.H. (2012) Six Degrees of Alexander: Social Network Analysis as a Tool for Ancient
History. AHB, 26, 59–70.
Cobb, S., Beardon, L., Eastgate, R., Glover, T., Kerr, S., Neale, H., Parsons, S., Benford, S.,
Hopkins, E., Mitchell, P., Reynard, G., and Wilson, J. (2002) Applied Virtual Environments
to Support Learning of Social Interaction Skills in Users with Asperger’s Syndrome, Digital
Creativity, 13(1), 11–22.
Cockton, G., and Woolrych, A. (2001) Understanding Inspection Methods: Lessons from an
Assessment of Heuristic Evaluation. In A. Blandford and J. Vanderdonckt (eds), People and
Computers XV. Springer-Verlag, Berlin, pp. 171–191.
Cohen, J. (1960) A Coefficient of Agreement for Nominal Scales. Educational and Psycho-
logical Measurement, 20(1), 37–46.
Cohen, M., Giangola, J. P., and Balogh, J. (2004) Voice User Interface Design. Addison-
Wesley, Harlow, Essex.
Constantine, L. L., and Lockwood, L. A. D. (1999) Software for Use. Addison-Wesley,
Harlow, Essex.
Cooper, A. (1999) The Inmates are Running the Asylum. SAMS, Indianapolis.
Cooper, A. (2018) When Companies Question the Value of Design. Downloaded from https://medium.com/s/story/whats-the-roi-of-ux-c47defb033d2.
Corbin, J. M., and Strauss, A. (2014) Basics of Qualitative Research: Techniques and Proce-
dures for Developing Grounded Theory. SAGE Publications.
Costa, N.A., Holder, E., and MacKinnon, S.N. (2017) Implementing Human Centred Design
in the Context of a Graphical User Interface Redesign for Ship Manoeuvring. International
Journal of Human-Computer Studies, 100, 55–65.
Coyle, A. (1995) Discourse Analysis. In G. M. Breakwell, S. Hammond and C. Fife-Schaw
(eds) Research Methods in Psychology. SAGE, London.
Crabtree, A. (2003) Designing Collaborative Systems: A Practical Guide to Ethnography.
Springer-Verlag, Berlin.
Craik, K. J. W. (1943) The Nature of Explanation. Cambridge University Press, Cambridge.
Cramer, H., Evers, V., Ramlal, S., Someren, M., Rutledge, L., Stash, N., Aroyo, L., and Wiel-
inga, B. (2008) The Effects of Transparency on Trust in and Acceptance of a Content-Based
Art Recommender. User Model User-Adap Inter, 18, 455.
Crampton Smith, G. (1995) The Hand that Rocks the Cradle. ID Magazine May/June, 60–65.
Crowcroft, J., Haddadi, H., and Henderson, T. (2018) Responsible Research on
Social Networks: Dilemmas and Solutions. In The Oxford Handbook of Networked
Communication. Brooke Foucault Welles and Sandra González-Bailón (eds.) Oxford Hand-
books online.
Crumlish, C., and Malone, E. (2009) Designing Social Interfaces: Principles, Patterns and
Practices for Improving the User Experience, O’Reilly.
Csikszentmihalyi, M. (1996) Go with the flow. Wired Interview. www.wired.com/wired/
archive/4.09/czik.html (retrieved May 6, 2005).
Csikszentmihalyi, M. (1997) Finding Flow: The Psychology of Engagement with Everyday
Life. Basic Books, New York.
Cutting, J., Gundry, D., and Cairns, P. (2019) Busy Doing Nothing? What Do Players Do in
Idle Games? International Journal of Human-Computer Studies, 122, 133–144.
Cycil, C., Perry, M., Laurier, E., and Taylor, A. (2013) ‘Eyes Free’ In-Car Assistance: Parent
and Child Passenger Collaboration During Phone Calls. In Proceedings of MobileHCI’13,
ACM Press, pp. 332–341.
Da Silva, T., Silveira, M. S., Maurer, F., and Silveira, F. F. (2018) The Evolution of Agile UXD.
Information and Software Technology, 102, 1–5.
Dalton, N.S., Collins, E., and Marshall, P. (2015) Display Blindness?: Looking Again at the
Visibility of Situated Displays Using Eye-Tracking. In Proceedings of the 33rd Annual ACM
Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY,
pp. 3889–3898.
Davis, K. (2012) Ethics of Big Data. O’Reilly.
de Rada, V., and Dominguez-Alvarez, J. A. (2014) Response Quality of Self-Administered
Questionnaires: A Comparison Between Paper and Web Questionnaires. Social Science Computer
Review, 32(2), 256–269.
de Souza, C. S. (2005) The Semiotic Engineering of Human-Computer Interaction. Cam-
bridge, MA. The MIT Press.
de Souza, C. S., Cerqueira, R., Afonso, L., Brandão, R., Ferreira, J. (2016) Software Develop-
ers as Users: Semiotic Investigations in Human-Centered Software Development. Springer.
Deng, A., and Shi, X. (2016) Data-Driven Metric Development for Online Controlled Experi-
ments: Seven Lessons Learned. In Proceedings of 22nd ACM SIGKDD International Confer-
ence on Knowledge Discovery and Data Mining (KDD ’16), pp. 77–86.
Deng, A., Dmitriev, P., Gupta, S., Kohavi, R., Raff, P., and Vermeer, L. (2017) A/B Testing at Scale:
Accelerating Software Innovation. In Proceedings of Special Interest Group on Information
Retrieval ’17, pp. 1397–1397.
Deng, L., and Huang, X. (2004) Challenges in Adopting Speech Recognition, Communica-
tions of the ACM, 47(1), 69–75.
Denzin, N. (2006) Sociological Methods: A Sourcebook (5th edn). Aldine Transaction, ISBN
978-0-202-30840-1.
Denzin, N. K., and Lincoln, Y. S. (2011) The SAGE Handbook of Qualitative Research.
SAGE Publications.
Deshpande, A., Sharp, H., Barroca, L., and Gregory, A.J. (2016) Remote Working and Col-
laboration in Agile Teams. In Proceedings of International Conference on Information Sys-
tems, pp. 4543–4559.
Dhillon, B., Banach, P., Hocielnik, R., Emparanza, J.P., Politis, I., Paczewska, A., and Marko-
poulos, P. (2011) Visual Fidelity of Video Prototypes and User Feedback: A Case Study, In
Proceedings of BCS-HCI.
Diaz de Rada, V., and Dominguez-Alvarez, J. A. (2014) Response Quality of Self-Administered
Questionnaires: A Comparison Between Paper and Web. Social Science Computer Review, 32(2),
256–269. SAGE Publications.
Dietz, P. H., and Leigh, D. L. (2001) DiamondTouch: A Multi-User Touch Technology. In Sym-
posium on User Interface Software and Technology (UIST). ACM, New York, NY, pp. 219–226.
Diez, T., and Posada, A. (2013) The Fab and the Smart City: The Use of Machines and
Technology for the City Production by Its Citizens. In Proceedings of the 7th International
Conference on Tangible, Embedded and Embodied Interaction, ACM Press, pp. 447–454.
DiSalvo, C., Sengers, P., and Brynjarsdottir, H. (2010) Mapping the Landscape of Sustainable
HCI. In Proceedings of CHI 2010. ACM, New York, NY, pp. 1975–1984.
Dix, A., Finlay, J., Abowd, G., and Beale, R. (2004) Human–Computer Interaction (3rd edn).
Pearson Education, Harlow, Essex.
Douglas, H. E., Raban, M. Z., Walter, S. R., and Westbrook, J. I. (2017) Improving Our
Understanding of Multi-Tasking in Healthcare: Drawing Together the Cognitive Psychology
and Healthcare Literature. Applied Ergonomics, Volume 59, Part A, 45–55.
Dourish, P. (2001) Where the Action Is: The Foundations of Embodied Interaction. MIT
Press, Cambridge, MA.
Dourish, P., and Bly, S. (1992) Portholes: Supporting Awareness in a Distributed Work Group.
In Proceedings of CHI ’92. ACM, New York, NY, pp. 541–547.
Drascic, D., and Milgram, P. (1996) Perceptual Issues in Augmented Reality. In M. T. Bolas, S.
S. Fisher and J. O. Merritt (eds) SPIE Volume 2653: Stereoscopic Displays and Virtual Reality
Systems III. SPIE, San Jose, CA, pp. 123–134.
Druga, S., Williams, R., Breazeal, C., and Resnick, M. (2017) “Hey Google, is it OK if I eat
you?” In Proc. of the 2017 Conference on Interaction Design and Children (IDC ’17), ACM
Press, pp. 595–600.
DSDM (2014) The DSDM Agile Project Framework Handbook, DSDM Consortium, Kent,
UK, ISBN 978-0-9544832-9-6.
Dumas, B., Lalanne, D., and Oviatt, S. (2009) Multimodal Interfaces: A Survey of Principles,
Models and Frameworks. Human Machine Interaction Lecture Notes in Computer Science,
5440, 3–26.
Dumas, J. S., and Redish, J. C. (1999) A Practical Guide to Usability Testing (rev. edn). Intel-
lect, Exeter.
Eason, K. (1987) Information Technology and Organizational Change. Taylor and Fran-
cis, London.
Eason, K. (2014) Afterword: The Past, Present, and Future of Sociotechnical Systems Theory.
Applied Ergonomics, 45, 213–220.
Ebert, J.F., Huibers, L., Christensen, B., and Christensen M.B. (2018) Paper- or Web-Based
Questionnaire Invitations as a Method for Data Collection: Cross-Sectional Comparative
Study of Differences in Response Rate, Completeness of Data, and Financial Cost. Journal of
Medical Internet Research, 20(1): e24.
Ecker, R. (2016) Automated Detection and Guessing Without Semantics of Sender-Receiver
Relations in Computer-Mediated Discourses. In Proceedings of the 18th International Con-
ference on Information Integration and Web-based Applications and Services (iiWAS ’16).
ACM, New York, NY, pp. 170–178.
Educause (2016) The 2016 Horizon Report. Downloaded from: https://library.educause.edu/
topics/teaching-and-learning/learning-analytics.
Eggers, D. (2013) The Circle. Knopf.
Elgan, M. (2018) The Case Against Teaching Kids to be Polite to Alexa. Mind and Machine.
Downloaded from: https://www.fastcompany.com/40588020/the-case-against-teaching-kids-
to-be-polite-to-alexa.
Eliot, C., and Woolf, B. (1994) Reasoning about the User Within a Simulation-Based Real-
Time Training System. In Proceedings of 4th International Conference on User Modeling,
Mitre Corp., Bedford, MA.
Elrod, S., Bruce, R., Gold, R., Goldberg, D., Halasz, F., Janssen, W., Lee, D., McCall, K., Ped-
ersen, E., Pier, K., Tang, J., and Welch, B. (1992) Liveboard: A Large Interactive Display Sup-
porting Group Meetings, Presentations and Remote Collaboration. In Proceedings of CHI
’92. ACM, New York, NY, pp. 599–607.
Empson, R. (2012) Google Biz Chief: Over 10M Websites Now Using Google Analytics.
Retrieved from http://techcrunch.com/2012/04/12/google-analytics-officially-at-10m/.
Erickson, T., and Kellogg, W. A. (2000) Social Translucence: An Approach to Designing Systems
that Support Social Processes, Transactions of Computer-Human Interaction, 7(1), 59–83.
Erickson, T. D. (1990) Working with Interface Metaphors. In B. Laurel (ed.) The Art of
Human–Computer Interface Design. Addison-Wesley, Boston.
Erickson, T. D., Smith, D. N., Kellogg, W. A., Laff, M., Richards, J. T., and Bradner, E. (1999)
Socially Translucent Systems: Social Proxies, Persistent Conversation and the Design of
‘Babble’. In Proceedings of CHI ’99, pp. 72–79.
Eysenck, M., and Brysbaert, M. (2018) Fundamentals of Cognition. 3rd Edition. Routledge.
Fernaeus, Y., and Tholander, J. (2006) Finding Design Qualities in a Tangible Programming
Space. In Proceedings of CHI 2006. ACM, New York, NY, pp. 447–456.
Fernández-Luque, F., Zapata, J., Ruiz, R., and Iborra, E. (2009) A Wireless Sensor Network
for Assisted Living at Home of Elderly People, Lecture Notes in Computer Science, 5602.
Springer-Verlag, Berlin, Heidelberg, pp. 65–74.
Ferreira, J., Sharp, H., and Robinson, H. (2012) Agile Development and User Experience
Design Integration as an On-going Achievement in Practice. In Proceedings of Agile 2012,
Dallas, Texas.
Ferreira, J., Sharp, H., and Robinson, H.M. (2011) User Experience Design and Agile Devel-
opment: Managing Cooperation Through Articulation Work. In: Software Practice and
Experience, 41(9), 963–974.
Fetterman, D. M. (2010) Ethnography: Step by Step (3rd edn). Applied Social Research
Methods Series, Vol. 17. SAGE.
Fialho, P., and Coheur, L. (2015) ChatWoz: Chatting through a Wizard of Oz. In Proceedings
of the 17th International ACM SIGACCESS Conference on Computers and Accessibility
(ASSETS ’15). ACM, New York, NY, pp. 423–424.
Fishkin, K. P. (2004) A Taxonomy for and Analysis of Tangible Interfaces, Personal and Ubiq-
uitous Computing, 8, 347–358.
Fiske, J. (1994) Audiencing: Cultural Practice and Cultural Studies. In N. K. Denzin and Y. S.
Lincoln (eds) Handbook of Qualitative Research. SAGE, Thousand Oaks, CA, pp. 189–198.
Fitts, P. M. (1954) The Information Capacity of the Human Motor System in Controlling
Amplitude of Movement, Journal of Experimental Psychology, 47, 381–391.
Flanagan, J. C. (1954) The Critical Incident Technique, Psychological Bulletin, 51, 327–358.
Fogg, B.J. (2009) A Behavior Model for Persuasive Design. In Proceedings of the 4th Interna-
tional Conference on Persuasive Technology (Persuasive ’09). ACM, New York, NY, Article
40, 7 pages.
Folmer, E., Yuan, B., Carr, D., and Sapre, M. (2009) TextSL: A Command-Based Virtual
World Interface for the Visually Impaired. In Proceedings 11th international ACM SIGAC-
CESS Conference on Computers and Accessibility, pp. 59–66.
Fontana, A., and Frey, J. H. (2005). The Interview: From Neutral Stance to Political Involve-
ment. In N. K. Denzin and Y. S. Lincoln (eds) The SAGE Handbook of Qualitative Research
(3rd edn), pp. 695–727. Thousand Oaks, CA: SAGE.
Foong, E., Gergle, D., and Gerber, E.M. (2017) Novice and Expert Sensemaking of Crowd-
sourced Feedback. In Proceedings of the ACM on Human-Computer Interaction, 1, 2,
Article 45.
Forsell, C., and Johansson, J. (2010) An Heuristic Set for Evaluation in Information Visuali-
zation. In Proceedings of the International Conference on Advanced Visual Interfaces (AVI
’10), Giuseppe Santucci (ed.). ACM, New York, NY, pp. 199–206.
Friedman, N., and Cabral, A. (2018) Using a Telepresence Robot to Improve Self-Efficacy
of People with Developmental Disabilities. In Proceedings of the 20th International ACM
SIGACCESS Conference on Computers and Accessibility (ASSETS ’18). ACM, New York,
NY, pp. 489–491.
Froehlich, J., Findlater, L., and Landay, J. (2010) The Design of Eco-Feedback Technology. In
Proceedings of CHI ’10, ACM, New York, NY, pp. 1999–2008.
Furniss, D., and Blandford, A. (2006) Understanding Emergency Dispatch in Terms of Dis-
tributed Cognition: A Case Study, Ergonomics, 49(12/13), October, pp. 1174–1203.
Gabrielli, S., Rogers, Y., and Scaife, M. (2000) Young Children’s Spatial Representations
Developed Through Exploration of a Desktop Virtual Reality Scene, Education and Informa-
tion Technologies, 5(4), 251–262.
Galitz, W. O. (1997) The Essential Guide to User Interface Design. John Wiley & Sons
Inc., New York.
Gallo, A. (2017) A Refresher on A/B Testing, Harvard Business Review. Downloaded from
https://hbr.org/2017/06/a-refresher-on-ab-testing.
Gardner, H., and Davis, K. (2014) The App Generation: How Today’s Youth Navigate Identity,
Intimacy, and Imagination in a Digital World. Yale University Press.
Garrett, J. J. (2010) The Elements of User Experience: User-Centered Design for the Web and
Beyond (2nd edn). New Riders Press.
Gaver, B., Dunne, T., and Pacenti, E. (1999) Cultural Probes, ACM Interactions Magazine
January/February, 21–29.
Gibson, J. (2014) Introduction to Game Design, Prototyping, and Development. Addi-
son Wesley.
Gigante, M. A. (1993) Virtual Reality: Enabling Technologies. In R. A. Earnshaw, M. A.
Gigante and H. Jones (eds) Virtual Reality Systems. Academic Press, London, pp. 15–25.
Gigerenzer, G., Todd, P., and the ABC Research Group (1999) Simple Heuristics That Make
Us Smart. Oxford University Press, New York.
Glaser, B. G. (1992) Basics of Grounded Theory: Emergence vs Forcing. Sociology Press.
Glaser, B. G., and Strauss, A. (1967) Discovery of Grounded Theory. Aldine, London.
Golsteijn, C., Gallacher, S., Koeman, L., Wall, L., Andberg, S., Rogers, Y., and Capra, L.
(2015) VoxBox: a Tangible Machine That Gathers Opinions from the Public at Events. In
Proc. of TEI 2015. ACM.
Gooch, D., Barker, M., Hudson, L., Kelly, R., Kortuem, G., van der Linden, J., Petre, M.,
Brown, R., Klis-Davies, A., Forbes, H., Mackinnon, J., Macpherson, R., and Walton, C.
(2018) Amplifying Quiet Voices: Challenges and Opportunities for Participatory Design at
an Urban Scale. ACM Transactions on Computer-Human Interaction. 25, 1, 2.
Gosper, J., Agathos, J-L., Rutter, R., and Coatta, T. (2011) Case Study: UX Design and Agile:
A Natural Fit? Communications of the ACM, 54(1), 54–60.
Gothelf, J., and Seiden, J. (2016) Lean UX: Designing Great Products with Agile Teams (2nd
edition). O’Reilly.
Gottesdiener, E., and Gorman, M. (2012) Discover to Deliver: Product Planning and Analy-
sis. EBG Consulting, Inc.
Gould, J. D., and Lewis, C. H. (1985) Designing for Usability: Key Principles and What
Designers Think, Communications of the ACM, 28(3), 300–311.
Granollers, T. (2018) Usability Evaluation with Heuristics, Beyond Nielsen’s List. In Pro-
ceedings of the 8th International Conference on Advances in Human Computer Interaction.
pp. 60–65.
Gray, C.M., Kou, Y., Battles, B., Hoggatt, J., and Toombs, A.L. (2018) The Dark (Patterns)
Side of UX Design. In Proceedings of the 2018 CHI Conference on Human Factors in Com-
puting Systems (CHI ’18). ACM, New York, NY, Paper 534, 14 pages.
Greenberg, S., Carpendale, S., Marquardt, N., and Buxton, B. (2012) Sketching User Experi-
ences. Morgan Kaufmann.
Griffiths, A. (2014) How Paro the Robot Seal is Being Used to Help UK Dementia Patients.
Downloaded from: http://www.theguardian.com/society/2014/jul/08/paro-robot-seal-dementia-patients-nhs-japan.
Grison, E., Gyselinck, V., and Burkhardt, J-M. (2013) Using the Critical Incidents Technique
to Explore Variables Related to Users’ Experience of Public Transport Modes. In Proceedings
of ECCE ’13 Proceedings of the 31st European Conference on Cognitive Ergonomics, Article
No. 21, ACM.
Grosse-Hering, B., Mason, J., Aliakseyeu, D., Bakker, C., and Desmet, P. (2013) Slow Design
for Meaningful Interactions. In Proceedings of Conference on Human Factors in Computing
Systems (CHI’13). ACM, New York, NY, pp. 3431–3440.
Grudin, J. (1989) The Case Against User Interface Consistency, Communications of the ACM,
32(10), 1164–1173.
Gubbels, M., and Froehlich, J. (2014) Physically Computing Physical Computing: Creative
Tools for Building with Physical Materials and Computation. IDC ’14 Extended Abstracts.
Guha, M. L., Druin, A., and Fails, J.A. (2013) Cooperative Inquiry Revisited: Reflections of
the Past and Guidelines for the Future of Intergenerational Co-Design. International Journal
of Child-Computer Interaction, 1(1), 14–23.
Gui, X., Chen, Y., Caldeira, C., Xiao, D., and Chen, Y. (2017) When Fitness Meets Social
Networks: Investigating Fitness Tracking and Social Practices on WeRun. In Proceedings of
the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New
York, NY, pp. 1647–1659.
Jose, R., Lee, G. A., and Billinghurst, M. (2016) A Comparative Study of Simulated Augmented
Reality Displays for Vehicle Navigation. In Proceedings of the 28th Australian Conference on
Computer-Human Interaction (OzCHI ’16). ACM, New York, NY, pp. 40–48.
Gunther, V. A., Burns, D. J., and Payne, D. J. (1986) Text Editing Performance as a Function of
Training with Command Terms of Differing Lengths and Frequencies, SIGCHI Bulletin, 18, 57–59.
Hampton, K., and Wellman, B. (2003) Neighboring in Netville: How the Internet Supports
Community and Social Capital in a Wired Suburb. City and Community, 2, 4, 277–311.
Handel, D., Hochman, J., and Santo, D. (2015) Visualizing Teacher Tweets: Finding Profes-
sional Learning Networks in Topical Networks. ASIST, 2015, 1–3.
Hansen, D., Shneiderman, B., Smith, M. A., and Himelboim, I. (2019) Analyzing Social Media
Networks with NodeXL: Insights from a Connected World: (2nd ed.). Elsevier Publishers.
Harari, G. M., Gosling, S. D., Wang, R., Chen, F., Chen, Z., Campbell, A. T. (2017) Patterns
of Behavior Change in Students Over an Academic Term: A Preliminary Study of Activity and
Sociability Behaviors Using Smartphone Sensing Methods, Computers in Human Behavior,
67, 129–138.
Harboe, G., and Huang, E.M. (2015) Real-World Affinity Diagramming Practices: Bridging
the Paper-Digital Gap. In Proceedings of the 33rd Annual ACM Conference on Human Fac-
tors in Computing Systems (CHI ’15). ACM, New York, NY, pp. 95–104.
Harjuniemi E., and Häkkilä, J. (2018) Smart Handbag for Remembering Keys. In Proceed-
ings of the 22nd International Academic Mindtrek Conference (MindTrek 2018). New York,
NY: ACM Press, pp. 244–247.
Harley, A. (2018a) Visibility of System Status. Downloaded from: https://www.nngroup.com/
articles/visibility-system-status/
Harley, A. (2018b) UX Guidelines for Ecommerce Homepages, Category Pages, and Product
Listing Pages. Downloaded from: https://www.nngroup.com/articles/ecommerce-homepages-
listing-pages/.
Harley, A. (2018c) UX Expert Reviews: Downloaded from: https://www.nngroup.com/
articles/ux-expert-reviews/.
Harman, M., Jia, Y., and Zhang, Y. (2012) App Store Mining and Analysis: MSR for App
Stores. In Proceedings of the 9th IEEE Working Conference on Mining Software Reposi-
tories (MSR ’12), pp. 108–111.
Harper, R., Rodden, T., Rogers, Y., and Sellen, A. (2008) Being Human: HCI in the Year
2020. Microsoft (free copies from: http://research.microsoft.com/en-us/um/cambridge/
projects/hci2020/).
Harrison, E. (ed.) (2009) Media Space 20 + Years of Mediated Life. Springer.
Hartson, H. R., and Hix, D. (1989) Toward Empirically Derived Methodologies and Tools
for Human–Computer Interface Development, International Journal of Man–Machine Stud-
ies, 31, 477–494.
Hassenzahl, M. (2010) Experience Design: Technology for All the Right Reasons. Morgan
& Claypool.
Hatch, M. (2014) The Maker Movement Manifesto. McGraw Hill.
Hayashi, E., Maas, M., and Hong, J.I. (2014) Wave to Me: User Identification Using Body
Lengths and Natural Gestures. In Proceedings of CHI ’14, pp. 3453–3462.
Hayes. G.R. (2011) The Relationship of Action Research to Human-Computer Interac-
tion. ACM Transactions on Human-Computer Interaction, 18, 3, Article 15, 20 pages.
Hazas, M., Bernheim Brush, A.J., and Scott, J. (2012) Sustainability Does not Begin with the
Individual, Interactions, 19(5), 14–17.
Hazlewood, W., Dalton, N. S., Rogers, Y., Marshall, P., and Hertrich, S. (2010) Bricolage and Con-
sultation: A Case Study to Inform the Development of Large-Scale Prototypes for HCI Research.
In Proceedings of Designing Interactive Systems, DIS 2010. ACM, New York, NY, pp. 380–388.
Heath, C., and Luff, P. (1992) Collaboration and Control: Crisis Management and Multime-
dia Technology in London Underground Line Control Rooms. In Proceedings of CSCW ’92
1(1&2), 69–94.
Heath, C., Hindmarsh, J., and Luff, P. (2010) Video in Qualitative Research. SAGE.
Heer, J., and Bostock, M. (2010) Crowdsourcing Graphical Perception: Using Mechanical
Turk to Assess Visualization Design. In Proceedings of CHI 2010. ACM, New York, NY,
pp. 203–212.
Hektner, J. M., Schmidt, J. A., and Csikszentmihalyi, M. (2006) Experience Sampling Method:
Measuring the Quality of Everyday Life. SAGE.
Hendriks-Jansen, H. (1996) Catching Ourselves in the Act: Situated Activity, Interactive
Emergence, Evolution, and Human Thought. MIT Press, Cambridge, MA.
Henkel, L. A. (2014) Point-and-Shoot Memories: The Influence of Taking Photos on Memory
for a Museum Tour. Psychological Science, 25(2), 396–402.
Henschke, M., Gedeon, T., and Jones, R. (2015) Touchless Gestural Interaction with Wizard-
of-Oz: Analysing User Behaviour. In Proceedings of the Annual Meeting of the Australian
Special Interest Group for Computer Human Interaction (OzCHI ’15), ACM, New York,
NY, pp. 207–211.
Hicks, J. (2012) The Icon Handbook. Five Simple Steps Publishing Ltd.
Hine, C. (2000) Virtual Ethnography. SAGE.
Hodges, S. Hartmann, B., Gellersen, H., and Schmidt, A. (2014) A Revolution in the Making
[Guest editors’ introduction]. IEEE Pervasive Computing, 13(3): 18–21.
Hodges, S., Scott, J., Sentance, S., Miller, C., Villar, N., Schwiderski-Grosche, S., Hammil, K.,
and Johnston, S. (2013) .NETGadgeteer: A New Platform for K–12 Computer Science Edu-
cation. In Proceedings of SIGCSE ’13, ACM, 391–396.
Hodges, S., Williams, L., Berry, E., Izadi, S., Srinivasan, J., Butler, A., Smyth, G., Kapur, N.,
and Wood, K. (2006) SenseCam: A Retrospective Memory Aid. In P. Dourish and A. Friday
(eds) Ubicomp 2006, LNCS 4206. Springer-Verlag, pp. 177–193.
Hofte H. T., Jensen, K. L., Nurmi, P., and Froehlich, J. (2009) Mobile Living Labs 09: Meth-
ods and Tools for Evaluation in the Wild. In Proceedings of the 11th International Conference
on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’09).
ACM, New York, NY, Article 107, 2 pages.
Hollingshead, T., and Novick, D. G. (2007) Usability Inspection Methods After 15 Years of
Research and Practice. SIGDOC 2007, pp. 249–255.
Hollis, V., Konrad, A., Springer, A., Antoun, M., Antoun, C., Martin, R., and Whittaker, S.
(2017) What Does All This Data Mean for My Future Mood? Actionable Analytics and Tar-
geted Reflection for Emotional Well Being. Human–Computer Interaction, 32 (5–6), 208–267.
Holloway, C., and Dawes, H. (2016) Disrupting the World of Disability: The Next Genera-
tion of Assistive Technologies and Rehabilitation Practices. Healthcare Technology Letters,
3 (4), 254–256.
Holtzblatt, K. (2001) Contextual Design: Experience in Real Life. Mensch & Computer.
Holtzblatt, K., and Beyer, H. (2017) Contextual Design: Design for Life (2nd edn). Morgan
Kaufmann.
Holtzblatt, K., and Jones, S. (1993) Contextual Inquiry: A Participatory Technique for Sys-
tems Design. In D. Schuler and A. Namioka (eds) Participatory Design: Principles and prac-
tice. Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 177–210.
Höök, K. (2018) Designing with the Body: Somaesthetic Interaction Design. MIT Press, Cambridge, MA.
Hornbæk, K., and Hertzum, M. (2017) Technology Acceptance and User Experience: A
Review of the Experiential Component in HCI. Transactions on Human-Computer Interac-
tion, 24, 5, Article 33, 30 pages.
Hornecker, E. (2005) A Design Theme for Tangible Interaction: Embodied Facilitation. In
Proceedings of the 9th European Conference on Computer Supported Cooperative Work,
ECSCW ’05, 18–22 September, Paris. Kluwer/Springer, pp. 23–43.
Hornecker, E., Marshall, P., and Hurtienne, J. (2017) Locating Theories of Embodiment
Along Three Axes: 1st–3rd Person, Body-Context, Practice-Cognition. Workshop position
paper for CHI 2017 workshop on Soma-Based Design Theory. Downloaded from: http://
www.ehornecker.de/Papers/SomaestheticWS-embodimentshortie
Horton, S. (2005) Access by Design: A Guide to Universal Usability for Web Designers. New
Riders Press, Indianapolis, IN.
Houben, S., Golsteijn, C., Gallacher, S., Johnson, R., Bakker, S., Marquardt, N., Capra, L.,
and Rogers, Y. (2016) Physikit: Data Engagement Through Physical Ambient Visualizations
in the Home. In Proceedings of the 2016 CHI Conference on Human Factors in Computing
Systems (CHI ’16). ACM, New York, NY, pp. 1608–1619.
Howells, L. (2011) A Guide to Heuristic Website Reviews. http://www.smashingmagazine.com/2011/12/16/a-guide-to-heuristic-website-reviews/ (accessed August 2014).
Hsu, Y., Dille, P., Cross, J., Dias, B., Sargent, R., and Nourbakhsh, I. (2017) Community-
Empowered Air Quality Monitoring System. In Proceedings of the 2017 CHI Conference
on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, pp. 1607–1619.
Huff, D. (1991) How to Lie with Statistics. Penguin.
Hutchings, D. (2012) An Investigation of Fitts’ Law in a Multiple-Display Environment.
In ACM Proceedings of CHI’12.
Hutchings, D., Smith, G., Meyers, B., Czerwinski, M., and Robertson, G. (2004) Display
Space Usage and Window Management Operation Comparisons Between Single Monitor
and Multiple Monitor Users. In Proceedings of the Working Conference on Advanced Visual
Interfaces, AVI 2004, pp. 32–39.
Hutchins, E. (1995) Cognition in the Wild. MIT Press, Cambridge, MA.
Hutchins, E., Hollan, J. D., and Norman, D. (1986) Direct Manipulation Interfaces. In D. Norman
and S. W. Draper (eds) User Centered System Design. Lawrence Erlbaum Associates,
Hillsdale, NJ, pp. 87–124.
Hutchinson, H., Mackay, W., Westerlund, B., Bederson, B. B., Druin, A., Plaisant, C., Beau-
douin-Lafon, M., Conversy, S., Evans, H., Hansen, H., Roussel, N., and Eiderbäck, B. (2003)
Technology Probes: Inspiring Design for and with Families, In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems (CHI ’03). ACM, New York, NY,
pp. 17–24.
IEEE Ethically Aligned Design (2018), Downloaded from: https://ethicsinaction.ieee.org/.
Isaacs, E., Konrad, A., Walendowski, A., Lennig, T., Hollis, V., and Whittaker, S. (2013) Ech-
oes from the Past: How Technology Mediated Reflection Improves Well-Being. In Proceed-
ings of CHI ’13, ACM, 1071–1080.
Ishii, H., and Ullmer, B. (1997) Tangible Bits: Towards Seamless Interfaces Between People,
Bits and Atoms. In Proceedings CHI 1997. ACM, New York, NY, pp. 234–241.
Ishii, H., Kobayashi, M., and Grudin, J. (1993) Integration of Interpersonal Space and Shared
Work-Space: Clearboard Design and Experiments. ACM Transactions on Information Sys-
tems, 11(4), 349–375.
Jadhav, D., Bhutkar, G., and Mehta, V. (2013) Usability Evaluation of Messenger Applications
for Android Phones Using Cognitive Walkthrough. In Proceedings of APCHI’2013, pp. 9–18.
Jaidka, S., Reeves, S., and Bowen, J. (2017) Modelling Safety-Critical Devices: Coloured Petri
Nets and Z. In Proceedings of EICS, the ACM SIGCHI Symposium on Engineering Interac-
tive Computing Systems, pp. 51–56.
Jang, J., Zhao, D., Hong, W., Park, Y., and Yong Yi, M. (2016) Uncovering the Underlying
Factors of Smart TV UX over Time: A Multi-study, Mixed-method Approach, In Proceedings
of the ACM International Conference on Interactive Experiences for TV and Online Video
(TVX ’16). ACM, New York, NY, pp. 3–12.
Jaques, N., Rudovic, O., Taylor, S., Sano, A., and Picard, R. (2017) Predicting Tomorrow’s
Mood, Health, and Stress Level using Personalized Multitask Learning and Domain Adapta-
tion. In Proceedings of Machine Learning Research, 48, 17–33.
Javornik, A., Freeman, R., and Moutinho, A. (2017) Retail Experience With Face Applica-
tion. Free to download from https://www.amazon.co.uk/dp/B076C41L31/ref=rdr_ext_sb_
ti_hist_2.
Javornik, A., Rogers, Y., Gander, D., and Moutinho, A. (2017) MagicFace: Stepping into
Character through an Augmented Reality Mirror. In Proceedings of the CHI Conference on
Human Factors in Computing Systems (CHI’17). ACM, New York, NY, pp. 4838–4849.
Johnson, J. (2014) Designing with the Mind in Mind: Simple Guide to Understanding User
Interface Design Rules. Morgan Kaufmann.
Johnson, J., and Henderson, A. (2002) Conceptual Models: Begin by Designing What to
Design, Interactions January/February, 25–32.
Johnson, J., and Henderson, A. (2012) Conceptual Models: Core to Good Design. Morgan
& Claypool Publishers.
Johnson, J., and Finn, K. (2017) Designing User Interfaces for an Aging Population: Towards
Universal Design. Morgan Kaufmann.
Johnson, R., Van der Linden, J., and Rogers, Y. (2010). To Buzz or Not to Buzz: Improving
Awareness of Posture Through Vibrotactile Feedback. In Whole Body Interaction Workshop,
CHI 2010. ACM.
Johnson-Laird, P. N. (1983) Mental Models. Cambridge University Press, Cambridge.
Jokela, T., Ojala, J., and Olsson, T. (2015) A Diary Study on Combining Multiple Informa-
tion Devices in Everyday Activities and Tasks. In Proceedings of the CHI Conference on
Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, pp. 3–12.
Jones, L. A., and Sarter, N. B. (2008) Tactile Displays: Guidance for their Design and
Application, Human Factors: The Journal of the Human Factors and Ergonomics Society
50, 90–111.
Joonhwan, L., Forlizzi, J., and Hudson, S. E. (2005) Studying the Effectiveness of MOVE:
A Contextually Optimized In-Vehicle Navigation System. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems (CHI ’05). ACM, New York, NY,
pp. 571–580.
Jordan, B., and Henderson, A. (1995) Interaction Analysis: Foundations and Practice. Journal
of the Learning Sciences, 4(1), 39–103.
Joshi, S., Nistala, P.V., Jani, H., Sakhardande, P., and Dsouza, T. (2017) User-Centered Design
Journey for Pattern Development. In Proceedings of the 22nd European Conference on Pat-
tern Languages of Programs (EuroPLoP ’17). ACM, New York, NY, Article 23, 19 pages.
Jupp, V. (ed.) (2006) The SAGE Dictionary of Social Research Methods. SAGE.
Kahn, R., and Cannell, C. (1957) The Dynamics of Interviewing. John Wiley & Sons
Inc., New York.
Kahneman, D. (2011) Thinking, Fast and Slow. Penguin.
Kammer, D., Schmidt, D., Keck, M., and Groh, R. (2013) Developing Mobile Interface Meta-
phors and Gestures. In Proceedings of MobileHCI, ACM Press, pp. 516–521.
Kang, S., Clegg, T., Norooz, L., Froehlich, J., and Byrne V. (2018) Prototyping and Simulat-
ing Complex Systems with Paper Craft and Augmented Reality: An Initial Investigation. In
Proceedings of the 12th International Conference on Tangible, Embedded, and Embodied
Interaction (TEI ’18). ACM, New York, NY, pp. 320–328.
Kaninsky, M., Gallacher, S., and Rogers. Y. (2018) Confronting People’s Fears about Bats:
Combining Multi-modal and Environmentally Sensed Data to Promote Curiosity and Dis-
covery. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18).
ACM, New York, NY, pp. 931–943.
Karapanos, E., Martens, J.-B., and Hassenzahl, M. (2009) Accounting for Diversity in Subjective
Judgments. In Proceedings of CHI 2009. ACM, New York, NY, pp. 639–648.
Karat, C.-M. (1994) A Comparison of User Interface Evaluation Methods. In J. Nielsen and
R. L. Mack (eds) Usability Inspection Methods. John Wiley & Sons Inc., New York.
Kari, T., Arjoranta, J., and Salo, M. (2017) Behavior Change Types with Pokémon GO. In
Proceedings of the 12th International Conference on the Foundations of Digital Games
(FDG ’17). ACM, New York, NY, Article 33, 10 pages.
References
Kawa, L. (2018) Two Major Apple Shareholders Push for Study of iPhone Addiction in Chil-
dren. Available at: https://www.bloomberg.com/news/articles/2018-01-08/jana-calpers-push-
apple-to-study-iphone-addiction-in-children.
Kazemitabaar, M., McPeak, J., Jiao, A., He, L., Outing, T., and Froehlich, J.E. (2017) Maker-
Wear: A Tangible Approach to Interactive Wearable Creation for Children. In Proceedings of
the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New
York, NY, pp. 133–145.
Keirnan, A., Ahmadpour, N., Pedell, S., and Mayasari, A. (2015) Lights, Camera, Action:
Using Animations to Co-Evaluate User Experience Scenarios. In Proceedings of the Annual
Meeting of the Australian Special Interest Group for Computer Human Interaction (OzCHI
’15), ACM, New York, NY, pp. 492–496.
Kelley, T. with Littman, J. (2016) The Art of Innovation. Profile Books, Croydon, Surrey.
Kempton, W. (1986) Two Theories of Home Heat Control, Cognitive Science 10, 75–90.
Kerr, S. J., Tan, O., Chua, J. C. (2014) Cooking Personas: Goal-Directed Design Requirements
in the Kitchen, International Journal of Human-Computer Studies, 72, 255–274.
Khalid, H., Shihab, E., Nagappan, M., and Hassan, A.E. (2015) What Do Mobile App Users
Complain About? IEEE Software, May/June, 70–77.
Kim, H., Coutrix, C., and Roudaut, A. (2018) KnobSlider: Design of a Shape-Changing UI
for Parameter Control. In Proceedings of the 2018 CHI Conference on Human Factors in
Computing Systems (CHI ’18). ACM, New York, NY, Paper 339, 13 pages.
Kim, J., and Hastak M. (2018) Social Network Analysis: Characteristics of Online Social
Networks after a Disaster. International Journal of Information Management, 38, 86–96.
Kim, S. (1990) Interdisciplinary Cooperation. In B. Laurel (ed.) The Art of Human–Com-
puter Interface Design. Addison-Wesley, Reading, MA.
Kim, Y. (2015). Libero: On-the-Go Crowdsourcing for Package Delivery. In Proceedings of
the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing
Systems (CHI EA ’15). ACM, New York, NY, pp. 121–126.
Kinshumann, K., Glerum, K., Greenberg, S., Aul, G., Orgovan, V., Nichols, G., Grant, D.,
Loihle, G., and Hunt, G. (2011) Debugging in the (Very) Large: Ten Years of Implementation
and Experience, CACM, 54(7), 111–116.
Kirk, D. S., Durrant, A., Wood, G., Leong, T.W., and Wright, P. (2016) Understanding the Social-
ity of Experience in Mobile Music Listening with Pocketsong. In Proceedings of the 2016 ACM
Conference on Designing Interactive Systems (DIS ’16). ACM, New York, NY, pp. 50–61.
Kirkham, R., Mellor, S., Green, D., Lin, J-S., Ladha, K., Ladha, C., Jackson, D., Olivier, P.,
Wright, P., and Ploetz, T. (2013) The Break-Time Barometer: An Exploratory System for
Workplace Break-Time Social Awareness. In Proceedings of the 2013 ACM International
Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’13). ACM, 73–82.
Kirsh, D. (2010) Thinking with External Representations, AI & Society. Online version: down-
loaded from www.springerlink.com/content/5913082573146k68/ (retrieved May 1, 2010).
Kirsh, D. (2013) Embodied Cognition and the Magical Future of Interaction Design, ACM
Transactions on Computer-Human Interaction, Vol. 20, No. 1, Article 3, 30 pages.
Kjeldskov, J., and Skov, M. (2014) Was it Worth the Hassle? Ten Years of Mobile HCI
Research Discussions on Lab and Field Evaluations. In ACM Proceedings of MobileHCI.
Toronto, Canada, September 23–26.
Klein, L. (2014) What Do We Actually Mean by ‘Sociotechnical’? On Values, Boundaries and
the Problems of Language. Applied Ergonomics, 45, 137–142.
Klemmer, S. R., Hartmann, B., and Takayama, L. (2006) How Bodies Matter: Five Themes
for Interaction Design. In Proceedings of the 6th Conference on Designing Interactive Sys-
tems, DIS 2006. ACM, New York, NY, pp. 140–149.
Klemmer, S. R., Newman, M. W., Farrell, R., Bilezikjian, M., and Landay, J. A. (2001) The
Designer’s Outpost: A Tangible Interface for Collaborative Website Design. In Symposium on
User Interface Software and Technology. ACM, New York, NY, pp. 1–10.
Knapp, J. with Zeratsky, J., and Kowitz, B. (2016) Sprint: How to Solve Big Problems and
Test New Ideas in Just Five Days. Bantam Press, UK.
Knowles, B., and Hanson, V. L. (2018) The Wisdom of Older Technology (Non-Users). In
Communications of the ACM, 61, 3, 72–77.
Kohavi, R. (2012) Online Controlled Experiments: Introduction, Learnings, and Humbling
Statistics. In Proceedings of the 6th ACM conference on Recommender systems (RecSys ’12).
ACM, New York, NY, pp. 1–2.
Kohavi, R., and Longbotham, R. (2015) Unexpected Results in Online Controlled Experi-
ments. SIGKDD Explorations, 12, 2, 31–35.
Kollmann, J., Sharp, H., and Blandford, A. (2009) The Importance of Identity and Vision to
User Experience Designers on Agile Projects. In Proceedings of the 2009 Agile Conference,
IEEE Computer Society, Washington DC.
Komninos, A. (2017) How Emotions Impact Cognition. Downloaded from: https://www.
interaction-design.org/literature/article/how-emotions-impact-cognition.
Kozinets, V. (2010) Netnography. SAGE.
Kraut, R., Fish, R., Root, R., and Chalfonte, B. (1990) Informal Communications in Organi-
zations: Form, Function and Technology. In S. Oskamp and S. Spacapan (eds) People’s Reac-
tions to Technology: The Claremont Symposium on Applied Social Psychology. SAGE.
Krippendorff, K. (2013) Content Analysis: An Introduction to Its Methodology (3rd edn).
SAGE Publications.
Krug, S. (2014) Don’t Make Me Think, Revisited: A Common Sense Approach to Web Usa-
bility (3rd edn). Pearson.
Kuhn, T. S. (1972/1962) The Structure of Scientific Revolutions (2nd edn). University of
Chicago Press, Chicago.
Kushniruk A., Monkman H., Borycki E., and Kannry J. (2015) User-Centered Design and
Evaluation of Clinical Information Systems: A Usability Engineering Perspective. In: Patel
V., Kannampallil T., and Kaufman D. (eds.) Cognitive Informatics for Biomedicine. Health
Informatics. Springer.
Lakoff, G., and Johnson, M. (1980) Metaphors We Live By. University of Chicago
Press, Chicago.
Lane, N. D., and Georgiev, P. (2015). Can Deep Learning Revolutionize Mobile Sensing? In
Proceedings of the 16th International Workshop on Mobile Computing Systems and Appli-
cation. ACM, New York, NY, pp. 117–122.
Law, E. L., Roto, V., Hassenzahl, M., Vermeeren, A. P., and Kort, J. (2009) Understand-
ing, Scoping and Defining User Experience: A Survey Approach. In Proceedings of the 27th
International Conference on Human Factors in Computing Systems, CHI 2009. ACM, New
York, NY, pp. 719–728.
Lazar, J., Feng, H. J., and Hochheiser, H. (2017) Research Methods in Human-Computer
Interaction. (2nd ed.). Cambridge, MA: Elsevier/Morgan Kaufmann Publishers.
Lazar, J., Goldstein, D., and Taylor, A. (2015). Ensuring Digital Accessibility Through Process
and Policy. Waltham, MA: Elsevier/Morgan Kaufmann Publishers.
Lazar, J., Jaeger, P., Adams, A., Angelozzi, A., Manohar, J., Marciniak, J., Murphy, J. Norasteh,
P., Olsen, C., Poneres, E., Scott, T., Vaidya, N., and Walsh, J. (2010) Up in the Air: Are Airlines
Following the New DOT Rules on Equal Pricing for People with Disabilities When Websites
are Inaccessible? Government Information Quarterly, 27(4), 329–336.
Lechelt, Z., Rogers, Y., Yuill, N., Nagl, L., Ragone, G., and Marquardt, N. (2018) Inclusive
Computing in Special Needs Classrooms: Designing for All. In Proceedings of the 2018 CHI
Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY,
Paper 517, 12 pages.
Ledgard, H., Singer, A., and Whiteside, J. (1981) Directions in Human Factors for Interac-
tive Systems. In G. Goos and J. Hartmanis (eds) Lecture Notes in Computer Science, 103.
Springer-Verlag, Berlin.
Ley, B., Ogonowski, C., Hess, J., Mu, M., Race, N., Randall, D., Rouncefield, M., and Wulf,
V. (2015) At Home with Users: A Comparative View of Living Labs. Interacting with Com-
puters, Volume 27, Issue 1, pp. 21–35.
Li, J., Cho, I., and Wartell, I. (2018) Evaluation of Cursor Offset on 3D Selection in VR. In
Proceedings of the Symposium on Spatial User Interaction (SUI ’18), pp. 120–129.
Lim, B.Y., Dey, A.K., and Avrahami, D. (2009) Why and Why Not Explanations Improve
the Intelligibility of Context-Aware Intelligent Systems. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems (CHI ’09). ACM, New York, NY,
pp. 2119–2128.
Lim, S. L., and Finkelstein, A. (2012) StakeRare: Using Social Networks and Collaborative
Filtering for Large-Scale Requirements Elicitation, IEEE Transactions on Software Engineer-
ing, 38(2), 707–735.
Lim, Y.-K., Stolterman, E., and Tenenberg, J. (2008) The Anatomy of Prototypes: Prototypes
as Filters, Prototypes as Manifestations of Design Ideas, ACM Transactions on Computer-
Human Interaction, 15(2).
Lin, J., Newman, M.W., Hong, J.I., and Landay, J.A. (2000) DENIM: Finding a Tighter Fit
Between Tools and Practice for Website Design. In Proceedings of CHI ’00, 510–517.
Lingel, J. (2012) Ethics and Dilemmas of Online Ethnography. In CHI ’12 Extended Abstracts
on Human Factors in Computing Systems (CHI EA ’12). ACM, New York, NY, pp. 41–50.
Liu, X., Feng, X., Pan, S., Peng, J., and Zhao, X. (2018) Skeleton Tracking Based on Kinect
Camera and the Application in Virtual Reality System. In Proceedings of the 4th Interna-
tional Conference on Virtual Reality, pp. 21–25.
Liu, Y., and Räihä, K.-J. (2010) Predicting Chinese Text Entry Speeds on Mobile Phones. In
Proceedings of CHI 2010: HCI in China, April 10–15, Atlanta, GA, pp. 2183–2192.
Lomas, N. (2018) WTF is Dark Pattern Design? Downloaded from: https://techcrunch.
com/2018/07/01/wtf-is-dark-pattern-design/.
Loranger, H., and Laubheimer, P. (2017) The State of UX Agile Development, downloaded
from https://www.nngroup.com/articles/state-ux-agile-development/.
Lottridge, D.M., Rosakranse, C., Oh, C.S., Westwood, S.J., Baldoni, K.A., Mann, A.S., and
Nass, C.I. (2015) The Effects of Chronic Multitasking on Analytical Writing. In Proceedings
of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15).
ACM, New York, NY, pp. 2967–2970.
Lotz, N., Sharp, H., Woodroffe, M., Blyth., Rajah, D., and Ranganai, T. (2014) Framing
Behaviours in Novice Interaction Designers, In Proceedings of DRS 2014 Design’s Big
Debates. pp. 1178–1190.
Luce, K. H., Winzelberg, A. J., Das, S., Osborn, M. I., Bryson, S. W., and Taylor, C. B. (2003)
Reliability of Self-Report: Paper Versus Online Administration, Computers in Human Behav-
ior (accessed online January 20, 2005).
Lucero, A. (2015) Using Affinity Diagrams to Evaluate Interactive Prototypes. In Proceedings
of INTERACT 2015, Part II, LNCS 9297, pp. 231–248.
Ludwig, T., Kotthaus, C., and Pipek, V. (2016) Situated and Ubiquitous Crowdsourcing with
Volunteers During Disasters. In Proceedings of the 2016 ACM International Joint Confer-
ence on Pervasive and Ubiquitous Computing: Adjunct (UbiComp ’16). ACM, New York,
NY, pp. 1441–1447.
Lueg, C., Banks, B., Michalek, J., Dimsey, J., and Oswin, D. (2019) Close Encounters of the 5th
Kind: Recognizing System-Initiated Engagement as Interaction Type. JASIST. Wiley Online.
Lugmayr, A., Grenfeld, A., and Zhang, D.J. (2017) Selected Advanced Data Visualizations:
The UX-Machine, Cultural Visualisation, Cognitive Big Data, and Communication of Health
and Wellness Data. In Proceedings of the 26th International Conference on World Wide Web
Companion (WWW ’17 Companion), pp. 247–251.
Mackay, W., and Fayard, A.-L. (1997) HCI, Natural Science and Design: A Framework for
Triangulation Across Disciplines. In Proceedings of the 2nd Conference on Designing Inter-
active Systems: Processes, Practices, Methods, and Techniques (DIS’97) pp. 223–234.
MacKenzie, I. S. (1992) Fitts’ Law as a Research and Design Tool in Human–Computer Inter-
action, Human–Computer Interaction 7, 91–139.
MacKenzie, I. S. (1995). Movement Time Prediction in Human-Computer Interfaces. In
R. M. Baecker, W. A. S. Buxton, J. Grudin, and S. Greenberg (eds.), Readings in Human-
Computer Interaction (2nd edn) (pp. 483–493). Los Altos, CA: Kaufmann.
Maglio, P. P., Matlock, T., Raphaely, D., Chernicky, B., and Kirsh, D. (1999) Interactive Skill
in Scrabble. In Proceedings of Twenty-first Annual Conference of the Cognitive Science Soci-
ety. Lawrence Earlbaum Associates, Mahwah, NJ.
Maguire, M. (2014) Socio-Technical Systems and Interaction Design—21st Century Rele-
vance. Applied Ergonomics, 45, 162–170.
Maher, M. L., Preece, J., Yeh, T., Boston, C., Grace, K., Pasupuleti, A., and Stangl, A. (2014)
NatureNet: A Model for Crowdsourcing the Design of Citizen Science Systems. In Proceedings
of the Companion Publication of the 17th ACM Conference on Computer Supported Coop-
erative Work and Social Computing (CSCW Companion ’14) pp. 201–204. New York: ACM.
Mahyar, N., James, M. R., Ng, M. M., Wu, R. A., and Dow, S. P. (2018) CommunityCrit:
Inviting the Public to Improve and Evaluate Urban Design Ideas through Micro-Activities.
In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI
’18). ACM, New York, NY, Paper 195, 14 pages.
Maiden, N. A. M., Ncube, C., and Robertson, S. (2007) Can Requirements Be Creative?
Experiences with an Enhanced Air Space Management System. In Proceedings of ICSE ’07.
Malamed, C. (2009) Visual Language for Designers: Principles for Creating Graphics that
People Understand. Rockport Publishers.
Mancini, C., Rogers, Y., Bandara, A.K., Coe, T., Joinson, A.N., Jedrzejczyk, L., Price, B. A.,
Thomas, K., and Nuseibeh, B. (2010) ContraVision: Exploring Users’ Reactions to Futuristic
Technology. In Proceedings of the CHI Conference on Human Factors in Computing Systems
(CHI ’10). ACM, New York, NY, pp. 153–162.
Mancini, C., Thomas, K., Rogers, Y., Price, B. A., Jedrzejczyk, L., Bandara, A. K., Joinson, A.
N., and Nuseibeh, B. (2009) From Spaces to Places: Emerging Contexts in Mobile Privacy,
UbiComp 2009, September 30–October 3.
Mankoff, J., Fait, H., and Tran, T. (2005) Is Your Web Page Accessible? A Comparative Study
of Methods for Assessing Web Page Accessibility for the Blind. In Proceedings of CHI 2005.
ACM, New York, NY, pp. 41–50.
Mankoff, J., Kravets, R., and Blevis, E. (2008) Some Computer Science Issues in Creating a
Sustainable World, Computer, 41(8), 102–105.
Mann, S. (1997) An Historical Account of the ‘WearComp’ and ‘WearCam’ Inventions Devel-
oped for Applications in Personal Imaging. In The First International Symposium on Wear-
able Computers: Digest of Papers. IEEE Computer Society, pp. 66–73.
Manuel, D., Moore, D., and Charissis, V. (2012) An Investigation into Immersion in Games
Through Motion Control and Stereo Audio Reproduction, September 26–28 AM ’12: In Proceed-
ings of the 7th Audio Mostly Conference: A Conference on Interaction with Sound pp. 124–129.
Marquardt, N., Brudy, F., Liu, C., Bengler, B., and Holz, C. (2018) SurfaceConstellations: A
Modular Hardware Platform for Ad-Hoc Reconfigurable Cross-Device Workspaces. In Pro-
ceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18).
ACM, New York, NY, Paper 354, 14 pages.
Marsden, G., Maunder, A., and Parker, M. (2008) People Are People, but Technology is not
Technology, Philosophical Transactions of the Royal Society 366, 3795–3804.
Marshall, P., and Hornecker, E. (2013) Theories of Embodiment in HCI. In The SAGE Hand-
book of Digital Technology Research. 144–158.
Marshall, P., Price, S., and Rogers, Y. (2003) Conceptualizing Tangibles to Support Learning.
In Proceedings of Interaction Design and Children, IDC 2003. ACM, New York, pp. 101–109.
Martin, A., Biddle, R., and Noble, J. (2009) XP Customer Practice: A Grounded Theory. In
Proceedings of the 2009 Agile Conference, IEEE Computer Society, Washington DC.
Mason, B. (2017) Virtual Reality has a Motion Sickness Problem. Downloaded from https://
www.sciencenews.org/article/virtual-reality-has-motion-sickness-problem
Maurya, A. (2018) IEEE Big Data 2017 Panel Discussion on Bias and Transparency. AI Mat-
ters, 4 (2).
McCarthy, J., and Wright, P. (2004) Technology as Experience. MIT Press, Cambridge, MA.
McCullough, M. (2004) Digital Ground: Architecture, Pervasive Computing and Environ-
mental Knowing. MIT Press, Cambridge, MA.
McInerney, P. (2017) UX in Agile projects: Taking Stock After 12 Years, Interactions, March-
April, 58–61.
Mekler, E.D., Tuch, A.N., Martig, A.L., and Opwis, K. (2014) A Diary Study Exploring Game
Completion and Player Experience. In Proceedings of the First ACM SIGCHI Annual Sym-
posium on Computer-Human Interaction In Play (CHI PLAY ’14). ACM, New York, NY,
pp. 433–434.
Mifsud, J. (2011) 12 Effective Guidelines for Breadcrumb Usability and SEO. Downloaded
from https://usabilitygeek.com/12-effective-guidelines-for-breadcrumb-usability-and-seo/.
Miller, G. (1956) The Magical Number Seven, Plus or Minus Two: Some Limits on Our
Capacity for Processing Information, Psychological Review 63, 81–97.
Miller, L. (2006) Interaction Designers and Agile Development: A Partnership. In Proceed-
ings of UPA 2006. Denver/Broomfield: Usability Professionals’ Association.
Miller, L. H., and Johnson, J. (1996) The Xerox Star: An Influential User Interface Design.
In M. Rudisill, C. Lewis, P. G. Polson and T. D. McKay (eds) Human–Computer Interface
Design. Morgan Kaufmann, San Francisco.
Mittelstadt, S., Behrisch, M., Weber, S., Schreck, T., Stoffel, A., Pompl, R., Keim, D., Last,
H., and Zhang, L. (2012) Visual Analytics for the Big Data Era—A Comparative Review
of State-of-The-Art Commercial Systems. In Proceedings of the 2012 IEEE Conference
on Visual Analytics Science and Technology (VAST) (VAST ’12). IEEE Computer Society,
Washington, DC, pp. 173–182.
Miyake, N. (1986) Constructive Interaction and the Iterative Process of Understanding, Cog-
nitive Science, 10(2) pp. 151–177.
Molich, R., Laurel, B., Snyder, C., Quesenbery, W., and Wilson, C.E. (2001) Ethics in HCI,
In CHI ’01 Extended Abstracts on Human Factors in Computing Systems. ACM, New York,
NY, pp. 217–218.
Morrison, C., Villar, N., Thieme, A., Ashktorab, Z., Taysom, E., Salandin, O., Cletheroe, D.,
Saul, G., Blackwell, A.F., Edge, D., Grayson, M., and Zhang, H. (2018) Torino: A Tangible
Programming Language Inclusive of Children with Visual Disabilities. Human-Computer
Interaction, 1–49.
Morville, P. (2005) Ambient Findability. O’Reilly Media Inc.
Müller, J., Oulasvirta, A., and Murray-Smith, R. (2017) Control Theoretic Models of Point-
ing. ACM Transactions on Computer-Human Interaction, 24, 4, Article 27, 36 pages.
Müller-Tomfelde, C. (ed.) (2010) Tabletops: Horizontal Interactive Displays. Springer.
Mullet, K., and Sano, D. (1995) Designing Visual Interfaces. Prentice Hall, Mountain View, CA.
Mumford, E. (2006) The Story of Socio-technical Design: Reflections in its Successes, Fail-
ures, and Potential, Information Systems Journal, 16, 317–342.
Muniz, F. (2016) An Introduction to Heuristic Evaluation. Downloaded from: https://
usabilitygeek.com/heuristic-evaluation-introduction/.
Nario-Redmond, M. R., Gospodinov, D., and Cobb, A. (2017) Crip for a Day: The Unintended
Negative Consequences of Disability Simulations. Rehabilitation Psychology, 62(3), 324–333.
Ncube, C., Oberndorf, P., and Kark, A. W. (2008) Opportunistic Software Development:
Making Systems from What’s Available, IEEE Software, 25(6), 38–41.
Neil, T. (2014) Mobile Design Pattern Gallery (2nd edn). O’Reilly.
Neustaedter, C., Venolia, G., Procyk, J., and Hawkins, D. (2016) To Beam or Not to Beam: A
Study of Remote Telepresence Attendance at an Academic Conference. In Proceedings of the
19th ACM Conference on Computer-Supported Cooperative Work and Social Computing
(CSCW ’16). ACM, New York. pp. 418–431.
Nevo, D., and Wade, M. R. (2007) How to Avoid Disappointment by Design, Communica-
tions of the ACM, 50(4), 43–48.
Nielsen, J. (2014) www.useit.com.
Nielsen, J. (1993) Usability Engineering. Morgan Kaufmann, San Francisco.
Nielsen, J. (1994a) Heuristic Evaluation. In J. Nielsen and R. L. Mack (eds) Usability Inspec-
tion Methods. John Wiley & Sons Inc., New York.
Nielsen, J. (1994b) Enhancing the Explanatory Power of Usability Heuristics. In Proceedings
of CHI ’94. ACM, New York, NY, pp. 152–158.
Nielsen, J. (1999) Designing Web Usability: The Practice of Simplicity. New Riders Publish-
ing Thousand Oaks, CA.
Nielsen, J. (2000) Designing Web Usability. New Riders Press, Indianapolis, IN.
Nielsen, J., and Li, A. (2017) Mega Menus Work Well for Site Navigation. Downloaded from
https://www.nngroup.com/articles/mega-menus-work-well/.
Nielsen, J., and Loranger, H. (2006) Prioritizing Web Usability. New Riders Press.
Nielsen, J., and Mack, R.L. (eds) (1994) Usability Inspection Methods. John Wiley & Sons
Inc., New York.
Nielsen, J., and Molich, R. (1990) Heuristic Evaluation of User Interfaces. In Proceedings
of CHI ’90. ACM, New York.
Nielsen, J., and Norman, D. (2014) The Definition of User Experience, www.nngroup.com/
articles/definition-user-experience/ (accessed July 2, 2014).
Nielsen, J., and Tahir, M. (2002) Homepage Usability: 50 Websites Deconstructed. New
Riders Press.
Norman, D. (1983) Some Observations on Mental Models. In D. Gentner and A. L. Stevens
(eds) Mental Models. Lawrence Earlbaum Associates, Hillsdale, NJ.
Norman, D. (1986) Cognitive Engineering. In D. Norman and S. W. Draper (eds) User Cen-
tered System Design. Lawrence Earlbaum Associates, Hillsdale, NJ, pp. 31–62.
Norman, D. (1988) The Design of Everyday Things. Basic Books, New York.
Norman, D. (1993) Things That Make Us Smart. Addison-Wesley, Reading, MA.
Norman, D. (1999) Affordances, Conventions and Design, ACM Interactions Magazine,
May/June, 38–42.
Norman, D. (2004) Beauty, Goodness, and Usability/Change Blindness. Human–Computer
Interaction, 19(4), 311–318.
Norman, D. (2005) Emotional Design: Why We Love (or Hate) Everyday Things. Basic
Books, New York.
Norman, D. (2006) Why Doing User Observations First is Wrong, Interactions, July/Aug, 50.
Norman, D. (2010) Natural Interfaces are Not Natural, Interactions, May/June, 6–10.
Norman, D. (2013) The Design of Everyday Things. The MIT Press, Cambridge,
Massachusetts.
North, S. (2017) Hey, Where’s My Hay?: Design Fictions in Horse-Computer Interaction.
In Proceedings of the Fourth International Conference on Animal-Computer Interaction
(ACI2017). ACM, New York, NY, Article 17, 5 pages.
Nudelman, G. (2013) Android Design Patterns. John Wiley.
O’Kane, A.A., Rogers, Y., and Blandford, A.E. (2015) Concealing or Revealing Mobile Med-
ical Devices? Designing for Onstage and Offstage Presentation. In Proceedings of the 33rd
Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New
York, NY, pp. 1689–1698.
O’Conaill, B., Whittaker, S., and Wilbur, S. (1993) Conversations Over Video Conferences:
An Evaluation of the Spoken Aspects of Video-Mediated Communication, Human–Com-
puter Interaction 8, 389–428.
Ofcom Report (2018) A Decade of Digital Dependency. Downloaded from https://www.ofcom
.org.uk/about-ofcom/latest/features-and-news/decade-of-digital-dependency.
O’Hara, K., Gonzalez, G., Sellen, A., Penney, G., Varnavas, A. Mentis, H., Criminisi, A.,
Corish, R., Rouncefield, M., Dastur, N., and Carrell, T. (2013) Touchless Interaction in Sur-
gery, Communications of the ACM, 57(1), 70–77.
Oliveira, N., Jun, E., and Reinecke, K. (2017) Citizen Science Opportunities in Volunteer-
Based Online Experiments. In Proceedings of the 2017 CHI Conference on Human Factors
in Computing Systems (CHI ’17). ACM, New York, NY, pp. 6800–6812.
Oliver, J.L., Brereton, M., Watson, D.M., and Roe, P. (2018) Visualisations Elicit Knowledge
to Refine Citizen Science Technology Design: Spectrograms Resonate with Birders. In Pro-
ceedings of the 30th Australian Conference on Computer-Human Interaction (OzCHI ’18).
ACM, New York, NY, pp. 133–144.
Ophir, E., Nass, C. I., and Wagner, A. D. (2009) Cognitive Control in Media Multitaskers,
Proceedings of the National Academy of Sciences USA 106:15583–15587.
Oppenheim, A.N. (2000) Questionnaire Design, Interviewing and Attitude Measurement.
2nd edition. Pinter Publishers.
Ortony, A., Norman, D. A., and Revelle, W. (2005) Affect and proto-affect in effective func-
tioning. In J. M. Fellous and M. A. Arbib (eds) Who Needs Emotions? The Brain Meets the
Machine. New York: Oxford University Press, pp. 173–202.
Oviatt, S., Cohen, A., and Weibel, N. (2013) Multimodal Learning Analytics: Description of
Math Data Corpus of ICMI Grand Challenge Workshop. ICMI ’13: In Proceedings of the
15th ACM International Conference on Multimodal Interaction.
Oviatt, S., Schuller, B., Cohen, P.R., Sonntag, D., Potamianos, G., and Krüger, A. (eds.). (2017)
The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and
Common Modality Combinations—Volume 1. Association for Computing Machinery and
Morgan & Claypool, New York, NY.
Park, S.Y., and Chen, Y. (2015) Individual and Social Recognition: Challenges and Opportu-
nities in Migraine Management. In Proceedings of the 18th ACM Conference on Computer
Supported Cooperative Work and Social Computing (CSCW ’15). ACM, New York, NY,
pp. 1540–1551.
Parker, C., Fredericks, J., Tomitsch, M., and Yoo, S. (2017) Towards Adaptive Height-Aware
Public Interactive Displays. In Adjunct Publication of the 25th Conference on User Mode-
ling, Adaptation and Personalization (UMAP ’17), ACM, New York, NY, pp. 257–260.
Paterson, B., Winschiers-Theophilus, H., Dunne, T. T., Schinzel, B., and Underhill, L. G.
(2011) Interpretation of a Cross-Cultural Usability Evaluation—A Case Study Based on a
Hypermedia System for Rare Species Management in Namibia. Interacting with Computers,
23, 3, 239–246.
Pearl, C. (2016) Designing Voice User Interfaces. O’Reilly.
Peatt, K. (2014) Off the Beaten Canvas: Exploring the Potential of the Off-Canvas Pattern
downloaded from http://www.smashingmagazine.com/2014/02/24/off-the-beaten-canvas-
exploring-the-potential-of-the-off-canvas-pattern/ Sept 2014.
Pêcher, C., Lemercier, C., and Cellier, J.-M. (2009) Emotions Drive Attention: Effects on
Driver’s Behavior. Safety Science, 47, 1254–1259.
Pêcher, C., Lemercier, C., and Cellier, J.-M. (2011) The Influence of Emotions on Driving
Behavior. In D. Hennessy (ed.) Traffic Psychology: An International Perspective, Chapter IX.
Nova Science Publishers.
Perterer, N., Sundström, P., Meschtscherjakov, A., Wilfinger, D., and Tscheligi, M. (2013)
Come Drive with Me: An Ethnographic Study of Driver-Passenger Pairs to Inform Future In-
Car Assistance. In Proceedings of the Conference on Computer supported cooperative work
(CSCW’13). ACM, New York, NY, pp. 1539–1548.
Petrie, H., Hamilton, F., King, N., and Pavan, P. (2006) Remote Usability Evaluations with
Disabled People. In Proceedings of the SIGCHI Conference on Human Factors in Comput-
ing Systems (CHI ’06), ACM, New York, NY, pp. 1133–1141.
Picard, R. W. (1998) Affective Computing. MIT Press, Cambridge, MA.
Pinelle, D., Wong, N., and Stach, T. (2008) Heuristic Evaluation for Games: Usability Princi-
ples for Video Games. In Proceedings of SIGCHI 2008, Florence, Italy, pp. 1453–1462.
Pinelle, D., Wong, N., Stach, T., and Gutwin, C. (2009) Usability Heuristics for Networked
Multiplayer Games. In ACM Proceedings of GROUP’09.
Porcheron, M., Fischer J.E., Reeves, S., and Sharples, S. (2018) Voice Interfaces in Everyday
Life. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
(CHI ’18). ACM, New York, NY, Paper 640, 12 pages.
Porter, C., Letier, E., and Sasse, M.A. (2014) Building a National E-Service Using Sentire:
Experience Report on the Use of Sentire: A Volere-Based Requirements Framework Driven
by Calibrated Personas and Simulated User Feedback. In Proceedings of RE’2014, 374–383.
Poulter, N. (2013) 6 Google Analytics Custom Dashboards to Save You Time NOW! Retrieved
from http://www.stateofdigital.com/google-analytics-dashboards/.
Preece, J. (2016) Citizen Science: New Research Challenges in HCI. International Journal of
Human-Computer Interaction, 32, 8, 585–612.
Preece, J. (2017) How Two Billion Smartphone Users Can Save Species, Interactions. Vol.
XXIV.2, 27–33.
Preece, J., and Shneiderman, B. (2009) The Reader to Leader Framework: Motivating Tech-
nology-Mediated Social Participation, AIS Transactions on Human–Computer Interaction,
1(1), 13–32.
Preece, J., Pauw, D., and Clegg, T. (2018) Interaction Design of Community-Driven Environ-
mental Projects (CDEPs): A Case Study from the Anacostia Watershed. Proceedings of the
National Academy of Sciences, USA.
Pressman, R.S., and Maxim, B.R. (2014) Software Engineering: A Practitioner’s Approach
(Int’l Ed). McGraw-Hill Education.
Price, B., Kelly, R., Mehta, V., McCormick, C., Ahmed, H., and Pearce, O. (2018) Feel My
Pain: Design and Evaluation of Painpad, a Tangible Device for Supporting Inpatient Self-
Logging of Pain. In Proceedings of the CHI Conference on Human Factors in Computing
Systems (CHI’18). ACM, New York, NY, Paper 169, 13 pages.
Primak, R. (2014) Walden Warming: Climate Change Comes to Thoreau’s Woods. University
of Chicago Press.
Prohaska, T.R., Anderson, L.A., and Binstock, R.H. (2012) Public Health for an Aging Soci-
ety. JHU Press, pp. 249–252.
Purkiss, B., and Khaliq, I. (2015) A Study of Interaction in Idle Games and Perceptions on
the Definition of a Game. In Proceedings of IEEE Games Entertainment Media Conference
(GEM’2015), 1–6.
Putnam, C., Bungum, M., Spinner, D., Parelkar, A.N., Vipparti, S., and Cass, P. (2018) How
User Experience Is Practiced: Two Case Studies from the Field. In Extended Abstracts of the
2018 CHI Conference on Human Factors in Computing Systems (CHI EA ’18). ACM, New
York, NY, Paper LBW092, 6 pages.
Putnam, R. D. (2000) Bowling Alone: The Collapse and Revival of American Community.
New York: Simon & Schuster.
Rader, E., Cotter, K., and Cho, J. (2018) Explanations as Mechanisms for Supporting Algo-
rithmic Transparency. In Proceedings of the 2018 CHI Conference on Human Factors in
Computing Systems (CHI ’18). ACM, New York, NY, Paper 103, 13 pages.
Rae, I., Mutlu, B., Olson, G.M., Olson, J.S., Takayama, L., and Venolia, G. (2015) Every-
day Telepresence: Emerging Practices and Future Research Directions. In Proceedings of the
33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems
(CHI EA ’15). ACM, New York. pp. 2409–2412.
Rajanna, V., and Hammond, T. (2018) A Fitts’ law evaluation of gaze input on large displays
compared to touch and mouse inputs. In Proceedings of the Workshop on Communication
by Gaze Interaction (COGAIN ’18). ACM, New York, NY, Article 8, 5 pages.
Rajkomar, A., and Blandford, A. (2012) Understanding Infusion Administration in the ICU
Through Distributed Cognition. Journal of Biomedical Informatics, 45(3), 580–590.
Rajkomar, A., Mayer, A., and Blandford, A. (2015) Understanding Safety-Critical Interac-
tions with A Home Medical Device Through Distributed Cognition. Journal of Biomedical
Informatics, 56, 179–194.
Ramcharitar, A., and Teather, R. J. (2017) A Fitts’ Law Evaluation of Video Game Control-
lers: Thumbstick, Touchpad and Gyrosensor. In Proceedings of the CHI Conference Extended
Abstracts on Human Factors in Computing Systems. ACM, New York, NY, pp. 2860–2866.
Raptis, D., Jensen, R. H., Kjeldskov, J., and Skov, M.B. (2017) Aesthetic, Functional and
Conceptual Provocation in Research Through Design. In Proceedings of DIS’2017, ACM,
New York, NY, 29–41.
Raskin, J. (2000) The Humane Interface. Addison-Wesley, Harlow, Essex.
Ratcliffe, L., and M. McNeill (2012) Agile Experience Design. New Riders.
Rau, P.P., Plocher, T., and Choong, Y. (2013) Cross-Cultural Design for IT Products and
Services. CRC Press.
Read, J., Macfarlane, S., and Casey, C. (2002) Endurability, Engagement and Expectations:
Measuring Children’s Fun. In Proceedings of Interaction Design and Children 2002, Eindhoven,
The Netherlands. ACM, New York, NY, pp. 189–198.
Redish, G. (2012) Letting Go of the Words: Writing Web Content That Works (2nd edn).
Morgan Kaufmann.
Reeves, B., and Nass, C. (1996) The Media Equation: How People Treat Computers, Televi-
sion, and New Media Like Real People and Places. Cambridge University Press, Cambridge.
Reeves, L., Lai, J. C., Larson, J. A., Oviatt, S. L., Balaji, T. S., Buisine, S., Collings, P., Cohen,
P. R., Kraal, B., Martin, J.-C., McTear, M. F., Raman, T. V., Stanney, K. M., Su, H., and Wang,
Q. Y. (2004) Guidelines for Multimodal User Interface Design, Communications of the ACM,
47(1), 57–59.
Reynaga, G., Chiasson, S., and van Oorschot, P.C. (2015) Heuristics for the Evaluation of
Captchas on Smartphones. In Proceedings of the 2015 British HCI Conference (British HCI
’15). ACM, New York, NY, 126–135.
Richard, P., Burkhardt, J-M., and Lubart, T. (2014). Users’ Participation to Creative Design
of New Solutions for Mobility: An Exploratory Study. In Proceedings of the 2014 European
Conference on Cognitive Ergonomics (ECCE ’14). ACM, New York, NY, Article 21, 7 pages.
Richards, M., and Woodthorpe J. (2009) Introducing TU100 “My Digital Life”: Ubiquitous
Computing in a Distance Learning Environment. In Proceedings of UbiComp 2009, Walt
Disney Beach Club Resort, Orlando, FL.
Rideout, V.J., Foehr, U.G., and Roberts, D.F. (2010) Generation M2: Media in the Lives of
8- to 18-year-Olds. Menlo Park, CA: Henry J Kaiser Family Foundation.
Ries, E. (2011) The Lean Startup: How Constant Innovation Creates Radically Successful
Businesses. Portfolio Penguin.
Righi, V., Sayago, S., and Blat, J. (2017) When We Talk about Older People in HCI, Who
Are We Talking About? Towards a ‘Turn to Community’ in the Design of Technologies
for a Growing Aging Population. International Journal of Human-Computer Studies, 108,
15–31.
Rizvi, S. A., Tuson, E., Desrochers, B., and Magee, J. (2018) Simulation of Motor Impairment
in Head-Controlled Pointer Fitts’ Law Task. In Proceedings of the 20th International ACM
SIGACCESS Conference on Computers and Accessibility (ASSETS ’18). ACM, New York,
NY, pp. 376–378.
Robertson, S., and Robertson, J. (2013) Mastering the Requirements Process (3rd edn).
Pearson Education, New Jersey.
Robinson, R., Rubin, Z., Márquez Segura, E., and Isbister, K. (2017) All the Feels: Designing
A Tool that Reveals Streamers’ Biometrics to Spectators. In Proceedings of the 12th Interna-
tional Conference on the Foundations of Digital Games (FDG ’17). ACM, New York, NY,
Article 36, 6 pages.
Robson, C., and McCartan, K. (2016) Real World Research. John Wiley & Sons.
Rodden, K., Hutchinson, H., and Fu, X. (2010) Measuring the User Experience on a
Large Scale: User-Centered Metrics for Web Applications, In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems (CHI ’10). ACM, New York, NY,
pp. 2395–2398.
Rogers, Y. (1989) Icons at The Interface: Their Usefulness, Interacting with Computers,
1(1), 105–117.
Rogers, Y. (2006) Moving on from Weiser’s Vision of Calm Computing: Engaging UbiComp
Experiences. In Proceedings of UbiComp 2006, LNCS 4206, Springer-Verlag, Berlin, Heidel-
berg, pp. 404–421.
Rogers, Y. (2011) Interaction Design Gone Wild: Striving for Wild Theory, Interactions,
18(4): 58–62.
Rogers, Y. (2012) HCI Theory: Classical, Modern and Contemporary. Morgan & Claypool.
Rogers, Y., and Aldrich, F. (1996) In Search of Clickable Dons: Learning About HCI Through
Interacting with Norman’s CD-ROM, SIGCHI Bulletin, 28(3).
Rogers, Y., and Lindley, S. (2004) Collaborating Around Vertical and Horizontal Displays:
Which Way is Best? Interacting with Computers, 16(6), 1133–1152.
Rogers, Y., and Marsden, G. (2013) Does He Take Sugar? Moving Beyond the Rhetoric of
Compassion, Interactions XX.4 July–August 2013.
Rogers, Y., and Marshall, P. (2017) Research in the Wild. Morgan & Claypool.
Rogers, Y., Hazlewood, W., Marshall, P., Dalton, N. S., and Hertrich, S. (2010a) Ambi-
ent Influence: Can Twinkly Lights Lure and Abstract Representations Trigger Behavioral
Change? In Proceedings of Ubicomp 2010, pp. 261–270.
Rogers, Y., Lim, Y., and Hazlewood, W. (2006) Extending Tabletops to Support Flexible Col-
laborative Interactions. In Proceedings of Tabletop 2006, Adelaide, Australia, January 5–7.
IEEE, pp. 71–78.
Rogers, Y., Lim, Y. Hazlewood, W., and Marshall, P. (2009) Equal Opportunities: Do Share-
able Interfaces Promote More Group Participation than Single User Displays? Human–
Computer Interaction, 24(2), 79–116.
Rogers, Y., Paay, J., Brereton, M., Vaisutis, K., Marsden, G., and Vetere, F. (2014) Never Too
Old: Engaging Retired People Inventing the Future with MaKey MaKey. In Proceedings of
CHI 2014, ACM, 2675–2684.
Rogers, Y., Payne, S., and Todd, P. (2010b) Projecting Instant Information in Situ: Can it
Help us Make More Informed Decisions? In Ubiprojection 2010: Workshop Proceedings,
Pervasive 2010.
Rogers, Y., Price, S., Randell, C., Fraser, D.S., Weal, M., and Fitzpatrick, G. (2005) Ubi-Learn-
ing Integrates Indoor and Outdoor Experiences, Communications of the ACM, 48(1), 55–59.
Rogers, Y., Yuill, N., and Marshall, P. (2013) Contrasting Lab-Based and in-the-wild Studies
for Evaluating Multi-User Technologies. In Price, S., Jewitt, C., and Brown, B. (eds.) SAGE
Handbook of Digital Technology Research. 359–373.
Rønby-Pedersen, E., McCall, K., Moran, T. P., and Halasz, F. G. (1993) Tivoli: An Electronic
Whiteboard for Informal Workgroup Meetings. In Proceedings of CHI ’93. ACM, New York,
NY, pp. 391–398.
Rooksby, J., Rost, M., Morrison, A., and Chalmers, M. (2014) Personal Tracking as Lived
Informatics. In Proceedings of CHI’14, ACM, 1163–1172.
Rose, D. (2018) Why Gesture is the Next Big Thing in Design. Downloaded from: https://www.ideo.com/blog/why-gesture-is-the-next-big-thing-in-design.
Roth, I. (1986) An Introduction to Object Perception. In I. Roth and J. B. Frisby (eds) Per-
ception and Representation: A Cognitive Approach. The Open University Press, Milton
Keynes, UK.
Rotman, D., Hammock, J., Preece, J., Boston, C. L., Hansen, D. L., Bowser, A., and He, Y.
(2014). Does Motivation in Citizen Science Change with Time and Culture? In Proceedings of
the Companion Publication of the 17th ACM Conference on Computer Supported Coopera-
tive Work and Social Computing (CSCW Companion ’14). ACM, New York, NY, 229–232.
Rotman, D., He, Y., Preece, J., and Druin, A. (2013) Understanding Large Scale Online Envi-
ronments with Qualitative Methods. iConference, February 2013, Texas.
Rotman, D., Preece, J., He, Y., and Druin, A. (2012) Extreme Ethnography: Challenges for
Research in Large Scale Online Environments. iConference, February 7–10, 2012, Toronto,
Ontario, Canada.
Russell, D. M., and Yarosh, S. (2018). Can We Look to Science Fiction for Innovation in
HCI? Interactions 25, 2, 36–40.
Ryall, K., Forlines, C., Shen, C., and Ringel-Morris, M. (2004) Exploring the Effects of Group
Size and Table Size on Interactions with Tabletop Shared-Display Groupware. In Proceedings
of Conference on Computer Supported Cooperative Work (CSCW). ACM, New York.
Sacks, H., Schegloff, E., and Jefferson, G. (1978) A Simplest Systematics for the Organization
of Turn-taking for Conversation, Language 50, 696–735.
Saffer, D. (2010) Designing for Interaction: Creating Smart Applications and Clever Devices
(2nd edn). New Riders Press, Indianapolis, IN.
Saffer, D. (2014) Microinteractions: Designing with Details. O’Reilly.
Sakr, S., Bajaber, F., Barnawi, A., Altalhi, A., Elshawi, R., and Batarfi, O. (2015) Big Data
Processing Systems: State-of-the-Art and Open Challenges. In Proceedings of ICCC 2015.
Sambrooks, L., and Wilkinson, B. (2013) Comparison of Gestural, Touch, and Mouse Inter-
action With Fitts’ Law. In Proceedings of the 25th Australian Computer-Human Interac-
tion Conference: Augmentation, Application, Innovation, Collaboration (OzCHI ’13), ACM,
New York, NY, pp. 119–122.
Sarikaya, A., Correll, M., Bartram, L., Tory, M., and Fisher, D. (2018) What Do We Talk About
When We Talk About Dashboards? IEEE Transactions on Visualization and Computer Graphics (early access).
Sas, C., and Whittaker, S. (2013) Design for Forgetting: Disposing of Digital Possessions
After a Breakup. In Proceedings of CHI ’13. ACM, pp.1823–1832.
Scaife, M., and Rogers, Y. (1996) External Cognition: How Do Graphical Representations
Work? International Journal of Human–Computer Studies 45, 185–213.
Scapin, D. L. (1981) Computer Commands in Restricted Natural Language: Some Aspects of
Memory and Experience, Human Factors, 23, 365–375.
Schaffer, E. (2009) Beyond Usability: Designing Web Sites for Persuasion, Emotion and Trust.
Downloaded from www.uxmatters.com/mt/archives/2009/01/beyond-usability-designing-web-sites-for-persuasion-emotion-and-trust.php (retrieved 8 March, 2019).
Schank, R. C. (1982) Dynamic Memory: A Theory of Learning in Computers and People.
Cambridge University Press, Cambridge.
Schegloff, E. (1981) Discourse as an Interactional Achievement: Some Uses of ‘Uh-Huh’ and
Other Things that Come Between Sentences. In D. Tannen (ed.) Analyzing Discourse: Text
and Talk. Georgetown University Press.
Schegloff, E. A., and Sacks, H. (1973) Opening Up Closings, Semiotica 7, 289–327.
Schilit, B., Adams, N., Gold, R., Tso, M., and Want, R. (1993) The PARCTAB Mobile Com-
puting System. In Proceedings of Fourth Workshop on Workstation Operating Systems,
WWOS-IV. IEEE, pp. 34–39.
Schmidt, A. (2017a) Technologies to Amplify the Mind. IEEE Computer, 50(10), 102–106.
Schmidt, A. (2017b) Augmenting Human Intellect and Amplifying Perception and Cognition.
IEEE Pervasive Computing, 16(1), 6–10.
Schmidt, A., and Herrmann, T. (2017) Intervention User Interfaces: A New Interaction Para-
digm for Automated Systems, Interactions, vol 24, no 5, pp. 40–45.
Schmitz, K., Mahapatra, R., and Nerur, S. (2018) User Engagement in the Era of Hybrid Agile
Methodology. IEEE Software (early access).
Schnall, R., Cho, H., and Liu, J. (2018) Health Information Technology Usability Evalua-
tion Scale (Health-ITUES) for Usability Assessment of Mobile Health Technology: Validation
Study. JMIR Mhealth Uhealth., 6(1):e4.
Schön, D. (1983) The Reflective Practitioner: How Professionals Think in Action. Basic
Books, New York.
Schuler, R. P., Grandhi, S. A., Mayer, J. M., Ricken, S. T., and Jones, Q. (2014) The Doing of
Doing Stuff: Understanding the Coordination of Social Group-Activities. In Proceedings of
CHI ’14, ACM, 119–128.
Schultz, P. W., Nolan, J. M., Cialdini, R. B., Goldstein, N.J., and Griskevicius, V. (2007) The
Constructive, Destructive, and Reconstructive Power of Social Norms, Psychological Science,
18(5), 429–434.
Schwaber, K., and Beedle, M. (2002) Agile Software Development with Scrum. Prentice Hall,
Englewood Cliffs, NJ.
Seffah, A., Gulliksen, J., and Desmarais, M. C. (2005) Human-Centered Software Engineer-
ing. Springer.
Segura, V.C.B., Barbosa, S.D.J., and Simões, F.P. (2012) UISKEI: A Sketch-Based Prototyping
Tool for Defining and Evaluating User Interface Behavior. In Proceedings of the International
Working Conference on Advanced Visual Interfaces, 18–25.
Sentance, S. Waite, J., Hodges, S., MacLeod, E., and Yeomans, L. (2017) Creating Cool Stuff:
Pupils’ Experience of the BBC micro:bit. In Proceedings of SIGCSE 2017, 531–536.
Sethu-Jones, G.R., Rogers, Y., and Marquardt, N. (2017) Data in the Garden: A Framework
for Exploring Provocative Prototypes as Part of Research in the Wild. In Proceedings of
the 29th Australian Conference on Computer-Human Interaction (OzCHI ’17), ACM, New
York, NY, pp. 318–327.
Shaer, O., and Hornecker, E. (2010) Tangible User Interfaces: Past, Present and Future Direc-
tions, Foundations and Trends in HCI (FnT in HCI) 3(1–2), 1–138.
Shaer, O., Strait, M., Valdes, C., Wang, H., Feng, T., Lintz, M., Ferreirae, M., Grote, C., Tem-
pel, K., and Liu, S. (2012) The Design, Development, and Deployment of a Tabletop Interface
for Collaborative Exploration of Genomic Data, International Journal of Human-Computer
Studies, 70, 746–764.
Sharp, H., Biddle, R., Gray, P. G., Miller, L., and Patton, J. (2006) Agile Development: Oppor-
tunity or Fad? Addendum to Proceedings of CHI 2006, Montreal.
Sharp, H., Galal, G. H., and Finkelstein, A. (1999) Stakeholder Identification in the Requirements
Engineering Process. In: Proceedings of the Database and Expert System Applications Work-
shop (DEXA), pp. 387–391.
Sharp, H., Robinson, H.M., and Petre, M. (2009) The Role of Physical Artefacts in Agile Soft-
ware Development: Two Complementary Perspectives, Interacting with Computers, 21(1-2)
108–116.
Shen, C., Everitt, K., and Ryall, K. (2003) UbiTable: Impromptu Face-to-Face Collaboration
on Horizontal Interactive Surfaces. In Proceedings of Ubicomp 2003, pp. 281–288.
Shen, C., Lesh, N. B., Vernier, F., Forlines, C., and Frost, J. (2002) Building and Sharing Digi-
tal Group Histories. In Proceedings CSCW 2002. ACM, New York, NY, pp. 324–333.
Shilton, K. (2018) Values and Ethics in Human-Computer Interaction, Foundations and
Trends in Human-Computer Interaction, 12(2), 107–171.
Shneiderman, B. (1983) Direct Manipulation: A Step Beyond Programming Languages, IEEE
Computer, 16(8), 57–69.
Shneiderman, B. (1992) Tree Visualization with Tree-Maps: 2-d Space-Filling Approach.
ACM Transactions on Graphics, 11(1), 92–99.
Shneiderman, B. (1998) Designing the User Interface: Strategies for Effective Human–
Computer Interaction (3rd edn). Addison-Wesley, Reading, MA.
Shneiderman, B., and Plaisant, C. (2006) Strategies for Evaluating Information Visualiza-
tion Tools: Multi-Dimensional In-Depth Long-Term Case Studies. In Proceedings Beyond
Time and Errors: Novel Evaluation Methods for Information Visualization. Workshop of the
Advanced Visual Interfaces Conference.
Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S., and Elmqvist, N. (2016) Designing the
User Interface: Strategies for Effective Human-Computer Interaction (6th ed.). Pearson.
Sidner, C., and Lee, C. (2005) Robots as Laboratory Hosts, Interactions 12, 24–26.
Siek, K. A., Rogers, Y., and Connelly, K. H. (2005) Fat Finger Worries: How Older and
Younger Users Physically Interact with PDAs. In Proceedings of INTERACT ’05, Rome.
Silver, J., and Rosenbaum, E. (2012). Makey Makey: Improvising Tangible and Nature-Based
User Interfaces. In Adjunct Proceedings of TEI’12.
Silverberg, M., MacKenzie, I. S., and Korhonen, P. (2000) Predicting Text Entry Speed on
Mobile Phones. In Proceedings of CHI 2000. ACM, New York, NY, pp. 9–16.
Silverio-Fernández, M., Renukappa, S., and Suresh, S. (2018) What is a Smart Device?—A Con-
ceptualisation within the Paradigm of the Internet of Things. Visualization in Engineering, 6:3.
Sim, G., Horton, M., and McKnight, L. (2016) iPad vs Paper Prototypes: Does Form Fac-
tor Affect Children’s Ratings of a Game Concept? In Proceedings of Interaction Design and
Children (IDC ’16), pp. 190–195.
Simonsen, J., and Robertson, T. (2012) Routledge Handbook of Participatory Design.
Routledge, London.
Singer, L.M., and Alexander, P.A. (2017). Reading on Paper and Digitally: What the Past
Decades of Empirical Research Reveal. Review of Educational Research, 87(6), 315–343.
Singer, P. (2011) The Expanding Circle: Ethics, Evolution, and Moral Progress. Princeton
University Press.
Sitbon, L., and Farhin, S. (2017) Co-Designing Interactive Applications with Adults with
Intellectual Disability: A Case Study. In Proceedings of the 29th Australian Conference on
Computer-Human Interaction (OzCHI ’17). ACM, New York, NY, pp. 487–491.
Slater, M., and Wilbur, S. (1997) A Framework for Immersive Virtual Environments (FIVE):
Speculations on the Role of Presence in Virtual Environments, Presence: Teleoperators and
Virtual Environments 6, 603–616.
Slater, M., Pertaub, D., and Steed, A. (1999) Public Speaking in Virtual Reality: Facing an
Audience of Avatars. IEEE Computer Graphics and Applications, 19(2), 6–9.
Sleeper, M., Consolvo, S., and Staddon, J. (2014) Exploring the Benefits and Uses of Web
Analytics Tools for Non-Transactional Websites. In ACM Proceedings of the 2014 Confer-
ence on Designing Interactive Systems (DIS).
Smith, D., Irby, C., Kimball, R., Verplank, B., and Harslem, E. (1982) Designing the Star User
Interface, Byte, 7(4), 242–282.
Smith, M.E., Ascenzi, L., Qin, Y., and Wetsman, R. (2018) Designing a Video Co-Watching
Web App to Support Interpersonal Relationship Maintenance. In Proceedings of GROUP
’18, 162–165.
Smyth, J. D., Dillman, D. A., Christian, L. M., and Stern, M. J. (2005) Comparing Check-All
and Forced-Choice Question Formats in Web Surveys: The Role of Satisficing, Depth of Pro-
cessing, and Acquiescence in Explaining Differences. Technical Report #05-029. Washington
State University Social and Economic Sciences Research Center, Pullman, 30 pages.
Sohn, T., Li, K. A., Griswold, W. G., and Hollan, J. D. (2008) A Diary Study of Mobile Infor-
mation Needs. In Proceedings of CHI 2008. ACM, New York.
Solovey, E.T., Afergan, D., Peck, E., Hincks, S., and Jacob, R.J.K. (2014) Designing Implicit
Interfaces for Physiological Computing: Guidelines and Lessons Learned Using fNIRS.
ACM Transactions on Computer–Human Interaction, 21(6).
Sparrow, B., Liu, J., and Wegner, D. M. (2011) Google Effects on Memory: Cognitive Conse-
quences of Having Information at Our Fingertips. Science, 333(6043), 308–314.
Speicher, M., Hell, P., Daiber, F., Simeone, A., and Krüger, A. (2018) A Virtual Reality Shop-
ping Experience Using the Apartment Metaphor. In AVI ’18: 2018 International Conference
on Advanced Visual Interfaces, 9 pages.
Spelmezan, D., Jacobs, M., Hilgers, A., and Borchers, J. (2009) Tactile Motion Instructions
for Physical Activities. In Proceedings of the 27th International Conference on Human Fac-
tors in Computing Systems, CHI 2009. ACM, New York, NY, pp. 2243–2252.
Spencer, R. (2000) The Streamlined Cognitive Walkthrough Method: Working Around Social
Constraints Encountered in a Software Development Company. In Proceedings of CHI 2000.
ACM, New York, NY, pp. 353–359.
Spool, J. (2018) https://medium.com/@jmspool (accessed December 2018).
Starbird, K., Palen, L., Hughes, A. L., and Vieweg, S. (2010) Chatter on the Red: What Haz-
ards Threat Reveals about the Social Life of Microblogged Information. In Proceedings of
the 2010 ACM Conference on Computer Supported Cooperative Work, CSCW 2010. ACM,
New York, NY, pp. 241–250.
Stavrinos, D., Jones, J.L., Garner, A.A., Griffin, R., Franklin, C.A., Ball, D., Welburn, S.C.,
Ball, K.K., Sisiopiku, V.P., and Fine, P.R. (2013) Impact of Distracted Driving on Safety and
Traffic Flow. Accident Analysis and Prevention, 61, 63–70.
Steed, A., Pan, Y., Watson, Z., and Slater, M. (2018) “We Wait”—The Impact of Character
Responsiveness and Self Embodiment on Presence and Interest in an Immersive News Experi-
ence. Frontiers in Robotics and AI, 5, 112.
Stel, M., and Vonk, R. (2010) Mimicry in Social Interaction: Benefits for Mimickers, Mimic-
kees, and their Interaction. British Journal of Psychology, 101(2), 311–323.
Stephens-Davidowitz, S. (2018) Everybody Lies: Big Data, New Data, and What the Internet
Can Tell Us about Who We Really Are. Penguin.
Stobert, E., and Biddle, R. (2018) The Password Life Cycle. Transactions on Privacy and
Security, 21(3), Article 13, 32 pages.
Strauss, A., and Corbin, J. (1998) Basics of Qualitative Research: Techniques and Procedures
for Developing Grounded Theory (2nd edn). SAGE, London.
Strommen, E. (1998) When the Interface is a Talking Dinosaur: Learning Across Media with
ActiMates Barney. In Proceedings of CHI ’98. ACM, New York, NY, pp. 288–295.
Subramanyam, R., Weisstein, F. L., and Krishnan, M. S. (2010) User Participation in Software
Development Projects, Communications of the ACM, 53(3), 137–141.
Suzuki, K., Yokoyama, M., Yoshida, S., Mochizuki, T., Yamada, T., Narumi, T., Tanikawa,
T., and Hirose, M. (2017) FaceShare: Mirroring with Pseudo-Smile Enriches Video Chat
Communications. In Proceedings of the CHI Conference on Human Factors in Computing
Systems (CHI ’17). ACM, New York. pp. 5313–5317.
Swallow, E. (2013) The U.S. Senate More Divided Than Ever Data Shows, Forbes. Downloaded
from: https://www.forbes.com/sites/ericaswallow/2013/11/17/senate-voting-relationships-data/#335036bf4031.
Swan, M. (2013) The Quantified Self: Fundamental Disruption in Big Data Science and Bio-
logical Discovery. Big Data, 1(2).
Sy, D. (2007) Adapting Usability Investigations for Development, Journal of Usability Stud-
ies, 2(3), May, 112–130.
Szafir, D. (2018) The Good, the Bad and The Biases: Five Ways Visualizations Can Mislead
and How to Fix Them, Interactions, XXV.4.
Tallyn, E., Fried, H., Gianni, R., Isard, A., and Speed, C. (2018) The Ethnobot: Gathering Ethnog-
raphies in the Age of IoT. In Proceedings of the 2018 CHI Conference on Human Factors in
Computing Systems (CHI ’18). ACM, New York, NY, Paper 604, 13 pages.
Tanenbaum, J. (2014) Design Fictional Interactions: Why HCI Should Care About Stories,
Interactions, XXI.5, 22–23.
Teixeira, C.R.G., Kurtz, G., Leuck, L.P., Tietzmann, R., Souza, D.R., Lerina, J.M.F., Manssour,
I.H., and Silveira, M.S. (2018) Humor, Support, and Criticism: A Taxonomy for Discourse
Analysis About Political Crisis on Twitter. In Proceedings of the 19th Annual International
Conference on Digital Government Research: Governance in the Data Age. ACM, New
York, NY, Article 68, 6 pages.
Thackara, J. (2001) The Design Challenge of Pervasive Computing. In Interactions May/
Jun, 47–52.
Thaler, R. H., and Sunstein, C. R. (2008) Nudge: Improving Decisions About Health, Wealth
and Happiness. Penguin.
Thimbleby, H. (1990) User Interface Design. Addison-Wesley, Harlow, Essex.
Thimbleby, H. (2015) Safer User Interfaces: A Case Study in Improving Number Entry. IEEE
Transactions on Software Engineering 41 (7), 711–729.
Tidwell, J. (2006) Designing Interfaces: Patterns for Effective Interaction Design. O’Reilly
Media Inc.
Todd, P.M., Rogers, Y., and Payne, S.J. (2011) Nudging the Trolley in the Supermarket:
How to Deliver the Right Information to Shoppers. International Journal of Mobile HCI,
3(2), 20–34.
Toepoel, V. (2016) Doing Surveys Online. SAGE Publications Ltd.
Toetenel, L., and Rienties, B. (2016) Analyzing 157 Learning Designs Using Learning Ana-
lytic Approaches as a Means to Evaluate the Impact of Pedagogical Decision-Making. British
Journal of Educational Technology, 47(5), 981–992.
Tognazzini, B. (2014) First Principles of HCI Design, Revised and Expanded. Downloaded
from: asktog.com/atc/principles-of-interaction-design/
Tomlin, W. C. (2010) Usability, www.usefulusability.com/ (accessed May 1, 2010).
Towsey, M., Zhang, L., Cottman-Fields, M., Wimmer, J., Zhang, J., and Roe, P. (2014) Visu-
alization of Long Duration Acoustic Recordings of the Environment. Procedia Computer
Science, 29, 703–712.
Tractinsky, N. (2013) Replicating and Extending Research on Relations between Visual Aes-
thetics and Usability. In ReplicaCHI 2013.
Travis, D. (2016) 247 Web Usability Guidelines. Downloaded from: https://www.userfocus.co.uk/resources/guidelines.html.
Trimble, J., Wales, R., and Gossweiler, R. (2002) NASA Position Paper for the CSCW 2002
Workshop on Public, Community and Situated Displays: MERBoard.
Trist, E.L., and Bamforth, K.W. (1951) Some Social and Psychological Consequences of the
Longwall Method of Coal Getting. Human Relations, 4, 3–38.
Tullis, T., and Albert, B. (2013) Measuring the User Experience (2nd ed.). Morgan Kaufmann.
Tullis, T. S. (1997) Screen Design. In M. Helander, T. K. Landauer and P. Prabhu (eds) Hand-
book of Human–Computer Interaction (2nd edn). Elsevier, New York, pp. 377–411.
Turkle, S. (2015) Reclaiming Conversation: The Power of Talk in a Digital Age. Penguin.
Ullmer, B., Ishii, H., and Jacob, R. J. K. (2005) Token + Constraint Systems for Tangible Inter-
action with Digital Information. TOCHI, 12(1), 81–118.
Underkoffler, J., and Ishii, H. (1998) Illuminating Light: An Optical Design Tool with a Lumi-
nous–Tangible Interface. In Proceedings of the Conference on Human Factors in Computing
Systems. ACM Press/Addison-Wesley, pp. 542–549.
Unger, R., and Chandler, C. (2012) A Project Guide to UX Design. New Riders, Berkeley, CA.
Väänänen-Vainio-Mattila, K., and Wäljas, M. (2009) Development of Evaluation Heuristics
for Web Service User Experience. CHI 2009 Spotlight on Works in Progress, Session 1,
pp. 3679–3684.
Vaish, R., Gaikwad, S.N.S., Kovacs, G., Veit, A., Krishna, R., Ibarra, I.A.,
Simoiu, C., Wilber, M., Belongie, S., Goel, S., Davis, J., and Bernstein, M.S. (2017) Crowd
Research: Open and Scalable University Laboratories. In Proceedings of the 30th Annual
ACM Symposium on User Interface Software and Technology (UIST ’17). ACM, New York,
NY, pp. 829–843.
Valdesolo, P., and DeSteno, D. (2011) Synchrony and The Social Tuning of Compassion.
Emotion, 11, 2, 262–266.
Grigoreanu, V., and Mohanna, M. (2013) Informal Cognitive Walkthroughs (ICW): Paring
Down and Pairing Up for an Agile World. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems (CHI ’13). ACM, New York, NY, pp. 3093–3096.
van Allen, P. (2018) Ways of Prototyping AI, Interactions, 47–51.
van Berkel, N., Ferreira, D., and Kostakos, V. (2017) The Experience Sampling Method on
Mobile Devices. ACM Computing Surveys, 50, 6, Article 93.
van den Broek, E.L. (2013) Ubiquitous Emotion-Aware Computing. Personal and Ubiquitous
Computing, 17, 53–67.
van den Hoven, J., Vermaas, P.E., and van de Poel, I. (eds.) (2015) Handbook of
Ethics, Values, and Technological Design. Springer.
van der Linden, J., Schoonderwaldt, E., Bird, J., and Johnson, R. (2011) MusicJacket–Com-
bining Motion Capture and Vibrotactile Feedback to Teach Violin Bowing. IEEE Transac-
tions on Instrumentation and Measurement, 60(1), pp. 104–113.
van Rens, L. S. (1997) Usability Problem Classifier. Unpublished master’s thesis, Virginia
Polytechnic Institute and State University, Blacksburg, VA.
Veen, J. (2001) The Art and Science of Web Design. New Riders Press, Indianapolis, IN.
Verma, H., Alavi, H. S., and Lalanne, D. (2017) Studying Space Use: Bringing HCI Tools to
Architectural Projects. In Proceedings of the CHI Conference on Human Factors in Comput-
ing Systems (CHI 2017). ACM, Press, New York, pp. 3856–3866.
Veronikha, E. (2016) Sentiment Analysis on Twitter about the Use of City Public Transpor-
tation Using Support Vector Machine Method. International Journal on Information and
Communication Technology (IJoICT), 2, 57.
Vertegaal, R. (2008) A Fitts’ Law Comparison of Eye Tracking and Manual Input in the
Selection of Visual Targets, ICMI 2008, October 20–22, Chania, Crete, Greece, pp. 241–248.
Villar, N., Scott, J., Hodges, S., Hammil, K., and Miller, C. (2012) .NET Gadgeteer: A Plat-
form for Custom Devices. Pervasive, 216–233.
VisiStat (2010) Case Study: The MountainWinery.com, visistat.com/case-study-mountain-winery.php (accessed September 2010).
von Neumann, J., and Morgenstern, O. (1944) Theory of Games and Economic Behavior.
Princeton University Press.
Wallace, J., McCarthy, J., Wright, P. C., and Olivier, P. (2013) Making Design Probes Work.
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI
’13). ACM, New York, NY, pp. 3441–3450.
Wang, P., Sibi, S., Mok, B., and Ju, W. (2017) Marionette: Enabling On-Road Wizard-of-Oz
Autonomous Driving Studies. In Proceedings of Human-Robot Interaction, 234–243.
Wang, Y., Song, G., Qiao, G., Zhang, Y., Zhang, J., and Wang, W. (2013) Wheeled Robot
Control Based on Gesture Recognition Using the Kinect Sensor, In Proceedings of the IEEE
International Conference on Robotics and Biomimetics (ROBIO). Shenzhen, China, Decem-
ber 2013, pp. 378–383.
Warrick, E., Preece, J., Kibutu, J., and Sihanya, B. (2016) Social Media as an Indigenized
Information World for Environmental Stewardship. In Proceedings of the First African Con-
ference on Human Computer Interaction (AfriCHI), pp. 126–137.
Wasserman, S., and Faust, K. (1994) Social Network Analysis: Methods and Applications.
Cambridge University Press, Cambridge, UK.
Waterson, P. (2014) Health Information Technology and Sociotechnical Systems: A Report
on Recent Developments within the UK National Health Service (NHS), Applied Ergonom-
ics, 45, 150–161.
Watson, N., and Naish, H. (2018) Can Computers Understand Human Emotions? A Sentiment
Analysis of Economic News. Downloaded from: http://blogs.ucl.ac.uk/ippr/can-computers-understand-human-emotions-a-sentiment-analysis-of-economic-news/.
Webster, D., and Celik, O. (2014) Systematic Review of Kinect Applications in Elderly Care
and Stroke Rehabilitation, Journal of NeuroEngineering and Rehabilitation, 11:108.
Weiser, M. (1991) The Computer for the 21st Century, Scientific American, 94–104.
Weller, D. (2004) The Effects of Contrast and Density on Visual Web Search, Usability
News, 6(2): http://psychology.wichita.edu/surl/usabilitynews/62/density.htm (retrieved July
11, 2005).
Wellman, B., and Berkowitz, S.D. (1988) Social Structures: A Network Approach. Cambridge
University Press, Cambridge, UK.
Wharton, C., Rieman, J., Lewis, C., and Polson, P. (1994) The Cognitive Walkthrough
Method: A Practitioner’s Guide. In J. Nielsen and R. L. Mack (eds). Usability Inspection
Methods. John Wiley & Sons Inc., New York.
Whitenton, K. (2018) The Two UX Gulfs: Evaluation and Execution. Downloaded from:
https://www.nngroup.com/articles/two-ux-gulfs-evaluation-execution/.
Whitenton, K., and Gibbons, S. (2018) Case Study: Iterative Design and Prototype Testing of the NN/g Homepage. Downloaded from: https://www.nngroup.com/articles/case-study-iterative-design-prototyping
Whiteside, J., Bennett, J., and Holtzblatt, K. (1988) Usability Engineering: Our Experience and Evolution. In M. Helander (ed.) Handbook of Human–Computer Interaction. Elsevier Science Publishers, Amsterdam, pp. 791–817.
Whitney, H. (2012) Data Insights: New Ways to Visualize and Make Sense of Data. Morgan
Kaufmann Publ., San Francisco, CA.
Williamson, V. (2016) Can Crowdsourcing Be Ethical? Downloaded from: https://www.brookings.edu/blog/techtank/2016/02/03/can-crowdsourcing-be-ethical-2/
Wilson, C., Hargreaves, T., and Hauxwell-Baldwin, R. (2015) Smart Homes and their Users:
A Systematic Analysis and Key Challenges, Personal and Ubiquitous Computing, 19, 463–476.
Winograd, T. (1997) From Computing Machinery to Interaction Design. In P. Denning and
R. Metcalfe (eds). Beyond Calculation: The Next Fifty Years of Computing. Springer-Verlag,
Amsterdam, pp. 149–162.
Winschiers-Theophilus, H., and Bidwell, N.J. (2013) Toward an Afro-Centric Indigenous
HCI Paradigm. International Journal of Human-Computer Interaction, 29: 243–255.
Winschiers-Theophilus, H., Bidwell, N. J., and Blake, E. (2012) Community Consensus:
Design Beyond Participation, Design Issues, 28(3) Summer 2012, 89–100.
Wixon, D., and Wilson, C. (1997) The Usability Engineering Framework for Product Design and Evaluation. In M. G. Helander, T. K. Landauer and P. V. Prabhu (eds) Handbook of Human–Computer Interaction. Elsevier, Amsterdam, pp. 653–688.
Wong, R.Y., Van Wyk, E., and Pierce, J. (2017) Real-Fictional Entanglements: Using Science
Fiction and Design Fiction to Interrogate Sensing Technologies. In Proceedings of DIS 2017.
ACM, New York, NY, pp. 567–579.
Woolrych, A., and Cockton, G. (2001) Why and When Five Test Users Aren’t Enough. In Pro-
ceedings of IHM-HCI 2001 Conference, Vol. 2. Cépadèus Éditions, Toulouse, pp. 105–108.
Xambó, A., Hornecker, E., Marshall, P., Jordà, S., Dobbyn, C., and Laney, R. (2013) Let’s Jam
The Reactable: Peer Learning During Musical Improvisation with a Tabletop Tangible Inter-
face. ACM Transactions on Computer-Human Interaction (TOCHI), 20(6), article no. 36.
Yeratziotis, A., and Zaphiris, P. (2018) A Heuristic Evaluation for Deaf Web User Experience
(HEADWUX). International Journal of Human–Computer Interaction, 34:3, 195–217.
Yin, R. K. (2013) Case Study Research. SAGE Publications.
Yohanan, S., and MacLean, K. E. (2008) The Haptic Creature Project: Social Human-Robot
Interaction Through Affective Touch. In Proceedings of the AISB 2008 Symposium on the
Reign of Catz and Dogs: The Second AISB Symposium on the Role of Virtual Creatures in a
Computerized Society. 1, 7–11.
Yu, L., Kittur, A., and Kraut, R. E. (2016) Encouraging “Outside-the-box” Thinking in Crowd
Innovation Through Identifying Domains of Expertise. In Proceedings of CSCW ’16. ACM,
New York, NY, pp. 1214–1222.
Yuill, N., and Rogers, Y. (2012) Mechanisms for Collaboration: A Design and Evaluation
Framework for Multi-User Interfaces. ACM Transactions on Computer-Human Interaction,
19(1), Article 1, 25 pages.
Zeiliger, R., Reggers, T., Baldewyns, L., and Jans, V. (1997) Facilitating Web Navigation:
Integrated Tools for Active and Cooperative Learners. In Proceedings of the 5th International
Conference on Computers in Education, ICCE ’97, Kuching, Sarawak, Malaysia.
Zhai, W., and Thill, J-C. (2017) Social Media Discourse in Disaster Situations: A Study of the
Deadly. In Proceedings of EM-GIS’ 17 Proceedings of the 3rd ACM SIGSPATIAL Workshop
on Emergency Management.
Zhang, T., and Chan, A.H.S. (2011) The Association Between Driving Anger and Driving
Outcomes: A Meta-Analysis of Evidence from the Past Twenty Years. Accident Analysis and
Prevention, 90, 50–62.
Zhao, S., Ramos, J., Tao, J., Jiang, Z., Li, S., Wu, Z., Pan, G., and Dey, A.K. (2016) Discov-
ering Different Kinds of Smartphone Users Through Their Application Usage Behaviors. In
Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous
Computing (UbiComp ’16). ACM, New York, NY, pp. 498–509.
Zuckerman, O., and Resnick, M. (2005) Extending Tangible Interfaces for Education: Digi-
tal Montessori-Inspired Manipulatives. In Proceedings of Conference on Human Factors in
Computing Systems. ACM, New York, NY, pp. 859–868.
Zufferey, G., Jermann, P., Lucchi, A., and Dillenbourg, P. (2009) TinkerSheets: Using Paper
Forms to Control and Visualize Tangible Simulations. In Proceedings of TEI09. ACM, New
York, NY, pp. 377–384.
3D printing, 423
404 error, 177
A/B testing, 62, 63, 574–575
academic disciplines, interaction design and, 10
accessibility
design and, 17–18
WCAG (Web Content Accessibility Guidelines), 446
accountability, AI and, 91
activities
404 error, 177
alternative designs, 59
anthropomorphism, 188
appliances, 223
assumptions, 72–73
bank security, 116–117
behavioral changes, 183
claims, 72–73
cognition, information structure and, 105
cognitive walk-through, 564–565
computer game investigation case study, 508
contextual inquiry, 400
conversation breakdown, 142
critical incident analysis, 328
dashboard, 373–374
data analysis, 317–318
data gathering for readers, 274–275
data recording, 268
data visualization, 369
design, 31–32
design patterns, 486–488
design thinking, 456–457
double diamond design, 39–41
ethnographic data at Royal Highland
Show, 512–513
evaluation, 497, 498, 499
face-to-face conversations, 139
Fitbit Chart evaluation, 502
Google Analytics, 570–571
Google Trends, 352
GUIs (graphical user interfaces), 201–202
Happyornot terminal, 167
heuristic comparison, 560–561
icons, 207–208
in-depth
conceptual models, 95
interfaces, 254–255
mental models, 132–133
timepiece, 65
instructing interaction type, 82–83
interaction design tools, 492
interface metaphors, 78, 441–442
iPad usability testing, 530, 531
iPods, 14–15
Lean UX, 480
memory, 112
mental models, 123–124
menus, 204–205
mobile interfaces, 221
multimedia interfaces, 211–212
navigation app, 443–444
observation for data gathering, 287–288
in the field, 289–290
think-aloud technique, 296–297
online likes, 377
Painpad field study, 541
personas, 406–407
Polar A370 evaluation, 502
prototype scenarios, 448–449
questionnaires, 283, 285–286
requirements, 394–395
scenarios, 411–412
shoplifting detection, 378–379
social interaction, 138
storyboarding, 428, 450–451
student activities, 365–366
team makeup, 12
telepresence, 148–149
think-aloud protocol of analysis, 326–327
usability, 63–64
user experience aspects, 23–24
user identification, 56–57
user involvement, 49
user research, 476
website design, 219
activity tracker observation
data scraping, 299–300
diaries, 298–299
interaction logs, 299–300
web analytics, 299–300
adaptive interfaces, 194
advertising, emotional interaction and, 168
Affdex emotional AI, 179
classification of emotions, 179–180
Index
affect, 169
Affectiva emotional AI, 181
affective computing, 166, 179–182
affective interfaces, 194
affinity diagrams, 323
affordance, 30
agent-based software, 175
AgileUX, 473–475
aligning work practices, 477–480
BDUF (big design up front), 477–478
cycle, 475, 478–480
documentation, 481–484
iteration zero, 475
sprint, 475
timebox, 475
user research, 473–477
AI (artificial intelligence), 90–91
accountability and, 91
affective computing, 166
control, 93–94
emotional AI, 166, 179–182
affective expression in speech, 179
biosensors, 179
body movement sensors, 179
cameras, 179
facial expressions, 179
gesture sensors, 179
transparency and, 91
visions, 91
All the Feels, 181
alternative courses, use cases, 415
alternative designs, 50
cross-fertilization, 58
generating, 58–59
prototyping, 62
selecting, 59–64
usability engineering, 62
alternative hypothesis, 533
Alzheimer’s disease, 117
Amazon Echo, 7
ambient interfaces, 194
analytic frameworks, 329–341
content analysis, 329, 332–333
conversation analysis, 329, 330–331
voice assistants, 330
discourse analysis, 329, 331–332
grounded theory, 329, 334–338
interaction analysis, 329, 333–334
systems-based, 329, 338–341
analytical evaluation, 515
analytics, 506, 514
learning analytics, 568
visual analytics, 568
web analytics, 567–568
ClickTale, 572
Clicky, 572
Crazy Egg, 572
Google Analytics, 568–570
KISSmetrics, 572
Moz Analytics, 572
off-site, 568
on-site, 568
TruSocialMetrics, 572
analyzing data. See data analysis
anatomy of prototypes, 430
annotating, external cognition and, 130–131
annoying interfaces, 174–179
404 error, 177
Anna from IKEA, 176–177
Clippy, 175–176
anthropomorphism, 187–190
anti-aliasing, 206
The App Generation (Gardner and Davis), 122
Apple, VoiceOver, 17
appliances, 222–223
applications, 30–31
approaches to interaction design, 43
AR (augmented reality), 152
assumptions, 71–72
ATM (air traffic management), 58
atomic requirements shell, 388
attention
design implications, 108
multitasking, 105–109
phone use while driving, 107–108
audio, data recording and, 267
augmented reality (AR) interfaces
decision making, 241–242
history, 243
HUDs (head-up displays), 242
mapping, 242–243
mixed reality, 241
research and design, 244
virtual content, 243
Autodesk, SketchBook, 479
automatic emotions, 169
averages
mean, 312
median, 312
mode, 312
outliers, 314
quantitative analysis, 311–312
Aware Home, 503
axial coding, grounded theory, 334, 337–338
Axure RP, 491
Balsamiq, 491
Barney the dinosaur, 187
BDUF (big design up front), 477–478
behavioral change, 182–186
environmental concerns, 185
Fitbit, 184
HAPIfork, 183–184
social norms, 185
behavioral level of the brain, 171
bias, evaluation and, 515
big data, 89, 350. See also data at scale
biometric data, 181
brain-computer interfaces (BCI), 250
brain levels, 171
behavioral level, 171
reflective level, 171
visceral level, 171
breadcrumb navigation, 217
Break-Time Barometer, 157
BSI (British Standards Institute), 500
CAD, command interfaces, 195–196
cameras, 117–118
Caplin Systems, 405–406
capturing requirements, 386
Cardiac Tutor, 209
cascading menus, 203
case studies, 302
deaf telephony, 455–456
evaluation
computer game investigation, 507–509
ethnographic data at Royal Highland
Show, 510–513
results, 514–515
field study Painpad, 538–540
prototyping, paper, 432
UX design, 483–484
categorizing data, 324–327
grounded theories, 334
CHI (Computer-Human Interaction) conference, 262
chunks of information, 114–115
The Circle (Eggers), 411
citizen science, 354–356, 514
claims, 71–72
ClickTale, 572
Clicky, 572
Clippy, 175–176
co-design, 46
co-presence, social interaction and
awareness, 150–151
physical coordination, 150
shareable interfaces, 152–157
Cogapp, 13
cognition
attention, 102, 103
clear goals, 103
design implications, 108
information presentation, 103
multitasking, 105–109
context, 102
decision-making, 103
distributed cognition, 127–129
experiential, 102
external cognition
annotating, 130–131
cognitive tracing, 130–131
computational offloading, 130
memory load reduction, externalization, 129–130
fast thinking, 102
information structure, 104
learning, 103, 119
decision-making, 121–122
interactive apps, 120
interactive books, 120
listening, 120–121
natural-language interfaces, 120
planning, 121–122
problem-solving, 121–122
reading, 120–121
reasoning, 121–122
speaking, 120–121
speech recognition systems, 120
tactile interfaces, 120
listening, 103
memory, 103, 111
memory load, 113–118
personal information management, 112–113
perception, 102, 109
design implications, 109
proprioceptors, 109
planning, 103
problem-solving, 103
processes, 102–103
reading, 103
reasoning, 103
reflective, 102
slow thinking, 102
speaking, 103
cognitive frameworks
distributed cognition, 127–129
embodied interaction, 131–132
external cognition
annotating, 130–131
cognitive tracing, 130–131
computational offloading, 130
memory load reduction, externalization, 129–130
gulfs of evaluation, 125–126
gulfs of execution, 125–126
information processing, 126–127
reaction time, 126
mental models, 123–125
cognitive impairment, 17
cognitive tracing, external cognition and, 130–131
cognitive walk-throughs, 506, 560–566
SSCW (streamlined cognitive walk-through), 566
collapsible menus, 204
collecting data. See data collection
command interfaces, 194, 195–196
research and design, 196
TextSL, 196
virtual words, 196
commercial projects, product owner, 44
common ground, design space and, 74
compromises in prototyping, 429–434
computational offloading, external cognition and, 130
computer game investigation case study, 507–509
conceptual design, conceptual models, 434
expanding, 444–445
interaction types and, 442
interface metaphors and, 439–442
interface types and, 442–443
conceptual models, 434
analogies, 74
expanding, 444–445
interaction types and, 442
interface metaphors, 439–442
interface types and, 442–443
mappings, 75
metaphors, 74
relationships between concepts, 74
conceptualizing interaction, 71–72
design space
common ground, 74
open-mindedness, 74
orientation, 74
conclusions versus data, 264–265
concrete design, 444–445
accessibility, 445
inclusiveness, 445
conscious emotions, 169
consistency, 29
console interfaces, 247
constraints, 28–29
construction, 421–422, 457
physical computing, 458–462
SDKs (software development kits), 463–464
content analysis, 329, 332–333
contextual design, 323, 435
contextual inquiry, 400
apprenticeship model, 400
artifact model, 401
context principle, 400
cool concepts, 401
collaboration model, 401
day-in-the-life model, 401
relationship model, 401
decision point model, 401
flow model, 401
focus, 401
interpretation, 400
joy of life concepts
accomplish, 401
connection, 401
identity, 401
sensation, 401
joy of use concepts
direct in action, 401
hassle factor, 401
learning delta, 401
partnership principle, 400
physical model, 401
sequence model, 401
Wall Walk, 401
contextual menus, 204
controlled experiments, 515
convenience sampling, data gathering, 261
conversation analysis, 329, 330–331
voice assistants, 330–331
conversational mechanisms, 140–141
conversing interaction type, 81, 83–85
Cooper, Alan, 2, 222–223
Cooper, Nielsen Norman Group, 13
cooperative design, 46
copyrights, 61
counterbalancing, experiments, 535
Crazy Egg, 572
critical incident analysis, 320, 327–328
cross-cultural design, 446
cross-fertilization, alternative design and, 58
CrowdFlower, 359
crowdsourcing, 46, 353–358, 515
citizen science, 514
consent form, 263
evaluation, 513–514
cultural differences among users, 16
cultural probes, 398
customer journey maps, 451–452
CuteCircuit, 245
D3 (Data-Driven Documents), 371
dark patterns, 25
data, 259–260
versus conclusions, 264–265
versus information, 264–265
numbers, abuse of, 309
personal, collecting, 353
qualitative, 308
quantitative, 308
second source, 352
socially acceptable use, 378
sources, combining, 364–366
data analysis, 307–308, 309, 351
frameworks, 329–341
content analysis, 329, 332–333
conversation analysis, 329, 330–331
discourse analysis, 329, 331–332
grounded theory, 329, 334–338
interaction analysis, 329, 333–334
systems-based, 329, 338–341
interviews, 309–311
observation, 311
outliers, 314
presentation, 342–344
storytelling, 344–345
structured notations, 342–344
summarizing findings, 345–346
questionnaires, 311
question design and, 313–314
sentiment analysis, 358–359
SNA (social network analysis), 359–363
spreadsheets, 314
thematic analysis, 322–324
tools, 341
videos, 321–322
data at scale, 349–350
data cleansing, 308
data collection
crowdsourcing, 353–358
field studies, 537
personal data, 353
scraping, 352
second source data, 352
sensing data, 356–357
sentiment analysis, 358–359
SNA (social network analysis), 359–363
sources, combining, 364–366
tracking reactions, 362–363
data ethics canvas, 379
data ethics principles, 379
data gathering, 260
case studies, 302
focus groups, 268
goal setting, 260
grounded theory, 334
interviews
alternative types, 277
Cisco WebEx, 277
experience enrichment, 277–278
focus groups, 268, 271–272
question development, 272–276
retrospective, 277
running, 276
semi-structured, 269–271
Skype, 277
structured, 269
unstructured, 268–269
Zoom, 277
observation, 287
activity tracking, 298–300
controlled environments, 295–298
in the field, 288–295
participants
behaviors, 273
incentives, 264
informed consent, 262–264
number of, 261–262
recording, 267
relationships with, 262–264
study population, 261
variations in types, 270–271
pilot studies, 264
questionnaires, 278–279
administering, 283–285
design, 279
online versus paper and pencil, 286
question format, 280–283
response format, 280–283
structure, 279
requirements activity, 396–398
sampling, 261
convenience sampling, 261
saturation sampling, 261
snowball sampling, 261
technique choice, 300–303
technique combining, 300–303
triangulation, 264–265
data interpretation, 307–308
data presentation, 307–308
data recording
audio, 267
notes, 266–267
photos
and audio, 267
and notes, 266–267
video, 267
data requirements, 392
data visualization, 366–375
D3 (Data-Driven Documents), 371
dashboard, 370, 371–372
sensing data, 374–375
spectrograms, 368–369
visual literacy, 366
deaf telephony, 455–456
deceptive technology, 186
decision-making
design implications, 122
heuristics, 121
learning and, 121–122
theories, 121
Dedoose, 341
deductive approach to qualitative data, 321
design, 421–422
accessibility and, 17–18
affordance, 30
appliances, 222–223
applications, 30–31
augmented reality (AR) interfaces, 244
brain-computer interfaces (BCI), 250
command interfaces, 196
consistency, 29
constraints, 28–29
copying, 61
double diamond, 38–39
examples
remote control, 5–7
voice-mail system, 3–4
feedback, 27–28
gesture-based systems, 231
graphical interfaces, 202
icons, 208–209
inclusiveness and, 17–18
menus, 205
mobile interfaces, 221–222
multimedia interfaces, 211–212
multimodal interfaces, 233, 235
participatory, 453–454
persuasion and, 24–25
practices, interaction design and, 10
questionnaires, analysis and, 313–314
RITW (research in the wild), 54–55
touchscreens, 228
users and, 9
UX design, 471
visibility, 26–27
VR interfaces, 215–216
VUI (voice user interface), 226
wearables, 247
website design, 218
what to design, 7–9
design concept, 75–76
Design Council (United Kingdom), 38
design fiction, 411
design maps, 451–452
design patterns, 484
anti-patterns, 488
dark pattern, 488
pattern language, 484–486
Swiss Army Knife Navigation, 485
design probes, 398
design space
common ground, 74
open-mindedness, 74
orientation, 74
design thinking, 456–457
Designer’s Model, 93
desktop, graphical, 76
dialog boxes, 199
diaries, requirements activities, 397
DiCoT (distributed cognition of teamwork),
92, 339–341
different-participant design, 534–535
digital forgetting, 117
digital ink, 226–227
direct manipulation framework, 85–86
directional edges, social networks, 360
discourse analysis, 329, 331–332
DisplayR, 359
distributed cognition, 92, 127–129
documentation, AgileUX, 481–484
documents, metadata, adding, 113
double diamond of design, 38–39
evolutionary production, 51
drones, 248–249
DSDM (Dynamic Systems Development Method), 64
dynamic signs, 563
dyslexia, 120
Echo, 184–185
ecological validity, 515
economic principle of prototyping, 430
edges, social networks, 360
effectiveness, usability and, 19
efficiency, usability and, 20
email, 140
embodied interaction, 131–132
emojis, 172–173
emoticons, 138
emotional AI, 166, 179–182
Affdex, 179
Affectiva, 181
affective expression in speech, 179
biometric data, 181
biosensors, 179
body movement sensors, 179
cameras, 179
facial expressions, 179, 180
gesture sensors, 179
emotional design, 166
expressive interfaces and, 172–174
emotional interaction, 165–166
advertising, 168
affective computing, 166, 179–182
annoying interfaces, 174–179
anthropomorphism, 187–190
behavioral change, 182–186
emotional AI, 166, 179–182
emotional design, 166, 172–174
expressions, 169
expressive interfaces, 172–174
persuasive technologies, 182–186
emotional skills, 169
emotions
affect, 169
automatic, 169
conscious emotions, 169
driving behavior, 169
stage fright, 169
UX and, 166–172
empirical measurement, HCI and, 49
engineering versus prototyping, 434
Engduino, 458
environmental concerns, 185
environmental requirements, 392
ERSs (error reporting systems), 47
ESM (experience sampling method), 299
essential use cases, 415
ethical design concerns, 375–383
ethnographies, 292–294
online, 294–295
requirements activities, 398
Royal Highland Show case study, 510–513
evaluation, 50, 496
analytical evaluation, 515
analytics, 506, 514, 549
bias, 515, 519
case studies
computer game investigation, 507–509
ethnographic data at Royal Highland
Show, 510–513
results, 514–515
cognitive walk-through, 506
controlled experiment, 515
controlled settings directly involving users,
500, 501–504
usability testing, 501–504
crowdsourcing, 513–514, 515
ecological validity, 515, 518
expert review or crit, 515
field study, 515
Fitts’ law, 506, 550, 576
formative, 499, 515
heuristic, 505–506, 515, 549
aesthetic and minimalist, 551
consistency and standards, 551
design guidelines and, 559–561
documentation, 551
error messages, 551
error prevention, 551
flexibility and efficiency, 551
golden rules, 559–561
performing, 554–559
principles and, 559–561
recognition versus recall, 551
system/real world match, 550
system status visibility, 550
user control and freedom, 550
websites, 553–554
in-the-wild study, 515
informed consent, 515–517
learning analytics, 506
living labs, 503–504, 515
location, 498–499
methods, 496
combining, 506–507
influences, 517–519
natural settings involving users, 500, 504–505
opportunistic, 507
participant information, 516–517
participants, 515
predictive evaluation, 515
predictive modeling, 549–550
reasons, 496–497
reliability, 515, 518
scope, 515, 519
settings not involving users, 500, 505–506
subject/topic, 497–498
summative, 500, 515
timing, 499–500
usability lab, 515
usability testing, 501–504
user studies, 515
users, 515
validity, 515, 518
walk-throughs, 549
cognitive, 561–566
pluralistic, 566–567
web analytics, 506
evaluation studies
experiments, 523
experimental design, 534–535
hypotheses testing, 533–534
participants needed, 543
statistics, 536
field studies, 523, 536–537
in-the-wild studies, 538–541
participants needed, 543
usability testing, 523
iPad case study, 528–533
labs and equipment, 525–528
methods, 524–525
participants needed, 542
tasks, 524–525
users, 524–525
evolutionary production, 51
evolutionary prototyping, 433
Excel, 314
exoskeletons, 232
clothing, 245
expanding menus, 203–204
expectation management, 44
experience maps, 453
experiential cognition, 102
experimental design
counterbalancing, 535
different-participant design, 534–535
matched-participant, 535
order effect, 535
pair-wise, 535
same-participant design, 535
within-subjects design, 535
experiments, 503, 523
experimental design
counterbalancing, 535
different-participant design, 534–535
matched-participant, 535
order effect, 535
pair-wise, 535
same-participant design, 535
within-subjects design, 535
hypotheses testing
alternative hypothesis, 533
null hypothesis, 533
one-tailed hypothesis, 533
two-tailed hypothesis, 533
variables, 533
statistics, 536
t-tests, 536
expert review or crit, 515
exploring interaction type, 81, 86–87
expressions, emotions, 169
expressive interfaces, 172–174
animated icons, 173
emoticons, 172–173
Nest thermostat, 174
sonifications, 173
vibrotactile feedback, 173
external cognition
annotating, 130–131
cognitive tracing, 130–131
computational offloading, 130
memory load reduction, externalization, 129–130
externalization, memory load reduction,
129–130
eXtreme Programming, 64, 65
face-to-face conversations, 139–142
conversation breakdown, 142
conversational mechanisms, 140
rules for, 140–141
Facebook, 145–146
facial coding, 179
facial expressions
automated facial coding, 179
biosensors, 179
cameras, 179
false alarms, 554–555
fast thinking, 102
FATE (fairness, accountability, transparency, explainability), 379, 380–382
feedback, 27–28
field observation, 288
conducting, 291
degree of participation
participant observer, 289
passive observer, 289
ethnographies, 292–295
frameworks, 289
planning, 291
field studies, 515, 523, 536–537
in-the-wild studies, 538–541
Painpad, 538–540
filtering, memory and, 111
financial transactions, 113–114
MFA (multifactor authentication), 116
Fitbit, 184
fitness trackers, 245
flat menus, 203
flow, 23
focus groups, 268, 271–272
formative evaluations, 499, 515
frameworks, 88–89, 92–93
Designer’s Model, 93
System Image, 93
User’s Model, 93
functional requirements, 390, 392
fundamental prototyping, 429
futuristic scenarios, 410
games, idle games, 336
gathering data. See data gathering
GBIF (Global Biodiversity Information Facility), 355
GDPR (General Data Protection Regulation), 379, 516
gesture-based systems, 229–231
GitHub, 489–490
Google Analytics, 568–571
air quality monitoring, 573–574
Google Design Sprints, 51–52, 52–54
Google Glass, 245–246
Google Lens, 88
Google Now card, 79
Google Sheets, 314
Google Trends, 352
Google Ventures, 52–53
Gorman, Mary, 390
GoToMeeting, 267
graphical desktop, 76
graphical interfaces. See GUIs (graphical user
interfaces)
grounded theory, 329, 334–338
axial coding, 334, 337–338
categories, 334
comparisons, 335
data gathering and, 334
development, 334
diagrams, 335
example, 336–338
idle games and, 336
memos, 335
open coding, 334, 337–338
questioning and, 335
selective coding, 334, 337–338
thematic analysis, 335–336
word, phrase, sentence analysis, 335
GUIs (graphical user interfaces), 82, 89, 194
icons, 197, 205–208
research and design, 208–209
menus, 197
cascading menus, 203
collapsible, 204
contextual menus, 204
expanding menus, 203–204
flat menus, 203
mega menus, 203
research and design, 205
pointing devices, 197
web forms, 200–201
WIMP, 197–198
windows, 197, 198
activation time, 202
dialog boxes, 199
highlights, 199
management, 202
research and design, 202
gulfs of evaluation, 125–126
gulfs of execution, 125–126
hands-on simulations, 209–210
HAPIfork, 183–184
haptic interfaces
exoskeletons, 232
MusicJacket, 232
ultrahaptics, 231
vibrotactile feedback, 231
HBI (human-building interaction), 252
HCI (human-computer interaction)
empirical measurement, 48, 49
focus on users and tasks, 48
frameworks, 92–93
versus ID (interaction design), 10–11
interfaces videos, 195
iterative design, 48, 49
lifecycle models and, 51
paradigm shift, 89
sustainable HCI, 185
visions, 89
hedonic heuristic, 559
heuristic evaluation, 515
briefing sessions, 554
debriefing sessions, 554
design guidelines and, 559–561
evaluation period, 554
golden rules, 559–561
hedonic, 559
principles and, 559–561
websites, 553–554
heuristics, 505–506
decision making, 121
Heuristics (Nielsen), 559
high-fidelity prototypes, 428–429, 431
highlights, graphical interfaces, 199
HITs (human intelligence tasks), 513–514
horizontal prototyping, 433
human mirroring, telepresence robots, 149
hypotheses testing
alternative hypothesis, 533
null hypothesis, 533
one-tailed hypothesis, 533
two-tailed hypothesis, 533
variables
dependent, 533
independent, 533
icons, 205–208
research and design, 208–209
IDE (integrated development environment), 458
IDEO, 13
TechBox, 60
idle games, 336
in situ studies, RITW (research in the wild), 54–55
in-the-wild studies, 515, 538–541
iNaturalist.org, 355–356
incentives for data gathering participants, 264
incidental learning, 119
inclusiveness in design, 17–18
index cards, prototyping and, 427
inductive approach to qualitative data, 321
information
versus data, 264–265
structuring, 104
web pages, 110
information processing, 126–127
informed consent form, 515, 516–517
informed consent from data gathering participants, 262–264
inspections, heuristic evaluation, 550–561
instructing interaction type, 81, 82–83
intelligent interfaces, 194
intentional learning, 119
interaction analysis, 329, 333–334
interaction design
academic disciplines, 10
alternative designs, 50
approaches, 43
consultancies, 13
design practices, 10
double diamond, 38–39
evaluation, 50
versus HCI, 10–11
interdisciplinary fields, 10
lenses, 9–10
methods, 9–10
overview, 1–2
philosophies, 9–10
prototyping, 50
requirements discovery, 50
team makeup, 11–12
tools, 491–492
user experiences, 9
interaction types, 81–82, 442
conversing, 81, 83–85
exploring, 81, 86–87
instructing, 81, 82–83
manipulating, 81, 85–86
responding, 81, 87–88
interactive apps, learning and, 120
interactive books, learning and, 120
interactive products, 1–2, 36
marble answering machine, 4
what to design, 7–9
interactive tabletops, 235–236
interdisciplinary fields, interaction design and, 10
interface metaphors, 78–80, 439–442
interfaces
adaptive, 194
affective, 194
ambient, 194
annoying, 174–179
appliances, 222–223
augmented reality (AR)
decision making, 241–242
history, 243
HUDs (head-up displays), 242
mapping, 242–243
mixed reality, 241
research and design, 244
virtual content, 243
brain-computer interfaces (BCI), 250
command, 194, 195–196
research and design, 196
console, 247
drones, 248–249
gesture-based systems, 229–231
graphical (GUIs), 194, 197–198
icons, 197, 205–208
menus, 197, 203–205
pointing devices, 197
research and design, 202
web forms, 200–201
WIMP, 197–198
windows, 198–199
haptic
exoskeletons, 232
ultrahaptics, 231
vibrotactile feedback, 231
HCI videos, 195
intelligent, 194
introduction, 193–194
invisible, 194
mobile, 194
mobile devices, 219–222
research and design, 221–222
multimedia, 194, 209–212
development reasons, 210–211
hands-on simulations, 209–210
research and design, 211–212
multimodal, 194, 232–233
research and design, 233, 235
natural, 194
NUI (natural user interface), 252–253
pen-based devices, 226–228
robots, 247–248
selecting, 254
shareable, 235, 443
interactive tabletops, 235–236
research and design, 237
shared editing tools, 237
SmartBoards, 235, 236
smart, 194, 251–252
speech, 194
tangible, 194, 238, 443
research and design, 241
Resnick, Mitchel, 238
RFID tags, 238
tangible computing, 238–241
Tinkersheets, 238
VoxBox, 240
Zuckerman, Oren, 238
touchless, 194
touchscreens, 228–229
virtual reality, 443
VR (virtual reality), 212, 213, 214, 215–216
VUI (voice user interface)
barge in, 225
dialogues, 225
machine learning algorithms, 224
phone apps, 225
research and design, 226
routing, 224
speech-to-text systems, 224
voice assistants, 225
wearables, 245–247
website design, 216
breadcrumb navigation, 217
languages, 217
mobile devices, 217
research and design, 218
interviews for data gathering
alternative types, 277
analysis, 311
Cisco WebEx, 277
experience enrichment, 277–278
focus groups, 271–272
questions
checkboxes, 275–276
closed, 272
guidelines, 273
open, 272
requirements activities, 397
retrospective interviews, 277
running, 276
semi-structured, 269–271
Skype, 277
structured, 269
unstructured interviews, 268–269
Zoom, 277
invisible interfaces, 194
IoT (Internet of Things), 8, 89
iPad usability testing, 528–533
iPods, 14–15
IRBs (institutional review boards), 516
ISO (International Organization for Standardization), 500
iterative design, HCI and, 49
Jira, 388
Kanban, 64
Kinect (Microsoft), 230
SDK, 463
KineticDress, 245
KISSmetrics, 572
Knowledge Navigator (Apple), 90
lab-in-a-box, 526
lab-in-a-suitcase, 526
labs, usability testing and, 525–528
Lean UX, 476–477, 480
learnability, usability and, 20–21
learning
decision-making, 121–122
design implications, 119–120
dyslexia, 120
incidental learning, 119
intentional learning, 119
interactive apps, 120
interactive books, 120
listening, 120–121
natural-language interfaces, 120
planning, 121–122
problem-solving, 121–122
reading, 120–121
reasoning, 121–122
speaking, 120–121
speech recognition systems, 120
tactile interfaces, 120
learning analytics, 506
lifecycle, evaluation, 499–500
lifecycle model, 51–52
LilyPad, 458
listening
design implications, 121
learning and, 120–121
permanence, 120
living labs, 503–504, 515
low-fidelity prototypes, 426, 431
card-based, 449–457
index cards, 427
Rainbow system, 426
sketching, 427
storyboarding, 426, 447–449
Wizard of Oz, 428
Mac icons, 172–173
machine learning, VUI (voice user interface), 224
Madgex, 13
maker movement, 461–462
Manifesto for Agile Software Development, 64–65
manipulating interaction type, 81, 85–86
manners, voice assistants and, 178–179
marble answering machine, 4
matched-participant design, 535
Mechanical Turk, 513
medicine, gesture-based systems, 230
mega menus, 203
memorability, usability and, 21–22
memory, 111
Alzheimer’s disease, 117
amnesia, 117–118
design implications, 118
digital forgetting, 117
filtering, 111
load reduction, externalization, 129–130
long-term, triggering, 118
memory load, 113–118
PIM (personal information management), 112–113
seven chunks of information, 114
short-term, 114–115
smartphones, 112
mental models, 123–125
erroneous, 124
UX (user experience), 124
menus
cascading, 203
collapsible, 204
contextual menus, 204
expanding, 203–204
flat, 203
mega, 203
research and design, 205
MERboard, 292–293
metadata, documents, adding, 113
metalinguistic signs, 563
metaphors, 78
interface metaphors, 78–80
popularity, 80
MFA (multifactor authentication), 116
micro-interactions, 23
Microsoft, Kinect, 230
Miller, George, 114–115
miscommunication, requirements and, 387
mobile devices
QR readers, 220
smartphones, 219–220
tablets, 219
mobile interfaces, 194, 217, 219–222
research and design, 221–222
mobile phone, using while driving, 107–108
models, 88, 93
conceptual models, 92
MonkeyLearn, 359
MOOCs (massive open online courses), 506, 568
Moz Analytics, 572
multimedia interfaces, 209
development reasons, 210–211
hands-on simulations, 209–210
research and design, 211–212
multimodal interfaces, 194, 232–233
research and design, 233, 235
multitasking, 105–109
phone use while driving, 107–108
multitouch surfaces, 228
MusicJacket, 232
narratives, presentations and, 344–345
natural interfaces, 194
natural-language interfaces, learning and, 120
Nest thermostat, 174
Nestor Navigator, 324
.NET Gadgeteer, 459
NIST (National Institute of Standards and Technology), 500
NN/g (Nielsen/Norman) Usability Consulting
Group, 501
nodes, social networks, 360
NodeXL, 360
nonfunctional requirements, 390
normal course, use cases, 415
notes, data recording and, 266–267
NUI (natural user interface), 252–253
null hypothesis, 533
Nvivo, 341
observation, data gathering
activity tracking
data scraping, 299–300
diaries, 298–299
interaction logs, 299–300
web analytics, 299–300
analysis, 311
controlled environments, 295
think-aloud technique, 296–297
ESM (experience sampling method), 299
in the field, 288
conducting, 291
degree of participation, 290
ethnographies, 292–295
frameworks, 289
planning, 291
requirements activities, 396
stopping, 291
OERs (open educational resources), 506, 568
OFE (online feedback exchange) systems, 46
older users, 16
OmniGraffle, 491
one-tailed hypothesis, 533
online crashing analysis, 47
open coding, grounded theory, 334, 337–338
Open Data Institute, data ethics canvas, 379
open-mindedness, design space and, 74
open source software, 489–490
opportunistic evaluation, 507
order effect, experiments, 535
organizational environment, requirements and, 392
orientation, design space and, 74
ownership, users and, 44–45
pair-wise design, 535
PalmPilot, 422
paradigms, 88, 89–90, 93
PARCTAB system, 237
participants, 515, 516–517
vulnerability, 517
participants in data gathering
incentives, 264
informed consent, 262–264
number of, 261–262
recording, 267
study population, 261
participatory design, 46, 453–454
patents, 61
pattern language, 484–486
patterns, 76
pen-based devices, 226–228
percentages, quantitative analysis, 311–312
perception
design implications, 109
proprioceptors, 109
permanent impairment, 17
personal data, 382–383
collecting, 353
personas, 403–404, 414
persona-driven development, 405–406
primary, 404
styles, 404
persuasive design, 182
persuasive technologies, 182–186
phishing scams, 186
photos, data recording and, 266–267
physical computing, 458–462
physical environment, requirements and, 392
physical impairment, 17
Physikit project kit, 357
pilot studies, data gathering, 264
PIM (personal information management), 113
planning
design implications, 122
learning and, 121–122
pluralistic walk-throughs, 566
Pokémon Pikachu device, 182
poor design, voice-mail system, 3–4
predictive evaluation, 515
Fitts’ law, 576–577
predictive models, 576–577
Presence Project, 398
presentation of findings
storytelling, 344–345
structured notations, 342–344
summarizing findings, 345–346
probing during interviews, 269
requirements, 398–399
problem-solving
design implications, 122
learning and, 121–122
problem space, 41–43
process model, 51–52
product owner, 44
proof of concept, 69
proprioceptors, 109
protocols, 311
prototyping, 50, 62, 421–422
anatomy of prototypes, 430
compromises, 429–434
3D printing, 423
economic principle, 430
versus engineering, 434
evolutionary, 433
filters, 429–430
fundamental, 429
horizontal, 433
manifestations, 429–430
prototypes
card-based, 449–457
compromises, 429–434
description, 422–424
generating, 447–457
high-fidelity, 428–429
low-fidelity, 426–428, 447–457
reasons for, 424–425
throwaway, 433
user empathy, 437–439
vertical, 433
provocative probes, 399
QS (quantified-self) movement, data collection and, 353
QR readers, 220
qualitative analysis, 309
categorizing data, 324–327
critical incident analysis, 320, 327–328
thematic analysis, 322–324
affinity diagrams, 323
qualitative data, 308
analysis
deductive approach, 321
inductive approach, 321
numbers, 309
quantitative analysis, 309, 311–320
quantitative data, 308
questionnaires, 278–279
administering, 283–285
computer game investigation case study, 508
data analysis, 311
design, 279
analysis and, 313–314
online versus paper and pencil, 286
question format, 280–283
response format, 280–283
check boxes, 280
Likert scales, 280–281
ranges, 280
rating scales, 280, 282
semantic differential scales, 281
structure, 279
Raspberry Pi, 459
reading
design implications, 121
learning and, 120–121
reasoning
design implications, 122
learning and, 121–122
reflective cognition, 102
reflective level of the brain, 171
reliability, evaluation and, 515
remote control design, 5–7
remote conversations
Facebook, 145–146
human mirroring, 149
social presence, 146
telepresence, 144–145
robots, 146–147
videoconferencing, 143–144
VideoWindow, 143
requirements, 385–386
atomic requirements shell, 388
brainstorming, innovation and,
402–403
capturing, 386
contextual inquiry, 400
apprenticeship model, 400
context principle, 400
cool concepts, 401
focus, 401
interpretation, 400
joy of life concepts, 401
joy of use concepts, 401
partnership principle, 400
data, 392
data gathering, 395–398
discovering, 50
environmental, 392
organizational environment, 392
physical environment, 392
social environment, 392
technical environment, 392
functional, 390, 392
miscommunication and, 387
nonfunctional, 390
overview, 387–388
personas, 403–408, 414
probes
cultural probes, 398
design probes, 398
provocative probes, 399
technology probes, 399
product dimensions, 390–391
scenarios, 408–414
security, 393
usability engineers, 393
usability goals, 392–393
user characteristics, 392
user experience goals, 392–393
user stories, 388–390
Volere framework, 388
requirements activity
data gathering, 396–398
observation, 396
purpose, 386
research
appliances, 222–223
augmented reality (AR) interfaces, 244
brain-computer interfaces (BCI), 250
command interfaces, 196
gesture-based systems, 231
graphical interfaces, 202
icons, 208–209
menus, 205
mobile interfaces, 221–222
multimedia interfaces, 211–212
multimodal interfaces, 233, 235
touchscreens, 228
VR interfaces, 215–216
VUI (voice user interface), 226
wearables, 247
website design, 218
research and design, shareable interfaces,
237
responding interaction type, 81, 87–88
retrospective interviews, 277
RITW (research in the wild), 54
robots, 248
rules for conversation, 140–141
safety, usability and, 20
same-participant design, 535
sampling, data gathering, 261
convenience sampling, 261
saturation sampling, 261
snowball sampling, 261
SAS (Statistical Analysis System), 341
saturation sampling, data gathering, 261
scams, phishing, 186
scenarios, 408–414
science fiction as inspiration, 91
scope, evaluation and, 515
scraping data, 352
Scrum, 64
SDKs (software development kits), 463–464
search engines, 78
Second Life, 196
second source data, 352
security, 116
financial transactions, 116
MFA (multifactor authentication), 116
requirements and, 393
selective coding, grounded theory, 334,
337–338
self-tracking, personal data collection and, 353
semi-structured interviews for data gathering, 269–271
semiotics, 562–564
Senseboard, 459
SenseCam, 117–118
sensing data, 356–357
visualization, 374–375
sensory impairment, 17
sentiment analysis, 333, 358
shareable interfaces, 235
interactive tabletops, 235–236
research and design, 237
shared editing tools, 237
SmartBoards, 235, 236
social translucence, 155
shared editing tools, 237
short-term memory, 114–115
SigniFYIng Message, 563
Siri, 84
situational impairment, 17
Sketch, 491
sketching, prototyping, 427
Skype, 260
slow design, 483
slow thinking, 102
smart buildings, 89
Smart Citizen, 356
smart interfaces, 194, 251–252
Smart Living Lab, 504
smart TVs, interaction, 6
SmartBoards, 235, 236
smartphones, 219–220
memory, 112
video data recording, 267
SNA (social network analysis), 359–363
snowball sampling, data gathering, 261
social computing, 136
emoticons, 138
Facebook, 145–146
social engagement, 158–161
social environment, requirements and, 392
social interaction, 135–136
being social, 136–138
co-presence
awareness, 150–151
physical coordination, 150
shareable interfaces, 152–157
email, 140
face-to-face conversations, 139–142
human mirroring, 149
remote conversations, 143–149
social engagement, 158–161
social media, 136
social networks, 360
social norms, 185
social translucence, 155
socially acceptable use of data, 378
speaking
design implications, 121
learning and, 120–121
speech interfaces, 194
speech recognition systems, learning and, 120
speech-to-text systems, 224
spreadsheets, 76
data analysis, 314
SPSS (Statistical Package for the Social Sciences), 341
SSCW (streamlined cognitive walk-through), 566
stage fright, 169
stakeholders, 56
StakeNet, 56
StakeRare, 56
Star Trek, holodeck, 91
static signs, 563
statistics, 536
storyboarding, 426, 447–449
storytelling, presentations and, 344–345
streamers, biometric data, 181
structured interviews for data gathering, 269
structured notations, 342–344
STS (socio-technical systems) theory, 338–339
StudentLife, 364–366
Subrayaman, Ramanath, 45
summarizing findings, 345–346
summative evaluation, 500, 515
Superflux, 94
sustainable HCI, 185
Swiss Army Knife Navigation, 485
System Image, 93
systems-based, 329
systems-based analysis, 338–341
DiCoT (distributed cognition of teamwork), 339–341
STS (socio-technical systems) theory, 338–339
t-tests, 536
tablets, 219
tactile interfaces, learning and, 120
tangible interfaces, 194, 238
research and design, 241
Resnick, Mitchel, 238
RFID tags, 238
tangible computing
Code Jumper, 239
littleBits, 238
MagicCubes, 238–239
MicroBit, 238
Torino, 239
Tinkersheets, 238
VoxBox, 240
Zuckerman, Oren, 238
technical debt, 472–473
technical environment, requirements and, 392
technology, RITW (research in the wild), 54–55
Technology as Experience (McCarthy and
Wright), 15
technology probes, 399
Teixeira, Carlos Robert, 331–332
telepresence robots, 146–147
telepresence rooms, 144–145
temporary impairment, 17
TextSL interface, 196
Thackara, John, 9
thematic analysis, 322–324
affinity diagrams, 323
grounded theory, 335–336
theories, 88, 92, 93
theory, RITW (research in the wild), 54–55
think-aloud protocol, 296–297, 324
requirements activities, 397
Third Age suite, 438–439
throwaway prototyping, 433
Tidy Street project, 185
tinkering, 429
TiVo remote control design, 5–7
Tobii Glasses Mobile Eye Tracking glasses, 527
tools
data analysis, 341
interaction design, 491–492
Top Trumps, 398–399
touchless interfaces, 194
touchscreens
multitouch surfaces, 228
research and design, 229
single, 228
transparency, AI and, 91
triangulation, data gathering and
investigator triangulation, 264
methodological triangulation, 264
triangulation of data, 264
triangulation of theories, 264
TruSocialMetrics, 572
trustworthiness, 378
Twitter, digital volunteers, 158
two-tailed hypothesis, 533
ubiquitous computing devices, 89
UI (user interface), design, 9
ultrahaptics, 231
Uninvited Guests, 94
unstructured interviews for data
gathering, 268–269
usability
effectiveness, 19
efficiency, 20
learnability, 20–21
memorability, 21–22
safety, 20
utility, 20
usability lab, 515
usability specification, 501
usability testing, 501–504, 523
case studies, iPad, 528–533
labs and equipment, 525–528
methods, 524–525
tasks, 524–525
users, 524–525
use cases, 415
alternative courses, 415
essential use cases, 415
normal course, 415
user story, 415
user-centered approach, 47–49
user involvement, 43–44
cooperative design, 46
crowdsourcing and, 46
degrees, 45–46
participatory design, 46
post-product release, 46–47
user research, 473–477
user stories
requirements and, 388–390
use cases, 415
user studies, 515
users. See also UX (user experience)
adults versus children, 16
behavior, 48
characteristics, 48
consulting, 48
context of use, 48
cultural differences, 16
design and, 9
empathy, 437–439
evaluation and, 515
expectation management, 44
focus on, 48
goals, 48
identifying, 55–56
interaction design, 9
needs, 57
older persons, 16
ownership, 44–45
product owner, 44
Screen Checkers, 56
stakeholders, 56
tasks, 48
usability testing, 524–525
vulnerability, 517
Young Parents, 56
User’s Model, 93
utility, usability and, 20
UX design, 471
UX designers, 471
UX (user experience), 2. See also users
aesthetics, 15
citizen science and, 355–356
consultancies, 13
content, 15
desirable aspects, 22
emotional appeal, 15
emotions and, 166–172
experiential aspect, 15
functionality, 15
goals, 22–23
look and feel, 15
mental models, 124
micro-interactions, 23
overview, 13–15
proof of concept and, 69–70
undesirable aspects, 22
usability, 15
visualizing data, 367
UXD (user experience design), 14
V&A museum, 152
validity, evaluation and, 515
variables
dependent, 533
independent, 533
vertical prototyping, 433
vibrotactile feedback, 231
video
data recording and, 267
interaction analysis, 333–334
videoconferencing, 143–144
VideoWindow, 143
virtual worlds, command interfaces and,
196
visceral level of the brain, 171
visibility, 26–27, 93
Visio, 491
visions, 88
AI (artificial intelligence), 91
HCI, 89
Knowledge Navigator (Apple), 90
visual analytics, 568
visualizing data, 366–375
D3 (Data-Driven Documents), 371
dashboard, 370, 371–372
sensing data, 374–375
spectrograms, 368–369
visual literacy, 366
voice assistants, 225, 330–331
manners and, 178–179
voice-mail system design, 3–4
marble answering machine, 4
VoiceOver (Apple), 17
Volere requirements framework, 388
Volkswagen fun theory, 183
VR (virtual reality) interfaces, 212
arcades, 213
CAVE (Cave Automatic Virtual
Environment), 212
perspective, 213
presence, 212
reporting, 213
research and design, 215–216
travel, 214
viewpoints, 213
VUI (voice user interface)
barge in, 225
dialogues, 225
machine learning algorithms, 224
phone apps, 225
research and design, 226
routing, 224
speech-to-text systems, 224
voice assistants, 225
vulnerability of users, 517
walk-throughs
cognitive, 561–566
cognitive walk-throughs, SSCW (streamlined cognitive walk-through), 566
pluralistic, 566–567
Wall Walk, 401, 434
WCAG (Web Content Accessibility Guidelines), 446, 500, 557
wearables, 245–247
web analytics, 506
ClickTale, 572
Clicky, 572
Crazy Egg, 572
Google Analytics, 568–570
KISSmetrics, 572
Moz Analytics, 572
off-site, 568
on-site, 568
TruSocialMetrics, 572
visitor tracking, 572–573
web forms, 200–201
web pages, information structure, 110
website design, 216–217
breadcrumb navigation, 217
heuristic evaluation, 553–554
languages, 217
mobile devices, 217
research and design, 218
Weibo, 332
white space, 109
Wi-Fi, mental models, 123
widgets, windows and, 202
WIMP (Windows, Icons, Menus, and Pointers), 89, 197–198
windows (GUIs), 197, 198
activation time, 202
dialog boxes, 199
highlights, 199
management, 202
widgets and, 202
within-subjects design, 535
Wizard of Oz prototyping, 428
World Wide Web, 76
written language permanence, 120
Xerox, 76
Xerox Star computer interface, 77
Zoom, 260
Cover
Title Page
Copyright
About the Authors
Credits
Acknowledgments
Contents
What’s Inside?
Changes from Previous Editions
Chapter 1 What Is Interaction Design?
1.1 Introduction
1.2 Good and Poor Design
1.2.1 Voice-Mail System
1.2.2 Remote Control
1.2.3 What to Design
1.3 What Is Interaction Design?
1.3.1 The Components of Interaction Design
1.3.2 Who Is Involved in Interaction Design?
1.3.3 Interaction Design Consultancies
1.4 The User Experience
1.5 Understanding Users
1.6 Accessibility and Inclusiveness
1.7 Usability and User Experience Goals
1.7.1 Usability Goals
1.7.2 User Experience Goals
1.7.3 Design Principles
Summary
Further Reading
Interview with Harry Brignull
Chapter 2 The Process of Interaction Design
2.1 Introduction
2.2 What Is Involved in Interaction Design?
2.2.1 Understanding the Problem Space
2.2.2 The Importance of Involving Users
2.2.3 Degrees of User Involvement
2.2.4 What Is a User-Centered Approach?
2.2.5 Four Basic Activities of Interaction Design
2.2.6 A Simple Lifecycle Model for Interaction Design
2.3 Some Practical Issues
2.3.1 Who Are the Users?
2.3.2 What Are the Users’ Needs?
2.3.3 How to Generate Alternative Designs
2.3.4 How to Choose Among Alternative Designs
2.3.5 How to Integrate Interaction Design Activities Within Other Lifecycle Models
Summary
Further Reading
Chapter 3 Conceptualizing Interaction
3.1 Introduction
3.2 Conceptualizing Interaction
3.3 Conceptual Models
3.4 Interface Metaphors
3.5 Interaction Types
3.5.1 Instructing
3.5.2 Conversing
3.5.3 Manipulating
3.5.4 Exploring
3.5.5 Responding
3.6 Paradigms, Visions, Theories, Models, and Frameworks
3.6.1 Paradigms
3.6.2 Visions
3.6.3 Theories
3.6.4 Models
3.6.5 Frameworks
Summary
Further Reading
Interview with Albrecht Schmidt
Chapter 4 Cognitive Aspects
4.1 Introduction
4.2 What Is Cognition?
4.2.1 Attention
4.2.2 Perception
4.2.3 Memory
4.2.4 Learning
4.2.5 Reading, Speaking, and Listening
4.2.6 Problem-Solving, Planning, Reasoning, and Decision-Making
4.3 Cognitive Frameworks
4.3.1 Mental Models
4.3.2 Gulfs of Execution and Evaluation
4.3.3 Information Processing
4.3.4 Distributed Cognition
4.3.5 External Cognition
4.3.6 Embodied Interaction
Summary
Further Reading
Chapter 5 Social Interaction
5.1 Introduction
5.2 Being Social
5.3 Face-to-Face Conversations
5.4 Remote Conversations
5.5 Co-presence
5.5.1 Physical Coordination
5.5.2 Awareness
5.5.3 Shareable Interfaces
5.6 Social Engagement
Summary
Further Reading
Chapter 6 Emotional Interaction
6.1 Introduction
6.2 Emotions and the User Experience
6.3 Expressive Interfaces and Emotional Design
6.4 Annoying Interfaces
6.5 Affective Computing and Emotional AI
6.6 Persuasive Technologies and Behavioral Change
6.7 Anthropomorphism
Summary
Further Reading
Chapter 7 Interfaces
7.1 Introduction
7.2 Interface Types
7.2.1 Command-Line Interfaces
7.2.2 Graphical User Interfaces
7.2.3 Multimedia
7.2.4 Virtual Reality
7.2.5 Website Design
7.2.6 Mobile Devices
7.2.7 Appliances
7.2.8 Voice User Interfaces
7.2.9 Pen-Based Devices
7.2.10 Touchscreens
7.2.11 Gesture-Based Systems
7.2.12 Haptic Interfaces
7.2.13 Multimodal Interfaces
7.2.14 Shareable Interfaces
7.2.15 Tangible Interfaces
7.2.16 Augmented Reality
7.2.17 Wearables
7.2.18 Robots and Drones
7.2.19 Brain–Computer Interfaces
7.2.20 Smart Interfaces
7.3 Natural User Interfaces and Beyond
7.4 Which Interface?
Summary
Further Reading
Interview with Leah Buechley
Chapter 8 Data Gathering
8.1 Introduction
8.2 Five Key Issues
8.2.1 Setting Goals
8.2.2 Identifying Participants
8.2.3 Relationship with Participants
8.2.4 Triangulation
8.2.5 Pilot Studies
8.3 Data Recording
8.3.1 Notes Plus Photographs
8.3.2 Audio Plus Photographs
8.3.3 Video
8.4 Interviews
8.4.1 Unstructured Interviews
8.4.2 Structured Interviews
8.4.3 Semi-structured Interviews
8.4.4 Focus Groups
8.4.5 Planning and Conducting an Interview
8.4.6 Other Forms of Interview
8.4.7 Enriching the Interview Experience
8.5 Questionnaires
8.5.1 Questionnaire Structure
8.5.2 Question and Response Format
8.5.3 Administering Questionnaires
8.6 Observation
8.6.1 Direct Observation in the Field
8.6.2 Direct Observation in Controlled Environments
8.6.3 Indirect Observation: Tracking Users’ Activities
8.7 Choosing and Combining Techniques
Summary
Further Reading
Chapter 9 Data Analysis, Interpretation, and Presentation
9.1 Introduction
9.2 Quantitative and Qualitative
9.2.1 First Steps in Analyzing Data
9.3 Basic Quantitative Analysis
9.4 Basic Qualitative Analysis
9.4.1 Identifying Themes
9.4.2 Categorizing Data
9.4.3 Critical Incident Analysis
9.5 Which Kind of Analytic Framework to Use?
9.5.1 Conversation Analysis
9.5.2 Discourse Analysis
9.5.3 Content Analysis
9.5.4 Interaction Analysis
9.5.5 Grounded Theory
9.5.6 Systems-Based Frameworks
9.6 Tools to Support Data Analysis
9.7 Interpreting and Presenting the Findings
9.7.1 Structured Notations
9.7.2 Using Stories
9.7.3 Summarizing the Findings
Summary
Further Reading
Chapter 10 Data at Scale
10.1 Introduction
10.2 Approaches to Collecting and Analyzing Data
10.2.1 Scraping and “Second Source” Data
10.2.2 Collecting Personal Data
10.2.3 Crowdsourcing Data
10.2.4 Sentiment Analysis
10.2.5 Social Network Analysis
10.2.6 Combining Multiple Sources of Data
10.3 Visualizing and Exploring Data
10.4 Ethical Design Concerns
Summary
Further Reading
Chapter 11 Discovering Requirements
11.1 Introduction
11.2 What, How, and Why?
11.2.1 What Is the Purpose of the Requirements Activity?
11.2.2 How to Capture Requirements Once They Are Discovered?
11.2.3 Why Bother? Avoiding Miscommunication
11.3 What Are Requirements?
11.3.1 Different Kinds of Requirements
11.4 Data Gathering for Requirements
11.4.1 Using Probes to Engage with Users
11.4.2 Contextual Inquiry
11.4.3 Brainstorming for Innovation
11.5 Bringing Requirements to Life: Personas and Scenarios
11.5.1 Personas
11.5.2 Scenarios
11.6 Capturing Interaction with Use Cases
Summary
Further Reading
Interview with Ellen Gottesdiener
Chapter 12 Design, Prototyping, and Construction
12.1 Introduction
12.2 Prototyping
12.2.1 What Is a Prototype?
12.2.2 Why Prototype?
12.2.3 Low-Fidelity Prototyping
12.2.4 High-Fidelity Prototyping
12.2.5 Compromises in Prototyping
12.3 Conceptual Design
12.3.1 Developing an Initial Conceptual Model
12.3.2 Expanding the Initial Conceptual Model
12.4 Concrete Design
12.5 Generating Prototypes
12.5.1 Generating Storyboards
12.5.2 Generating Card-Based Prototypes
12.6 Construction
12.6.1 Physical Computing
12.6.2 SDKs: Software Development Kits
Summary
Further Reading
Interview with Jon Froehlich
Chapter 13 Interaction Design in Practice
13.1 Introduction
13.2 AgileUX
13.2.1 User Research
13.2.2 Aligning Work Practices
13.2.3 Documentation
13.3 Design Patterns
13.4 Open Source Resources
13.5 Tools for Interaction Design
Summary
Further Reading
Chapter 14 Introducing Evaluation
14.1 Introduction
14.2 The Why, What, Where, and When of Evaluation
14.2.1 Why Evaluate?
14.2.2 What to Evaluate
14.2.3 Where to Evaluate
14.2.4 When to Evaluate
14.3 Types of Evaluation
14.3.1 Controlled Settings Involving Users
14.3.2 Natural Settings Involving Users
14.3.3 Any Settings Not Involving Users
14.3.4 Selecting and Combining Methods
14.3.5 Opportunistic Evaluations
14.4 Evaluation Case Studies
14.4.1 Case Study 1: An Experiment Investigating a Computer Game
14.4.2 Case Study 2: Gathering Ethnographic Data at the Royal Highland Show
14.5 What Did We Learn from the Case Studies?
14.6 Other Issues to Consider When Doing Evaluation
14.6.1 Informing Participants About Their Rights and Getting Their Consent
14.6.2 Issues That Influence the Choice of Method and How the Data Is Interpreted
Summary
Further Reading
Chapter 15 Evaluation Studies: From Controlled to Natural Settings
15.1 Introduction
15.2 Usability Testing
15.2.1 Methods, Tasks, and Users
15.2.2 Labs and Equipment
15.2.3 Case Study: Testing the iPad Usability
15.3 Conducting Experiments
15.3.1 Hypotheses Testing
15.3.2 Experimental Design
15.3.3 Statistics: t-tests
15.4 Field Studies
15.4.1 In-the-Wild Studies
15.4.2 Other Perspectives
Summary
Further Reading
Interview with danah boyd
Chapter 16 Evaluation: Inspections, Analytics, and Models
16.1 Introduction
16.2 Inspections: Heuristic Evaluation and Walk-Throughs
16.2.1 Heuristic Evaluation
16.2.2 Walk-Throughs
16.3 Analytics and A/B Testing
16.3.1 Web Analytics
16.3.2 A/B Testing
16.4 Predictive Models
16.4.1 Fitts’ Law
Summary
Further Reading
References
Index
EULA
Information Technology and Organizational Learning
Managing Behavioral Change in the Digital Age
Third Edition
Arthur M. Langer
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2018 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Printed on acid-free paper
International Standard Book Number-13: 978-1-4987-7575-5 (Paperback)
International Standard Book Number-13: 978-1-138-23858-9 (Hardback)
This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222
Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that
provides licenses and registration for a variety of users. For organizations that have been granted a
photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Contents
Foreword xi
Acknowledgments xiii
Author xv
Introduction xvii
Chapter 1 The “Ravell” Corporation 1
Introduction 1
A New Approach 3
The Blueprint for Integration 5
Enlisting Support 6
Assessing Progress 7
Resistance in the Ranks 8
Line Management to the Rescue 8
IT Begins to Reflect 9
Defining an Identity for Information Technology 10
Implementing the Integration: A Move toward Trust and Reflection 12
Key Lessons 14
Defining Reflection and Learning for an Organization 14
Working toward a Clear Goal 15
Commitment to Quality 15
Teaching Staff “Not to Know” 16
Transformation of Culture 16
Alignment with Administrative Departments 17
Conclusion 19
Chapter 2 The IT Dilemma 21
Introduction 21
Recent Background 23
IT in the Organizational Context 24
IT and Organizational Structure 24
The Role of IT in Business Strategy 25
Ways of Evaluating IT 27
Executive Knowledge and Management of IT 28
IT: A View from the Top 29
Section 1: Chief Executive Perception of the Role of IT 32
Section 2: Management and Strategic Issues 34
Section 3: Measuring IT Performance and Activities 35
General Results 36
Defining the IT Dilemma 36
Recent Developments in Operational Excellence 38
Chapter 3 Technology as a Variable and Responsive Organizational Dynamism 41
Introduction 41
Technological Dynamism 41
Responsive Organizational Dynamism 42
Strategic Integration 43
Summary 48
Cultural Assimilation 48
IT Organization Communications with “Others” 49
Movement of Traditional IT Staff 49
Summary 51
Technology Business Cycle 52
Feasibility 53
Measurement 53
Planning 54
Implementation 55
Evolution 57
Drivers and Supporters 58
Santander versus Citibank 60
Information Technology Roles and Responsibilities 60
Replacement or Outsource 61
Chapter 4 Organizational Learning Theories and Technology 63
Introduction 63
Learning Organizations 72
Communities of Practice 75
Learning Preferences and Experiential Learning 83
Social Discourse and the Use of Language 89
Identity 91
Skills 92
Emotion 92
Linear Development in Learning Approaches 96
Chapter 5 Managing Organizational Learning and Technology 109
The Role of Line Management 109
Line Managers 111
First-Line Managers 111
Supervisor 111
Management Vectors 112
Knowledge Management 116
Change Management 120
Change Management for IT Organizations 123
Social Networks and Information Technology 134
Chapter 6 Organizational Transformation and the Balanced Scorecard 139
Introduction 139
Methods of Ongoing Evaluation 146
Balanced Scorecards and Discourse 156
Knowledge Creation, Culture, and Strategy 158
Chapter 7 Virtual Teams and Outsourcing 163
Introduction 163
Status of Virtual Teams 165
Management Considerations 166
Dealing with Multiple Locations 166
Externalization 169
Internalization 171
Combination 171
Socialization 172
Externalization Dynamism 172
Internalization Dynamism 173
Combination Dynamism 173
Socialization Dynamism 173
Dealing with Multiple Locations and Outsourcing 177
Revisiting Social Discourse 178
Identity 179
Skills 180
Emotion 181
Chapter 8 Synergistic Union of IT and Organizational Learning 187
Introduction 187
Siemens AG 187
Aftermath 202
ICAP 203
Five Years Later 224
HTC 225
IT History at HTC 226
Interactions of the CEO 227
The Process 228
Transformation from the Transition 229
Five Years Later 231
Summary 233
Chapter 9 Forming a Cyber Security Culture 239
Introduction 239
History 239
Talking to the Board 241
Establishing a Security Culture 241
Understanding What It Means to be Compromised 242
Cyber Security Dynamism and Responsive Organizational Dynamism 242
Cyber Strategic Integration 243
Cyber Cultural Assimilation 245
Summary 246
Organizational Learning and Application Development 246
Cyber Security Risk 247
Risk Responsibility 248
Driver/Supporter Implications 250
Chapter 10 Digital Transformation and Changes in Consumer Behavior 251
Introduction 251
Requirements without Users and without Input 254
Concepts of the S-Curve and Digital Transformation
Analysis and Design 258
Organizational Learning and the S-Curve 260
Communities of Practice 261
The IT Leader in the Digital Transformation Era 262
How Technology Disrupts Firms and Industries 264
Dynamism and Digital Disruption 264
Critical Components of “Digital” Organization 265
Assimilating Digital Technology Operationally and Culturally 267
Conclusion 268
Chapter 11 Integrating Generation Y Employees to Accelerate Competitive Advantage 269
Introduction 269
The Employment Challenge in the Digital Era 270
Gen Y Population Attributes 272
Advantages of Employing Millennials to Support Digital
Transformation 272
Integration of Gen Y with Baby Boomers and Gen X 273
Designing the Digital Enterprise 274
Assimilating Gen Y Talent from Underserved and Socially
Excluded Populations 276
Langer Workforce Maturity Arc 277
Theoretical Constructs of the LWMA 278
The LWMA and Action Research 281
Implications for New Pathways for Digital Talent 282
Demographic Shifts in Talent Resources 282
Economic Sustainability 283
Integration and Trust 283
Global Implications for Sources of Talent 284
Conclusion 284
Chapter 12 Toward Best Practices 287
Introduction 287
Chief IT Executive 288
Definitions of Maturity Stages and Dimension Variables in
the Chief IT Executive Best Practices Arc 297
Maturity Stages 297
Performance Dimensions 298
Chief Executive Officer 299
CIO Direct Reporting to the CEO 305
Outsourcing 306
Centralization versus Decentralization of IT 306
CIO Needs Advanced Degrees 307
Need for Standards 307
Risk Management 307
The CEO Best Practices Technology Arc 313
Definitions of Maturity Stages and Dimension Variables in
the CEO Technology Best Practices Arc 314
Maturity Stages 314
Performance Dimensions 315
Middle Management 316
The Middle Management Best Practices Technology Arc 323
Definitions of Maturity Stages and Dimension Variables in
the Middle Manager Best Practices Arc 325
Maturity Stages 325
Performance Dimensions 326
Summary 327
Ethics and Maturity 333
Chapter 13 Conclusions 339
Introduction 339
Glossary 357
References 363
Index 373
Foreword
Digital technologies are transforming the global economy. Increasingly,
firms and other organizations are assessing their opportunities, develop-
ing and delivering products and services, and interacting with custom-
ers and other stakeholders digitally. Established companies recognize
that digital technologies can help them operate their businesses with
greater speed and lower costs and, in many cases, offer their custom-
ers opportunities to co-design and co-produce products and services.
Many start-up companies use digital technologies to develop new prod-
ucts and business models that disrupt the present way of doing busi-
ness, taking customers away from firms that cannot change and adapt.
In recent years, digital technology and new business models have dis-
rupted one industry after another, and these developments are rapidly
transforming how people communicate, learn, and work.
Against this backdrop, the third edition of Arthur Langer’s
Information Technology and Organizational Learning is most welcome.
For decades, Langer has been studying how firms adapt to new or
changing conditions by increasing their ability to incorporate and use
advanced information technologies. Most organizations do not adopt
new technology easily or readily. Organizational inertia and embed-
ded legacy systems are powerful forces working against the adoption
of new technology, even when the advantages of improved technology
are recognized. Investing in new technology is costly, and it requires
aligning technology with business strategies and transforming cor-
porate cultures so that organization members use the technology to
become more productive.
Information Technology and Organizational Learning addresses these
important issues—and much more. There are four features of the new edition that I would like to draw attention to, which, I believe, make this a valuable book. First, Langer adopts a behavioral perspective
rather than a technical perspective. Instead of simply offering norma-
tive advice about technology adoption, he shows how sound learn-
ing theory and principles can be used to incorporate technology into
the organization. His discussion ranges across the dynamic learning
organization, knowledge management, change management, com-
munities of practice, and virtual teams. Second, he shows how an
organization can move beyond technology alignment to true technol-
ogy integration. Part of this process involves redefining the traditional
support role of the IT department to a leadership role in which IT
helps to drive business strategy through a technology-based learn-
ing organization. Third, the book contains case studies that make the
material come alive. The book begins with a comprehensive real-life
case that sets the stage for the issues to be resolved, and smaller case
illustrations are sprinkled throughout the chapters, to make concepts
and techniques easily understandable. Lastly, Langer has a wealth of
experience that he brings to his book. He spent more than 25 years
as an IT consultant and is the founder of the Center for Technology
Management at Columbia University, where he directs certificate and
executive programs on various aspects of technology innovation and
management. He has organized a vast professional network of tech-
nology executives whose companies serve as learning laboratories for
his students and research. When you read the book, the knowledge
and insight gained from these experiences is readily apparent.
If you are an IT professional, Information Technology and Organizational Learning should be required reading. However, anyone who
is part of a firm or agency that wants to capitalize on the opportunities
provided by digital technology will benefit from reading the book.
Charles C. Snow
Professor Emeritus, Penn State University
Co-Editor, Journal of Organization Design
Acknowledgments
Many colleagues and clients have provided significant support during
the development of the third edition of Information Technology and
Organizational Learning.
I owe much to my colleagues at Teachers College, namely, Professors
Victoria Marsick and Lyle Yorks, who guided me on many of the the-
ories on organizational learning, and Professor Lee Knefelkamp, for
her ongoing mentorship on adult learning and developmental theo-
ries. Professor David Thomas from the Harvard Business School also
provided valuable direction on the complex issues surrounding diver-
sity, and its importance in workforce development.
I appreciate the corporate executives who agreed to participate
in the studies that allowed me to apply learning theories to actual
organizational practices. Stephen McDermott from ICAP provided
invaluable input on how chief executive officers (CEOs) can success-
fully learn to manage emerging technologies. Dana Deasy, now global
chief information officer (CIO) of JP Morgan Chase, contributed
enormous information on how corporate CIOs can integrate tech-
nology into business strategy. Lynn O’ Connor Vos, CEO of Grey
Healthcare, also showed me how technology can produce direct mon-
etary returns, especially when the CEO is actively involved.
And, of course, thank you to my wonderful students at Columbia
University. They continue to be at the core of my inspiration and love
for writing, teaching, and scholarly research.
Author
Arthur M. Langer, EdD, is professor of professional practice
of management and the director of the Center for Technology
Management at Columbia University. He is the academic direc-
tor of the Executive Masters of Science program in Technology
Management, vice chair of faculty and executive advisor to the dean
at the School of Professional Studies and is on the faculty of the
Department of Organization and Leadership at the Graduate School
of Education (Teachers College). He has also served as a member of
the Columbia University Faculty Senate. Dr. Langer is the author
of Guide to Software Development: Designing & Managing the Life
Cycle, 2nd Edition (2016), Strategic IT: Best Practices for Managers
and Executives (2013 with Lyle Yorks), Information Technology and
Organizational Learning (2011), Analysis and Design of Information
Systems (2007), Applied Ecommerce (2002), and The Art of Analysis
(1997), and has numerous published articles and papers relating
to digital transformation, service learning for underserved popula-
tions, IT organizational integration, mentoring, and staff develop-
ment. Dr. Langer consults with corporations and universities on
information technology, cyber security, staff development, man-
agement transformation, and curriculum development around the
globe. Dr. Langer is also the chairman and founder of Workforce
Opportunity Services (www.wforce.org), a non-profit social venture
that provides scholarships and careers to underserved populations
around the world.
Dr. Langer earned a BA in computer science, an MBA in
accounting/finance, and a Doctorate of Education from Columbia
University.
Introduction
Background
Information technology (IT) has become a more significant part of
workplace operations, and as a result, information systems person-
nel are key to the success of corporate enterprises, especially with
the recent effects of the digital revolution on every aspect of business
and social life (Bradley & Nolan, 1998; Langer, 1997, 2011; Lipman-
Blumen, 1996). This digital revolution is defined as a form of “disruption.” Indeed, the big question facing many enterprises today is,
How can executives anticipate the unexpected threats brought on by
technological advances that could devastate their business? This book
focuses on the vital role that information and digital technology orga-
nizations need to play in the course of organizational development
and learning, and on the growing need to integrate technology fully
into the processes of workplace organizational learning. Technology
personnel have long been criticized for their inability to function as
part of the business, and they are often seen as a group outside the
corporate norm (Schein, 1992). This is a problem of cultural assimila-
tion, and it represents one of the two major fronts that organizations
now face in their efforts to gain a grip on the new, growing power of
technology, and to be competitive in a global world. The other major
front concerns the strategic integration of new digital technologies
into business line management.
Because technology continues to change at such a rapid pace, the
ability of organizations to operate within a new paradigm of dynamic
change emphasizes the need to employ action learning as a way to
build competitive learning organizations in the twenty-first century.
Information Technology and Organizational Learning integrates some
of the fundamental issues bearing on IT today with concepts from
organizational learning theory, providing comprehensive guidance,
based on real-life business experiences and concrete research.
This book also focuses on another aspect of what IT can mean to
an organization. IT represents a broadening dimension of business life
that affects everything we do inside an organization. This new reality is
shaped by the increasing and irreversible dissemination of technology.
To maximize the usefulness of its encroaching presence in everyday
business affairs, organizations will require an optimal understanding
of how to integrate technology into everything they do. To this end,
this book seeks to break new ground on how to approach and concep-
tualize this salient issue—that is, that the optimization of information
and digital technologies is best pursued with a synchronous imple-
mentation of organizational learning concepts. Furthermore, these
concepts cannot be implemented without utilizing theories of strategic
learning. Therefore, this book takes the position that technology liter-
acy requires individual and group strategic learning if it is to transform
a business into a technology-based learning organization. Technology
based organizations are defined as those that have implemented a means
of successfully integrating technology into their process of organiza-
tional learning. Such organizations recognize and experience the real-
ity of technology as part of their everyday business function. It is what
many organizations are calling “being digital.”
This book will also examine some of the many existing organi-
zational learning theories, and the historical problems that have
occurred with companies that have used them, or that have failed
to use them. Thus, the introduction of technology into organizations
actually provides an opportunity to reassess and reapply many of the
past concepts, theories, and practices that have been used to support
the importance of organizational learning. It is important, however,
not to confuse this message with a reason for promoting organizational
learning, but rather, to understand the seamless nature of the relation-
ship between IT and organizational learning. Each needs the other to
succeed. Indeed, technology has only served to expose problems that
have existed in organizations for decades, e.g., the inability to drive
down responsibilities to the operational levels of the organization, and
to be more agile with their consumers.
This book is designed to help businesses and individual manag-
ers understand and cope with the many issues involved in developing
organizational learning programs, and in integrating an important
component: their IT and digital organizations. It aims to provide a
combination of research case studies, together with existing theories
on organizational learning in the workplace. The goal is also to pro-
vide researchers and corporate practitioners with a book that allows
them to incorporate a growing IT infrastructure with their exist-
ing workforce culture. Professional organizations need to integrate
IT into their organizational processes to compete effectively in the
technology-driven business climate of today. This book responds to
the complex and various dilemmas faced by many human resource
managers and corporate executives regarding how to actually deal
with many marginalized technology personnel who somehow always
operate outside the normal flow of the core business.
While the history of IT, as a marginalized organization, is rela-
tively short, in comparison to that of other professions, the problems
of IT have been consistent since its insertion into business organiza-
tions in the early 1960s. Indeed, while technology has changed, the
position and valuation of IT have continued to challenge how execu-
tives manage it, account for it, and, most important, ultimately value
its contributions to the organization. Technology personnel continue
to be criticized for their inability to function as part of the business,
and they are often seen as outside the business norm. IT employees
are frequently stereotyped as “techies,” and are segregated in such a
way that they become isolated from the organization. This book pro-
vides a method for integrating IT, and redefining its role in organiza-
tions, especially as a partner in formulating and implementing key
business strategies that are crucial for the survival of many companies
in the new digital age. Rather than provide a long and extensive list of
common issues, I have decided it best to uncover the challenges of IT
integration and performance through the case study approach.
IT continues to be one of the most important yet least understood
departments in an organization. It has also become one of the most
significant components for competing in the global markets of today.
IT is now an integral part of the way companies become successful,
and is now being referred to as the digital arm of the business. This
is true across all industries. The role of IT has grown enormously in
companies throughout the world, and it has a mission to provide stra-
tegic solutions that can make companies more competitive. Indeed,
the success of IT, and its ability to operate as part of the learning
organization, can mean the difference between the success and failure
of entire companies. However, IT must be careful that it is not seen as
just a factory of support personnel, and does not lose its justification
as driving competitive advantage. We see in many organizations that
other digital-based departments are being created, due to frustration
with the traditional IT culture, or because they simply do not see IT
as meeting the current needs for operating in a digital economy.
This book provides answers to other important questions that have
challenged many organizations for decades. First, how can manag-
ers master emerging digital technologies, sustain a relationship with
organizational learning, and link it to strategy and performance?
Second, what is the process by which to determine the value of using
technology, and how does it relate to traditional ways of calculating
return on investment, and establishing risk models? Third, what are
the cyber security implications of technology-based products and
services? Fourth, what are the roles and responsibilities of the IT
executive, and the department in general? To answer these questions,
managers need to focus on the following objectives:
• Address the operational weaknesses in organizations, in
terms of how to deal with new technologies, and how to bet-
ter realize business benefits.
• Provide a mechanism that both enables organizations to deal
with accelerated change caused by technological innovations,
and integrates them into a new cycle of processing, and han-
dling of change.
• Provide a strategic learning framework, by which every new
technology variable adds to organizational knowledge and
can develop a risk and security culture.
• Establish an integrated approach that ties technology account-
ability to other measurable outcomes, using organizational
learning techniques and theories.
To realize these objectives, organizations must be able to
• create dynamic internal processes that can deal, on a daily
basis, with understanding the potential fit of new technologies
and their overall value within the structure of the business;
• provide the discourse to bridge the gaps between IT- and non-
IT-related investments, and uses, into one integrated system;
• monitor investments and determine modifications to the life
cycle;
• implement various organizational learning practices, includ-
ing learning organization, knowledge management, change
management, and communities of practice, all of which help
foster strategic thinking, and learning, and can be linked to
performance (Gephardt & Marsick, 2003).
The strengths of this book are that it integrates theory and practice
and provides answers to the four common questions mentioned. Many
of the answers provided in these pages are founded on theory and
research and are supported by practical experience. Thus, evidence of
the performance of the theories is presented via case studies, which
are designed to assist the readers in determining how such theories
and proven practices can be applied to their specific organization.
A common theme in this book involves three important terms:
dynamic, unpredictable, and acceleration. Dynamic is a term that represents spontaneous and vibrant things—a motive force. Technology
behaves with such a force and requires organizations to deal with its
capabilities. Glasmeier (1997) postulates that technology evolution,
innovation, and change are dynamic processes. The force then is tech-
nology, and it carries many motives, as we shall see throughout this
book. Unpredictable suggests that we cannot plan what will happen
or will be needed. Many organizational individuals, including execu-
tives, have attempted to predict when, how, or why technology will
affect their organization. Throughout our recent history, especially
during the “digital disruption” era, we have found that it is difficult,
if not impossible, to predict how technology will ultimately benefit or
hurt organizational growth and competitive advantage. I believe that
technology is volatile and erratic at times. Indeed, harnessing tech-
nology is not at all an exact science; certainly not in the ways in which
it can and should be used in today’s modern organization. Finally, I
use the term acceleration to convey the way technology is speeding up
our lives. Not only have emerging technologies created this unpre-
dictable environment of change, but they also continue to change it
rapidly—even since the demise of the dot-com era decades ago. Thus,
what becomes important is the need to respond quickly to technology.
The inability to be responsive to change brought about by technologi-
cal innovations can result in significant competitive disadvantages for
organizations.
This new edition shows why this is a fact, especially when examining the shrinking S-Curve. So, we look at these three words—dynamic, unpredictable, and acceleration—as a way to define how technology
affects organizations; that is, technology is an accelerating motive
force that occurs irregularly. These words name the challenges that
organizations need to address if they are to manage technological
innovations and integrate them with business strategy and competi-
tive advantage. It only makes sense that the challenge of integrating
technology into business requires us first to understand its potential
impact, determine how it occurs, and see what is likely to follow.
There are no quick remedies to dealing with emerging technologies,
just common practices and sustained processes that must be adopted
for organizations to survive in the future.
I had four goals in mind in writing this book. First, I am inter-
ested in writing about the challenges of using digital technologies
strategically. What particularly concerns me is the lack of literature
that truly addresses this issue. What is also troublesome is the lack
of reliable techniques for the evaluation of IT, especially since IT
is used in almost every aspect of business life. So, as we increase
our use and dependency on technology, we seem to understand less
about how to measure and validate its outcomes. I also want to
convey my thoughts about the importance of embracing nonmon-
etary methods for evaluating technology, particularly as they relate
to determining return on investment. Indeed, indirect and non-
monetary benefits need to be part of the process of assessing and
approving IT projects.
Second, I want to apply organizational learning theory to the field
of IT and use proven learning models to help transform IT staff into
becoming better members of their organizations. Everyone seems to
know about the inability of IT people to integrate with other depart-
ments, yet no one has really created a solution to the problem. I find
that organizational learning techniques are an effective way of coach-
ing IT staff to operate more consistently with the goals of the busi-
nesses that they support.
Third, I want to present cogent theories about IT and organiza-
tional learning; theories that establish new ways for organizations to
adapt new technologies. I want to share my experiences and those of
other professionals who have found approaches that can provide posi-
tive outcomes from technology investments.
Fourth, I have decided to express my concerns about the valid-
ity and reliability of organizational learning theories and practices as
they apply to the field of IT. I find that most of these models need to
be enhanced to better fit the unique aspects of the digital age. These
modified models enable the original learning techniques to address
IT-specific issues. In this way, the organization can develop a more
holistic approach toward a common goal for using technology.
Certainly, the balance of how technology ties in with strategy is
essential. However, there has been much debate over whether tech-
nology should drive business strategy or vice versa. We will find that
the answer to this is “yes.” Yes, in the sense that technology can affect
the way organizations determine their missions and business strate-
gies; but “no” in that technology should not be the only component
for determining mission and strategy. Many managers have realized
that business is still business, meaning that technology is not a “silver bullet.” The challenge, then, is to determine how best to fit tech-
nology into the process of creating and supporting business strategy.
Few would doubt today that technology is, indeed, the most signifi-
cant variable affecting business strategy. However, the most viable
approach is to incorporate technology into the process of determin-
ing business strategy. I have found that many businesses still formu-
late their strategies first, and then look at technology, as a means to
efficiently implement objectives and goals. Executives need to better
understand the unique and important role that technology provides
us; it can drive business strategy, and support it, at the same time.
Managers should not solely focus their attention on generating
breakthrough innovations that will create spectacular results. Most
good uses of technology are much subtler, and longer-lasting. For this
reason, this book discusses and defines new technology life cycles
that blend business strategy and strategic learning. Building on this
theme, I introduce the idea of responsive organizational dynamism as
the core theory of this book. Responsive organizational dynamism
defines an environment that can respond to the three important
terms (dynamic, unpredictable, and acceleration). Indeed, technology
requires organizations that can sustain a system, in which individu-
als can deal with dynamic, unpredictable, and accelerated change, as
part of their regular process of production. The basis of this concept
is that organizations must create and sustain such an environment to
be competitive in a global technologically-driven economy. I further
analyze responsive organizational dynamism in its two subcompo-
nents: strategic integration and cultural assimilation, which address
how technology needs to be measured as it relates to business strategy,
and what related social–structural changes are needed, respectively.
Change is an important principle of this book. I talk about the
importance of how to change, how to manage such change, and why
emerging technologies are a significant agent of change. I support
the need for change, as an opportunity to use many of the learning
theories that have been historically difficult to implement. That is,
implementing change brought on by technological innovation is an
opportunity to make the organization more “change ready” or, as we
define it today, more “agile.” However, we also know that little is
known about how organizations should actually go about modifying
existing processes to adapt to new technologies and become digital
entities—and to be accustomed to doing this regularly. Managing
through such periods of change requires that we develop a model that
can deal with dynamic, unpredictable, and accelerated change. This is
what responsive organizational dynamism is designed to do.
We know that over 20% of IT projects still fail to be completed.
Another 54% fail to meet their projected completion date. We now sit
at the forefront of another technological spurt of innovations that will
necessitate major renovations to existing legacy systems, requiring that
they be linked to sophisticated e-business systems. These e-business
systems will continue to utilize the Internet, and emerging mobile
technologies. While we tend to focus primarily on what technology
generically does, organizations need urgently to prepare themselves
for the next generation of advances, by forming structures that can
deal with continued, accelerated change, as the norm of daily opera-
tions. For this edition, I have added new sections and chapters that
address the digital transformation, ways of dealing with changing
consumer behavior, the need to form evolving cyber security cultures,
and the importance of integrating Gen Y employees to accelerate
competitive advantage.
This book provides answers to a number of dilemmas but ultimately
offers an imbricate cure for the problem of latency in performance and
quality afflicting many technologically-based projects. Traditionally,
management has attempted to improve IT performance by increasing
technical skills and project manager expertise through new processes.
While there has been an effort to educate IT managers to become
more interested and participative in business issues, their involvement
continues to be based more on service than on strategy. Yet, at the
heart of the issue is the entirety of the organization. It is my belief that
many of the programmatic efforts conducted in traditional ways and
attempting to mature and integrate IT with the rest of the organiza-
tion will continue to deliver disappointing results.
My personal experience goes well beyond research; it draws from
living and breathing the IT experience for the past 35 years, and
from an understanding of the dynamics of what occurs inside and
outside the IT department in most organizations. With such experi-
ence, I can offer a path that engages the participation of the entire
management team and operations staff of the organization. While
my vision for this kind of digital transformation is different from
other approaches, it is consistent with organizational learning theo-
ries that promote the integration of individuals, communities, and
senior management to participate in more democratic and vision-
ary forms of thinking, reflection, and learning. It is my belief that
many of the dilemmas presented by IT have existed in other parts of
organizations for years, and that the Internet revolution only served
to expose them. If we believe this to be true, then we must begin
the process of integrating technology into strategic thinking and
stop depending on IT to provide magical answers, and inappropriate
expectations of performance.
Technology is not the responsibility of any one person or depart-
ment; rather, it is part of the responsibility of every employee. Thus,
the challenge is to allow organizations to understand how to modify
their processes, and the roles and responsibilities of their employees,
to incorporate digital technologies as part of normal workplace activi-
ties. Technology then becomes more a subject and a component of
discourse. IT staff members need to emerge as specialists who par-
ticipate in decision making, development, and sustained support of
business evolution. There are also technology-based topics that do
not require the typical expertise that IT personnel provide. This is
a literacy issue that requires different ways of thinking and learning
during the everyday part of operations. For example, using desktop
tools, communicating via e-mail, and saving files and data, are inte-
gral to everyday operations. These activities affect projects, yet they
are not really part of the responsibilities of IT departments. Given
the knowledge that technology is everywhere, we must change the
approach that we take to be successful. Another way of looking at this
phenomenon is to define technology more as a commodity, readily
available to all individuals. This means that the notion of technology
as organizationally segregated into separate cubes of expertise is prob-
lematic, particularly on a global front.
Thus, the overall aim of this book is to promote organizational
learning that disseminates the uses of technology throughout a busi-
ness, so that IT departments are a partner in its use, as opposed to
being its sole owner. The cure to IT project failure, then, is to engage
the business in technology decisions in such a way that individuals
and business units are fundamentally involved in the process. Such
processes need to be designed to dynamically respond to technology
opportunities and thus should not be overly bureaucratic. There is a
balance between establishing organizations that can readily deal with
technology versus those that become too complex and inefficient.
This balance can only be attained using organizational learning
techniques as the method to grow and reach technology maturation.
Overview of the Chapters
Chapter 1 provides an important case study of the Ravell Corporation
(a pseudonym), where I was retained for over five years. During this
period, I applied numerous organizational learning methods toward
the integration of the IT department with the rest of the organiza-
tion. The chapter allows readers to understand how the theories of
organizational learning can be applied in actual practice, and how
those theories are particularly beneficial to the IT community. The
chapter also shows the practical side of how learning techniques can
be linked to measurable outcomes, and ultimately related to business
strategy. This concept will become the basis of integrating learning
with strategy (i.e., “strategic learning”). The Ravell case study also
sets the tone of what I call the IT dilemma, which represents the
core problem faced by organizations today. Furthermore, the Ravell
case study becomes the cornerstone example throughout the book and
is used to relate many of the theories of learning and their practical
applicability in organizations. The Ravell case has also been updated
in this second edition to include recent results that support the impor-
tance of alignment with the human resources department.
Chapter 2 presents the details of the IT dilemma. This chapter
addresses issues such as isolation of IT staff, which results in their
marginalization from the rest of the organization. I explain that while
executives want technology to be an important part of business strat-
egy, few understand how to accomplish it. In general, I show that
individuals lack knowledge about how technology and business
strategy can, and should, be linked to form common business
objectives. The chapter provides the results of a three-year study of
how chief executives link the role of technology with business strat-
egy. The study captures information relating to how chief executives
perceive the role of IT, how they manage it and use it strategically,
and the way they measure IT performance and activities.
Chapter 3 focuses on defining how organizations need to respond
to the challenges posed by technology. I analyze technological dyna-
mism in its core components so that readers understand the different
facets that comprise its many applications. I begin by presenting tech-
nology as a dynamic variable that is capable of affecting organizations
in a unique way. I specifically emphasize the unpredictability of tech-
nology, and its capacity to accelerate change— ultimately concluding
that technology, as an independent variable, has a dynamic effect on
organizational development. This chapter also introduces my theory
of responsive organizational dynamism, defined as a disposition in
organizational behavior that can respond to the demands of tech-
nology as a dynamic variable. I establish two core components of
responsive organizational dynamism: strategic integration and cultural
assimilation. Each of these components is designed to tackle a specific
problem introduced by technology. Strategic integration addresses the
way in which organizations determine how to use technology as part
of business strategy. Cultural assimilation, on the other hand, seeks
to answer how the organization, both structurally and culturally, will
accommodate the actual human resources of an IT staff and depart-
ment within the process of implementing new technologies. Thus,
strategic integration will require organizational changes in terms of
cultural assimilation. The chapter also provides a perspective of the
technology life cycle so that readers can see how responsive organi-
zational dynamism is applied on an IT project basis. Finally, I define
the driver and supporter functions of IT and how these contribute to
managing technology life cycles.
Chapter 4 introduces theories on organizational learning, and
applies them specifically to responsive organizational dynamism. I
emphasize that organizational learning must result in individual and
organizational transformation that leads to measurable performance
outcomes. The chapter defines a number of organizational learning
theories, such as reflective practices, learning organization, communi-
ties of practice, learning preferences and experiential learning, social
discourse, and the use of language. These techniques and approaches
to promoting organizational learning are then configured into various
models that can be used to assess individual and organizational devel-
opment. Two important models are designed to be used in responsive
organizational dynamism: the applied individual learning wheel and
the technology maturity arc. These models lay the foundation for my
position that learning maturation involves a steady linear progression
from an individual focus toward a system or organizational perspec-
tive. The chapter also addresses implementation issues—political
challenges that can get in the way of successful application of the
learning theories.
Chapter 5 explores the role of management in creating and sustain-
ing responsive organizational dynamism. I define the tiers of middle
management in relation to various theories of management partici-
pation in organizational learning. The complex issues of whether
organizational learning needs to be managed from the top down,
bottom up, or middle-top-down are discussed and applied to a model
that operates in responsive organizational dynamism. This chapter
takes into account the common three-tier structure in which most
organizations operate: executive, middle, and operations. The execu-
tive level includes the chief executive officer (CEO), president, and
senior vice presidents. The middle is the most complex, ranging from
vice president/director to supervisory roles. Operations covers what is
commonly known as “staff,” including clerical functions. The knowl-
edge that I convey suggests that all of these tiers need to participate in
management, including operations personnel, via a self-development
model. The chapter also presents the notion that knowledge manage-
ment is necessary to optimize competitive advantage, particularly as
it involves transforming tacit knowledge into explicit knowledge. I
review the existing theories on knowledge management, create a hybrid
model that embraces technology issues, and map them to responsive
organizational dynamism. Discussions on change management are
included as a method of addressing the unique ways that technol-
ogy affects product development. Essentially, I tie together respon-
sive organizational dynamism with organizational change theory, by
offering modifications to generally accepted theories. There is also a
specific model created for IT organizations that maps onto organi-
zational-level concepts. Although I have used technology as the basis
for the need for responsive organizational dynamism, I show that the
need for its existence can be attributed to any variable that requires
dynamic change. As such, I suggest that readers begin to think about
the next “technology” or variable that can cause the same needs to
occur inside organizations. The chapter has been extended to address
the impact of social networking and the leadership opportunities it
provides to technology executives.
Chapter 6 examines how organizational transformation occurs.
The primary focus of the chapter is to integrate transformation theory
with responsive organizational dynamism. The position taken is that
organizational learning techniques must inevitably result in orga-
nizational transformation. Discussions on transformation are often
addressed at the organizational level, as opposed to focusing on individual
development. As in other sections of the book, I extend a number
of theories so that they can operate under the auspices of responsive
organizational dynamism, specifically, the works of Yorks and Marsick
(2000) and Aldrich (2001). I expand organizational transformation
to include ongoing assessment within technology deliverables. This
is accomplished through the use of a modified Balanced Scorecard
originally developed by Kaplan and Norton (2001). The Balanced
Scorecard becomes the vehicle for establishing a strategy-focused and
technology-based organization.
Chapter 7 deals with the many business transformation projects
that require outsource arrangements and virtual team management.
This chapter provides an understanding of when and how to consider
outsourcing, and the intricacies of operating with virtual teams. I cover
such issues as management considerations and the challenges of working
across multiple locations. The chapter extends the
models discussed in previous chapters so that they can be aligned with
operating in a virtual team environment. Specifically, this includes
communities of practice, social discourse, self-development, knowl-
edge management, and, of course, responsive organizational dyna-
mism and its corresponding maturity arcs. Furthermore, I expand the
conversation to include IT and non-IT personnel, and the arguments
for the further support needed to integrate all functions across the
organization.
Chapter 8 presents updated case studies that demonstrate how my
organizational learning techniques are actually applied in practice.
Three case studies are presented: Siemens AG, ICAP, and HTC.
Siemens AG is a diverse international company with 20 discrete
businesses in over 190 countries. The case study offers a perspec-
tive of how a corporate chief information officer (CIO) introduced
e-business strategy. ICAP is a leading international money and
securities broker. This case study follows the activities of the electronic trad-
ing community (ETC) entity, and how the CEO transformed the
organization and used organizational learning methods to improve
competitive advantage. HTC (a pseudonym) provides an example of
why the chief IT executive should report to the CEO, and how a
CEO can champion specific projects to help transform organizational
norms and behaviors. This case study also maps the transformation of
the company to actual examples of strategic advantage.
Chapter 9 focuses on the challenges of forming a “cyber security”
culture. The growing challenges of protecting companies from outside
attacks have established the need to create a cyber security culture.
This chapter addresses the ways in which information technology
organizations must further integrate with business operations, so
that their firms are better equipped to protect against outside threats.
Since the general consensus is that no system can be 100% protected,
and that most system compromises occur as a result of internal expo-
sures, information technology leaders must educate employees on
best practices to limit cyberattacks. Furthermore, while prevention is
the objective, organizations must be internally prepared to deal with
attacks and thus have processes in place should a system become pen-
etrated by third-party agents.
Chapter 10 explores the effects of the digital global economy on
the ways in which organizations need to respond to the consumeriza-
tion of products and services. From this perspective, digital transfor-
mation involves a type of social reengineering that affects the ways in
which organizations communicate internally, and how they consider
restructuring departments. Digital transformation also affects the
risks that organizations must take in what has become an accelerated
changing consumer market.
Chapter 11 provides conclusions and focuses on Gen Y employ-
ees who are known as “digital natives” and represent the new supply
chain of talent. Gen Y employees possess the attributes to help
companies transform their workforce to meet the accelerated change in
the competitive landscape. Most executives across industries recog-
nize that digital technologies are the most powerful variable for
maintaining and expanding company markets. Gen Y employees provide a
natural fit for dealing with emerging digital technologies. However,
success with integrating Gen Y employees is contingent upon Baby
Boomer and Gen X management adopting new leadership philoso-
phies and procedures suited to meet the expectations and needs of
these new workers. Ignoring the unique needs of Gen Y employees
will likely result in an incongruent organization that suffers high
turnover of young employees who will ultimately seek a more entre-
preneurial environment.
Chapter 12 seeks to define best practices to implement and sus-
tain responsive organizational dynamism. The chapter sets forth a
model that creates separate, yet linked, best practices and maturity
arcs that can be used to assess stages of the learning development
of the chief IT executive, the CEO, and the middle management. I
discuss the concept of common threads, by which each best practices
arc links through common objectives and outcomes to the responsive
organizational dynamism maturity arc presented in Chapter 4. Thus,
these arcs represent an integrated and hierarchical view of how each
component of the organization contributes to overall best practices. A
new section has been added that links ethics to technology leadership
and maturity.
Chapter 13 summarizes the many aspects of how IT and organi-
zational learning operate together to support the responsive organi-
zational dynamism environment. The chapter emphasizes the specific
key themes developed in the book, such as evolution versus revolu-
tion; control and empowerment; driver and supporter operations; and
responsive organizational dynamism and self-generating organiza-
tions. Finally, I provide an overarching framework for “organizing”
reflection and integrate it with the best practices arcs.
As a final note, I need to clarify my use of the words information
technology, digital technology, and technology. In many parts of the book,
they are used interchangeably, although there is a defined difference.
Of course, not all technology is related to information or digital; some
is based on machinery or the like. For the purposes of this book, the
reader should assume that IT and digital technology are the primary
variables that I am addressing. However, the theories and processes
that I offer can be scaled to all types of technological innovation.
1
The “Ravell” Corporation
Introduction
Explaining information technology (IT), organizational learning,
and the practical relationship into which I propose to bring them
is a challenging undertaking. I choose,
therefore, to begin this discussion by presenting an actual case study
that exemplifies many key issues pertaining to organizational learn-
ing, and how it can be used to improve the performance of an IT
department. Specifically, this chapter summarizes a case study of
the IT department at the Ravell Corporation (a pseudonym) in New
York City. I was retained as a consultant at the company to improve
the performance of the department and to solve a mounting politi-
cal problem involving IT and its relation to other departments. The
case offers an example of how the growth of a company as a “learn-
ing organization”—one in which employees are constantly learning
during the normal workday (Argyris, 1993; Watkins & Marsick,
1993)—utilized reflective practices to help it achieve the practical stra-
tegic goals it sought. Individuals in learning organizations integrate
processes of learning into their work. Therefore, a learning organiza-
tion must advocate a system that allows its employees to interact, ask
questions, and provide insight to the business. The learning organiza-
tion will ultimately promote systematic thinking, and the building
of organizational memory (Watkins & Marsick, 1993). A learning
organization (discussed more fully in Chapter 4) is a component of
the larger topic of organizational learning.
The Ravell Corporation is a firm with over 500 employees who,
over the years, had become dependent on the use of technology to
run its business. Its IT department, like that of many other compa-
nies, was isolated from the rest of the business and was regarded as
a peripheral entity whose purpose was simply to provide technical
support. This was accompanied by actual physical isolation—IT was
placed in a contained and secure location away from mainstream
operations. As a result, IT staff rarely engaged in active discourse
with other staff members unless specific meetings were called relat-
ing to a particular project. The Ravell IT department, therefore, was
not part of the community of organizational learning—it did not
have the opportunity to learn along with the rest of the organiza-
tion, and it was never asked to provide guidance in matters of gen-
eral relevance to the business as a whole. This marginalized status
resulted in an us-versus-them attitude on the part of IT and non-IT
personnel alike.
Much has been written about the negative impact of marginal-
ization on individuals who are part of communities. Schlossberg
(1989) researched adults in various settings and how marginal-
ization affected their work and self-efficacy. Her theory on mar-
ginalization and mattering is applied to this case study because of
its relevance and similarity to her prior research. For example, IT
exhibits characteristics similar to those of a separate group on a college
campus or in a workplace environment. Its physical isolation can
also be related to how marginalized groups move away from the
majority population and function without contact. The IT direc-
tor, in particular, had cultivated an adversarial relationship with his
peers. The director had shaped a department that fueled his view of
separation. This had the effect of further marginalizing the posi-
tion of IT within the organization. Hand in hand with this form of
separatism came a sense of actual dislike on the part of IT personnel
for other employees. IT staff members were quick to point fingers
at others and were often noncommunicative with members of other
departments within the organization. As a result of this kind of
behavior, many departments lost confidence in the ability of IT to
provide support; indeed, the quality of support that IT furnished
had begun to deteriorate. Many departments at Ravell began to hire
their own IT support personnel and were determined to create their
own information systems subdepartments. This situation eventually
became unacceptable to management, and the IT director was ter-
minated. An initiative was begun to refocus the department and its
position within the organization. I was retained to bring about this
change and to act as the IT director until a structural transforma-
tion of the department was complete.
A New Approach
My mandate at Ravell was initially unclear—I was to “fix” the
problem; the specific solution was left up to me to design and imple-
ment. My goal became one of finding a way to integrate IT fully into
the organizational culture at Ravell. Without such integration, IT
would remain isolated, and no amount of “fixing” around this issue
would address the persistence of what was, as well, a cultural prob-
lem. Unless IT became a true part of the organization as a whole,
the entire IT staff could be replaced without any real change having
occurred from the organization’s perspective. That is, just replacing
the entire IT staff was an acceptable solution to senior management.
The fact that this was acceptable suggested to me that the knowledge
and value contained in the IT department did not exist or was mis-
understood by the senior management of the firm. In my opinion,
just eliminating a marginalized group was not a solution because I
expected that such knowledge and value did exist, and that it needed
to be investigated properly. Thus, I rejected management’s option and
began to formulate a plan to better understand the contributions that
could be made by the IT department. The challenge was threefold: to
improve the work quality of the IT department (a matter of perfor-
mance), to help the department begin to feel itself a part of the orga-
nization as a whole and vice versa (a matter of cultural assimilation),
and to persuade the rest of the organization to accept the IT staff as
equals who could contribute to the overall direction and growth of the
organization (a fundamental matter of strategic integration).
My first step was to gather information. On my assignment to the
position of IT director, I quickly arranged a meeting with the IT
department to determine the status and attitudes of its personnel.
The IT staff meeting included the chief financial officer (CFO), to
whom IT reported. At this meeting, I explained the reasons behind
the changes occurring in IT management. Few questions were asked;
as a result, I immediately began scheduling individual meetings with
each of the IT employees. These employees varied in their position
within the corporate hierarchy, their salary, and their technical
expertise. The purpose of the private meetings was
to allow IT staff members to speak openly, and to enable me to hear
their concerns. I drew on the principles of action science, pioneered
by Argyris and Schön (1996), designed to promote individual
self-reflection regarding behavior patterns and to encourage a
productive exchange among individuals. Action science encompasses a range
of methods to help individuals learn how to be reflective about their
actions. By reflecting, individuals can better understand the outcomes
of their actions and, especially, how they are seen by others. This was
an important approach because I felt learning had to start at the indi-
vidual level as opposed to attempting group learning activities. It was
my hope that the discussions I orchestrated would lead the IT staff to
a better understanding than they had previously shown, not only of
the learning process itself, but also of the significance of that process.
I pursued these objectives by guiding them to detect problem areas in
their work and to undertake a joint effort to correct them (Argyris,
1993; Arnett, 1992).
Important components of reflective learning are single-loop and
double-loop learning. Single-loop learning requires individuals to
reflect on a prior action or habit that needs to be changed in the future
but does not require individuals to change their operational proce-
dures with regard to values and norms. Double-loop learning, on the
other hand, does require both change in behavior and change in oper-
ational procedures. For example, people who engage in double-loop
learning may need to adjust how they perform their job, as opposed to
just the way they communicate with others, or, as Argyris and Schön
(1996, p. 22) state, “the correction of error requires inquiry through
which organizational values and norms themselves are modified.”
Despite my efforts and intentions, not all of the exchanges were
destined to be successful. Many of the IT staff members felt that the
IT director had been forced out, and that there was consequently
no support for the IT function in the organization. There was also
clear evidence of internal political division within the IT department;
members openly criticized each other. Still other interviews resulted
in little communication. This initial response from IT staff was disap-
pointing, and I must admit I began to doubt whether these learning
methods would be an antidote for the department. Replacing people
began to seem more attractive, and I now understood why many man-
agers prefer to replace staff, as opposed to investing in their transfor-
mation. However, I also knew that learning is a gradual process and
that it would take time and trust to see results.
I realized that the task ahead called for nothing short of a total cul-
tural transformation of the IT organization at Ravell. Members of the
IT staff had to become flexible and open if they were to become more
trusting of one another and more reflective as a group (Garvin, 2000;
Schein, 1992). Furthermore, they had to have an awareness of their
history, and they had to be willing to institute a vision of partnering
with the user community. An important part of the process for me
was to accept the fact that the IT staff were not habitually inclined to
be reflective. My goal then was to create an environment that would
foster reflective learning, which would in turn enable a change in
individual and organizational values and norms (Senge, 1990).
The Blueprint for Integration
Based on information drawn from the interviews, I developed a pre-
liminary plan to begin to integrate IT into the day-to-day operations
at Ravell, and to bring IT personnel into regular contact with other
staff members. According to Senge (1990), the most productive learn-
ing occurs when skills are combined in the activities of advocacy and
inquiry. My hope was to encourage both among the staff at Ravell. The
plan for integration and assimilation involved assigning IT resources
to each department; that is, following the logic of the self-dissemina-
tion of technology, each department would have its own dedicated IT
person to support it. However, just assigning a person was not enough,
so I added the commitment to actually relocate an IT person into each
physical area. This way, rather than clustering together in an area of
their own, IT people would be embedded throughout the organiza-
tion, getting first-hand exposure to what other departments did, and
learning how to make an immediate contribution to the productiv-
ity of these departments. The on-site IT person in each department
would have the opportunity to observe problems when they arose—
and hence, to seek ways to prevent them—and, significantly, to share
in the sense of accomplishment when things went well. To reinforce
their commitment to their respective areas, I specified that IT person-
nel were to report not only to me but also to the line manager in their
respective departments. In addition, these line managers were to have
input on the evaluation of IT staff. I saw that making IT staff offi-
cially accountable to the departments they worked with was a tangible
way to raise their level of commitment to the organization. I hoped
that putting line managers in a supervisory position would help build
a sense of teamwork between IT and non-IT personnel. Ultimately,
the focus of this approach was to foster the creation of a tolerant and
supportive cultural climate for IT within the various departments; an
important corollary goal here was also to allow reflective reviews of
performance to flourish (Garvin, 1993).
Enlisting Support
Support for this plan had to be mustered quickly if I was to create an
environment of trust. I had to reestablish the need for the IT func-
tion within the company, show that it was critical for the company’s
business operations, and show that its integration posed a unique
challenge to the company. However, it was not enough just for me
to claim this. I also had to enlist key managers to claim it. Indeed,
employees will cooperate only if they believe that self-assessment and
critical thinking are valued by management (Garvin, 2000). I decided
to embark on a process of arranging meetings with specific line man-
agers in the organization. I selected individuals who would represent
the day-to-day management of the key departments. If I could get
their commitment to work with IT, I felt it could provide the stimulus
we needed. Some line managers were initially suspicious of the effort
because of their prior experiences with IT. However, they generally
liked the idea of integration and assimilation that was presented to
them, and agreed to support it, at least on a trial basis.
Predictably, the IT staff were less enthusiastic about the idea. Many
of them felt threatened, fearing that they were about to lose their
independence or lose the mutual support that comes from being in a
cohesive group. I had hoped that holding a series of meetings would
help me gain support for the restructuring concept. I had to be care-
ful to ensure that the staff members would feel that they also had an
opportunity to develop a plan that they were confident would work.
During a number of group sessions, we discussed various scenarios of
how such a plan might work. I emphasized the concepts of integration
and assimilation, and explained that their implementation
would be experimental. Without realizing it, I had engaged IT staff
members in a process of self-governance. Thus, I empowered them
to feel comfortable with voicing new ideas, without being concerned
that they might be openly criticized by me if I did not agree. This pro-
cess also encouraged individuals to begin thinking more as a group.
Indeed, by directing the practice of constructive criticism among
the IT staff, I had hoped to elicit a higher degree of reflective action
among the group and to show them that they had the ability to learn
from one another as well as the ability to design their own roles in the
organization (Argyris, 1993). Their acceptance of physical integration
and, hence, cultural assimilation became a necessary condition for
the ability of the IT group to engage in greater reflective behavior
(Argyris & Schön, 1996).
Assessing Progress
The next issue concerned individual feedback. How was I to let each
person know how he or she was doing? I decided first to get feedback
from the larger organizational community. This was accomplished
by meeting with the line managers and obtaining whatever feed-
back was available from them. I was surprised at the large quantity
of information they were willing to offer. The line managers were not
shy about participating, and their input allowed me to complete two
objectives: (1) to understand how the IT staff was being perceived in
its new assignment and (2) to create a social and reflective relation-
ship between IT individuals and the line managers. The latter objec-
tive was significant, for if we were to be successful, the line managers
would have to assist us in the effort to integrate and assimilate IT
functions within their community.
After the discussions with managers were completed, individual
meetings were held with each IT staff member to discuss the feedback.
I chose not to attribute the feedback to specific line managers but rather
to address particular issues by conveying the general consensus about
them. Mixed feelings were also disclosed by the IT staff. After convey-
ing the information, I listened attentively to the responses of IT staff
members. Not surprisingly, many of them responded to the feedback
negatively and defensively. Some, for example, felt that many technology
users were unreasonable in their expectations of IT. It was important for
me as facilitator not to find blame among them, particularly if I was to
be a participant in the learning organization (Argyris & Schön, 1996).
8 INFORMATION TECHNOLOGY
Resistance in the Ranks
Any major organizational transformation is bound to elicit resistance
from some employees. The initiative at Ravell proved to be no excep-
tion. Employees are not always sincere, and some individuals will
engage in political behavior that can be detrimental to any organiza-
tional learning effort. Simply put, they are not interested in partici-
pating, or, as Marsick (1998) states, “It would be naïve to expect that
everyone is willing to play on an even field (i.e., fairly).” Early in the
process, the IT department became concerned that its members spent
much of their time trying to figure out how best to position themselves
for the future instead of attending to matters at hand. I heard from
other employees that the IT staff felt that they would live through my
tenure; that is, just survive until a permanent IT director was hired. It
became difficult at times to elicit the truth from some members of the
IT staff. These individuals would skirt around issues and deny making
statements that were reported by other employees rather than con-
front problems head on. Some IT staff members would criticize me in
front of other groups and use the criticism as proof that the plan for
a general integration was bound to fail. I realized in a most tangible
sense that pursuing change through reflective practice does not come
without resistance, and that this resistance needs to be factored into
the planning of any such organizationally transformative initiative.
Line Management to the Rescue
At the time that we were still working through the resistance within
IT, the plan to establish a relationship with line management began
to work. A number of events occurred that allowed me to be directly
involved in helping certain groups solve their IT problems. Word
spread quickly that there was a new direction in IT that could be
trusted. Line management support is critical for success in such trans-
formational situations. First, line management is typically composed
of people from the ranks of supervisors and middle managers, who are
responsible for the daily operations of their department. Assuming
they do their jobs, senior management will cater to their needs and
listen to their feedback. The line management of any organiza-
tion, necessarily engaged to some degree in the process of learning
(a “learning organization”), is key to its staff. Specifically, line manag-
ers are responsible for operations personnel; at the same time, they
must answer to senior management. Thus, they understand both exec-
utive and operations perspectives of the business (Garvin, 2000). They
are often former staff members themselves and usually have a high
level of technical knowledge. Upper management, while important
for financial support, has little effect at the day-to-day level, yet this is
the level at which the critical work of integration and the building of
a single learning community must be done.
Interestingly, the line management organization had previously
had no shortage of IT-related problems. Many of these line managers
had been committed to developing their own IT staffs; however, they
quickly realized that the exercise was beyond their expertise, and that
they needed guidance and leadership. Their participation in IT staff
meetings had begun to foster a new trust in the IT department, and
they began to see the possibilities of working closely with IT to solve
their problems. Their support began to turn toward what Watkins and
Marsick (1993, p. 117) call “creating alignment by placing the vision
in the hands of autonomous, cross-functional synergetic teams.” The
combination of IT and non-IT teams began to foster a synergy among
the communities, which established new ideas about how best to use
technology.
IT Begins to Reflect
Although it was initially difficult for some staff members to accept,
they soon realized that providing feedback opened the door to the
process of self-reflection within IT. We undertook a number of exer-
cises to help IT personnel understand how non-IT personnel per-
ceived them, and how their own behavior may have contributed to
these perceptions. To foster self-reflection, I adopted a technique
developed by Argyris called “the left-hand column.” In this technique,
individuals use the right-hand column of a piece of paper to transcribe
dialogues that they felt had not resulted in effective communication.
In the left-hand column of the same page, participants are to write
what they were really thinking at the time of the dialogue but did not
say. This exercise is designed to reveal underlying assumptions that
speakers may not be aware of during their exchanges and that may be
impeding their communication with others by giving others a wrong
impression. The exercise was extremely useful in helping IT personnel
understand how others in the organization perceived them.
Most important, the development of reflective skills, according to
Schön (1983), starts with an individual’s ability to recognize “leaps
of abstraction”—the unconscious and often inaccurate generalizations
people make about others based on incomplete information. In the
case of Ravell, such generalizations were deeply entrenched among its
various personnel sectors. Managers tended to assume that IT staffers
were “just techies,” and that they therefore held fundamentally differ-
ent values and had little interest in the organization as a whole. For
their part, the IT personnel were quick to assume that non-IT people
did not understand or appreciate the work they did. Exposing these
“leaps of abstraction” was key to removing the roadblocks that pre-
vented Ravell from functioning as an integrated learning organization.
Defining an Identity for Information Technology
It was now time to start the process of publicly defining the identity
of IT. Who were we, and what was our purpose? Prior to this time,
IT had no explicit mission. Instead, its members had worked on an
ad hoc basis, putting out fires and never fully feeling that their work
had contributed to the growth or development of the organization as
a whole. This sense of isolation made it difficult for IT members to
begin to reflect on what their mission should or could be. I organized
a series of meetings to begin exploring the question of a mission, and I
offered support by sharing exemplary IT mission statements that were
being implemented in other organizations. The focus of the meetings
was not on convincing them to accept any particular idea but rather to
facilitate a reflective exercise with a group that was undertaking such
a task for the first time (Senge, 1990).
The identity that emerged for the IT department at Ravell was dif-
ferent from the one implicit in their past role. Our new mission would
be to provide technical support and technical direction to the organi-
zation. Of necessity, IT personnel would remain specialists, but they
were to be specialists who could provide guidance to other depart-
ments in addition to helping them solve and prevent problems. As
they became more intimately familiar with what different departments
did—and how these departments contributed to the organization as a
whole—IT professionals would be able to make better informed rec-
ommendations. The vision was that IT people would grow from being
staff who fixed things into team members who offered their expertise
to help shape the strategic direction of the organization and, in the
process, participate fully in organizational growth and learning.
To begin to bring this vision to life, I invited line managers to
attend our meetings. I had several goals in mind with this invita-
tion. Of course, I wanted to increase contact between IT and non-IT
people; beyond this, I wanted to give IT staff an incentive to change
by making them feel a part of the organization as a whole. I also got
a commitment from IT staff that we would not cover up our prob-
lems during the sessions, but would deal with all issues with trust
and honesty. I also believed that the line managers would reciprocate
and allow us to attend their staff meetings. A number of IT indi-
viduals were concerned that my approach would only further expose
our problems with regard to quality performance, but the group as
a whole felt compelled to stick with the belief that honesty would
always prevail over politics. Having gained insight into how the rest of
the organization perceived them, IT staff members had to learn how
to deal with disagreement and how to build consensus to move an
agenda forward. Only then could reflection and action be intimately
intertwined so that after-the-fact reviews could be replaced with peri-
ods of learning and doing (Garvin, 2000).
The meetings were constructive, not only in terms of content issues
handled in the discussions, but also in terms of the number of line
managers who attended them. Their attendance sent a strong message
that the IT function was important to them, and that they under-
stood that they also had to participate in the new direction that IT
was taking. The sessions also served as a vehicle to demonstrate how
IT could become socially assimilated within all the functions of the
community while maintaining its own identity.
The meetings were also designed as a venue for group members to
be critical of themselves. The initial meetings were not successful in
this regard; at first, IT staff members spent more time blaming oth-
ers than reflecting on their own behaviors and attitudes. These ses-
sions were difficult in that I would have to raise unpopular questions
and ask whether the staff had truly “looked in the mirror” concerning
some of the problems at hand. For example, one IT employee found
it difficult to understand why a manager from another department
was angry about the time it took to get a problem resolved with his
computer. The problem had been identified and fixed within an hour,
a time frame that most IT professionals would consider very respon-
sive. As we looked into the reasons why the manager could have been
justified in his anger, it emerged that the manager had a tight deadline
to meet. In this situation, being without his computer for an hour was
a serious problem.
Although under normal circumstances a response time of one hour
is good, the IT employee had failed to ask about the manager’s par-
ticular circumstance. On reflection, the IT employee realized that
putting himself in the position of the people he was trying to support
would enable him to do his job better. In this particular instance, had
the IT employee only understood the position of the manager, there
were alternative ways of resolving the problem that could have been
implemented much more quickly.
Implementing the Integration: A Move toward Trust and Reflection
As communication became more open, a certain synergy began to
develop in the IT organization. Specifically, there was a palpable rise
in the level of cooperation and agreement with regard to the overall
goals set during these meetings. This is not to suggest that there
were no disagreements but rather that discussions tended to be more
constructive in helping the group realize its objective of providing
outstanding technology support to the organization. The IT staff
also felt freer to be self-reflective by openly discussing their ideas and
their mistakes. The involvement of the departmental line manag-
ers also gave IT staff members the support they needed to carry out
the change. Slowly, a shift in behavior developed in which the group
sharpened its focus on the transformation of the department, on its
acknowledgment of successes and failures, and on acquiring new
knowledge to advance the integration of IT into the core business units.
Around this time, an event presented itself that I felt would allow
the IT department to establish its new credibility and authority to
the other departments: the physical move of the organization to a
new location. The move was to be a major event, not only because
it represented the relocation of over 500 people and the technologi-
cal infrastructure they used on a day-to-day basis, but also because
the move was to include the transition of the media communications
systems of the company to digital technology. The move required
tremendous technological work, and the organization decided to
perform a “technology acceleration,” meaning that new technology
would be introduced more quickly because of the opportunity pre-
sented by the move. The entire moving process was to take a year, and
I was immediately summoned to work with the other departments in
determining the best plan to accomplish the transition.
For me, the move became an emblematic event for the IT group at
Ravell. It would provide the means by which to test the creation of,
and the transitioning into, a learning organization. It was also to pro-
vide a catalyst for the complete integration and assimilation of IT into
the organization as a whole. The move represented the introduction
of unfamiliar processes in which “conscious reflection is … necessary
if lessons are to be learned” (Garvin, 2000, p. 100). I temporarily
reorganized IT employees into “SWAT” teams (subgroups formed
to deal with defined problems in high-pressure environments), so
that they could be fully immersed in the needs of their community
partners. Dealing with many crisis situations helped the IT
department change the existing culture by showing users how to bet-
ter deal with technology issues in their everyday work environment.
Indeed, because of the importance of technology in the new location,
the core business had an opportunity to embrace our knowledge and
to learn from us.
The move presented new challenges every day, and demanded
openness and flexibility from everyone. Some problems required that
IT listen intently to understand and meet the needs of its commu-
nity partners. Other situations put IT in the role of teaching; assess-
ing needs and explaining to other departments what was technically
possible, and then helping them to work out compromises based on
technical limitations. Suggestions for IT improvement began to come
from all parts of the organization. Ideas from others were embraced
by IT, demonstrating that employees throughout the organization
were learning together. IT staff behaved assertively and without fear
of failure, suggesting that, perhaps for the first time, their role had
extended beyond that of fixing what was broken to one of helping
to guide the organization forward into the future. Indeed, the move
established the kind of “special problem” that provided an opportunity
for growth in personal awareness through reflection (Moon, 1999).
The move had proved an ideal laboratory for implementing the
IT integration and assimilation plan. It provided real and important
opportunities for IT to work hand in hand with other departments—
all focusing on shared goals. The move fostered tremendous cama-
raderie within the organization and became an excellent catalyst for
teaching reflective behavior. It was, if you will, an ideal project in
which to show how reflection in action can allow an entire organiza-
tion to share in the successful attainment of a common goal. Because
it was a unique event, everyone—IT and non-IT personnel alike—
made mistakes, but this time, there was virtually no finger-pointing.
People accepted responsibility collectively and cooperated in finding
solutions. When the company recommenced operations from its new
location—on time and according to schedule—no single group could
claim credit for the success; it was universally recognized that success
had been the result of an integrated effort.
Key Lessons
The experience of the reorganization of the IT department at Ravell
can teach us some key lessons with respect to the cultural transforma-
tion and change of marginalized technical departments generally.
Defining Reflection and Learning for an Organization
IT personnel tend to view learning as a vocational event. They gener-
ally look to increase their own “technical” knowledge by attending
special training sessions and programs. However, as Kegan (1998)
reminds us, there must be more: “Training is really insufficient as a
sole diet of education—it is, in reality, a subset of education.” True
education involves transformation, and transformation, according to
Kegan, is the willingness to take risks, to “get out of the bedroom of
our comfortable world.” In my work at Ravell, I tried to augment this
“diet” by embarking on a project that delivered both vocational train-
ing and education through reflection. Each IT staff person was given
one week of technical training per year to provide vocational develop-
ment. But beyond this, I instituted weekly learning sessions in which
IT personnel would meet without me and produce a weekly memo of
“reflection.” The goal of this practice was to promote dialogue, in the
hope that IT would develop a way to deal with its fears and mistakes
on its own. Without knowing it, I had begun the process of creating
a discursive community in which social interactions could act as insti-
gators of reflective behavior leading to change.
Working toward a Clear Goal
The presence of clearly defined, measurable, short-term objectives
can greatly accelerate the process of developing a “learning organiza-
tion” through reflective practice. At Ravell, the move into new physi-
cal quarters provided a common organizational goal toward which
all participants could work. This goal fostered cooperation among IT
and non-IT employees and provided an incentive for everyone to work
and, consequently, learn together. Like an athletic team before an
important game, or even an army before battle, the IT staff at Ravell
rallied around a cause and were able to use reflective practices to help
meet their goals. The move also represented what has been termed an
“eye-opening event,” one that can trigger a better understanding of a
culture whose differences challenge one’s presuppositions (Mezirow,
1990). It is important to note, though, that while the move accelerated
the development of the learning organization as such, the move itself
would not have been enough to guarantee the successes that followed
it. Simply setting a deadline is no substitute for undergoing the kind
of transformation necessary for a consummately reflective process.
Only as the culmination of a process of analysis, socialization, and
trust building can an event like this speed the growth of a learning
organization.
Commitment to Quality
Apart from the social challenges it faced in merging into the core
business, the IT group also had problems with the quality of its out-
put. Often, work was not performed in a professional manner. IT
organizations often suffer from an inability to deliver on schedule,
and Ravell was no exception. The first step in addressing the qual-
ity problem was to develop IT’s awareness of the importance of the
problem, not only in my estimation but in that of the entire company.
The IT staff needed to understand how technology affected the day-
to-day operations of the entire company. One way to start the dia-
logue on quality is to first initiate one about failures. If something was
late, for instance, I asked why. Rather than addressing the problems
from a destructive perspective (Argyris & Schön, 1996; Schein, 1992;
Senge, 1990), the focus was on encouraging IT personnel to under-
stand the impact of their actions—or lack of action—on the company.
Through self-reflection and recognition of their important role in the
organization, the IT staff became more motivated than before to per-
form higher quality work.
Teaching Staff “Not to Know”
One of the most important factors that developed out of the process
of integrating IT was the willingness of the IT staff “not to know.”
The phenomenology of “not knowing” or “knowing less” became the
facilitator of listening; that is, by listening, we as individuals are better
able to reflect. This sense of not knowing also “allows the individual
to learn an important lesson: the acceptance of what is, without our
attempts to control, manipulate, or judge” (Halifax, 1999, p. 177). The
IT staff improved their learning abilities by suggesting and adopting
new solutions to problems. An example of this was the creation of a
two-shift help desk that provided user support during both day and
evening. The learning process allowed IT to contribute new ideas to
the community. More important, their contributions did not dramat-
ically change the community; instead, they created gradual adjust-
ments that led to the growth of a new hybrid culture. The key to
this new culture was its ability to share ideas, accept error as a reality
(Marsick, 1998), and admit to knowing less (Halifax, 1999).
Transformation of Culture
Cultural changes are often slow to develop, and they occur in small
intervals. Furthermore, small cultural changes may even go unnoticed
or may be attributed to factors other than their actual causes. This
raises the issue of the importance of cultural awareness and our ability
to measure individual and group performance. The history of the IT
problems at Ravell made it easy for me to make management aware of
what we were newly attempting to accomplish and of our reasons for
creating dialogues about our successes and failures. Measurement and
evaluation of IT performance are challenging because of the intrica-
cies involved in determining what represents success. I feel that one
form of measurement can be found in the behavioral patterns of an
organization. When it came time for employee evaluations, reviews
were held with each IT staff member. Discussions at evaluation
reviews focused on the individuals’ perceptions of their role, and how
they felt about their job as a whole. The feedback from these review
meetings suggested that the IT staff had become more devoted and
more willing to reflect on their role in the organization and, generally,
seemed happier at their jobs than ever before. Interestingly,
and significantly, they also appeared to be having fun at their jobs.
This happiness propagated into the community and influenced other
supporting departments to create similar infrastructures that could
reproduce our type of successes. This interest was made evident by
frequent inquiries I received from other departments about how the
transformation of IT was accomplished, and how it might be trans-
lated to create similar changes in staff behavior elsewhere in the com-
pany. I also noticed that there were fewer complaints and a renewed
ability for the staff to work with our consultants.
Alignment with Administrative Departments
Ravell provided an excellent lesson about the penalties of not align-
ing properly with other strategic and operational partners in a firm.
Sometimes, we become insistent on forcing change, especially when
placed in positions that afford a manager power—the power to get
results quickly and through force. The example of Ravell teaches us
that an approach of power will not ultimately accomplish transforma-
tion of the organization. While senior management can authorize and
mandate change, change usually occurs much more slowly than they
wish, if it occurs at all. The management ranks can still push back
and cause problems, if not sooner, then later. While I aligned with
the line units, I failed to align with important operational partners,
particularly human resources (HR). HR, in my mind at that time,
was impeding my ability to accomplish change. I was frustrated and
determined to get things done by pushing my agenda. This approach
worked early on, but I later discovered that the HR management was
bitter and devoted to stopping my efforts. The problems I encountered
at Ravell are not unusual for IT organizations. The historical issues
that affect the relationship between HR and IT are as follows:
• IT has unusual staff roles and job descriptions that can be
inconsistent with the rest of the organization.
• IT tends to have complex working hours and needs.
• IT has unique career paths that do not “fit” with HR standards.
• IT salary structures shift more dynamically and are very sen-
sitive to market conditions.
• IT tends to operate in silos.
Overcoming these impediments, then, requires IT to
• reduce silos and IT staff marginalization
• achieve better organization-wide alignment
• develop shared leadership
• define and create an HR/IT governance model
The success of IT/HR alignment should follow practices similar
to those I instituted with the line managers at Ravell, specifically the
following:
• Successful HR/IT integration requires organizational learn-
ing techniques.
• Alignment requires an understanding of the relationship
between IT investments and business strategy.
• An integration of IT can create new organizational cultures
and structures.
• HR/IT alignment will likely continue to be dynamic in
nature and evolve at an accelerated pace.
The oversight of not integrating better with HR cost IT dearly at
Ravell. HR became an undisclosed enemy—that is, a negative force
against the entire integration. I discovered this problem only later, and
was never able to bring the HR department into the fold. Without
HR being part of the learning organization, IT staff continued to
struggle with aligning their professional positions with those of the
other departments. Fortunately, within two years the HR vice presi-
dent retired, which opened the door for a new start.
In large IT organizations, it is not unusual to have an HR member
assigned to focus specifically on IT needs. Typically, it is a joint position
in which the HR individual in essence works for the IT executive. This
is an effective alternative in that the HR person becomes versed in IT
needs and can properly represent IT in the area of head count needs and
specific titles. Furthermore, the unique aspect of IT organizations is in
the hybrid nature of their staff. Typically, a number of IT staff members
are consultants, a situation that presents problems similar to the one I
encountered at Ravell—that is, the resentment of not really being part
of the organization. Another issue is that many IT staff members are
outsourced across the globe, a situation that brings its own set of chal-
lenges. In addition, the role of HR usually involves ensuring compliance
with various regulations. For example, in many organizations, con-
sultants are permitted to work on site for only one year before U.S.
government regulations force the company to hire them as employees. The
HR function must work closely with IT to enforce these regulations.
Yet another important component of IT and HR collaboration is talent
management. That is, HR must work closely with IT to understand new
roles and responsibilities as they develop in the organization. Another
challenge is the integration of technology into the day-to-day business
of a company, and the question of where IT talent should be dispersed
throughout the organization. Given this complex set of challenges, IT
alone cannot facilitate or properly represent itself unless it aligns with
the HR department. This becomes even more complex with the prolif-
eration of IT virtual teams across the globe, which create structures
that often have different HR ramifications, both legally and culturally.
Virtual team management is discussed further in the book.
Conclusion
This case study shows that strategic integration of technical resources
into core business units can be accomplished by using those aspects of
organizational learning that promote reflection in action. This kind of
integration also requires something of a concomitant form of assimila-
tion on the cultural level (see Chapter 3). Reflective thinking fosters the
development of a learning organization, which in turn allows for the
integration of the “other” in its various organizational manifestations.
The experience of this case study also shows that the success of organi-
zational learning will depend on the degree of cross fertilization achiev-
able in terms of individual values and on the ability of the community
to combine new concepts and beliefs, to form a hybrid culture. Such a
new culture prospers with the use of organizational learning strategies
to enable it to share ideas, accept mistakes, and learn to know less as a
regular part of their discourse and practice in their day-to-day operations.
Another important conclusion from the Ravell experience is that
time is an important factor in the success of organizational learning
approaches. One way of dealing with the problem of time is with
patience—something that many organizations do not have. Another
element of success came in the acceleration of events (such as the relo-
cation at Ravell), which can foster a quicker learning cycle and help
us see results faster. Unfortunately, impatience with using organiza-
tional learning methods is not an acceptable approach because it will
not render results that change individual and organizational behavior.
Indeed, I almost changed my approach when I did not get the results
I had hoped for early in the Ravell engagement. Nevertheless, my per-
sistence paid off. Finally, the belief in replacing the staff, as opposed
to investing in its knowledge, results from a faulty generalization. I
found that most of the IT staff had much to contribute to the orga-
nization and, ultimately, to help transform the culture. Subsequent
chapters of this book build on the Ravell experience and discuss spe-
cific methods for integrating organizational learning and IT in ways
that can improve competitive advantage.
Another recent perception, which I discuss further in Chapter 4,
is the commitment to “complete” integration. Simply put, IT cannot
select which departments to work with, or choose to participate only
with line managers; as they say, it is “all or nothing at all.” Furthermore,
as Friedman (2007, p. 8) states, “The world is flat.” Certainly, part of
the “flattening” of the world has been initiated by technology, but it
has also created overwhelming challenges for seamless integration of
technology within all operations. The flattening of the world has cre-
ated yet another opportunity for IT to better integrate itself into what
is now an everyday challenge for all organizations.
2
The IT Dilemma
Introduction
We have seen much discussion in recent writing about how informa-
tion technology has become an increasingly significant component of
corporate business strategy and organizational structure (Bradley &
Nolan, 1998; Levine et al., 2000; Siebel, 1999). But, do we know
about the ways in which this significance takes shape? Specifically,
what are the perceptions and realities regarding the importance of
technology from organization leaders, business managers, and core
operations personnel? Furthermore, what forms of participation
should IT assume within the rest of the organization?
The isolation of IT professionals within their companies often pre-
vents them from becoming active participants in the organization.
Technology personnel have long been criticized for their inability to
function as part of the business and are often seen as a group falling
outside business cultural norms (Schein, 1992). They are frequently
stereotyped as “techies” and segregated into areas of the business
where they become marginalized and isolated from the rest of the
organization. It is my experience, based on case studies such as the
one reviewed in Chapter 1 (the Ravell Corporation), that if an orga-
nization wishes to absorb its IT department into its core culture, and
if it wishes to do so successfully, the company as a whole must be pre-
pared to consider structural changes and to seriously consider using
organizational learning approaches.
The assimilation of technical people into an organization presents
a special challenge in the development of true organizational learning
practices (developed more fully in Chapter 3). This challenge stems
from the historical separation of a special group that is seen as stand-
ing outside the everyday concerns of the business. IT is generally
acknowledged as having a key support function in the organization as
a whole. However, empirical studies have shown that it is a challenging
endeavor to successfully integrate IT personnel into the learning fold
and to do so in such a way that they not only are accepted, but also
understood to be an important part of the social and cultural struc-
ture of the business (Allen & Morton, 1994; Cassidy, 1998; Langer,
2007; Schein, 1992; Yourdon, 1998).
In his book In Over Our Heads, Kegan (1994) discusses the
challenges of dealing with individual differences. IT personnel have been
consistently regarded as "different": as outsiders who do not
fit easily into the mainstream organization. Perhaps because
of their technical practices, which may at times seem "foreign," or
because of perceived differences in their values, IT personnel can
become marginalized, imagined as outside the core social structures
of business. As in any social structure, marginalization can result in
the withdrawal of the individual from the community (Schlossberg,
1989). As a result, many organizations are choosing to outsource their
IT services rather than confront and address the issues of cultural
absorption and organizational learning. The outsourcing alternative
tends to further distance the IT function from the core organiza-
tion, thus increasing the effects of marginalization. Not only does the
outsourcing of IT personnel separate them further from their peers,
but it also invariably robs the organization of a potentially important
contributor to the social growth and organizational learning of the
business. For example, technology personnel should be able to offer
insight into how technology can support further growth and learning
within the organization. In addition, IT personnel are usually trained
to take a logical approach to problem solving; as a result, they should
be able to offer a complementary focus on learning. Hence, the inte-
gration of IT staff members into the larger business culture can offer
significant benefits to an organization in terms of learning and orga-
nizational growth.
Some organizations have attempted to improve communications
between IT and non-IT personnel through the use of an intermedi-
ary who can communicate easily with both groups. This intermediary
is known in many organizations as the business analyst. Typically, the
business analyst will take responsibility for the interface between IT
and the larger business community. Although a business analyst may
help facilitate communication between IT and non-IT personnel,
this arrangement cannot help but carry the implication that different
“languages” are spoken by these two groups and, by extension, that
direct communication is not possible. Therefore, the use of such an
intermediary suffers the danger of failing to promote integration
between IT and the rest of the organization; in fact, it may serve to
keep the two camps separate. True integration, in the form of direct
contact between IT and non-IT personnel, represents a greater chal-
lenge for an organization than this remedy would suggest.
Recent Background
Since the 1990s, IT has been seen as a variable with great potential
to reinvent business. Aspects of this promise affected
many of the core business rules used by successful chief executives and
business managers. While organizations have used IT for the process-
ing of information, decision-support processing, and order processing,
the impact of the Internet and e-commerce systems has initiated
revolutionary responses in every business sector. This economic phe-
nomenon became especially self-evident with the formation of dot-coms
in the mid- and late 1990s. The advent of this phenomenon stressed
the need to challenge fundamental business concepts. Many financial
wizards surmised that new technologies were indeed changing the very
infrastructure of business, affecting how businesses would operate and
compete in the new millennium. Much of this hoopla seemed justified
by the extraordinary potential that technology offered, particularly with
respect to the revolutionizing of old-line marketing principles, for it
was technology that came to violate what was previously thought to be
protected market conditions and sectors. Technology came to reinvent
these business markets and to allow new competitors to cross market in
sectors they otherwise could not have entered.
With this new excitement also came fear— fear that fostered unnat-
ural and accelerated entry into technology because any delay might
sacrifice important new market opportunities. Violating some of their
traditional principles, many firms invested in creating new
organizations that would "incubate" and, eventually, capture large market
segments using the Internet as the delivery vehicle. By 2000, many of
these dot-coms were in trouble, and it became clear that their notion
of new business models based on the Internet contained significant
flaws and shortfalls. As a result of this crisis, the role and valuation
of IT are again going through a transformation, and once more we are
skeptical about the value IT can provide a business and about the way
to measure the contributions of IT.
IT in the Organizational Context
Technology not only plays a significant role in workplace operations,
but also continues to increase its relevance among other traditional
components of any business, such as operations, accounting, and
marketing (Earl, 1996b; Langer, 2001a; Schein, 1992). Given this
increasing relevance, IT gains significance in relation to
1. The impact it bears on organizational structure
2. The role it can assume in business strategy
3. The ways in which it can be evaluated
4. The extent to which chief executives feel the need to manage
operational knowledge and thus to manage IT effectively
IT and Organizational Structure
Sampler’s (1996) research explores the relationship between IT and
organizational structure. His study indicated that there is no clear-cut
relationship that has been established between the two. However, he
concluded that there are five principal positions that IT can take in
this relationship:
1. IT can lead to centralization of organizational control.
2. Conversely, IT can lead to decentralization of organizational
control.
3. IT can bear no impact on organizational control, its signifi-
cance being based on other factors.
4. Organizations and IT can interact in an unpredictable
manner.
5. IT can enable new organizational arrangements, such as net-
worked or virtual organizations.
According to Sampler (1996), the pursuit of explanatory models for
the relationship between IT and organizational structure continues
to be a challenge, especially since IT plays dual roles: it both
enhances and constrains the capabilities of workers within
the organization, and because of this, it also possesses the ability
to create a unique cultural component. While both roles are active,
their impact on the organization cannot be predicted; instead, they
evolve as unique social norms within the organization. Because IT
has changed so dramatically over the past decades, it continues to be
difficult to compare prior research on the relationship between IT and
organizational structure.
Earl (1996a) studied the effects of applying business process reen-
gineering (BPR) to organizations. BPR is a process that organizations
undertake to determine how best to use technology to improve business
performance. Earl concludes that BPR is "an unfortunate title: it
does not reflect the complex nature of either the distinctive underpin-
ning concept of BPR [i.e., to reevaluate methods and rules of business
operations] or the essential practical challenges to make it happen
[i.e., the reality of how one goes about doing that]” (p. 54).
In my 2001 study of the Ravell Corporation (“Fixing Bad Habits,”
Langer, 2001b), I found that BPR efforts require buy-in from business
line managers, and that such efforts inevitably require the adaptation
by individuals of different cultural norms and practices.
Schein (1992) recognizes that IT culture represents a subculture in
collision with many others within an organization. He concludes that if
organizations are to be successful in using new technologies in a global
context, they must cope with ceaseless flows of information to ensure
organizational health and effectiveness. His research indicates that chief
executive officers (CEOs) have been reluctant to implement a new sys-
tem of technology unless their organizations felt comfortable with it and
were ready to use it. While many CEOs were aware of cost and effi-
ciency implications in using IT, few were aware of the potential impact
on organizational structure that could result from “adopting an IT view
of their organizations” (p. 293). Such results suggest that CEOs need
to be more active and more cognizant than they have been of potential
shifts in organizational structure when adopting IT opportunities.
The Role of IT in Business Strategy
While many chief executives recognize the importance of IT in
the day-to-day operations of their business, their experience with
attempting to utilize IT as a strategic business tool has been frustrating.
Typical executive complaints about IT, according to Bensaou and
Earl (1998), fall into five problem areas:
1. A lack of correspondence between IT investments and busi-
ness strategy
2. Inadequate payoff from IT investments
3. The perception of too much “technology for technology’s
sake”
4. Poor relations between IT specialists and users
5. The creation of system designs that fail to incorporate users’
preferences and work habits
McFarlan created a strategic grid (as presented in Applegate et al.,
2003) designed to assess the impact of IT on operations and strategy.
The grid shows that IT has maximum value when it affects both oper-
ations and core business objectives. Based on McFarlan’s hypothesis,
Applegate et al. established five key questions about IT that may be
used by executives to guide strategic decision making:
1. Can IT be used to reengineer core value activities, and change
the basis of competition?
2. Can IT change the nature of the relationship, and the balance
of power, between buyers and sellers?
3. Can IT build or reduce barriers to entry?
4. Can IT increase or decrease switching costs?
5. Can IT add value to existing products and services, or create
new ones?
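The grid's logic can be sketched in code. This is an illustrative sketch, not from the text: the quadrant names (support, factory, turnaround, strategic) follow common presentations of McFarlan's strategic grid, and the 0-1 judgment scale with a 0.5 cutoff between "low" and "high" dependence are assumptions made for the example.

```python
# Illustrative sketch of McFarlan's strategic grid. The quadrant names
# follow common presentations of the grid; the 0-1 scale and the 0.5
# cutoff between "low" and "high" dependence are assumptions.

def mcfarlan_quadrant(operational_impact: float, strategic_impact: float) -> str:
    """Place a firm on the strategic grid from two judgment scores (0-1)."""
    high_ops = operational_impact >= 0.5
    high_strat = strategic_impact >= 0.5
    if high_ops and high_strat:
        return "strategic"   # IT drives both operations and competition
    if high_ops:
        return "factory"     # vital day to day, less so for future strategy
    if high_strat:
        return "turnaround"  # IT matters mainly for future direction
    return "support"         # IT is a back-office utility

# Per McFarlan's hypothesis, IT has maximum value in the quadrant where it
# affects both operations and core business objectives:
print(mcfarlan_quadrant(0.9, 0.8))  # -> strategic
```

A firm classified as "strategic" would, on this reading, warrant the maximum focus from senior management described below.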
The research and analysis conducted by McFarlan and Applegate,
respectively, suggest that when operational strategy and its results
are maximized, IT is given its highest valuation as a tool that can
transform the organization. It then receives the maximum focus
from senior management and board members. However, Applegate
et al. (2003) also focus on the risks of using technology. These risks
increase when executives have a poor understanding of competitive
dynamics, when they fail to understand the long-term implications
of a strategic system that they have launched, or when they fail to
account for the time, effort, and cost required to ensure user adop-
tion, assimilation, and effective utilization. Applegate’s conclusion
underscores the need for IT management to educate senior man-
agement, so that the latter will understand the appropriate indi-
cators for what can maximize or minimize their investments in
technology.
Szulanski and Amin (2000) claim that while emerging technologies
shrink the window in which any given strategy can be implemented,
if the strategy is well thought out, it can remain viable. Mintzberg’s
(1987) research suggests that it would be useful to think of strategy as
an art, not a science. This perspective is especially true in situations
of uncertainty. The rapidly changing pace of emerging technologies,
we know, puts a strain on established approaches to strategy: in fast-
moving environments, which demand sophisticated organizational
infrastructure and capabilities, it becomes increasingly difficult to
implement technological strategies comfortably.
Ways of Evaluating IT
Firms have been challenged to find a way to best evaluate IT,
particularly using traditional return on investment (ROI) approaches.
Unfortunately, in this regard, many components of IT do not generate
direct returns. Cost allocations based on overhead formulas (e.g., costs
of IT as a percentage of revenues) are not applicable to most IT spend-
ing needs. Lucas (1999) established nonmonetary methods for evalu-
ating IT. His concept of conversion effectiveness places value on the
ability of IT to complete its projects on time and within budget.
This alone is a sufficient factor for providing ROI, assuming that the
project was approved for valid business reasons. He called this overall
process for evaluation the “garbage can” model. It allows organizations
to present IT needs through a funneling pipeline of conversion effec-
tiveness that filters out poor technology plans and that can determine
which projects will render direct and indirect benefits to the organiza-
tion. Indirect returns, according to Lucas, are those that do not pro-
vide directly measurable monetary returns but do provide significant
value that can be measured using his IT investment opportunities
matrix. Utilizing statistical probabilities of returns, the opportunities
matrix provides an effective tool for evaluating the impact of indirect
returns.
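The core idea of weighting indirect returns by statistical probabilities can be sketched as follows. This is only a simplified reading of the probability-weighted logic behind Lucas's opportunities matrix, not his actual instrument, and the project names, probabilities, and dollar values are invented for illustration.

```python
# Illustrative sketch only: project names, probabilities, and dollar values
# are invented. This shows probability-weighted expected returns, a
# simplified reading of the idea behind Lucas's opportunities matrix.

def expected_return(outcomes):
    """Probability-weighted value of a project's possible returns.

    `outcomes` is a list of (probability, value) pairs; any probability
    mass not listed is treated as "no return."
    """
    return sum(p * v for p, v in outcomes)

# Candidate projects with possible (probability, value in $K) outcomes,
# covering both direct and indirect benefits.
projects = {
    "order-entry rewrite": [(0.7, 400), (0.2, 100)],
    "intranet portal": [(0.5, 150), (0.3, 50)],
}

# Rank projects by expected return, highest first.
ranked = sorted(projects, key=lambda name: expected_return(projects[name]),
                reverse=True)
for name in ranked:
    print(f"{name}: expected return {expected_return(projects[name]):.0f}K")
```

In this toy ranking, a project with mostly indirect benefits can still surface near the top if its probability-weighted value is high, which is the point of evaluating indirect returns at all.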
Executive Knowledge and Management of IT
While much literature and research have been produced on how IT
needs to participate in and bring value to an organization, there has
been relatively little analysis conducted on what non-IT chief execu-
tives need to know about technology. Applegate et al. (2003) suggest
that non-IT executives need to understand how to differentiate new
technologies from older ones, and how to gauge the expected impact
of these technologies on the businesses in which the firm competes
for market share. This is to say that technology can change the
relationship between customer and vendor and thus should be examined
as a potential for providing competitive advantage. The authors state
that non-IT business executives must become more comfortable with
technology by actively participating in technology decisions rather than
delegating them to others. They need to question experts as they would
in the financial areas of their businesses. Lou Gerstner, former CEO
of IBM, is a good example of a non-IT chief executive who acquired
sufficient knowledge and understanding of a technology firm. He was
then able to form a team of executives who better understood how to
develop the products, services, and overall business strategy of the firm.
Allen and Percival (2000) also investigate the importance of non-
IT executive knowledge and participation with IT: “If the firm lacks
the necessary vision, insights, skills, or core competencies, it may be
unwise to invest in the hottest [IT] growth market” (p. 295). The
authors point out that success in using emerging technologies is dif-
ferent from success in other traditional areas of business. They con-
cluded that non-IT managers need to carefully consider expected
synergies to determine whether an IT investment can be realized and,
especially, whether it can efficiently earn its cost of capital.
Recent studies have focused on four important components in the
linking of technology and business: its relationship to organizational
structure, its role in business strategy, the means of its evaluation, and
the extent of non-IT executive knowledge in technology. The chal-
lenge in determining the best organizational structure for IT is posed
by the accelerating technological advances since the 1970s and by the
difficulty in comparing organizational models to consistent business
cases. Consequently, there is no single organizational structure that
has been adopted by businesses.
While most chief executives understand the importance of using
technology as part of their business strategy, they express frustra-
tion in determining how to effectively implement a technology-based
strategic approach. This frustration results from difficulties in under-
standing how IT investments relate to other strategic business issues,
from difficulty in assessing payoff and performance of IT generally
and from perceived poor relations between IT and other departments.
Because most IT projects do not render direct monetary returns, exec-
utives find themselves challenged to understand technology investments.
They have difficulty measuring value since traditional ROI formulas are
not applicable. Thus, executives would do better to focus on valuing tech-
nology investments by using methods that can determine payback based
on a matrix of indirect returns, which do not always include monetary
sources. There is a lack of research on the question of what general knowl-
edge non-IT executives need to have to effectively manage the strategic
use of technology within their firms. Non-IT chief executives are often
not engaged in day-to-day IT activities, and they often delegate dealing
with strategic technology issues to other managers. The remainder of this
chapter examines the issues raised by the IT dilemma in its various guises,
especially as they become relevant to, and are confronted from, the top
management or chief executive point of view.
IT: A View from the Top
To investigate further the critical issues facing IT, I conducted a study
in which I personally interviewed over 40 chief executives in vari-
ous industries, including finance/investment, publishing, insurance,
wholesale/retail, and hotel management. Executives interviewed
were either the CEO or president of their respective corporations. I
canvassed a population of New York-based midsize corporations for
this interview study. Midsize firms, in this case, comprise businesses
of between 200 and 500 employees. Face-to-face interviews were
conducted to allow participants the opportunity to articulate their
responses, in contrast to answering printed survey questions;
executives were therefore able to expand on, and clarify, their responses to
questions. An interview guide (see questions in Tables 2.1 through
2.3) was designed to raise issues relevant to the challenges of using
technology, as reported in the recent research literature, and to
consider significant phenomena that could affect changes in the uses
of technology, such as the Internet. The interview discussions focused
on three sections: (1) chief executive perception of the role of IT, (2)
management and strategic issues, and (3) measuring IT performance
and activities. The results of the interviews are summarized next.
Table 2.1 Perception and Role of IT

Question 1. How do you define the role and the mission of IT in your firm?
Analysis: Fifty-seven percent responded that their IT organizations were reactive and did not really have a mission. Twenty-eight percent had an IT mission that was market driven; that is, their IT departments were responsible for actively participating in marketing and strategic processes.

Question 2. What impact has the Internet had on your business strategy?
Analysis: Twenty-eight percent felt the impact was insignificant, while 24% felt it was critical. The remaining 48% felt that the impact of the Internet was significant to daily transactions.

Question 3. Does the firm have its own internal software development activity? Do you develop your own in-house software or use software packages?
Analysis: Seventy-six percent had an internal development organization. Eighty-one percent had internally developed software.

Question 4. What is your opinion of outsourcing? Do you have the need to outsource technology? If so, how is this accomplished?
Analysis: Sixty-two percent had outsourced certain aspects of their technology needs.

Question 5. Do you use consultants to help formulate the role of IT? If yes, what specific roles do they play? If not, why?
Analysis: Sixty-two percent of the participants used consultants to assist them in formulating the role of IT.

Question 6. Do you feel that IT will become more important to the strategy of the business? If yes, why?
Analysis: Eighty-five percent felt that IT had recently become more important to the strategic planning of the business.

Question 7. How is the IT department viewed by other departments? Is the IT department liked, or is it marginalized?
Analysis: Twenty-nine percent felt that IT was still marginalized. Another 29% felt it was not very integrated. Thirty-eight percent felt IT was sufficiently integrated within the organization, but only one chief executive felt that IT was very integrated with the culture of his firm.

Question 8. Do you feel there is too much "hype" about the importance and role of technology?
Analysis: Fifty-three percent felt that there was no hype. However, 32% felt that there were levels of hype attributed to the role of technology; 10% felt it was "all hype."

Question 9. Have the role and the uses of technology in the firm significantly changed over the last 5 years? If so, what are the salient changes?
Analysis: Fourteen percent felt little had changed, whereas 43% stated that there were moderate changes. Thirty-eight percent stated there was significant change.
Table 2.2 Management and Strategic Issues

Question 1. What is the most senior title held by someone in IT? Where does this person rank on the organization hierarchy?
Analysis: Sixty-six percent called the highest position chief information officer (CIO). Ten percent used managing director, while 24% used director as the highest title.

Question 2. Does IT management ultimately report to you?
Analysis: Fifty percent of IT leaders reported directly to the chief executive (CEO). The other half reported to either the chief financial officer (CFO) or the chief operating officer (COO).

Question 3. How active are you in working with IT issues?
Analysis: Fifty-seven percent stated that they are very active— on a weekly basis. Thirty-eight percent were less active or inconsistently involved, usually stepping in when an issue becomes problematic.

Question 4. Do you discuss IT strategy with your peers from other firms?
Analysis: Eighty-one percent did not communicate with peers at all. Only 10% actively engaged in peer-to-peer communication about IT strategy.

Question 5. Do IT issues get raised at board, marketing, and/or strategy meetings?
Analysis: Eighty-six percent confirmed that IT issues were regularly discussed at board meetings. However, only 57% acknowledged IT discussion during marketing meetings, and only 38% confirmed like discussions at strategic sessions.

Question 6. How critical is IT to the day-to-day business?
Analysis: Eighty-two percent of the chief executives felt it was very significant or critical to the business.
Table 2.3 Measuring IT Performance and Activities

Question 1. Do you have any view of how IT should be measured and accounted for?
Analysis: Sixty-two percent stated that they had a view on measurement; however, there was significant variation in how executives defined measurement.

Question 2. Are you satisfied with IT performance in the firm?
Analysis: There was significant variation in IT satisfaction. Only 19% were very satisfied. Thirty-three percent were satisfied, another 33% were less satisfied, and 14% were dissatisfied.

Question 3. How do you budget IT costs? Is it based on a percentage of gross revenues?
Analysis: Fifty-seven percent stated that they did not use gross revenues in their budgeting methodologies.

Question 4. To what extent do you perceive technology as a means of increasing marketing or productivity or both?
Analysis: Seventy-one percent felt that technology was a significant means of increasing both marketing and productivity in their firms.

Question 5. Are Internet/Web marketing activities part of the IT function?
Analysis: Only 24% stated that Internet/Web marketing efforts reported directly to the IT organization.
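The percentage summaries in the tables are simple tallies of categorical interview answers. The following sketch shows how such a tally works; the response data are invented, and the question wording is only borrowed from Table 2.2 for illustration.

```python
# Illustrative only: the answers below are invented. With roughly 40
# participants, each respondent moves a result by 2-3 percentage points,
# which is worth keeping in mind when reading the tables above.

from collections import Counter

def summarize(responses):
    """Tally raw answers into {answer: percent of respondents}, whole percents."""
    counts = Counter(responses)
    total = len(responses)
    return {answer: round(100 * n / total) for answer, n in counts.items()}

# Hypothetical raw answers to a question like "How critical is IT to the
# day-to-day business?" for 42 interviewees.
answers = (["critical"] * 34
           + ["moderately significant"] * 5
           + ["not critical"] * 3)
print(summarize(answers))
```

At this sample size, rounding to whole percentages is about as much precision as the data support, which is how the tables above present their results.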
Section 1: Chief Executive Perception of the Role of IT
This section of the interview focuses on chief executive perceptions of
the role of IT within the firm. For the first question, about the role
and mission of IT, over half of the interviewees responded in ways
that suggested their IT organizations were reactive, without a strate-
gic mission. One executive admitted, “IT is not really defined. I guess
its mission is to meet our strategic goals and increase profitability.”
Another response betrays a narrowly construed understanding of its
potential: “The mission is that things must work— zero tolerance for
failure.” These two responses typify the vague and generalized percep-
tion that IT “has no explicit mission” except to advance the important
overall mission of the business itself. Little over a quarter of respon-
dents could confirm a market-driven role for IT; that is, actively par-
ticipating in marketing and strategic processes. Question 2, regarding
the impact of the Internet on business strategy, drew mixed responses.
Some of these revealed the deeply reflective challenges posed by the
Internet: “I feel the Internet forces us to take a longer-term view and a
sharper focus to our business.” Others emphasized its transformative
potential: “The Internet is key to decentralization of our offices and
business strategy.”
Questions 3 and 4 focused on the extent to which firms have their own
software development staffs, whether they use internally developed or
packaged software, and whether they outsource IT services. Control over
internal development of systems and applications remained important to
the majority of chief executives: “I do not like outsourcing— surrender
control, and it’s hard to bring back.” Almost two-thirds of the partici-
pants employed consultants to assist them in formulating the role of IT
within their firms but not always without reservation: “Whenever we
have a significant design issue we bring in consultants to help us— but
not to do actual development work." Only a few were downright
skeptical: "I try to avoid consultants— what is their motivation?" The use
of outsourcing remains low in midsize firms, compared to the recent
increase in IT outsourcing abroad. The lower use could be related to
the initial costs and management overhead required to properly
implement outsourced operations in foreign countries.
A great majority of chief executives recognized some form of the
strategic importance of IT to business planning: “More of our business
is related to technology and therefore I believe IT is more important
to strategic planning.” Still, this sense of importance remained some-
what intuitive: “I cannot quantify how IT will become more strategic
to the business planning— but I sense that job functions will be dra-
matically altered.” In terms of how IT is viewed by other departments
within the firm, responses were varied. A little over a third of respon-
dents felt IT was reasonably integrated within the organization: “The
IT department is vitally important— but rarely noticed.” The major-
ity of respondents, however, recognized a need for greater integra-
tion: “IT was marginalized— but it is changing. While IT drives the
system— it needs to drive more of the business.” Some articulated
clearly the perceived problems: “IT needs to be more proactive— they
do not seem to have good interpersonal skills and do not understand
corporate politics.” A few expressed a sense of misgiving (“IT people
are strange— personality is an issue”) and even a sense of hopeless-
ness: “People hate IT— particularly over the sensitivity of the data. IT
sometimes is viewed as misfits and incompetent.”
Question 8 asked participants whether they felt there was too
much “hype” attributed to the importance of technology in business.
Over half responded in the negative, although not without reserva-
tion: “I do not think there is too much hype— but I am disappointed.
I had hoped that technology at this point would have reduced paper,
decreased cost— it just has not happened.” Others felt that there is
indeed some degree of sensationalism: “I definitely think there is too
much hype— everyone wants the latest and greatest.” Hype in many
cases can be related to a function of evaluation, as in this exclama-
tion: “The hype with IT relates more to when will we actually see
the value!” The last question in this section asks whether the uses of
technology within the firm had significantly changed over the last
five years. A majority agreed that it had: “The role of IT has changed
significantly in the last five years—we need to stay up-to-date because
we want to carry the image that we are 'on the ball'." Many of these
stressed the importance of informational flows: "I find the 'I'
[information] part to be more and more important and the 'T'
[technology] to be diminishing in importance." Some actively downplayed the
significance: “I believe in minimizing the amount of technology we
use—people get carried away.”
Section 2: Management and Strategic Issues
This section focuses on questions pertaining to executive and man-
agement organizational concerns. The first and second questions
asked executives about the most senior title held by an IT officer
and about the reporting structure for IT. Two-thirds of the par-
ticipants ranked their top IT officer as a chief information officer
(CIO). In terms of organizational hierarchy, half of the IT leaders
were at the second tier, reporting directly to the CEO or presi-
dent, while the other half were at the third tier, reporting either
to the chief financial officer (CFO) or to the chief operating offi-
cer (COO). As one CEO stated, “Most of my activity with IT is
through the COO. We have a monthly meeting, and IT is always
on the agenda.”
The third question asked executives to consider their level of
involvement with IT matters. Over half claimed a highly active rela-
tionship, engaging on a weekly basis: “I like to have IT people close
and in one-on-one interactions. It is not good to have artificial barri-
ers.” For some, levels of involvement may be limited: “I am active with
IT issues in the sense of setting goals.” A third of participants claimed
less activity, usually becoming active when difficulties arose. Question
four asked whether executives spoke to their peers at other firms about
technology issues. A high majority did not pursue this avenue of
communication with their peers. Only one in 10 actively engaged in it.
Question 5 asked about the extent to which IT issues were
discussed at board meetings, marketing meetings, and business
strategy sessions. Here, a great majority confirmed that there was
regular discussion regarding IT concerns, especially at board meet-
ings. A smaller majority attested to IT discussions during market-
ing meetings. Over a third reported that IT issues maintained a
presence at strategic sessions. The higher incidence at board meet-
ings may still be attributable to the effects of Year 2000 (Y2K)
preparations. The final question in this section concerned the level
of criticality for IT in the day-to-day operations of the business. A
high majority of executives responded affirmatively in this regard:
“IT is critical to our survival, and its impact on economies of scale
is significant.”
Section 3: Measuring IT Performance and Activities
This section is concerned with how chief executives measured IT per-
formance and activities within their firms. The first question of this
section asked whether executives had a view about how IT performance
should be measured. Almost two-thirds affirmed having some formal
or informal way of measuring performance: “We have no formal pro-
cess of measuring IT other than predefined goals, cost constraints, and
deadlines.” Their responses demonstrated great variation, sometimes
leaning on cynicism: “I measure IT by the number of complaints I
get.” Many were still grappling with this challenge: “Measuring IT is
unqualified at this time. I have learned that hours worked is not the way
to measure IT— it needs to be more goal-oriented.” Most chief executives
expressed some degree of quandary: “We do not feel we know
enough about how IT should be measured.” Question two asked execu-
tives to rate their satisfaction with IT performance. Here, also, there
was significant variation. A little more than half expressed some degree
of satisfaction: “Since 9/11 IT has gained a lot of credibility because of
the support that was needed during a difficult time.” Slightly fewer than
half revealed a degree of dissatisfaction: “We had to overhaul our IT
department to make it more customer-service oriented.”
Question three concerned budgeting; that is, whether or not chief
executives budgeted IT costs as a percentage of gross revenues. Over
half denied using gross revenues in their budgeting method: “When
handling IT projects we look at it on a request-by-request basis.”
The last two questions asked chief executives to assess the impact of
technology on marketing and productivity. Almost three quarters of
the participants felt that technology represented a significant means of
enhancing both marketing and productivity. Some maintained a cer-
tainty of objective: “We try to get IT closer to the customer— having
them understand the business better.” Still, many had a less-defined
sense of direction: “I have a fear of being left behind, so I do think IT
will become more important to the business.” And others remained
caught in uncertainty: “I do not fully understand how to use technol-
ogy in marketing— but I believe it’s there.” Chief executive certainty,
in this matter, also found expression in the opposite direction: “IT
will become less important— it will be assumed as a capability and a
service that companies provide to their customers.” Of the Internet/
Web marketing initiatives, only one quarter reported directly
to the IT organization: “IT does not drive the Web activities because
they do not understand the business.” Often, these two were seen as
separate or competing entities of technology: “Having Web develop-
ment report to IT would hinder the Internet business’s growth poten-
tial.” Yet, some might be willing to explore a synergistic potential:
“We are still in the early stages of understanding how the Internet
relates to our business strategy and how it will affect our product line.”
General Results
Section 1 revealed that the matter of defining a mission for the IT
organization remains as unresolved as finding a way to reckon with the
potential impact of IT on business strategy. Executives still seemed to
be at a loss on the question of how to integrate IT into the workplace— a
human resource as well as a strategic issue. There was uncertainty regard-
ing the dependability of the technology information received. Most
agreed, however, in their need for software development departments to
support their internally developed software, in their need to outsource
certain parts of technology, and in their use of outside consultants to
help them formulate the future activities of their IT departments.
Section 2 showed that while the amount of time that executives spent
on IT issues varied, there was a positive correlation between a structure in
which IT managers reported directly to the chief executive and the degree
of activity that executives stated they had with IT matters. Section 3
showed that chief executives understood the potential value that technol-
ogy can bring to the marketing and productivity of their firms. They did
not believe, however, that technology can go unmeasured; there needs
to be some rationale for allotting a spending figure in the budget. For
most of the firms in this study, the use of the Internet as a technological
vehicle for future business was not determined by IT. This suggests that
IT does not manage the marketing aspects of technology, and that it has
not achieved significant integration in strategic planning.
Defining the IT Dilemma
The variations found in this study in terms of where IT reports, how
it is measured, and how its mission is defined were consistent with
existing research. But, the wide-ranging inconsistencies and uncer-
tainties among executives described here left many of them wonder-
ing whether they should be using IT as part of their business strategy
and operations. While this quandary does not in itself suggest an
inadequacy, it does point to an absence of a “best practices” guideline
for using technology strategically. Hence, most businesses lacked a
clear plan on how to evolve IT contributions toward business develop-
ment. Although a majority of respondents felt that IT was critical to
the survival of their businesses, the degree of IT assimilation within
the core culture of organizations still varied. This suggests that the
effects of cultural assimilation lag behind the actual involvement of
IT in the strategic direction of the company.
While Sampler (1996) attributes many operational inconsistencies to
the changing landscape of technology, the findings of this study suggest
that there is also a lack of professional procedures, rules, and established
governance that could support the creation of best practices for the
profession. Bensaou and Earl (1998), on the one hand, have addressed
this concern by taking a pro-Japanese perspective in extrapolating from
five “Western” problems five “general” principles, presumably not cul-
ture bound, and thence a set of “best principles” for managing IT. But
Earl et al. (1995), on the other hand, have sidestepped any attempt to
incorporate Earl’s own inductive approach discussed here; instead, they
favor a market management approach, based on a supply-and-demand
model to “balance” IT management. Of course, best practices already
embody the implicit notion of best principles; however, the problems
confronting executives— the need for practical guidelines— remain. For
instance, this study shows that IT performance is measured in many
different ways. It is this type of practical inconsistency that leaves chief
executives with the difficult challenge of understanding how technol-
ogy decisions can be managed.
On a follow-up call related to this study, for example, a CEO
informed me of a practical yet significant difference she had instituted
since our interview. She stated:
The change in reporting has allowed IT to become part of the main-
stream vision of the business. It now is a fundamental component of all
discussions with human resources, sales and marketing, and accounting.
The change in reporting has allowed for the creation of a critical system,
which has generated significant direct revenues for the business. I attri-
bute this to my decision to move the reporting of technology directly
to me and to my active participation in the uses of technology in our
business.
This is an example of an executive whom Schein (1994) would
call a “change agent”— someone who employs “cognitive redefinition
through scanning,” in this case to elicit the strategic potential of IT.
We might also call this activity reflective thinking (Langer, 2001b).
Schein’s change agents, however, go on to “acknowledge that future
generations of CEOs will have been educated much more thoroughly
in the possibilities of the computer and IT, thus enabling them to take
a hands-on adopter stance” (p. 343). This insight implies a distanc-
ing (“future”) of present learning responsibilities among current chief
executives. The nearer future of this insight may instead be seen in
the development of organizational learning.* These are two areas of
contemporary research that begin to offer useful models in the pursuit
of a best practices approach to the understanding and managing of IT.
If the focus of this latter study was geared toward the evaluation of
IT based on the view of the chief executive, it was, indeed, because
their views necessarily shape the very direction for the organizations
that they manage. Subsequent chapters of this book examine how
the various dilemmas surrounding IT that I have discussed here are
affecting organizations and how organizational learning practices can
help answer many of the issues of today as raised by executives, man-
agers, and operations personnel.
Recent Developments in Operational Excellence
The decline in financial markets in 2009 and the continued increase
in mergers and acquisitions due to global competition have created an
interesting opportunity for IT that reinforces the need for integration
via organizational learning. During difficult economic periods, IT
has traditionally been viewed as a cost center and had its operations
reduced (I discuss this further in Chapter 3, in which I introduce the
concept of drivers and supporters).

* My case study “Fixing Bad Habits” (Langer, 2001b) has shown that integrating
the practices of reflective thinking, to support the development of organizational
learning, has greatly enhanced the adaptation of new technologies, their strategic
valuation to the firm, and their assimilation into the social norms of the business.

However, with the growth in
the role of technology, IT management has now been asked to help
improve efficiency through the use of technology across departments.
That is, IT is emerging as an agent for business transformation in a
much stronger capacity than ever before. This phenomenon has placed
tremendous pressure on the technology executive to align with his or
her fellow executives in other departments and to get them to partici-
pate in cost reductions by implementing more technology. Naturally,
using technology to facilitate cuts to the workforce is often unpopular,
and there has been much bitter fallout from such cross-department
reductions. Technology executives thus face the challenge of position-
ing themselves as the agents of a necessary change. However, opera-
tional excellence is broader than just cutting costs and changing the
way things operate; it is about doing things efficiently and with qual-
ity measures across corporate operations. Now that technology affects
every aspect of operations, it makes sense to charge technology execu-
tives with a major responsibility to get it accomplished.
The assimilation of technology as a core part of the entire orga-
nization is now paramount for survival, and the technology execu-
tive of today and certainly tomorrow will be one who understands
that operational excellence through efficiency must be accomplished
by educating business units in self-managing the process. The IT
executive, then, supports the activity as a leader, not as a cost cut-
ter who invades the business. The two approaches are very different,
and adopting the former can result in significant long-term results in
strategic alignment.
My interviews with CEOs supported this notion: The CEO does
not want to be the negotiator; change must be evolutionary within the
business units themselves. While taking this kind of role in organiza-
tional change presents a new dilemma for IT, it can also be an oppor-
tunity for IT to position itself successfully within the organization.
3
Technology as a Variable and Responsive Organizational Dynamism
Introduction
This chapter focuses on defining the components of technology and
how they affect corporate organizations. In other words, if we step
back momentarily from the specific challenges that information tech-
nology (IT) poses, we might ask the following: What are the generic
aspects of technology that have made it an integral part of strategic and
competitive advantage for many organizations? How do organizations
respond to these generic aspects as catalysts of change? Furthermore,
how do we objectively view the role of technology in this context, and
how should organizations adjust to its short- and long-term impacts?
Technological Dynamism
To begin, technology can be regarded as a variable, independent
of others, that contributes to the life of a business operation. It is
capable of producing an overall, totalizing, yet distinctive, effect on
organizations— it has the unique capacity to create accelerations of
corporate events in an unpredictable way. Technology, in its aspect of
unpredictability, is necessarily a variable, and in its capacity as accel-
erator— its tendency to produce change or advance— it is dynamic.
My contention is that, as a dynamic kind of variable, technology, via
responsive handling or management, can be tapped to play a special
role in organizational development. It can be pressed into service as
the dynamic catalyst that helps bring organizations to maturity in
dealing not only with new technological quandaries, but also with
other agents of change. Change generates new knowledge, which in
turn requires a structure of learning that should, if managed properly,
result in transformative behavior, supporting the continued evolution
of organizational culture. Specifically, technology speeds up events,
such as the expectation of getting a response to an e-mail, and requires
organizations to respond to them in ever-quickening time frames.
Such events are not as predictable as those experienced by individuals
in organizations prior to the advent of new technologies— particu-
larly with the meteoric advance of the Internet. In viewing technology
then as a dynamic variable, and one that requires systemic and cul-
tural organizational change, we may regard it as an inherent, internal
driving force— a form of technological dynamism.
Dynamism is defined as a process or mechanism responsible for the
development or motion of a system. Technological dynamism charac-
terizes the unpredictable and accelerated ways in which technology,
specifically, can change strategic planning and organizational behav-
ior/culture. This change is based on the acceleration of events and
interactions within organizations, which in turn create the need to
better empower individuals and departments. Another way of under-
standing technological dynamism is to think of it as an internal drive
recognized by the symptoms it produces. The new events and interac-
tions brought about by technology are symptoms of the dynamism
that technology manifests. The next section discusses how organiza-
tions can begin to make this inherent dynamism work in their favor
on different levels.
Responsive Organizational Dynamism
The technological dynamism at work in organizations has the power
to disrupt any antecedent sense of comfortable equilibrium or an
unwelcome sense of stasis. It also upsets the balance among the vari-
ous factors and relationships that pertain to the question of how we
might integrate new technologies into the business— a question of
what we will call strategic integration— and how we assimilate the cul-
tural changes they bring about organizationally— a question of what
we call cultural assimilation. Managing the dynamism, therefore, is a
way of managing the effects of technology. I propose that these orga-
nizational ripples, these precipitous events and interactions, can be
addressed in specific ways at the organizational management level.
The set of integrative responses to the challenges raised by technology
is what I am calling responsive organizational dynamism, which will
also receive further explication in the next few chapters. For now, we
need to elaborate the two distinct categories that present themselves
in response to technological dynamism: strategic integration and cul-
tural assimilation. Figure 3.1 diagrams the relationships.
Strategic Integration
Strategic integration is a process that addresses the business- strategic
impact of technology on organizational processes. That is, the
business-strategic impact of technology requires immediate orga-
nizational responses and in some instances zero latency. Strategic
integration recognizes the need to scale resources across traditional
business– geographic boundaries, to redefine the value chain in the
life cycle of a product or service line, and generally to foster more
agile business processes (Murphy, 2002).

Figure 3.1 Responsive organizational dynamism: technology as an
independent variable creates organizational dynamism (an acceleration
of events that require different infrastructures and organizational
processes), which in turn requires strategic integration and cultural
assimilation as responses to its symptoms and implications.

Strategic integration, then, is a way to address the changing
requirements of business processes
caused by the sharp increases in uses of technology. Evolving tech-
nologies have become catalysts for competitive initiatives that create
new and different ways to determine successful business investment.
Thus, there is a dynamic business variable that drives the need for
technology infrastructures capable of greater flexibility and of exhib-
iting greater integration with all business operations.
Historically, organizational experiences with IT investment have
resulted in two phases of measured returns. The first phase often
shows negative or declining productivity as a result of the investment;
in the second phase, we often see a lagging of, although eventual
return to, productivity. The lack of returns in the first phase has been
attributed to the nature of the early stages of technology exploration
and experimentation, which tend to slow the process of organizational
adaptation to technology. The production phase then lags behind
the ability of the organization to integrate new technologies with
its existing processes. Another complication posed by technological
dynamism via the process of strategic integration is a phenomenon we
can call factors of multiplicity— essentially, what happens when several
new technology opportunities overlap and create myriad projects that
are in various phases of their developmental life cycle. Furthermore,
the problem is compounded by lagging returns in productivity, which
are complicated to track and to represent to management. Thus, it is
important that organizations find ways to shorten the period between
investment and technology’s effective deployment. Murphy (2002)
identifies several factors that are critical to bridging this delta:
1. Identifying the processes that can provide acceptable business
returns from new technological investments
2. Establishing methodologies that can determine these processes
3. Finding ways to actually perform and realize expected benefits
4. Integrating IT projects with other projects
5. Adjusting project objectives when changes in the business
require them
Technology complicates these actions, making them more difficult
to resolve; hence the need to manage the complications. To tackle
these compounded concerns, strategic integration can shorten life
cycle maturation by focusing on the following integrating factors:
• Addressing the weaknesses in management organizations in
terms of how to deal with new technologies, and how to bet-
ter realize business benefits
• Providing a mechanism that both enables organizations to
deal with accelerated change caused by technological innova-
tions and integrates them into a new cycle of processing and
handling change
• Providing a strategic learning framework by which every new
technology variable adds to organizational knowledge, par-
ticularly using reflective practices (see Chapter 4)
• Establishing an integrated approach that ties technology
accountability to other measurable outcomes using organiza-
tional learning techniques and theories
To realize these objectives, organizations must be able to
• Create dynamic internal processes that can function on a
daily basis to deal with understanding the potential fit of new
technologies and their overall value to the business
• Provide the discourse to bridge the gaps between IT- and
non-IT-related investments and uses into an integrated system
• Monitor investments and determine modifications to the life
cycle
• Implement various organizational learning practices, includ-
ing learning organization, knowledge management, change
management, and communities of practice, all of which help
foster strategic thinking and learning that can be linked to
performance (Gephardt & Marsick, 2003)
Another important aspect of strategic integration is what Murphy
(2002) calls “consequential interoperability,” in which “the consequences
of a business process” are understood to “dynamically trigger
integration” (p. 31). This integration occurs in what he calls the five
pillars of benefits realization:
1. Strategic alignment: The alignment of IT strategically with
business goals and objectives.
2. Business process impact: The impact on the need for the organi-
zation to redesign business processes and integrate them with
new technologies.
3. Architecture: The actual technological integration of appli-
cations, databases, and networks to facilitate and support
implementation.
4. Payback: The basis for computing return on investment (ROI)
from both direct and indirect perspectives.
5. Risk: Identifying the exposure for underachievement or fail-
ure in the technology investment.
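The payback pillar, which distinguishes direct from indirect returns, can be illustrated with a small arithmetic sketch. The function and figures below are hypothetical and not drawn from the text; they only show how the two perspectives might be folded into a single ROI number:

```python
# Illustrative sketch (hypothetical figures): combining direct and
# indirect benefits into one return-on-investment measure, in the
# spirit of Murphy's payback pillar.

def roi(direct_benefits, indirect_benefits, total_cost):
    """Return ROI as a fraction: (total benefits - cost) / cost."""
    total_benefits = direct_benefits + indirect_benefits
    return (total_benefits - total_cost) / total_cost

# Invented numbers for a single technology project.
direct = 250_000    # e.g., revenue attributable to the new system
indirect = 80_000   # e.g., estimated value of staff time saved
cost = 200_000      # implementation plus first-year operating cost

print(f"ROI: {roi(direct, indirect, cost):.0%}")  # ROI: 65%
```

Nothing in this sketch settles the harder question the text raises, namely which indirect benefits deserve a monetary estimate at all; it only shows that, once estimated, they change the computed return materially.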
Murphy’s (2002) pillars are useful in helping us understand how
technology can engender the need for responsive organizational dyna-
mism (ROD), especially as it bears on issues of strategic integration.
They also help us understand what becomes the strategic integration
component of ROD. His theory on strategic alignment and business
process impact supports the notion that IT will increasingly serve as an
undergirding force, one that will drive enterprise growth by identify-
ing the initiators (such as e-business on the Internet) that best fit busi-
ness goals. Many of these initiators will be accelerated by the growing
use of e-business, which becomes the very driver of many new market
realignments. This e-business realignment will require the ongoing
involvement of executives, business managers, and IT managers. In
fact, the Gartner Group forecasted that 70% of new software applica-
tion investments and 5% of new infrastructure expenditures by 2005
would be driven by e-business. Indeed, this has occurred and contin-
ues to expand.
The combination of evolving business drivers with accelerated and
changing customer demands has created a business revolution that
best defines the imperative of the strategic integration component of
ROD. The changing and accelerated way businesses deal with their
customers and vendors requires a new strategic integration to become
a reality rather than remain a concept discussed but affecting little
action. Without action directed toward new strategic integration,
organizations would lose competitive advantage, which would affect
profits. Most experts see e-business as the mechanism that will ulti-
mately require the integrated business processes to be realigned, thus
providing value to customers and modifying the customer– vendor
relationship. The driving force behind this realignment emanates from
the Internet, which serves as the principal accelerator of the change
in transactions across all businesses. The general need to optimize
resources forces organizations to rethink and to realign business pro-
cesses to gain access to new business markets.
Murphy’s (2002) pillar of architecture brings out yet another aspect
of ROD. By architecture we mean the focus on the effects that technol-
ogy has on existing computer applications or legacy systems (old exist-
ing systems). Technology requires existing IT systems to be modified
or replacement systems to be created that will mirror the new busi-
ness realignments. These changes respond to the forces of strategic
integration and require business process reengineering (BPR) activi-
ties, which represent the reevaluation of existing systems based on
changing business requirements. It is important to keep in mind the
acceleration factors of technology and to recognize the amount of
organizational effort and time that such projects take to complete. We
must ask the following question: How might organizations respond to
these continual requirements to modify existing processes? I discuss
in other chapters how ROD represents the answer to this question.
Murphy’s (2002) pillar of direct return is somewhat limited and nar-
row because not all IT value can be associated with direct returns, but
it is important to discuss. Technology acceleration is forcing organiza-
tions to deal with broader issues surrounding what represents a return
from an investment. The value of strategic integration relies heavily on
the ability of technology to encapsulate itself within other departments
where it ultimately provides its value. We show in Chapter 4 that
this issue also has significance in organizational formation. What this
means is simply that value can be best determined within individual
business units at the microlevel and that these appropriate-level busi-
ness units also need to make the case for why certain investments need
to be pursued. There are also paybacks that are indirect; for example,
Lucas (1999) demonstrates that many technology investments are non-
monetary. The IT department (among others) becomes susceptible to
great scrutiny and subject to budgetary cutbacks during economically
difficult times. This does not suggest that IT “hide” itself but rather
that its investment be integrated within the unit where it provides the
most benefit. Notwithstanding the challenge to map IT expenditures
to their related unit, there are always expenses that are central to all
departments, such as e-mail and network infrastructure. These types
of expenses can rarely provide direct returns and are typically allocated
across departments as a cost of doing business.
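One common, structured way to handle such shared expenses is proportional allocation, for example by department headcount. This is a sketch of the general practice, not a method prescribed in the text, and the departments and figures are invented for illustration:

```python
# Hypothetical sketch: allocating a shared infrastructure cost (such as
# e-mail and network services) across departments in proportion to
# headcount, as "a cost of doing business."

def allocate(total_cost, headcounts):
    """Split total_cost across departments proportionally to headcount."""
    staff = sum(headcounts.values())
    return {dept: total_cost * n / staff for dept, n in headcounts.items()}

shares = allocate(120_000, {"sales": 30, "operations": 50, "finance": 20})
print(shares)  # {'sales': 36000.0, 'operations': 60000.0, 'finance': 24000.0}
```

Other allocation keys (number of devices, network usage, revenue share) follow the same pattern; the point is that a central expense with no direct return still gets a consistent, defensible charge-back rule.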
Because of the increased number of technology opportuni-
ties, Murphy’s (2002) risk pillar must be a key part of strategic
integration. The concept of risk assessment is not new to an organiza-
tion; however, it is somewhat misunderstood as it relates to technology
assessment. Technology assessment, because of the acceleration factor,
must be embedded within the strategic decision-making process. This
can only be accomplished by having an understanding of how to align
technology opportunities for business change and by understanding
the cost of forgoing the opportunity as well as the cost of delays in
delivery. Many organizations use risk assessment in an unstructured
way, which does not provide a consistent framework to dynamically
deal with emerging technologies. Furthermore, such assessment needs
to be managed at all levels in the organization as opposed to being an
event-driven activity controlled only by executives.
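A consistent framework of the kind called for here might take the form of a simple weighted scorecard. The factors echo those named above (underachievement, delivery delay, forgone opportunity), but the weights and ratings below are purely illustrative assumptions, not values from the text:

```python
# Hypothetical sketch: a minimal, repeatable scoring framework for
# technology risk assessment, as an alternative to unstructured,
# event-driven evaluation. Factor names and weights are assumptions.

RISK_FACTORS = {
    "underachievement": 0.4,   # risk the investment underdelivers
    "delivery_delay": 0.3,     # cost of delays in delivery
    "opportunity_cost": 0.3,   # cost of forgoing the opportunity
}

def risk_score(ratings):
    """Combine 1-5 ratings per factor into a weighted score (1 = low risk)."""
    return sum(RISK_FACTORS[factor] * ratings[factor] for factor in RISK_FACTORS)

score = risk_score({"underachievement": 4, "delivery_delay": 2, "opportunity_cost": 3})
print(round(score, 1))  # 3.1
```

Because the same factors and weights are applied to every emerging technology, scores become comparable across projects and over time, which is precisely what an unstructured, executive-only assessment cannot provide.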
Summary
Strategic integration represents the objective of dealing with emerg-
ing technologies on a regular basis. It is an outcome of ROD, and it
requires organizations to deal with a variable that forces the acceleration
of decisions in an unpredictable fashion. Strategic integration thus
requires businesses to realign the ways in which they include technology
in strategic decision making.
Cultural Assimilation
Cultural assimilation is a process that focuses on the organizational
aspects of how technology is internally organized, including the role
of the IT department, and how it is assimilated within the organiza-
tion as a whole. The inherent, contemporary reality of technologi-
cal dynamism requires not only strategic but also cultural change.
This reality demands that IT organizations connect to all aspects of
the business. Such affiliation would foster a more interactive culture
rather than one that is regimented and linear, as is too often the case.
An interactive culture is one that can respond to emerging technology
decisions in an optimally informed way, and one that understands the
impact on business performance.
The kind of cultural assimilation elicited by technological dyna-
mism and formalized in ROD is divided into two subcategories: the
study of how the IT organization relates and communicates with
“others,” and the actual displacement or movement of traditional
IT staff from an isolated “core” structure to a firm-wide, integrated
framework.
IT Organization Communications with “Others”
The Ravell case study shows us the limitations and consequences of
an isolated IT department operating within an organization. The case
study shows that the isolation of a group can lead to marginalization,
which results in the kind of organization in which not all individuals
can participate in decision making and implementation, even though
such individuals have important knowledge and value. Technological
dynamism is forcing IT departments to rethink their strategic posi-
tion within the organizational structure of their firm. No longer can
IT be a stand-alone unit designed just to service outside departments
while maintaining its separate identity. The acceleration factors of
technology require more dynamic activity within and among depart-
ments, which cannot be accomplished through discrete communica-
tions between groups. Instead, the need for diverse groups to engage
in more integrated discourse, and to share varying levels of techno-
logical knowledge, as well as business-end perspectives, requires new
organizational structures that will of necessity give birth to a new
and evolving business-social culture. Indeed, the need to assimilate
technology creates a transformative effect on organizational cultures,
the way they are formed and re-formed, and what they will need from
IT personnel.
Movement of Traditional IT Staff
To facilitate cultural assimilation from an IT perspective, IT must
become better integrated with non-IT personnel. This form of inte-
gration can require the actual movement of IT staff into other depart-
ments, which begins the process of a true assimilation of resources
among business units. While this may seem like the elimination of
the integrity or identity of IT, such a loss is far from the case. The
elimination of the IT department is not at all what is called for here;
on the contrary, the IT department is critical to the function of cul-
tural assimilation. However, the IT department may need to be struc-
tured differently from the way it has been so that it can deal primarily
with generic infrastructure and support issues, such as e-mail, net-
work architecture, and security. IT personnel who focus on business-
specific issues need to become closely aligned with the appropriate
units so that ROD can be successfully implemented.
Furthermore, we must acknowledge that, given the wide range of
available knowledge about technology, not all technological knowl-
edge emanates from the IT department. The question becomes
one of finding the best structure to support a broad assimilation of
knowledge about any given technology; then, we should ask how that
knowledge can best be utilized by the organization. There is a pitfall
in attempting to find a “standard” IT organizational structure that
will address the cultural assimilation of technology. Sampler’s (1996)
research, and my recent research with chief executives, confirm that
no such standard structure exists. It is my position that organizations
must find their own unique blend, using organizational learning con-
structs. This simply means that the cultural assimilation of IT may
be unique to the organization. What is then more important for the
success of organizational development is the process of assimilation as
opposed to the transplanting of the structure itself.
Today, many departments still operate within “silos” where they
are unable to meet the requirements of the dynamic and unpredictable
nature of technology in the business environment. Traditional orga-
nizations do not often support the necessary communications needed
to implement cultural assimilation across business units. However,
business managers can no longer make decisions without considering
technology; they will find themselves needing to include IT staff in
their decision-making processes. On the other hand, IT departments
can no longer make technology-based decisions without concerted
efforts toward assimilation (in contrast to occasional partnering or
project-driven participation) with other business units. This assimi-
lation becomes mature when new cultures evolve synergistically as
opposed to just having multiple cultures that attempt to work in con-
junction with each other. The important lesson from Ravell to keep
in mind here is that the process of assimilating IT can create new
cultures that in turn evolve to better support the requirements estab-
lished by the dynamism of technology.
Eventually, these new cultural formations will not perceive them-
selves as functioning within an IT or non-IT decision framework
but rather as operating within a more central business operation that
understands how to incorporate varying degrees of IT involvement
as necessary. Thus, organizational cultures will need to fuse together
to respond to new business opportunities and requirements brought
about by the ongoing acceleration of technological innovation. This
was also best evidenced by subsequent events at Ravell. Three years
after the original case study, it became necessary at Ravell to inte-
grate one of its business operations with a particular group of IT staff
members. The IT personnel actually transferred to the business unit
to maximize the benefits of merging both business and technical cul-
tures. Interestingly, this business unit is currently undergoing cultural
assimilation and is developing its own behavioral norms influenced by
the new IT staff. However, technology decisions within such groups
are not limited to the IT transferred personnel. IT and non-IT staff
need to formulate decisions using various organizational learning
techniques. These techniques are discussed in the next chapter.
Summary
Without appropriate cultural assimilation, organizations tend to have
staff that “take shortcuts, [then] the loudest voice will win the day, ad
hoc decisions will be made, accountabilities lost, and lessons from suc-
cesses and failures will not become part of … wisdom” (Murphy, 2002,
p. 152). As in the case of Ravell Corporation, it is essential, then, to
provide for consistent governance that fits the profile of the existing cul-
ture or can establish the need for a new culture. While many scholars
and managers suggest the need to have a specific entity responsible for
IT governance, one that is to be placed within the operating structure
of the organization, such an approach creates a fundamental problem.
It does not allow staff and managers the opportunity to assimilate tech-
nologically driven change and understand how to design a culture that
can operate under ROD. In other words, the issue of governance is
misinterpreted as a problem of structural positioning or hierarchy when
it is really one of cultural assimilation. As a result, many business solu-
tions to technology issues often lean toward the prescriptive, instead of
the analytical, in addressing the real problem.
Murphy’s (2002) risk pillar theory offers us another important
component relevant to cultural assimilation. This approach addresses
the concerns that relate to the creation of risk cultures formed to deal
with the impact of new systems. New technologies can actually cause
changes in cultural assimilation by establishing the need to make cer-
tain changes in job descriptions, power structures, career prospects,
degree of job security, departmental influence, or ownership of data.
Each of these potential risks needs to be factored in as an important
part of considering how best to organize and assimilate technology
through ROD.
Technology Business Cycle
To better understand technology dynamism, or how technology acts as
a dynamic variable, it is necessary to define the specific steps that occur
during its evolution in an organization. The evolution or business cycle
depicts the sequential steps during the maturation of a new technology
from feasibility to implementation and through subsequent evolution.
Table 3.1 shows the five components that comprise the cycle: feasibil-
ity, measurement, planning, implementation, and evolution.
Table 3.1 Technology Business Cycle

Feasibility: Understanding how to view and evaluate emerging technologies,
from a technical and business perspective.

Measurement: Dealing with both the direct monetary returns and indirect
nonmonetary returns; establishing driver and support life cycles.

Planning: Understanding how to set up projects, establishing participation
across multiple layers of management, including operations and departments.

Implementation: Working with the realities of project management; operating
with political factions and constraints; meeting milestones; dealing with
setbacks; having the ability to go live with new systems.

Evolution: Understanding how acceptance of new technologies affects cultural
change, and how uses of technology will change as individuals and
organizations become more knowledgeable about technology and generate new
ideas about how it can be used; the objective is established through
organizational dynamism, creating new knowledge and an evolving organization.
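As a toy illustration of the cycle's sequential nature, the five components could be modeled as an ordered enumeration. The looping of evolution back into feasibility is my reading of the text's claim that evolution creates new knowledge and an evolving organization; the code itself is not from the book.

```python
from enum import Enum

class CyclePhase(Enum):
    """The five components of the technology business cycle (Table 3.1)."""
    FEASIBILITY = 1
    MEASUREMENT = 2
    PLANNING = 3
    IMPLEMENTATION = 4
    EVOLUTION = 5

def next_phase(phase: CyclePhase) -> CyclePhase:
    """Advance to the next phase in sequence; evolution feeds back into
    feasibility, since an evolving technology surfaces new opportunities
    that must themselves be evaluated."""
    if phase is CyclePhase.EVOLUTION:
        return CyclePhase.FEASIBILITY
    return CyclePhase(phase.value + 1)
```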
Feasibility
The stage of feasibility focuses on a number of issues surrounding
the practicality of implementing a specific technology. Feasibility
addresses the ability to deliver a product when it is needed in com-
parison to the time it takes to develop it. Risk also plays a role in
feasibility assessment; of specific concern is the question of whether
it is possible or probable that the product will become obsolete before
completion. Cost is certainly a huge factor, but viewed at a “high
level” (i.e., at a general cost range), and it is usually geared toward
meeting the expected ROI of a firm. The feasibility process must be
one that incorporates individuals in a way that allows them to respond
to the accelerated and dynamic process brought forth by technological
innovations.
Measurement
Measurement is the process of understanding how an investment in
technology is calculated, particularly in relation to the ROI of an
organization. The complication with technology and measurement
is that it is simply not that easy to determine how to calculate such
a return. This problem comes up in many of the issues discussed by
Lucas (1999) in his book Information Technology and the Productivity
Paradox. His work addresses many comprehensive issues, surround-
ing both monetary and nonmonetary ROI, as well as direct ver-
sus indirect allocation of IT costs. Aside from these issues, there
is the fact that for many investments in technology the attempt to
compute ROI may be an inappropriate approach. As stated, Lucas
offered a “garbage can” model that advocates trust in the operational
management of the business and the formation of IT representatives
into productive teams that can assess new technologies as a regu-
lar part of business operations. The garbage can is an abstract con-
cept for allowing individuals a place to suggest innovations brought
about by technology. The inventory of technology opportunities
needs regular evaluation. Lucas does not really offer an explana-
tion of exactly how this process should work internally. ROD, how-
ever, provides the strategic processes and organizational–cultural
needs that can provide the infrastructure to better understand and
evaluate the potential benefits from technological innovations using
the garbage can model. The graphic depiction of the model is shown
in Figure 3.2.
Planning
Planning requires a defined team of user and IT representatives. This
appears to be a simple task, but it is more challenging to understand
how such teams should operate, from whom they need support, and
what resources they require. Let me be specific. There are a number
of varying types of “users” of technology. They typically exist in three
tiers: executives, business line managers, and operations users. Each
of these individuals offers valuable yet different views of the benefits
of technology (Langer, 2002). I define these user tiers as follows:
1. Executives: These individuals are often referred to as execu-
tive sponsors. Their role is twofold. First, they provide input
into the system, specifically from the perspective of pro-
ductivity, ROI, and competitive edge. Second, and per-
haps more important, their responsibility is to ensure that
users are participating in the requisite manner (i.e., made
[Figure 3.2 Garbage can model of IT value: user needs and other inputs
pass through a conversion-effectiveness filter (the IT value pipeline),
yielding direct benefits, indirect benefits, and failed systems. (From
Lucas, H.C., Information Technology and the Productivity Paradox, Oxford
University Press, New York, 1999.)]
to be available, in the right place, etc.). This area can be
problematic because internal users are typically busy doing
their jobs and sometimes neglect to provide input or to
attend project meetings. Furthermore, executive sponsors
can help control political agendas that can hurt the success
of the project.
2. Business line managers: This interface provides the most
information from a business unit perspective. These indi-
viduals are responsible for two aspects of management.
First, they are responsible for the day-to-day productivity
of their unit; therefore, they understand the importance
of productive teams, and how software can assist in this
endeavor. Second, they are responsible for their staff. Thus,
line managers need to know how software will affect their
operational staff.
3. Functional users: These are the individuals in the trenches who
understand exactly how processing needs to get done. While
their purview of the benefits of the system is relatively nar-
rower than that of the executives and managers, they provide
the concrete information that is required to create the feature/
functions that make the system usable.
The planning process becomes challenging when attempting to
get the three user communities to integrate their needs and “agree to
agree” on how a technology project needs to be designed and managed.
Implementation
Implementation is the process of actually using a technology.
Implementation of technology systems requires wider integration
within the various departments than other systems in an organization
because usually multiple business units are affected. Implementation
must combine traditional methods of IT processes of development
yet integrate them within the constraints, assumptions, and cultural
(perhaps political) environments of different departments. Cultural
assimilation is therefore required at this stage because it delves into
the structure of the internal organization and requires individual
participation in every phase of the development and implementation
cycle. The following are some of the unique challenges facing the
implementation of technological projects:
1. Project managers as complex managers: Technology projects
require multiple interfaces that often lie outside the traditional
user community. They can include interfacing with writers,
editors, marketing personnel, customers, and consumers, all
of whom are stakeholders in the success of the system.
2. Shorter and dynamic development schedules: Due to the dynamic
nature of technology, its process of development is less lin-
ear than that of others. Because there is less experience in
the general user community, and there are more stakeholders,
there is a tendency by those in IT, and executives, to underes-
timate the time and cost to complete the project.
3. New untested technologies: There is so much new technol-
ogy offered to organizations that there is a tendency by IT
organizations to implement technologies that have not yet
matured— that are not yet the best products they will eventu-
ally be.
4. Degree of scope changes: Technology, because of its dynamic
nature, tends to be prone to scope creep, that is, the scope of
the original project expanding during development.
5. Project management: Project managers need to work closely
with internal users, customers, and consumers to advise
them on the impact of changes to the project schedule.
Unfortunately, scope changes that are influenced by changes
in market trends may not be avoidable. Thus, part of a good
strategy is to manage scope changes rather than attempt to
stop them, which might not be realistic.
6. Estimating completion time: IT has always had difficulties in
knowing how long it will take to implement a technology.
Application systems are even more difficult because of the
number of variables and unknowns.
7. Lack of standards: The technology industry continues to be a
profession that does not have a governing body. Thus, it is
impossible to have real enforced standards that other pro-
fessions enjoy. While there are suggestions for best prac-
tices, many of them are unproven and not kept current with
changing developments. Because of the lack of successful
application projects, there are few success stories to create new
and better sets of best practices.
8. Less-specialized roles and responsibilities: The IT team tends to
have staff members who have varying responsibilities. Unlike
traditional new technology-driven projects, separation of roles
and responsibilities is more difficult when operating in more
dynamic environments. The reality is that many roles have not
been formalized and integrated using something like ROD.
9. Broad project management responsibilities: Project management
responsibilities need to go beyond those of the traditional IT
manager. Project managers are required to provide manage-
ment services outside the traditional software staff. They need
to interact more with internal and external individuals, as well
as with non-traditional members of the development team,
such as Web text and content staff. Therefore, there are many
more obstacles that can cause implementation problems.
Evolution
The many ways to form a technological organization with a natural
capacity to evolve have been discussed from an IT perspective in this
chapter. However, another important factor is the changing nature
of application systems, particularly those that involve e-businesses.
E-business systems are those that utilize the Internet and engage
in e-commerce activities among vendors, clients, and internal users
in the organization. The ways in which e-business systems are built
and deployed suggest that they are evolving systems. This means
that they have a long life cycle involving ongoing maintenance and
enhancement. They are, if you will, “ living systems” that evolve
in a manner similar to organizational cultures. So, the traditional
beginning-to-end life cycle does not apply to an e-business proj-
ect that must be implemented in inherently ongoing and evolving
phases. The important focus is that technology and organizational
development have parallel evolutionary processes that need to be in
balance with each other. This philosophy is developed further in the
next chapter.
Drivers and Supporters
There are essentially two types of generic functions performed by
departments in organizations: driver functions and supporter func-
tions. These functions relate to the essential behavior and nature of
what a department contributes to the goals of the organization. I
first encountered the concept of drivers and supporters at Coopers
& Lybrand, which was at that time a Big 8* accounting firm. I stud-
ied the formulation of driver versus supporter as it related to the role
of our electronic data processing (EDP) department. The firm was
attempting to categorize the EDP department as either a driver or a
supporter.
Drivers were defined in this instance as those units that engaged
in frontline or direct revenue-generating activities. Supporters were
units that did not generate obvious direct revenues but rather were
designed to support frontline activities. For example, operations such
as internal accounting, purchasing, or office management were all
classified as supporter departments. Supporter departments, due to
their nature, were evaluated on their effectiveness and efficiency or
economies of scale. In contrast, driver organizations were expected to
generate direct revenues and other ROI value for the firm. What was
also interesting to me at the time was that drivers were expected to
be more daring— since they must inevitably generate returns for the
business. As such, drivers engaged in what Bradley and Nolan (1998)
coined “sense and respond” behaviors and activities. Let me explain.
Marketing departments often generate new business by investing
or “ sensing” an opportunity quickly because of competitive forces
in the marketplace. Thus, they must sense an opportunity and be
allowed to respond to it in a timely fashion. The process of sensing
opportunity, and responding with competitive products or services,
is a stage in the cycle that organizations need to support. Failures in
the cycles of sense and respond are expected. Take, for example, the
* The original “Big 8” consisted of the eight large accounting and management con-
sulting firms— Coopers & Lybrand, Arthur Andersen, Touche Ross, Deloitte
Haskins & Sells, Arthur Young, Price Waterhouse, Peat Marwick Mitchell, and
Ernst & Whinney— until the late 1980s, when these firms began to merge. Today,
there are four: PricewaterhouseCoopers, Deloitte & Touche, Ernst & Young, and
KPMG (Peat Marwick and others).
launching of new fall television shows. Each of the major stations
goes through a process of sensing which shows might be interesting to
the viewing audience. They respond, after research and review, with a
number of new shows. Inevitably, only a few of these selected shows
are actually successful; some fail almost immediately. While relatively
few shows succeed, the process is acceptable and is seen by manage-
ment as the consequence of an appropriate set of steps for competing
effectively— even though the percentage of successful new shows is
low. Therefore, it is safe to say that driver organizations are expected
to engage in high-risk operations, of which many will fail, for the sake
of creating ultimately successful products or services.
The preceding example raises two questions: (1) How does sense
and respond relate to the world of IT? and (2) Why is it important?
IT is unique in that it is both a driver and a supporter. The latter is the
generally accepted norm in most firms. Indeed, most IT functions are
established to support myriad internal functions, such as
• Accounting and finance
• Data center infrastructure (e-mail, desktop, etc.)
• Enterprise-level application (enterprise resource planning, ERP)
• Customer support (customer relationship management, CRM)
• Web and e-commerce activities
As one would expect, these IT functions are viewed as overhead
related, as somewhat of a commodity, and thus are constantly man-
aged on an economy-of-scale basis— that is, how can we make this
operation more efficient, with a particular focus on cost containment?
So, what then are IT driver functions? By definition, they are those
that engage in direct revenues and identifiable ROI. How do we define
such functions in IT, given that most such activities are sheltered under the
umbrella of marketing organization domains? (Excluding, of course,
software application development firms that engage in marketing for
their actual application products.) I define IT driver functions as those
projects that, if delivered, would change the relationship between the
organization and its customers; that is, those activities that directly
affect the classic definition of a market: forces of supply and demand,
which are governed by the customer (demand) and the vendor (sup-
plier) relationship. This concept can be shown in the case example that
follows.
Santander versus Citibank
Santander Bank, the major bank of Spain, had enjoyed a dominant
market share in its home country. Citibank had attempted for years to
penetrate Santander’ s dominance using traditional approaches (open-
ing more branch offices, marketing, etc.) without success, until, that
is, they tried online banking. Using technology as a driver, Citibank
made significant penetration into the market share of Santander
because it changed the customer– vendor relationship. Online bank-
ing, in general, has had a significant impact on how the banking
industry has established new markets, by changing this relationship.
What is also interesting about this case is the way in which Citibank
accounted for its investment in online banking; it knows little about
its total investment and essentially does not care about its direct pay-
back. Rather, Citibank sees its ROI in a similar way that depicts
driver/marketing behavior; the payback is seen in broader terms to
affect not only revenue generation, but also customer support and
quality recognition.
Information Technology Roles and Responsibilities
The preceding section focuses on how IT can be divided into two dis-
tinct kinds of business operations. As such, the roles and responsibili-
ties within IT need to change accordingly and be designed under the
auspices of driver and supporter theory. Most traditional IT depart-
ments are designed to be supporters, so that they have a close-knit
organization that is secure from outside intervention and geared to
respond to user needs based on requests. While in many instances
this type of formation is acceptable, it is limited in providing the IT
department with the proper understanding of the kind of business
objectives that require driver-type activities. This was certainly the
experience in the Ravell case study. In that instance, I found that
making the effort to get IT support personnel “ out from their com-
fortable shells” made a huge difference in providing better service
to the organization at large. Because more and more technology is
becoming driver essential, this development will require of IT per-
sonnel an increasing ability to communicate to managers and execu-
tives and to assimilate within other departments.
The Ravell case, however, also brought to light the huge vacuum of
IT presence in driver activities. The subsequent chief executive inter-
view study also confirmed that most marketing IT-oriented activities,
such as e-business, do not fall under the purview of IT in most orga-
nizations. The reasons for this separation are correlated with the lack
of IT executive presence within the management team.
Another aspect of driver and supporter functions is the concept of
a life cycle. A life cycle, in this respect, refers to the stages that occur
before a product or service becomes obsolete. Technology products
have a life cycle of value just as any other product or service. It is
important not to confuse this life cycle with processes during devel-
opment as discussed elsewhere in this chapter.
Many technical products are adopted because they are able to deliver
value that is typically determined based on ROI calculations. However,
as products mature within an organization, they tend to become more of
a commodity, and as they are normalized, they tend to become support-
oriented. Once they reach the stage of support, the rules of economies
of scale become more important and relevant to evaluation. As a prod-
uct enters the support stage, replacement based on economies of scale
can be maximized by outsourcing to an outside vendor who can provide
the service cheaper. New technologies then can be expected to follow
this kind of life cycle, by which their initial investment requires some
level of risk to provide returns to the business. This initial investment
is accomplished in ROD using strategic integration. Once the evalua-
tions are completed, driver activities will prevail during the maturation
process of the technology, which will also require cultural assimilation.
Inevitably, technology will change organizational behavior and struc-
ture. However, once the technology is assimilated and organizational
behavior and structures are normalized, individuals will use it as a per-
manent part of their day-to-day operations. Thus, driver activities give
way to those of supporters. Senior managers become less involved, and
line managers then become the more important group that completes
the transition from driver to supporter.
Replacement or Outsource
After the technology is absorbed into operations, executives will seek
to maximize the benefit by increased efficiency and effectiveness.
Certain product enhancements may be pursued during this phase; they
can create “ mini-loops” of driver-to-supporter activities. Ultimately, a
technology, viewed in terms of its economies of scale and longevity,
is considered for replacement or outsourcing. Figure 3.3 graphically
shows the cycle.
The final stage of maturity of an evolving driver therefore includes
becoming a supporter, at which time it becomes a commodity and,
finally, an entity with potential for replacement or outsourcing. The
next chapter explores how organizational learning theories can be
used to address many of the issues and challenges brought forth in
this chapter.
Figure 3.3 Driver-to-supporter life cycle. [The figure depicts the stages
as: technology driver, evaluation cycle, driver maturation, support status,
economies of scale, and replacement or outsource, with mini-loops of
technology enhancements along the way.]
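One way to make the life cycle of Figure 3.3 concrete is a small transition table. The stage names below are my paraphrases of the figure's labels, and routing the mini-loop of enhancements back through the evaluation cycle is an interpretive choice, not something the text specifies.

```python
# Stages of the driver-to-supporter life cycle, per Figure 3.3.
# Each stage maps to the stages it can move to next.
TRANSITIONS = {
    "technology_driver": ["evaluation_cycle"],
    "evaluation_cycle": ["driver_maturation"],
    # Mini-loop: enhancements send a maturing driver back through evaluation.
    "driver_maturation": ["support_status", "evaluation_cycle"],
    "support_status": ["economies_of_scale"],
    "economies_of_scale": ["replacement_or_outsource"],
    "replacement_or_outsource": [],  # terminal stage
}

def reachable(start: str) -> set:
    """Return every stage reachable from `start` (simple depth-first search)."""
    seen, stack = set(), [start]
    while stack:
        stage = stack.pop()
        if stage not in seen:
            seen.add(stage)
            stack.extend(TRANSITIONS[stage])
    return seen
```

The terminal stage captures the chapter's point that a matured driver ends as a commodity candidate for replacement or outsourcing.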
4
Organizational Learning Theories and Technology
Introduction
The purpose of this chapter is to provide readers with an under-
standing of organizational theory. The chapter covers some aspects
of the history and context of organizational learning. It also defines
and explains various learning protocols, and how they can be used to
promote organizational learning. The overall objective of organiza-
tional learning is to support a process that guides individuals, groups,
and entire communities through transformation. Indeed, evidence of
organizational transformation provides the very proof that learning
has occurred, and that changes in behavior are occurring. What is
important in this regard is that transformation remains internal to
the organization so that it can evolve in a progressive manner while
maintaining the valuable knowledge base that is contained within
the personnel of an organization. Thus, the purpose of organiza-
tional learning is to foster evolutionary transformation that will lead
to change in behaviors and that is geared toward improving strategic
performance.
Approaches to organizational learning typically address how indi-
viduals, groups, and organizations “notice and interpret information
and use it to alter their fit with their environments” (Aldrich, 2001,
p. 57). As such, however, organizational learning does not direct itself
toward, and therefore has not been able to show, an inherent link to
success—which is a critical concern for executive management. There
are two perspectives on organizational learning theory. On the one
hand, the adoptive approach, pioneered by Cyert and March (1963),
treats organizations as goal-oriented activity systems. These systems
generate learning when repeating experiences that have either suc-
ceeded or failed, discarding, of course, processes that have failed.
Knowledge development, on the other hand, treats organizations as
sets of interdependent members with shared patterns of cognition and
belief (Argyris & Schön, 1996). Knowledge development empha-
sizes that learning is not limited to simple trial and error, or direct
experience. Instead, learning is understood also to be inferential and
vicarious; organizations can generate new knowledge through experi-
mentation and creativity. It is the knowledge development perspec-
tive that fits conceptually and empirically with work on technological
evolution and organizational knowledge creation and deployment
(Tushman & Anderson, 1986).
There is a complication in the field of organizational learning over
whether it is a technical or social process. Scholars disagree on this
point. From the technical perspective, organizational learning is
about the effective processing of, interpretation of, and response to
information both inside and outside the organization. “An organiza-
tion is assumed to learn if any of its units acquires knowledge that it
recognizes as potentially useful to the organization” (Huber, 1991,
p. 89). From the social perspective, on the other hand, comes the con-
cept that learning is “something that takes place not with the heads of
individuals, but in the interaction between people” (Easterby-Smith
et al., 1999, p. 6). The social approach draws from the notion that
patterns of behavior are developed, via patterns of socialization, by
evolving tacit knowledge and skills. There is, regrettably, a lack of
ongoing empirical investigation in the area of organizational learning
pertaining, for example, to in-depth case studies, to micropractices
within organizational settings, and to processes that lead to outcomes.
Indeed, measuring learning is a difficult process, which is why there
is a lack of research that focuses on outputs. As Prange (1999, p. 24)
notes: “The multitude of ways in which organizational learning has
been classified and used purports an ‘organizational learning jungle,’
which is becoming progressively dense and impenetrable.” Mackenzie
(1994, p. 251) laments that what the “scientific community devoted
to organizational learning has not produced discernable intellectual
progress.”
Ultimately, organizational learning must provide transformation
that links to performance. Most organizations seeking improved per-
formance expect changes that will support new outcomes. The study of
organizational learning needs an overarching framework under which
an inquiry into the pivotal issues surrounding organizational change
can be organized. Frameworks that support organizational learning,
whether their orientation is on individuals, groups, or infrastructure,
need to allow for natural evolution within acceptable time frames for
the organization. This is the problem of organizational learning the-
ory. It lacks a method of producing measurable results that executives
can link to performance. While scholars seek outcomes through stra-
tegic learning, there must be tangible evidence of individual and orga-
nizational performance to ensure future investments in the concepts
of learning. Technology, we should remember, represents the oppor-
tunity to provide outcomes through strategic learning that addresses
transitions and transformations over a specific life cycle.
We saw this opportunity occur in the Ravell case study; the
information technology (IT) department used organizational learn-
ing. Specifically, individual reflective practices were used to provide
measurable outcomes for the organization. In this case, the out-
comes related to a specific event, the physical move of the business
to a different location. Another lesson we can derive (with hindsight)
from the Ravell experience is that learning was converted to strategic
benefit for the organization. The concept of converting learning to
strategic benefit was pioneered by Pietersen (2002). He established a
strategic learning cycle composed of four component processes that he
identified with the action verbs learn, focus, align, and execute. These
are stages in the learning cycle, as follows:
1. Learn: Conduct a situation analysis to generate insights into
the competitive environment and into the realities of the
company.
2. Focus: Translate insights into a winning proposition that out-
lines key priorities for success.
3. Align: Align the organization and energize the people behind
the new strategic focus.
4. Execute: Implement strategy and experiment with new con-
cepts. Interpret results and continue the cycle.
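For readers who think in processes, the four stages above can be sketched as a simple repeating loop. This is an illustrative model only: the stage names come from Pietersen (2002), while the function and data structure are hypothetical choices of mine, not part of his framework.

```python
from itertools import cycle

# Illustrative sketch only: Pietersen's (2002) strategic learning cycle
# modeled as a repeating sequence of stages. The stage names come from
# the text; everything else is a hypothetical modeling choice.
STAGES = ("learn", "focus", "align", "execute")

def strategic_learning_cycle(passes):
    """Yield the stages of the cycle in order, for the given number of passes."""
    stages = cycle(STAGES)
    for _ in range(passes * len(STAGES)):
        yield next(stages)

# One full pass through the cycle:
print(list(strategic_learning_cycle(1)))  # ['learn', 'focus', 'align', 'execute']
```

The point of the loop structure is that execution is not an endpoint: interpreting results feeds directly back into the next round of learning.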
At Ravell, technology assisted in driving the learning cycle because,
by its dynamic nature, it mandated the acceleration of the cycle that
Pietersen (2002) describes in his stage strategy of implementation.
Thus, Ravell required the process Pietersen outlined to occur within
66 INFORMATION TECHNOLOGY
6 months, and thereby established the opportunity to provide outcomes.
It also altered the culture of the organization (i.e., the evolution in cul-
ture was tangible because the transformation was concrete).
We see from the Ravell case that technology represents the best
opportunity to apply organizational learning techniques because the
use of it requires forms of evolutionary-related change. Organizations
are continually seeking to improve their operations and competi-
tive advantage through efficiency and effective processes. As I have
discussed in previous chapters, today’s businesses are experiencing
technological dynamism (defined as causing accelerated and dynamic
transformations), and this is due to the advent of technologically driven
processes. That is, organizations are experiencing more pressure to
change and compete as a result of the accelerations that technology
has brought about. Things happen quicker, and more unpredictably,
than before. This situation requires organizations to sense the need for
change and execute that change. The solution I propose is to tie orga-
nizational theory to technological implementation. Another way of
defining this issue is to provide an overarching framework that orga-
nizes an inquiry into the issues surrounding organizational change.
Another dimension of organizational learning is political. Argyris
(1993) and Senge (1990) argue that politics gets “in the way of good
learning.” In my view, however, the political dimension is very much
part of learning. It seems naïve to assume that politics can be eliminated from the daily commerce of organizational communication.
Instead, it needs to be incorporated as a factor in organizational learning theory rather than disavowed or eliminated, which is
not realistic. Ravell also revealed that political factors are simply part
of the learning process. Recall that during my initial efforts to create
a learning organization there were IT staff members who deliberately
refused to cooperate, assuming that they could “outlast” me in my
interim tenure as IT director. But politics, of course, is not limited to
internal department negotiations; it was also a factor at Ravell with,
and among, departments outside IT. These interdepartmental rela-
tionships applied especially to line managers, who became essential
advocates for establishing and sustaining necessary forms of learning
at the organizational level. But, not all line managers responded with
the same enthusiasm, and a number of them did not display a sense of
authentically caring about facilitating synergies across departments.
The irrepressible existence of politics in social organizations, however,
must not in itself deter us from implementing organizational learning practices; it simply means that we must factor it in as part
of the equation. At Ravell, I had to work within the constraints of
both internal and external politics. Nevertheless, in the end I was able
to accomplish the creation of a learning organization. Another way
one might look at the road bumps of politics is to assume that they
will temporarily delay or slow the implementation of organizational
learning initiatives. But, let us make no mistake about the potentially
disruptive nature of politics because, as we know, in its extreme cases
of inflexibility, it can be damaging.
I have always equated politics with the dilemma of blood cholesterol.
We know that there are two types of cholesterol: “good” cholesterol
and “bad” cholesterol. We all know that bad cholesterol in your blood
can cause heart disease, among other life-threatening conditions.
However, good cholesterol is essential to the body. My point is simple:
the general word politics can carry damaging connotations. When most
people discuss the topic of cholesterol, they focus on the bad type, not
the good. Such is the same with politics—that is, most individuals dis-
cuss the bad type, which often corresponds with their personal expe-
riences. My colleague Professor Lyle Yorks, at Columbia University,
often lectures on the importance of politics and its positive aspects for
establishing strategic advocacy, defined as the ability to establish personal and functional influence by cultivating alliances and by defining opportunities for adding value to either the top or bottom
line (Langer & Yorks, 2013). Thus, politics can add value for individuals by allowing them to initiate and influence relationships and
conversations with other leaders. This, then, is “good” politics!
North American cultural norms account for much of what goes
into organizational learning theory, such as individualism, an empha-
sis on rationality, and the importance of explicit, empirical informa-
tion. IT, on the other hand, has a broadening, globalizing effect on
organizational learning because of the sheer increase in the number of
multicultural organizations created through the expansion of global
firms. Thus, technology also affects the social aspects of organizational
learning, particularly as it relates to the cultural evolution of commu-
nities. Furthermore, technology has shown us that what works in one
culture may not work in another. Dana Deasy, the former CIO of the
Americas region/sector for Siemens AG, experienced the difficulties
and challenges of introducing technology standards on a global scale.
He quickly learned that what worked in North America did not oper-
ate with the same expectations in Asia or South America. I discuss
Siemens AG as a case study in Chapter 8.
It is my contention, however, that technology can be used as an
intervention that can actually increase organizational learning. In
effect, the implementation of organizational learning has lacked and
has needed concrete systemic processes that show results. A solution
to this need can be found, as I have found it, in the incorporation of
IT itself into the process of true organizational learning. The prob-
lem with IT is that we keep trying to simplify it—trying to reduce
its complexity. However, dealing with the what, when, and how of
working with technology is complex. Organizations need a kind of
mechanism that can provide a way to absorb and learn all of the com-
plex pieces of technology.
It is my position that organizational change often follows learn-
ing, which to some extent should be expected. What controls whether
change is radical or evolutionary depends on the basis on which
new processes are created (Argyris & Schön, 1996; Senge, 1990;
Swieringa & Wierdsma, 1992). Indeed, at Ravell the learning followed the Argyris and Schön approach: that radical change occurs
when there are major events that support the need for accelerated
change. In other words, critical events become catalysts that promote
change, through reflection. On the other hand, there can be non-event-related learning that is not so much radical as evolutionary. Thus, evolutionary learning is characterized as an ongoing process that slowly establishes the need for change over time. This
evolutionary learning process compares to what Senge (1990, p. 15)
describes as “learning in wholes as opposed to pieces.”
This concept of learning is different from an event-driven perspec-
tive, and it supports the natural tendency that groups and organiza-
tions have to protect themselves from open confrontation and critique.
However, technology provides an interesting variable in this regard.
It is generally accepted as an agent of change that must be addressed
by the organization. I believe that this agency can be seized as an
opportunity to promote such change because it establishes a reason
why organizations need to deal with the inevitable transitions brought
about by technology. Furthermore, as Huysman (1999) points out, the
history of organizational learning has not often created measurable
improvement, particularly because implementing the theories has not
always been efficient or effective. Much of the impetus for implement-
ing a new technology, however, is based on the premise that its use
will result in such benefits. Therefore, technology provides compelling
reasons for why organizational learning is important: to understand
how to deal with agents of change, and to provide ongoing changes in
the processes that improve competitive advantage.
There is another intrinsic issue here. Uses of technology have not
always resulted in efficient and effective outcomes, particularly as
they relate to a firm’s expected return on investment (ROI). In fact, IT projects often cost
more than expected and tend to be delivered late. Indeed, research
performed by the Gartner Group and CIO Magazine (Koch, 1999)
reports that 54% of IT projects are late and that 22% are never com-
pleted. In May 2009, McGraw reported similar trends, so industry
performance has not materially improved. This is certainly a disturb-
ing statistic for a dynamic variable of change that promises outcomes
of improved efficiency and effectiveness. The question, then, is why
this is occurring. Many scholars might consider the answer to this question complex. It is my claim, however, based on my own research,
that the lack of organizational learning, both within IT and within
other departments, poses, perhaps, the most significant barrier to the
success of these projects in terms of timeliness and completion. Langer
(2001b) suggests that the inability of IT organizations to understand
how to deal with larger communities within the organization and to
establish realistic and measurable outcomes are relevant both to many
of the core values of organizational learning and to its importance in
attaining results. What better opportunity is there to combine the strengths of IT and organizational learning and to offset the weaknesses of each?
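Taken at face value, the figures cited above imply that only about a quarter of IT projects finish both on time and at all. A back-of-envelope calculation, under the assumption (not stated explicitly in the source) that the "late" and "never completed" categories are disjoint:

```python
# Back-of-envelope reading of the project statistics cited above,
# assuming "late" (54%) and "never completed" (22%) are disjoint
# categories; the source text does not state this explicitly.
late = 0.54
never_completed = 0.22
on_time_and_completed = 1.0 - late - never_completed
print(f"On time and completed: {on_time_and_completed:.0%}")  # On time and completed: 24%
```

If the categories overlap at all, the on-time share would be somewhat higher, but the overall picture of underperformance remains.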
Perhaps what is most interesting—and, in many ways, lacking
within the literature on organizational learning—is the actual way
individuals learn. To address organizational learning, I believe it is
imperative to address the learning styles of individuals within the
organization. One fundamental consideration to take into account
is that of individual turnover within departments. Thus, methods
to measure or understand organizational learning must incorporate
the individual; how the individual learns, and what occurs when
individuals change positions or leave, as opposed to solely focusing
on the event-driven aspect of evolutionary learning. There are two
sociological positions about how individual learning occurs. The first
suggests that individual action derives from determining influences
in the social system, and the other suggests that it emanates from
individual action. The former proposition supports the concept that
learning occurs at the organizational, or group level, and the lat-
ter supports it at the individual level of action and experience. The
“system” argument focuses on learning within the organization as a
whole and claims that individual action functions within its boundar-
ies. The “individual” argument claims that learning emanates from
the individual first and affects the system as a result of outcomes from
individual actions. Determining a balance between individual and
organizational learning is an issue debated by scholars and an impor-
tant one that this book must address.
Why is this issue relevant to the topic of IT and organizational
learning? Simply put, when dealing with evolving technologies, learning—and subsequent learning outcomes—will
be heavily affected by the processes through which that learning is delivered. Therefore,
without understanding the dynamics of how individuals and organi-
zations learn, new technologies may be difficult to assimilate because
of a lack of process that can determine how they can be best used in
the business. What is most important to recognize is the way in which
responsive organizational dynamism (ROD) needs both the system
and individual approaches. Huysman (1999) suggests (and I agree)
that organizational versus individual belief systems are not mutually
exclusive pairs but dualities. In this way, organizational processes are
not seen as just top-down or bottom-up affairs, but as accumulations
of history, assimilated in organizational memory, which structures
and positions the agency or capacity for learning. In a similar way,
organizational learning can be seen as occurring through the actions
of individuals, even when they are constrained by institutional forces.
The strategic integration component of ROD lends itself to the system
model of learning to the extent that it almost mandates change—
change that, if not addressed, will inevitably affect the competitive
advantage of the organization. On the other hand, the cultural assim-
ilation component of ROD is also involved because of its effect on
individual behavior. Thus, the ROD model needs to be expanded to
show the relationship between individual and organizational learning
as shown in Figure 4.1.
An essential challenge to technology comes from the fact that
organizations are not sure about how to handle its overall potential.
Thus, in a paradoxical way, this quandary provides a springboard to
learning by utilizing organizational learning theories and concepts to
create new knowledge, by learning from experience, and ultimately by
linking technology to learning and performance. This perspective can
be promoted from within the organization because chief executives
are generally open to investing in learning as long as core business
principles are not violated. This position is supported by my research
with chief executives that I discussed in Chapter 2.
Figure 4.1 ROD and organizational learning. (Diagram: technology produces organizational dynamism, the acceleration of events that require different infrastructures and organizational processes; this in turn requires strategic integration and cultural assimilation, which map onto organization structures (the system) and individual actions, whose relationship is renegotiated through organizational learning techniques, along with their symptoms and implications.)
Organizational learning can also assist in the adoption of
technologies by providing a mechanism to help individuals manage
change. This notion is consistent with Aldrich (2001), who observes
that many organizations reject technology-driven changes or “pio-
neering ventures,” which he called competence-destroying ventures
because they threaten existing norms and processes. Organizations
would do well to understand the value of technology, particularly for
early adopters, and how it can lead to competitive advantages. Thus, organizations that position themselves to
evolve, to learn, and to create new knowledge are better prepared to
foster the handling, absorption, and acceptance of technology-driven
change than those that are not. Another way to view this ethic is to
recognize that organizations need to be “ready” to deal with change—
change that is accelerated by technology innovations. Although
Aldrich (2001) notes that organizational learning has not been tied
to performance and success, I believe it will be the technology revolu-
tion that establishes the catalyst that can tie organizational learning
to performance.
The following sections of this chapter expand on the core concept
that the success of ROD is dependent on the uses of organizational
learning techniques. In each section, I correlate this concept to many
of the organizational learning theories and show how they can be
tailored and used to provide important outcomes that assist the pro-
motion of both technological innovation and organizational learning.
Learning Organizations
Business strategists have realized that the ability of an organization
to learn faster, or “better,” than its competitors may indeed be the key
to long-term business success (Collis, 1994; Dodgson, 1993; Grant,
1996; Jones, 1975). A learning organization is defined as a form of
organization that enables, in an active sense, the learning of its mem-
bers in such a way that it creates positive outcomes, such as innovation,
efficiency, improved alignment with the environment, and competi-
tive advantage. As such, a learning organization is one that acquires
knowledge from within. Its evolution, then, is primarily driven by
itself without the need for interference from outside forces. In this
sense, it is a self-perpetuating and self-evolving system of individual
and organizational transformations integrated into the daily processes
of the organization. It should be, in effect, a part of normal organiza-
tional behavior. The focus of organizational learning is not so much
on the process of learning but more on the conditions that allow suc-
cessful outcomes to flourish. Learning organization literature draws
from organizational learning theory, particularly as it relates to inter-
ventions based on outcomes. This provides an alternative to social
approaches.
In reviewing these descriptions of what a learning organization
does, and why it is important, we can begin to see that technology may
be one of the few agents that can actually show what learning organi-
zations purport to do. Indeed, Ravell created an evolving population
that became capable of dealing with environmental changes brought
on by technological innovation. The adaptation of these changes
created those positive outcomes and improved efficiencies. Without
organizational learning, specifically the creation of a learning organi-
zation, many innovations brought about by technology could produce
chaos and instability. Organizations generally tend to suffer from, and
spend too much time reflecting on, their past dilemmas. However,
given the recent phenomenon of rapid changes in technology, orga-
nizations can no longer afford the luxury of claiming that there is
simply too much else to do to be constantly worrying about technol-
ogy. Indeed, Lounamaa and March (1987) state that organizations
can no longer support the claim that too-frequent changes will inhibit
learning. The fact is that such changes must be taken as evolutionary,
and as a part of the daily challenges facing any organization. Because
a learning organization is one that creates structure and strategies, it
is positioned to facilitate the learning of all its members, during the
ongoing infiltration of technology-driven agents of change. Boland
et al. (1994) show that information systems based on multimedia
technologies may enhance the appreciation of diverse interpretations
within organizations and, as such, support learning organizations.
Since learning organizations are deliberately created to facilitate the
learning of their members, understanding the urgency of technologi-
cal changes can provide the stimulus to support planned learning.
Many of the techniques used in the Ravell case study were based
on the use of learning organizational techniques, many of which were
pioneered by Argyris and Schön (1996). Their work focuses on using
“action science” methods to create and maintain learning organiza-
tions. A key component of action science is the use of reflective prac-
tices—including what is commonly known among researchers and
practitioners as reflection in action and reflection on action. Reflection
with action is the term I use as a rubric for these various methods,
involving reflection in relation to activity. Reflection has received
a number of definitions, from different sources in the literature.
Depending on the emphasis, whether on theory or practice, defini-
tions vary from philosophical articulation (Dewey, 1933; Habermas,
1998), to practice-based formulations, such as Kolb’s (1984b) use of
reflection in the experiential learning cycle. Specifically, reflection
with action carries the resonance of Schön’s (1983) twin constructs:
reflection on action and reflection in action, which emphasize reflec-
tion in retrospect, and reflection to determine which actions to take
in the present or immediate future, respectively. Dewey (1933) and
Hullfish and Smith (1978) also suggest that the use of reflection sup-
ports an implied purpose: individuals reflect for a purpose that leads
to the processing of a useful outcome. This formulation suggests the
possibility of reflection that is future oriented—what we might call
“reflection to action.” These are methodological orientations covered
by the rubric.
Reflective practices are integral to ROD because so many
technology-based projects are event driven and require individu-
als to reflect before, during, and after actions. Most important to
this process is that these reflections are individually driven and that
technology projects tend to accelerate the need for rapid decisions.
In other words, there are more dynamic decisions to be made in less
time. Without operating in the kind of formation that is a learning
organization, IT departments cannot maintain the requisite infrastructure to deliver products on time and support business
units—something that clearly is not happening if we look at the
existing lateness of IT projects. With respect to the role of reflec-
tion in general, the process can be individual or organizational.
While groups can reflect, it is in being reflective that individuals
bring about “an orientation to their everyday lives,” according to
Moon (1999). “For others reflection comes about when conditions
in the learning environment are appropriate” (p. 186). However,
IT departments have long suffered from not having the conditions
to support such an individual learning environment. This is why
implementing a learning organization is so appealing as a remedy
for a chronic problem.
Communities of Practice
Communities of practice are based on the assumption that learning
starts with engagement in social practice and that this practice is the
fundamental construct by which individuals learn (Wenger, 1998).
Thus, communities of practice are formed to get things done by using
a shared way of pursuing interest. For individuals, this means that
learning is a way of engaging in, and contributing to, the practices
of their communities. For specific communities, on the other hand,
it means that learning is a way of refining their distinctive practices
and ensuring new generations of members. For entire organizations,
it means that learning is an issue of sustaining interconnected com-
munities of practice, which define what an organization knows and
contributes to the business. The notion of communities of practice
supports the idea that learning is an “inevitable part of participat-
ing in social life and practice” (Elkjaer, 1999, p. 75). Communities of
practice also include assisting members of the community, with the
particular focus on improving their skills. This is also known as situated learning. Thus, communities of practice are very much a social
learning theory, as opposed to one that is based solely on the indi-
vidual. Communities of practice have been called learning in working,
in which learning is an inevitable part of working together in a social
setting. Much of this concept implies that learning, in some form or other, will occur and that it is accomplished within a framework of
social participation, not solely or simply in the individual mind. In a
world that is changing significantly due to technological innovations,
we should recognize the need for organizations, communities, and
individuals to embrace the complexities of being interconnected at an
accelerated pace.
There is much that is useful in the theory of communities of practice
and that justifies its use in ROD. While so much of learning technol-
ogy is event driven and individually learned, it would be shortsighted
to believe that it is the only way learning can occur in an organization.
Furthermore, the enormity and complexity of technology requires a
community focus. This would be especially useful within the confines of
specific departments that are in need of understanding how to deal with
technological dynamism. That is, preparation for using new technolo-
gies cannot be accomplished by waiting for an event to occur. Instead,
preparation can be accomplished by creating a community that can
assess technologies as a part of the normal activities of an organization.
Specifically, this means that, through the infrastructure of a commu-
nity, individuals can determine how they will organize themselves to
operate with emerging technologies, what education they will need, and
what potential strategic integration they will need to prepare for changes
brought on by technology. Action in this context can be viewed as a
continuous process, much in the same way that I have presented technol-
ogy as an ongoing accelerating variable. However, Elkjaer (1999) argues
that the continuous process cannot exist without individual interaction.
As he states: “Both individual and collective activities are grounded in
the past, the present, and the future. Actions and interactions take place
between and among group members and should not be viewed merely as
the actions and interactions of individuals” (p. 82).
Based on this perspective, technology can be handled by the
actions (community) and interactions (individuals) of the organiza-
tion as shown in Figure 4.2.
Figure 4.2 Technology relationship between communities and individuals. (Diagram: communities of practice supply the social actions for dealing with technology, allowing groups to engage in discourse and examine its ongoing effects on the department or unit, including short- and long-term education requirements, skills transfer and development, organizational issues, and relationships with other departments and customers. In parallel, event-driven, individual-based learning occurs as the individual interacts with others, determines new methods of utilizing technology within his or her specific business objectives, and uses reflection as the basis of transformative learning.)
It seems logical that communities of practice provide the mecha-
nism to assist, particularly, with the cultural assimilation component
of ROD. Indeed, cultural assimilation targets the behavior of the
community, and its need to consider what new organizational struc-
tures can better support emerging technologies. I have, in many ways,
already established and presented the challenge of what should be
called the “community of IT practice” and its need to understand how
to restructure to meet the needs of the organization. This is the kind
of issue that does not lend itself to event-driven, individual learning,
but rather to a more community-based process that can deal with the
realignment of departmental relationships.
Essentially, communities of IT practice must allow for the con-
tinuous evolution of learning based on emergent strategies. Emergent
strategies acknowledge unplanned action. Such strategies are defined
as patterns that develop in the absence of intentions (Mintzberg &
Waters, 1985). Emergent strategies can be used to gather groups that
can focus on issues not based on previous plans. These strategies can
be thought of as creative approaches to proactive actions. Indeed, a
frustrating aspect of technology is its uncertainty. Ideas and concepts
borrowed from communities of practice can help departments deal
with the evolutionary aspects of technological dynamism.
The relationship, then, between communities of practice and tech-
nology is significant. Many of the projects involving IT have been tra-
ditionally based on informal processes of learning. While there have
been a number of attempts to computerize knowledge using various
information databases, they have had mixed results. A “structured”
approach to creating knowledge reporting is typically difficult to estab-
lish and maintain. Many IT departments have utilized International
Organization for Standardization (ISO) 9000 concepts. The ISO is
a worldwide organization that defines quality processes through for-
mal structures. It attempts to take knowledge-based information and
transfer it into specific and documented steps that can be evaluated as
they occur. Unfortunately, the ISO 9000 approach, even if realized,
is challenging when such knowledge and procedures are undergoing
constant and unpredictable change. Technological dynamism cre-
ates too many uncertainties to be handled by the extant discourses on
how organizations have dealt with change variables. Communities of
practice provide an umbrella of discourses that are necessary to deal
with ongoing and unpredictable interactions established by emerging
technologies.
Support for this position is found in the fact that technology requires
accumulative collective learning that needs to be tied to social prac-
tices; this way, project plans can be based on learning as a participatory
act. One of the major advantages of communities of practice is that
they can integrate key competencies into the very fabric of the organi-
zation (Lesser et al., 2000). The typical disadvantage of IT is that its
staff needs to serve multiple organizational structures simultaneously.
This requires that priorities be set by the organization. Unfortunately,
it is difficult, if not impossible, for IT departments to establish such
priorities without engaging in concepts of communities of practice that
allow for a more integrated process of negotiation and determination.
Much of the process of communities of practice would be initiated by
strategic integration and result in many cultural assimilation changes;
that is, the process of implementing communities of practice will
necessitate changes in cultural behavior and organization processes.
As stated, communities-of-practice activities can be initiated via
the strategic integration component of ROD. According to Lesser et
al. (2000), a knowledge strategy based on communities of practice
consists of seven basic steps (Table 4.1).
Lesser and Wenger (2000) suggest that communities of practice
are heavily reliant on innovation: “Some strategies rely more on inno-
vation than others for their success. … Once dependence on innova-
tion needs have been clarified, you can work to create new knowledge
where innovation matters” (p. 8). Indeed, electronic communities of
practice are different from physical communities. IT provides another
dimension to how technology affects organizational learning. It does
so by creating new ways in which communities of practice operate. In
the complexity of ways that it affects us, technology has a dichoto-
mous relationship with communities of practice. That is, there is a
two-sided issue: (1) the need for communities of practice to imple-
ment IT projects and integrate them better into learning organiza-
tions, and (2) the expansion of electronic communities of practice
invoked by technology, which can, in turn, assist in organizational
learning, globally and culturally.
The latter issue establishes the fact that a person can now readily
be a member of many electronic communities, and in many different
79 Organizational Learning Theories
capacities. Electronic communities are different, in that they can
have memberships that are short-lived and transient, forming and
re-forming according to interest, particular tasks, or commonality of
issue. Communities of practice themselves are utilizing technologies
to form multiple and simultaneous relationships. Furthermore, the
growth of international communities resulting from ever-expanding
global economies has created further complexities and dilemmas.
Thus far, I have presented communities of practice as an infra-
structure that can foster the development of organizational learn-
ing to support the existence of technological dynamism. Most of
what I presented has an impact on the cultural assimilation com-
ponent of ROD—that is, affecting organizational structure and the
way things need to be done.

Table 4.1 Extended Seven Steps of Community of Practice Strategy

1. Understanding strategic knowledge needs: what knowledge is critical to success.
   Technology extension: understanding how technology affects strategic knowledge, and what specific technological knowledge is critical to success.

2. Engaging practice domains: people form communities of practice to engage in and identify with.
   Technology extension: technology identifies groups based on business-related benefits; it requires domains to work together toward measurable results.

3. Developing communities: how to help key communities reach their full potential.
   Technology extension: technologies have life cycles that require communities to continue; the life cycle is treated as a supporter for attaining maturation and full potential.

4. Working the boundaries: how to link communities to form broader learning systems.
   Technology extension: technology life cycles require new boundaries to be formed, linking communities that were previously outside discussions and thus expanding input into technology innovations.

5. Fostering a sense of belonging: how to engage people's identities and sense of belonging.
   Technology extension: the process of integrating communities; IT and other organizational units create new, evolving cultures that foster belonging as well as new social identities.

6. Running the business: how to integrate communities of practice into running the business of the organization.
   Technology extension: cultural assimilation provides the new organizational structures necessary to operate communities of practice and to support new technological innovations.

7. Applying, assessing, reflecting, renewing: how to deploy knowledge strategy through waves of organizational transformation.
   Technology extension: the active process of dealing with multiple new technologies accelerates the deployment of knowledge strategy; emerging technologies increase the need for organizational transformation.

However, technology, particularly the
strategic integration component of ROD, fosters a more expanded
vision of what can represent a community of practice. What does
this mean? Communities of practice, through the advent of strate-
gic integration, have expanded to include electronic communities.
While technology can provide organizations with vast electronic
libraries that serve as storehouses of information, such libraries are
only valuable if their contents are shared within the community.
Although IT has led many companies to imagine a new world of
leveraged knowledge, communities have discovered that just storing
information does not provide for effective and efficient use of knowl-
edge. As a result, many companies have created these “electronic”
communities so that knowledge can be leveraged, especially across
cultures and geographic boundaries. These electronic communities
are predictably more dynamic as a result of what technology pro-
vides to them. The following are examples of what these communi-
ties provide to organizations:
• Transcending boundaries and exchanging knowledge with
internal and external communities. In this circumstance,
communities are extending not only across business units,
but also into communities among various clients—as we
see developing in advanced e-business strategies. Using the
Internet and intranets, communities can foster dynamic inte-
gration of the client, an important participant in competitive
advantage. However, the expansion of an external commu-
nity, due to emergent electronics, creates yet another need for
the implementation of ROD.
• Creating “Internet” or electronic communities as sources
of knowledge (Teigland, 2000), particularly for technical-
oriented employees. These employees are said to form “com-
munities of techies”: technical participants, composed largely
of the IT staff, who have accelerated means to come into con-
tact with business-related issues. In the case of Ravell, I cre-
ated small communities by moving IT staff to allow them to
experience the user’s need; this move is directly related to the
larger, and expanded, ability of using electronic communities
of practice.
• Connecting social and workplace communities through
sophisticated networks. This issue links well to the entire
expansion of issues surrounding organizational learning, in
particular, learning organization formation. It enfolds both
the process and the social dialectic issues so important to cre-
ating well-balanced communities of practice that deal with
organizational-level and individual development.
• Integrating teleworkers and non-teleworkers, including the
study of gender and cultural differences. The growth of dis-
tance workers will most likely increase with the maturation of
technological connectivity. Videoconferencing and improved
media interaction through expanded broadband will support
further developments in virtual workplaces. Gender and cul-
ture will continue to become important issues in the expan-
sion of existing models that are currently limited to specific
types of workplace issues. Thus, technology allows for the
“globalization” of organizational learning needs, especially
due to the effects of technological dynamism.
• Assisting in computer-mediated communities. Such media-
tion allows for the management of interaction among com-
munities, of who mediates their communications criteria, and
of who is ultimately responsible for the mediation of issues.
Mature communities of practice will pursue self-mediation.
• Creating “flame” communities. A flame is defined as a lengthy,
often personally insulting, debate in an electronic commu-
nity that provides both positive and negative consequences.
Difference can be linked to strengthening the identification
of common values within a community but requires organiza-
tional maturation that relies more on computerized commu-
nication to improve interpersonal and social factors to avoid
miscommunications (Franco et al., 2000).
• Storing collective knowledge in large-scale libraries and
databases. As Einstein stated: “Knowledge is experience.
Everything else is just information.” Repositories of informa-
tion are not knowledge, and they often inhibit organizations
from sharing important knowledge building blocks that affect
technical, social, managerial, and personal developments that
are critical for learning organizations (McDermott, 2000).
Ultimately, these communities of practice are forming new social
networks, which have established the cornerstone of “global connectiv-
ity, virtual communities, and computer-supported cooperative work”
(Wellman et al., 2000, p. 179). These social networks are creating
new cultural assimilation issues, changing the very nature of the way
organizations deal with and use technology to change how knowledge
develops and is used via communities of practice. It is not, therefore,
that communities of practice are new infrastructure or social forces;
rather, the difference is in the way they communicate. Strategic inte-
gration forces new networks of communication to occur (the IT effect
on communities of practice), and the cultural assimilation component
requires communities of practice to focus on how emerging technolo-
gies are to be adopted and used within the organization.
In sum, what we are finding is that technology creates the need
for new organizations that establish communities of practice. New
members enter the community and help shape its cognitive schemata.
Aldrich (2001) defines cognitive schemata as the “structure that repre-
sents organized knowledge about persons, roles, and events” (p. 148).
This is a significant construct in that it promotes the importance of a
balanced evolutionary behavior among these three areas. Rapid learn-
ing, or organizational knowledge, brought on by technological inno-
vations can actually lessen progress because it can produce premature
closure (March, 1991). Thus, members emerge out of communities of
practice that develop around organizational tasks. They are driven by
technological innovation and need constructs to avoid premature clo-
sure, as well as ongoing evaluation of perceived versus actual realities.
As Brown and Duguid (1991, p. 40) state:
The complex of contradictory forces that put an organization’s assump-
tions and core beliefs in direct conflict with members’ working, learn-
ing, and innovating arises from a thorough misunderstanding of what
working, learning, and innovating are. As a result of such misunder-
standings, many modern processes and technologies, particularly those
designed to downskill, threaten the robust working, learning, and inno-
vating communities and practice of the workplace.
This perspective can be historically justified. We have seen, time
and again, how a technology's original intention is not realized, yet
the technology still proves productive. For instance, many uses of
e-mail by individuals were hard to predict. It may indeed be difficult,
if not impossible, to predict a technology's eventual impact on an
organization or the competitive advantages it will provide. However,
based on evolutionary
theories, it may be beneficial to allow technologies to progress from
driver to supporter activity. Specifically, this means that communi-
ties of practice can provide the infrastructure to support growth from
individual-centered learning toward a less event-driven process that
fosters systems thinking, especially at the management levels
of the organization. As organizations evolve into what Aldrich (2001)
calls “bounded entities,” interaction behind boundaries heightens the
salience of cultural difference. Aldrich’s analysis of knowledge cre-
ation is consistent with what he called an “adaptive organization”—one
that is goal oriented and learns from trial and error (individual-based
learning)—and a “knowledge development” organization (system-
level learning). The latter consists of a set of interdependent members
who share patterns of belief. Such an organization uses inferential and
vicarious learning and generates new knowledge from both experi-
mentation and creativity. Specifically, learning involves sense mak-
ing and builds on the knowledge development of its members. This
becomes critical to ROD, especially in dealing with change driven
by technological innovations. The advantages and challenges of vir-
tual teams and communities of practice are expanded in Chapter 7, in
which I integrate the discussion with the complexities of outsourcing
teams.
Learning Preferences and Experiential Learning
The previous sections of this chapter focused on organizational learn-
ing, particularly two component theories and methods: learning
organizations and communities of practice. Within these two meth-
ods, I also addressed the approaches to learning; that is, learning that
occurs on the individual and the organizational levels. I advocated
the position that both system and individual learning need to be part
of the equation that allows a firm to attain ROD. Notwithstanding
how and when system and individual learning occurs, the investi-
gation of how individuals learn must be a fundamental part of any
theory-to-practice effort, such as the present one. Indeed, whether
one favors a view of learning as occurring on the organizational or
on the individual level (and it occurs on both), we have to recog-
nize that individuals are, ultimately, those who must continue to
learn. Dewey (1933) first explored the concepts and values of what
he called “experiential learning.” This type of learning comes from
the experiences that adults have accrued over the course of their
individual lives. These experiences provide rich and valuable forms
of “literacy,” which must be recognized as important components
to overall learning development. Kolb (1984a) furthered Dewey’s
research and developed an instrument that measures individual
preferences or styles in which adults learn, and how they respond
to day-to-day scenarios and concepts. Kolb’s (1999) Learning Style
Inventory (LSI) instrument allows adults to better understand how
they learn. It helps them understand how to solve problems, work in
teams, manage conflicts, make better career choices, and negotiate
personal and professional relationships. Kolb’s research provided a
basis for comprehending the different ways in which adults prefer to
learn, and it elaborated the distinct advantages of becoming a bal-
anced learner.
The instrument schematizes learning preferences and styles into
four quadrants: concrete experience, reflective observation, abstract
conceptualization, and active experimentation. Adults who prefer to learn
through concrete experience are those who need to learn through
actual experience, or compare a situation with reality. In reflective
observation, adults prefer to learn by observing others, the world
around them, and what they read. These individuals excel in group
discussions and can effectively reflect on what they see and read.
Abstract conceptualization refers to learning, based on the assimila-
tion of facts and information presented, and read. Those who prefer
to learn by active experimentation do so through a process of evaluat-
ing consequences; they learn by examining the impact of experimen-
tal situations. For any individual, these learning styles often work in
combinations. After classifying an individual’s responses to questions,
Kolb’s instrument determines the nature of these combinations. For
example, an individual can have a learning style in which he or she
prefers to learn from concrete experiences using reflective observation
as opposed to actually “doing” the activity. Figure 4.3 shows Kolb’s
model in the form of a “learning wheel.” The wheel graphically shows
an individual’s learning style inventory, reflecting a person’s strengths
and weaknesses with respect to each learning style.
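Kolb's idea that styles work in combination can be sketched as a small scoring routine. The four modes come from the text; the numeric profile and the "top two modes" rule below are illustrative assumptions, not Kolb's actual LSI scoring method.

```python
# Hypothetical sketch of scoring an LSI-style preference profile.
# The four learning modes are from Kolb as described in the text;
# the scoring scheme itself is an illustrative assumption.

MODES = [
    "concrete experience",         # learns from hands-on experience
    "reflective observation",      # learns by observing and reflecting
    "abstract conceptualization",  # learns from facts, ideas, and concepts
    "active experimentation",      # learns by evaluating consequences
]

def learning_style(scores: dict[str, int]) -> tuple[str, str]:
    """Return the two strongest modes as the preferred combination."""
    ranked = sorted(MODES, key=lambda m: scores[m], reverse=True)
    return ranked[0], ranked[1]

# A learner like the text's example: prefers concrete experience
# combined with reflective observation rather than "doing".
profile = {
    "concrete experience": 9,
    "reflective observation": 7,
    "abstract conceptualization": 4,
    "active experimentation": 3,
}
print(learning_style(profile))
# ('concrete experience', 'reflective observation')
```

A balanced learner, in these terms, would show roughly even scores across all four modes rather than a single dominant pair.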
Kolb’s research suggests that learners who are less constrained by
learning preferences within a distinct style are more balanced and are
better learners because they have available to them more dimensions
in which to learn. This is a significant concept; it suggests that adults
who have strong preferences may not be able to learn when faced with
learning environments that do not fit their specific preference. For
example, an adult who prefers group discussion and enjoys reflective
conversation with others may feel uncomfortable in a less interper-
sonal, traditional teaching environment. The importance of Kolb’s
LSI is that it helps adults become aware that such preferences exist.
McCarthy’s (1999) research furthers Kolb’s work by investigating
the relationship between learning preferences and curriculum devel-
opment. Her Learning Type Measure (4Mat) instrument mirrors
and extends the Kolb style quadrants by expressing preferences from
an individual’s perspective on how to best achieve learning. Another
important contribution in McCarthy’s extension of Kolb’s work is the
inclusion of brain function considerations, particularly in terms of
hemisphericity. McCarthy focuses on the cognitive functions asso-
ciated with the right hemisphere (perception) and left hemisphere
(process) of the brain. Her 4Mat system shows how adults, in each
Figure 4.3 Kolb's Learning Style Inventory. [The learning wheel arranges the four styles around two axes: concrete experience (learns from hands-on experience), reflective observation (observes a concrete situation and reflects on its meaning), abstract conceptualization (interested in abstract ideas and concepts), and active experimentation (seeks to find practical uses for ideas and theories).]
style quadrant, perceive learning with the left hemisphere of the
brain and how it is related to processing in the right hemisphere.
For example, for Type 1 learners (concrete experience and reflective
observation), adults perceive in a concrete way and process in a reflec-
tive way. In other words, these adults prefer to learn by actually doing
a task and then processing the experience by reflecting on what they
experienced during the task. Type 2 learners (reflective observation
and abstract conceptualization), however, perceive a task by abstract
thinking and process it by developing concepts and theories from
their initial ideas. Figure 4.4 shows McCarthy’s rendition of the
Kolb learning wheel.
The practical claim to make here is that practitioners who acquire
an understanding of the concepts of the experiential learning mod-
els will be better able to assist individuals in understanding how
they learn, how to use their learning preferences during times of
Figure 4.4 McCarthy rendition of the Kolb Learning Wheel. [The 4MAT wheel poses a question in each quadrant: Why? (QI, meaning), What? (QII, concepts), How? (QIII, skills), and If? (QIV, adaptations), with activities such as examine, image, define, try, extend, refine, counsel, and integrate arranged around the wheel.]
transition, and the importance of developing other dimensions of
learning. The last is particularly useful in developing expertise in
learning from individual reflective practices, learning as a group
in communities of practice, and participating in both individual
transformative learning and organizational transformations. How,
then, does experiential learning operate within the framework of
organizational learning and technology? This is shown in Figure 4.5
in a combined wheel, called the applied individual learning for tech-
nology model, which creates a conceptual framework for linking the
technology life cycle with organizational learning and experiential
learning constructs.
Figure 4.5 expands the wheel into two other dimensions. The
first quadrant (QI) represents the feasibility stage of technology. It
requires communities to work together, to ascertain why a particular
technology might be attractive to the organization. This quadrant is
Figure 4.5 Combined applied learning wheel. [Each quadrant pairs a stage of the technology life cycle with a learning construct: QI, feasibility (Why?), engaging in the technology process through communities of practice; QII, measurement and analysis (What?), conceptualizing driver and supporter life cycles through transformative learning; QIII, planning and design (How?), exploring technology opportunities through knowledge management; and QIV, implementing technology and creation (What if?), through action learning.]
best represented by individuals who engage in group discussions to
make better connections from their own experiences. The process
of determining whether a technology is feasible requires integrated
discourse among the affected communities, which can then make
better decisions than centralized, individual, or predetermined
decisions about whether to use a specific technology. During this phase,
individuals need to operate in communities of practice, as the infra-
structure with which to support a democratic process of consensus
building.
The second quadrant (QII) corresponds to measurement and analy-
sis. This operation requires individuals to engage in specific details
to determine and conceptualize driver and supporter life cycles ana-
lytically. Individuals need to examine the specific details to under-
stand “what” the technology can do, and to reflect on what it means to
them, and their business unit. This analysis is measured with respect
to what the ROI will be, and which driver and supporter functions
will be used. This process requires transformation theory that allows
individuals to perceive and conceptualize which components of the
technology can transform the organization.
Quadrant 3 (QIII), design and planning, defines the “how”
component of the technology life cycle. This process involves explor-
ing technology opportunities after measurement and analysis have
been completed. The process of determining potential uses for
technology requires knowledge of the organization. Specifically, it
needs the abstract concepts developed in QII to be integrated with
tacit knowledge, to then determine possible applications where the
technology can succeed. Thus, knowledge management becomes the
predominant mechanism for translating what has been conceptual-
ized into something explicit (discussed further in Chapter 5).
Quadrant 4 (QIV) represents the implementation-and-creation
step in the technology life cycle. It addresses the hypothetical ques-
tion of “What if?” This process represents the actual implementation
of the technology. Individuals need to engage in action learning tech-
niques, particularly those of reflective practices. The implementation
step in the technology life cycle is heavily dependent on the indi-
vidual. Although there are levels of project management, the essential
aspects of what goes on inside the project very much relies on the
individual performances of the workers.
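The four quadrants described above can be condensed into a small lookup structure. The field names and helper function below are my own; the stage, question, and learning-construct pairings follow the chapter's description of the applied learning wheel.

```python
# Minimal data sketch of the combined applied learning wheel:
# each quadrant maps a technology life-cycle stage to the question
# it answers and the learning construct that supports it.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quadrant:
    stage: str
    question: str
    construct: str

WHEEL = {
    "QI": Quadrant("feasibility", "Why?", "communities of practice"),
    "QII": Quadrant("measurement and analysis", "What?", "transformative learning"),
    "QIII": Quadrant("planning and design", "How?", "knowledge management"),
    "QIV": Quadrant("implementation and creation", "What if?", "action learning"),
}

def support_for(stage: str) -> str:
    """Look up which learning construct supports a given life-cycle stage."""
    for q in WHEEL.values():
        if q.stage == stage:
            return q.construct
    raise KeyError(stage)

print(support_for("feasibility"))  # communities of practice
```

Reading the structure in order makes the chapter's argument concrete: as a technology moves through its life cycle, the organization must shift from community-based consensus building to analysis, then to knowledge management, and finally to individual reflective practice.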
Social Discourse and the Use of Language
The successful implementation of communities of practice depends
heavily on social structures. Indeed, without understand-
ing how social discourse and language behave, creating and sustaining
the internal interactions within and among communities of practice
are not possible. In taking individuals as the central component for
continued learning and change in organizations, it becomes impor-
tant to work with development theories that can measure and support
individual growth and can promote maturation with the promotion
of organizational/system thinking (Watkins & Marsick, 1993). Thus,
the basis for establishing a technology-driven world requires the inclu-
sion of linear and circular ways of promoting learning. While there
is much that we will use from reflective action concepts designed by
Argyris and Schön (1996), it is also crucial to incorporate other theo-
ries, such as marginality, transitions, and individual development.
Senge (1990) also compares learning organizations with engineer-
ing innovation; he calls these engineering innovations “technologies.”
However, he also relates innovation to human behavior and distin-
guishes it as a “discipline.” He defines discipline as “a body of theory
and technique that must be studied and mastered to be put into prac-
tice, as opposed to an enforced order or means of punishment” (p. 10).
A discipline, according to Senge, is a developmental path for acquir-
ing certain skills or competencies. He maintains the concept that cer-
tain individuals have an innate “gift”; however, anyone can develop
proficiency through practice. To practice a discipline is a lifelong
learning process—in contrast to the work of a learning organization.
Practicing a discipline is different from emulating a model. This book
attempts to bring the arenas of discipline and technology into some
form of harmony. What technology offers is a way of addressing the
differences that Senge proclaims in his work. Perhaps this is what is
so interesting and challenging about attempting to apply and under-
stand the complexities of how technology, as an engineering innova-
tion, affects the learning organization discipline—and thereby creates
a new genre of practices. After all, I am not sure that one can master
technology as either an engineering component or a discipline.
Technology dynamism and ROD expand the context of the glo-
balizing forces that have added to the complexity of analyzing “the
language and symbolic media we employ to describe, represent,
interpret, and theorize what we take to be the facticity of organi-
zational life” (Grant et al., 1998, p. 1). ROD needs to create what
I call the “language of technology.” How do we then incorporate
technology in the process of organizing discourse, or how has tech-
nology affected that process? We know that the concept of dis-
course includes language, talk, stories, and conversations, as well
as the very heart of social life, in general. Organizational discourse
goes beyond what is just spoken; it includes written text and other
informal ways of communication. Unfortunately, the study of dis-
course is seen as being less valuable than action. Indeed, discourse
is seen as a passive activity, while “doing” is seen as supporting
more tangible outcomes. However, technology has increased the
importance of sensemaking media as a means of constructing and
understanding organizational identities. In particular, technology,
specifically the use of e-mail, has added to the instability of lan-
guage, and the ambiguities associated with metaphorical analysis—
that is, meaning making from language as it affects organizational
behavior. Another way of looking at this issue is to study the meta-
phor, as well as the discourse, of technology. Technology is actually
less understood today, a situation that creates even greater reason
than before for understanding its metaphorical status in organiza-
tional discourse—particularly with respect to how technology uses
are interpreted by communities of practice. This is best shown using
the schema of Grant et al. of the relationship between content and
activity and how, through identity, skills, and emotion, it leads to
action (Figure 4.6).
To best understand Figure 4.6 and its application to technology,
it is necessary to understand the links between talk and action. It
is the activity and content of conversations that discursively produce
identities, skills, and emotions, which in turn lead to action. Talk,
in respect to conversation and content, implies both oral and writ-
ten forms of communications, discourse, and language. The written
aspect can obviously include technologically fostered communications
over the Internet. It is then important to examine the unique condi-
tions that technology brings to talk and its corresponding actions.
Identity
Individual identities are established in collaborations on a team, or
in being a member of some business committee. Much of the theory
of identity development is related to how individuals see themselves,
particularly within the community in which they operate. Thus, how
active or inactive we are within our communities shapes how we see
ourselves and how we deal with conversational activity and content.
Empowerment is also an important part of identity. Indeed, being
excluded or unsupported within a community establishes a different
identity from other members of the group and often leads to margin-
ality (Schlossberg, 1989).
Identities are not only individual but also collective, which to
a large extent contributes to cultures of practice within organiza-
tional factions. It is through common membership that a collec-
tive identity can emerge. Identity with the group is critical during
discussions regarding emerging technologies and determining how
they affect the organization. The empowerment of individuals, and
the creation of a collective identity, are therefore important in fos-
tering timely actions that have a consensus among the involved
community.
Figure 4.6 Grant's schema: relationship between content and activity. [Conversational activity and conversational content produce identity, skills, and emotions, which in turn lead to action.]
Skills
According to Hardy et al. (1998, p. 71), conversations are “arenas in
which particular skills are invested with meaning.” Watson (1995)
suggests that conversations not only help individuals acquire “techni-
cal skills” but also help develop other skills, such as being persuasive.
Conversations that are about technology can often be skewed toward
the recognition of those individuals who are most “technologically
talented.” This can be a problem when discourse is limited to who
has the best “credentials” and can often lead to the undervaluing of
social production of valued skills, which can affect decisions that lead
to actions.
Emotion
Given that technology is viewed as a logical and rational field, the
application of emotion is not often considered a factor of action.
Fineman (1996) defines emotion as “personal displays of affected, or
‘moved’ and ‘agitated’ states—such as joy, love, fear, anger, sadness,
shame, embarrassment”—and points out that these states are socially
constructed phenomena. There is a positive contribution from emo-
tional energy as well as a negative one. The consideration of positive
emotion in the organizational context is important because it drives
action (Hardy et al., 1998). Indeed, action is driven more by emotion
than by rational calculation. Unfortunately, the study of emotions
often focuses on their negative aspects. Emotion, however, is an important part of how
action is established and carried out, and therefore warrants attention
in ROD.
Identity, skills, and emotion are important factors in how talk actu-
ally leads to action. Theories that foster discourse, and its use in orga-
nizations, on the other hand, are built on linear paths of talk and
action. That is, talk can lead to action in a number of predefined paths.
Indeed, talk is typically viewed as “cheap” without action or, as is often
said, “action is valued,” or “action speaks louder than words.” Talk,
from this perspective, constitutes the dynamism of what must occur
with action science, communities of practice, transformative learn-
ing, and, eventually, knowledge creation and management. Action,
by contrast, can be viewed as the measurable outcomes that have been
eluding organizational learning scholars. However, not all actions
lead to measurable outcomes. Marshak (1998) established three types
of talk that lead to action: tool-talk, frame-talk, and mythopoetic-talk:
1. Tool-talk includes “instrumental communications required to:
discuss, conclude, act, and evaluate outcomes” (p. 82). What
is most important in its application is that tool-talk be used to
deal with specific issues for an identified purpose.
2. Frame-talk focuses on interpretation to evaluate the mean-
ings of talk. Using frame-talk results in enabling implicit and
explicit assessments, which include symbolic, conscious, pre-
conscious, and contextually subjective dimensions.
3. Mythopoetic-talk communicates ideogenic ideas and images
(i.e., myths and cosmologies) that can be used to communicate
the nature of how to apply tool-talk and frame-talk within the
particular culture or society. This type of talk allows for con-
cepts of intuition and ideas for concrete application.
Furthermore, it has been shown that organizational members
experience a difficult and ambiguous relationship between discourse
that makes sense and non-sense—what is also known as “the struggle
with sense” (Grant et al., 1998). Non-sense comprises two parts:
the first lies in the difficulties that individuals experience in
understanding why things occur in organizations, particularly when
their actions “make no sense.” Much of this difficulty can be cor-
related with political issues that create “nonlearning” organizations.
However, the second condition of non-sense is more applicable, and
more important, to the study of ROD than the first—that is, non-
sense associated with acceleration in the organizational change pro-
cess. This area comes from the taken-for-granted assumptions about
the realities of how the organization operates, as opposed to how it can
operate. Studies performed by Wallemacq and Sims (1998) provide
examples of how organizational interventions can decompose stories
about non-sense and replace them with new stories that better address
a new situation and can make sense of why change is needed. This
phenomenon is critical to changes established, or responded to, by the
advent of new technologies. Indeed, technology has many nonsensi-
cal or false generalizations regarding how long it takes to implement
a product, what might be the expected outcomes, and so on. Given
the need for ROD—due to the advent of technology—there is a con-
comitant need to reexamine “old stories” so that the necessary change
agents can be assessed and put into practice. Ultimately, the challenge
set forth by Wallemacq and Sims is especially relevant, and critical,
since the very definition of ROD suggests that communities need
to accelerate the creation of new stories—stories that will occur at
unpredictable intervals. Thus, the link between discourse, organiza-
tional learning, and technology is critical to providing ways in which
to deal with individuals and organizations facing the challenge of
changing and evolving.
Grant’s (1996) research shows that sense making using media and
stories provides effective ways of constructing and understanding
organizational identities. Technology affects discourse in a similar
way that it affects communities of practice; that is, it is a variable that
affects the way discourse is used for organizational evolution. It also
provides new vehicles on how such discourse can occur. However, it is
important not to limit discourse analysis to merely being about “texts,”
emotion, stories, or conversations in organizations. Discourse analysis
examines “the constructing, situating, facilitating, and communicat-
ing of diverse cultural, instrumental, political, and socio-economic
parameters of ‘organizational being’” (Grant, 1996, p. 12). Hence,
discourse is the essential component of every organizational learning
effort. Technology accelerates the need for such discourse and
language to become a more important part of the learning maturation
process, especially in relation to “system” thinking and learning.
I propose then, as part of a move toward ROD, that discourse theories
must be integrated with technological innovation and be part of the
maturation in technology and in organizational learning.
The overarching question is how to apply these theories of dis-
course and language to learning within the ROD framework and par-
adigm. First, let us consider the containers of types of talk discussed
by Marshak (1998) as shown in Figure 4.7.
These types of talk can be mapped onto the technology wheel, so that
the most appropriate oral and written behaviors can be set forth within
each quadrant, and development life cycle, as shown in Figure 4.8.
Mythopoetic-talk is most appropriate in Quadrant 1 (QI), where
the fundamental ideas and issues can be discussed in communities of
practice. These technological ideas and concepts, deemed feasible, are
then analyzed through frame-talk, by which the technology can be
evaluated in terms of how it meets the fundamental premises estab-
lished in QI. Frame-talk also reinforces the conceptual legitimacy
of how technology will transform the organization while provid-
ing appropriate ROI. Tool-talk represents the process of identifying
applications and actually implementing them. For this reason, tool-
talk exists in both QIII and QIV. The former quadrant represents
[Figure 4.7 Marshak’s type-of-talk containers: mythopoetic-talk (ideogenic), frame-talk (interpretive), tool-talk (instrumental).]
[Figure 4.8 Marshak’s model mapped to the technology learning wheel. Wheel phases: Feasibility–Why?; Planning and design–How?; Implementation–What If?; Measurement and analysis–What? Quadrants: QI mythopoetic-talk (ground ideas using communities of practice); QII frame-talk (transformative); QIII tool-talk (discuss-decide: knowledge management); QIV tool-talk (doing using reflective practices).]
the discussion-to-decision portion, and the latter represents the actual
doing and completion of the project itself. In QIII, tool-talk requires
knowledge management to transition technology concepts into real
options. QIV transforms these real options into actual projects, in
which reflecting on actual practices during implementation provides
an opportunity for individual- and organizational-level learning.
Marshak’s (1998) concept of containers and cycles of talk and
action is adapted and integrated with cyclical and linear maturity
models of learning. However, discourse and language must
be linked to performance, which is why they need to be part of the
discourse and language-learning wheel. By integrating discourse
and language into the wheel, individual and group activities can
use discourse and language as part of reflective practices to create
an environment that can foster action that leads to measurable
outcomes. This process, as explained throughout this book, is of
paramount importance in understanding how discourse operates
with ROD in the information age.
Linear Development in Learning Approaches
Focusing only on the role of the individual in the company is an incom-
plete approach to formulating an effective learning program. There is
another dimension to consider that is based on learning maturation.
That is, where in the life cycle of learning are the individuals and the
organization? The best explanation of this concept is the learning mat-
uration experience at Ravell. During my initial consultation at Ravell,
the organization was at a very early stage of organizational learning.
This was evidenced by the dependence of the organization on event-
driven and individual reflective practice learning. Technology acted
as an accelerator of learning—it required IT to design a new network
during the relocation of the company. Specifically, the acceleration,
operationalized by a physical move, required IT to establish new rela-
tionships with line management. The initial case study concluded that
there was a cultural change as a result of these new relationships—
cultural assimilation started to occur using organizational learning
techniques, specifically reflective practices.
After I left Ravell, another phase in the evolution of the company
took place. A new IT director was hired in my stead, who attempted
to reinstate the old culture: centralized infrastructure, stated opera-
tional boundaries, and separations that mandated anti-learning orga-
nizational behaviors. After six months, the line managers, faced with
having to revert back to a former operating culture, revolted and
demanded the removal of the IT director. This outcome, regrettable
as it may be, is critical in proving the conclusion of the original study:
that the culture at Ravell had indeed evolved from its state at the time
of my arrival. The following are two concrete examples that support
this notion:
1. The attempt of the new IT director to “roll back” the process
to a former cultural state was unsuccessful, showing that a
new culture had indeed evolved.
2. Line managers came together from the established learning
organization to deliver a concerted message to the execu-
tive team. Much of their learning had now shifted to a social
organization level that was based less on events and was
more holistic with respect to the goals and objectives of the
organization.
Thus, we see a shift from an individual-based learning process
to one that is based more on the social and organizational issues to
stimulate transformation. This transformation in learning method
occurred within the same management team, suggesting that changes
in learning do occur over time and from experience. Another way of
viewing the phenomenon is to see Ravell as reaching the next level of
organizational learning or maturation with learning. Consistent with
the conclusion of the original study, technology served to accelerate
the process of change or accelerate the maturation process of organi-
zational learning.
Another phase (Phase II) of Ravell transpired after I returned
to the company. I determined at that time that the IT department
needed to be integrated with another technology-based part of the
business—the unit responsible for media and engineering services
(as opposed to IT). While I had suggested this combination eight
months earlier, the organization had not reached the learning matu-
ration to understand why such a combination was beneficial. Much
of the reason it did not occur earlier can also be attributed to the
organization’s inability to manage ROD, which, if implemented,
would have made the integration more obvious. The initial Ravell
study served to bring forth the challenges of cultural assimilation,
to the extent that the organization needed to reorganize itself and
change its behavior. In Phase II, the learning process matured by
accelerating the need for structural change in the actual reporting
processes of IT.
A year later, yet another learning maturation phase (Phase III)
occurred, allowing the firm to better manage ROD. After completing
the merger of the two technically related business units discussed
(Phase II), it became necessary to move a core database department
completely out of the combined technology department, and
to integrate it with a business unit. The reason for this change was
compelling and brought to light a shortfall in my conclusions from
the initial study. It appears that as organizational learning matures
within ROD, there is an increasing need to educate the executive
management team of the organization. This was not the case during
the early stages of the case study. The limitation of my work, then,
was that I predominantly interfaced with line management and
neglected to include executives in the learning. During that time,
results were encouraging, so there was little reason for me to include
executives in event-driven issues, as discussed. Unfortunately, their
lack of participation fostered a disconnect with the strategic
integration component of ROD. Not participating in ROD left
executives ignorant of the importance of IT to the strategy of
the business. Their lack of knowledge resulted in chronic problems
with understanding the relationship and value of IT to the business
units of the organization. This shortcoming resulted in continued
conflicts over investments in the IT organization. It ultimately left
IT with the inability to defend many of its cost requirements. As
stated, during times of economic downturns, firms tend to reduce
support organizations. In other words, executive management did
not understand the driver component of IT.
After the move of the cohort of database developers to a formal
business line unit, the driver components of the group provided
the dialogue and support necessary to educate executives. However,
this education did not occur based on events, but rather, on using
the social and group dynamics of organizational learning. We see
here another aspect of how organizational and individual learning
methods work together, but evolve in a specific way, as summarized
in Table 4.2.
Another way of representing the relationship between individual
and organizational learning over time is to chart a “maturity” arc
to illustrate the evolutionary life cycle of technology and organiza-
tional learning. I call this arc the ROD arc. The arc is designed to
assess individual development in four distinct sectors of ROD, each
in relation to five developmental stages of organizational learning.
Thus, each sector of ROD can be measured in a linear and inte-
grated way. Each stage in the course of the learning development
Table 4.2 Analysis of Ravell’s Maturation with Technology

Type of learning. Phase I: Individual reflective practices used to establish operations and line management. Phase II: Line managers defend the new culture and participate in less event-driven learning. Phase III: Movement away from holistic formation of IT into separate driver and supporter attributes; learning approaches are integrated using both individual and organizational methods, and are based on functionality as opposed to being organizationally specific.

Learning outcomes. Phase I: Early stage of learning organization development. Phase II: Combination of event-driven and early-stage social organizational learning formation. Phase III: Movement toward social-based organizational decision making, relative to the different uses of technology.

Responsive organizational dynamism: cultural assimilation. Phase I: Established new culture; no change in organizational structure. Phase II: Cultural assimilation stability with existing structures; early phase of IT organizational integration with similar groups. Phase III: Mature use of cultural assimilation, based on IT behaviors (drivers and supporters).

Responsive organizational dynamism: strategic integration. Phase I: Limited integration due to lack of executive involvement. Phase II: Early stages of value/needs based on similar strategic alignment. Phase III: Social structures emphasize strategic integration based on business needs.
of an organization reflects an underlying principle that guides the
process of ROD norms and behaviors; specifically, it guides orga-
nizations in how they view and use the ROD components available
to them.
The arc is a classificatory scheme that identifies progressive
stages in the assimilated uses of ROD. It reflects the perspective—
paralleling Knefelkamp’s (1999) research—that individuals in an
organization are able to move through complex levels of thinking,
and to develop independence of thought and judgment, as their
careers progress within the management structures available to
them. Indeed, assimilation to learning at specific levels of operations
and management is not necessarily an achievable end but
one that fits into the psychological perspective of what productive
employees can be taught about ROD adaptability. Figure 4.9 illus-
trates the two axes of the arc.
The profile of an individual who assimilates the norms of ROD
can be characterized in five developmental stages (vertical axis)
along four sectors of literacy (horizontal axis). The arc character-
izes an individual at a specific level in the organization. At each
level, the arc identifies individual maturity with ROD, specifically
strategic integration, cultural assimilation, and the type of learning
process (i.e., individual vs. organizational). The arc shows how each
tier integrates with another, what types of organizational learning
theory best apply, and who needs to be the primary driver within
the organization. Thus, the arc provides an organizational schema
for how each conceptual component of organizational learning
applies to each sector of ROD. It also identifies and constructs a
path for those individuals who want to advance in organizational
rank; that is, it can be used to ascertain an individual’s ability to
cope with ROD requirements as a precursor for advancement in
management. Each position within a sector, or cell, represents a
specific stage of development within ROD. Each cell contains spe-
cific definitions that can be used to identify developmental stages
of ROD and organizational learning maturation. Figure 4.10 rep-
resents the ROD arc with its cell definitions. The five stages of the
arc are outlined as follows:
[Figure 4.9 Responsive organizational dynamism arc model, showing the two axes of the arc: the sectors of responsive organizational dynamism (strategic integration; cultural assimilation; organizational learning constructs; management level) plotted against the developmental stages (operational knowledge; department/unit view as other; integrated disposition; stable operations; organizational leadership).]
[Figure 4.10 Responsive organizational dynamism arc, with cell definitions for each sector variable across the five stages (operational knowledge; department/unit view as other; integrated disposition; stable operations; organizational leadership).

Strategic integration: Operations personnel understand that technology has an impact on strategic development, particularly on existing processes. / Individual beliefs of strategic impact are incomplete; the individual needs to incorporate other views within the department or business unit. / Recognition that individual and department views must be integrated to be complete and strategically productive for the department/unit. / Changes made to processes at the department/unit level formally incorporate emerging technologies. / Departmental strategies are propagated and integrated at organization level.

Cultural assimilation: View that technology can and will affect the way the organization operates and that it can affect roles and responsibilities. / Changes brought forth by technology need to be assimilated into departments and are dependent on how others participate. / Understands need for organizational changes; different cultural behaviors and new structures are seen as viable solutions. / Organizational changes are completed and in operation; existence of new or modified employee positions. / Department-level organizational changes and cultural evolution are integrated with organization-wide functions and cultures.

Organizational learning constructs: Individual-based reflective practice. / Small group-based reflective practices. / Interactive with both individual and middle management using communities of practice. / Interactive between middle management and executives using social discourse methods to promote transformation. / Organizational learning at executive level using knowledge management.

Management level: Operations. / Operations and middle management. / Middle management. / Middle management and executive. / Executive.]
1. Operational knowledge: Represents the capacity to learn, con-
ceptualize, and articulate key issues relating to how technology
can have an impact on existing processes and organizational
structure. Organizational learning is accomplished through
individual learning actions, particularly reflective practices.
This stage typically is the focus for operations personnel, who
are usually focused on their personal perspectives of how
technology affects their daily activities.
2. Department/unit view as other: Indicates the ability to integrate
points of view about using technology from diverse individuals
within the department or business unit. Using these
new perspectives, the individual is in position to augment
his or her understanding of technology and relate it to others
within the unit. Operations personnel participate in small-
group learning activities, using reflective practices. Lower
levels of middle managers participate in organizational learn-
ing that is in transition, from purely individual to group-level
thinking.
3. Integrated disposition: Recognizes that individual and departmental
views on using technology need to be integrated to
form effective business unit objectives. Understanding that
organizational and cultural shifts need to include all mem-
ber perspectives, before formulating departmental decisions,
organizational learning is integrated with middle managers,
using communities of practice at the department level.
4. Stable operations: Develops in relation to competence in sectors
of ROD appropriate for performing job duties for emerging
technologies, not merely adequately, but competitively,
with peers and higher-ranking employees in the organization.
Organizational learning occurs at the organizational level
and uses forms of social discourse to support organizational
transformation.
5. Organizational leadership: Ability to apply sectors of ROD to
multiple aspects of the organization. Department concepts
can be propagated to organizational levels, including strate-
gic and cultural shifts, relating to technology opportunities.
Organizational learning occurs using methods of knowledge
management with executive support. Individuals use their
technology knowledge for creative purposes. They are will-
ing to take risks using critical discernment and what Heath
(1968) calls “freed” decision making.
The ROD arc addresses both individual and organizational
learning. There are aspects of Senge’s (1990) “organizational”
approach that are important and applicable to this model. I
have mentioned its appropriateness in regard to the level of the
manager—suggesting that the more senior manager is better positioned
to deal with nonevent learning practices. However, there is
yet another dimension within each stage of matured learning. This
dimension pertains to timing. The timing dimension focuses on
a multiple-phase approach to maturing individual and organiza-
tional learning approaches. The multiple phasing of this approach
suggests a maturing or evolutionary learning cycle that occurs
over time, in which individual learning fosters the need and the
acceptance of organizational learning methods. This process can
be applied within multiple tiers of management and across differ-
ent business units.
The ROD arc can also be integrated with the applied individual
learning wheel. The combined models show the individual’s cycle of
learning along a path of maturation. This can be graphically shown
to reflect how the wheel turns and moves along the continuum of the
arc (Figure 4.11).
Figure 4.11 shows that an experienced technology learner can
maximize learning by utilizing all four quadrants in each of the
maturity stages. It should be clear that certain quadrants of indi-
vidual learning are more important to specific stages on the arc.
However, movement through the arc is usually not symmetrical;
that is, individuals do not move equally from stage to stage, within
the dimensions of learning (Langer, 2003). This integrated and
multiphase method uses the applied individual learning wheel
with the arc. At each stage of the arc, an individual will need
to draw on the different types of learning that are available in
the learning wheel. Figure 4.12 provides an example of this con-
cept, which Knefelkamp calls “multiple and simultaneous” (1999),
meaning that learning can take on multiple meanings across dif-
ferent sectors simultaneously.
Figure 4.12 shows that the dimension variables are not necessarily
parallel in their linear maturation. This phenomenon is not unusual
with linear models and, in fact, is quite normal. However, it also reflects
the complexity of how variables mature, and the importance of having
the capability and infrastructure to determine how to measure such
levels of maturation within dimensions. There are both qualitative
and quantitative approaches to this analysis. Qualitative approaches
typically include interviewing, ethnographic-type experiences over
some predetermined time period, individual journals or diaries, group
meetings, and focus groups. Quantitative measures involve the creation
of survey-type measures; they are based on statistical results
from answering questions that identify the level of maturation of the
individual.

[Figure 4.11 ROD arc with applied individual learning wheel. The learning wheel (quadrants QI–QIV: exploring technology opportunities, engaging in the technology process, implementing technology, and conceptualizing driver and supporter life cycles; phases Feasibility–Why?, Planning and design–How?, Creation–What If?, and Measurement and analysis–What?) turns along the arc’s increased levels of maturity with organizational dynamism: operational knowledge, department/unit view as other, integrated disposition, stable operations, and organizational leadership, drawing on action learning, communities of practice, transformative learning, and knowledge management.]

[Figure 4.12 Sample ROD arc, plotting the dimension variables (strategic integration; cultural assimilation; organizational learning constructs; management level) against the five stages from operational knowledge to organizational leadership.]
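To make the quantitative approach concrete, such a survey score can be sketched in code. This is a hypothetical illustration, not an instrument from the text: it assumes a survey in which each question maps to one arc dimension and is answered on a 1–5 Likert scale, with the mean response for a dimension rounded onto the five developmental stages of the arc.

```python
# Hypothetical sketch of a quantitative ROD maturation measure.
# Assumes each survey question maps to one arc dimension and is
# answered on a 1-5 Likert scale; the stage names come from the arc.

STAGES = [
    "Operational knowledge",
    "Department/unit view as other",
    "Integrated disposition",
    "Stable operations",
    "Organizational leadership",
]

def stage_for_dimension(responses):
    """Average the 1-5 Likert responses for one dimension and map the
    mean onto the five developmental stages of the arc."""
    mean = sum(responses) / len(responses)
    # Round the mean to the nearest stage (1 -> first, 5 -> last).
    index = min(4, max(0, round(mean) - 1))
    return STAGES[index]

def maturity_profile(survey):
    """survey: dict mapping dimension name -> list of Likert responses.
    Returns each dimension's assessed stage; as the sample arc shows,
    dimensions need not mature in parallel."""
    return {dim: stage_for_dimension(r) for dim, r in survey.items()}

profile = maturity_profile({
    "strategic integration": [2, 3, 2],
    "cultural assimilation": [4, 5, 4],
})
```

A real instrument would require validated questions and appropriate statistics; the sketch only shows how responses per dimension could be reduced to a position on the arc.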
The learning models that I elaborate in this chapter are suggestive
of the rich complexities surrounding the learning process for indi-
viduals, groups, and entire organizations. This chapter establishes a
procedure for applying these learning models to technology-specific
situations. It demonstrates how to use different phases of the learning
process to further mature the ability of an organization to integrate
technology strategically and culturally.
5
Managing Organizational Learning and Technology

The Role of Line Management
In Chapter 1, the results of the Ravell case study demonstrated the
importance of the role that line managers play in the success of implementing
organizational learning, particularly in the objective of integrating
the information technology (IT) department. There has been
much debate related to the use of event-driven learning. In particular,
there is Senge’s (1990) work from his book, The Fifth Discipline. While
overall, I agree with his theories, I believe that there is a need to critique
some of his core concepts and beliefs. That is, Senge tends to make
broad generalizations about the limits of event-driven education and
learning in organizations. He believes that learning from experience
is inherently limited, because we cannot always observe the results of
our actions—as he asks: “What happens when we can no longer observe
the consequences of our actions?” (Senge, 1990, p. 23).
My research has found that event-driven learning is essential to
most workers who have yet to learn through other means. I agree with
Senge that not all learning can be obtained through event-oriented
thinking, but I feel that much of what occurs at this horizon pertains
more to the senior levels than to what many line managers have to deal
with as part of their functions in business. Senge’s concern with learning
methods that focus too much on the individual is, perhaps, more
powerful if we see the learning organization as starting at the top and
then working its way down. My position, however, particularly with
respect to the integration of technology, is that too much dependence
on executive-driven programs to establish and sustain organizational
learning is dangerous. Rather, the line management—or middle
managers who fundamentally run the business—is best positioned
to make the difference. My hypothesis here is that both top-down
and bottom-up approaches to organizational learning are riddled with
problems, especially in their ability to sustain outcomes. We cannot
be naïve—even our senior executives must drive results to maintain
their positions. As such, middle managers, as the key business drivers,
must operate in an event- and results-driven world—let us not under-
estimate the value of producing measurable outcomes, as part of the
ongoing growth of the organizational learning practicum.
To explore the role of middle managers further, I draw on the inter-
esting research done by Nonaka and Takeuchi (1995). These research-
ers examined how Japanese companies manage knowledge creation,
by using an approach that they call “middle-up-down.” Nonaka and
Takeuchi found that middle managers “best communicate the contin-
uous iterative process by which knowledge is created” (p. 127). These
middle managers are often seen as leaders of a team, or task, in which
a “spiral conversion process” operates and that requires both executive
and operations management personnel. Peters and Waterman (1982),
among others, have attacked middle managers as representing a
layer of management that creates communication problems and inefficiencies
in business processes, which they blame for leaving U.S. workers
trailing behind their international competitors during the automobile
crisis of the 1970s. They advocate a “flattening” of the never-ending
levels of bureaucracy responsible for inefficient operations. However,
executives often are not aware of details within their operating depart-
ments and may not have the ability or time to acquire those details.
Operating personnel, on the other hand, do not possess the vision
and business aptitudes necessary to establish the kind of knowledge
creation that fosters strategic learning.
Middle managers, or what I prefer to identify as line managers
(Langer, 2001b), possess an effective combination of skills that can pro-
vide positive strategic learning infrastructures. Line managers under-
stand the core issues of productivity in relation to competitive operations
and return on investment, and they are much closer to the day-to-day
activities that bring forth the realities of how, and when, new strategic
processes can be effectively implemented. While many researchers, such
as Peters and Waterman, find them to be synonymous with backward-
ness, stagnation, and resistance to change, middle managers are the
core group that can provide the basis for continuous innovation through
strategic learning. It is my perspective that the difference of opinion
regarding the positive or negative significance middle managers have
in relation to organizational learning has to do with the wide-ranging
variety of employees who fall into the category of “middle.” It strikes
me that Peters and Waterman were somewhat on target with respect to
a certain population of middle managers, although I would not char-
acterize them as line managers. To justify this position, it is important
to clearly establish the differences. Line managers should be defined as
pre-executive employees who have reached a position of managing a
business unit that contains some degree of return on investment for the
business. In effect, I am suggesting that focusing on “middle” manag-
ers, as an identifiable group, is too broad. Thus, there is a need to further
delineate the different levels of what comprises middle managers, and
their roles in the organization.
Line Managers
These individuals usually manage an entire business unit and have
“return-on-investment” responsibilities. Line managers should be
categorized as those who have middle managers reporting to them;
they are, in effect, managers of managers, or, as in some organiza-
tions, they serve a “directorial” function. Such individuals are, in
many ways, considered future executives and perform many low-end
executive tasks. They are, if you will, executives in training. What
is significant about this managerial level is the knowledge it carries
about operations. However, line managers are still involved in daily
operations and maintain their own technical capabilities.
First-Line Managers
First-line individuals manage nonmanagers but can have supervisory
employees who report to them. They do not carry the responsibility
for a budget line unit but for a department within the unit. These
managers have specific goals that can be tied to their performance and
to the department’s productivity.
Supervisor
A supervisor is the lowest-level middle manager. These individu-
als manage operational personnel within the department. Their
management activities are typically seen as “functions,” as opposed
to managing an entire operation. These middle managers do not have
other supervisors or management-level personnel reporting to them.
We should remember that definitions typically used to character-
ize the middle sectors of management, as described by researchers
like Peters, Nonaka, and others, do not come from exact science. The
point must be made that middle managers cannot be categorized by a
single definition. The category requires distinctive definitions within
each level of stratification presented. Therefore, being more specific
about the level of the middle manager can help us determine the man-
ager’s role in the strategic learning process. Given that Nonaka and
Takeuchi (1995) provide the concept of middle-up-down as it relates
to knowledge management, I wish to broaden it into a larger sub-
ject of strategic learning, as a method of evolving changes in culture
and organizational thinking. Furthermore, responsive organizational
dynamism (ROD), unlike other organizational studies, represents
both situational learning and ongoing evolutionary learning require-
ments. Evolutionary learning poses a difficult challenge to organizational learning concepts and requires significant contribution from middle managers. To understand the complexity of
the middle manager, all levels of the organization must be taken into
consideration. I call this process management vectors.
Management Vectors
Senge’s (1990) work addresses some aspects of how technology might
affect organizational behavior: “The central message of the Fifth
Discipline is more radical than ‘radical organization redesign’—
namely that our organizations work the way they work, ultimately
because of how we think and how we interact” (p. xiv). Technology
aspires to be a new variable or catalyst that can change everyday
approaches to things—to be the radical change element that forces
us to reexamine norms no longer applicable to business operations.
On the other hand, technology can be dangerous if perceived unre-
alistically as a power that possesses new answers to organizational
performance and efficiency. In the late 1990s, we experienced the
“bust” of the dot-com boom, a boom that had challenged conventional norms of how businesses operate. Dot-coms sold the concepts
that brick-and-mortar operations could no longer compete with new
technology-driven businesses and that “older” workers could not be
transformed in time to make dot-com organizations competitive.
Dot-coms allowed us to depart from our commitment to knowledge workers and learning organizations, a departure whose effects persist today.
For example, in 2003, IBM at its corporate office in Armonk, New
York, laid off 1,000 workers who possessed skills that were no lon-
ger perceived as needed or competitive. Rather than retrain work-
ers, IBM determined that hiring new employees to replace them
was simply more economically feasible and easier in terms of transforming their organizational behaviors. However, in my interview
with Stephen McDermott, chief executive officer (CEO) of ICAP
Electronic Trading Community (ETC), it became apparent that
many of the mystiques of managing technology were incorrect. As he
stated, “Managing a technology company is no different from manag-
ing other types of businesses.” While the technical skills of the IBM
workers may no longer be necessary, why did the organization not
provide enough opportunities to migrate important knowledge work-
ers to another paradigm of technical and business needs? Widespread
worker replacements tell us that few organizational learning infra-
structures actually exist. The question is whether technology can pro-
vide the stimulus to prompt more organizations to commit to creating
infrastructures that support growth and sustained operation. Most
important is the question of how we establish infrastructures that can
provide the impetus for initial and ongoing learning organizations.
This question suggests that the road to working successfully with tech-
nology will require the kind of organizational learning that is driven
by both individual and organization-wide initiatives. This approach
can be best explained by referring to the concept of driver and sup-
porter functions and life cycles of technology presented in Chapter 3.
Figure 5.1 graphically shows the relationship between organizational
structure and organizational learning needs. We also see that this
relationship maps onto driver and supporter functionality.
Figure 5.1 provides an operational overview of the relations between
the three general tiers of management in most organizations. These
levels or tiers are mapped onto organizational learning approaches;
that is, organizational/system or individual. This mapping follows a
general view based on what individuals at each of these tiers view or
seek as their job responsibilities and what learning method best sup-
ports their activities within their environment. For example, execu-
tive learning focuses on system-level thinking and learning because
executives need to view their organizations in a longer-term way (e.g.,
return on investment), as opposed to viewing learning at the level of individual, transactional events. Yet, executives play an integral part in
long-term support for technology, as an accelerator. Their role within
ROD is to provide the stimulus to support the process of cultural
assimilation, and they are also very much a component of strategic
integration. Executives do not require as much event-driven reflective
change, but they need to be part of the overall “social” structure that
paves the way for marrying the benefits of technology with organi-
zational learning. What executives do need to see are the planned
measurable outcomes linked to performance from the investment of
coupling organizational learning with technology. The lack of execu-
tive involvement and knowledge will be detrimental to the likelihood
of making this relationship successful.
Operations, on the other hand, are based more on individual prac-
tices of learning. Attempting to incorporate organizational vision
and social discourse at this level is problematic until event-driven
learning is experienced individually to prove the benefits that can be
derived from reflective practices. In addition, there is the problem of
the credibility of a learning program. Workers are often wary of new
Figure 5.1 Three-tier organizational structure. (The figure maps the executive tier, middle management tiers, and operations tier onto their driver/supporter life-cycle involvement, organizational learning method, and learning approach: the executive tier pairs driver involvement with organization/system learning and knowledge management, the middle tiers combine organization/system learning on the driver side with individual learning on the supporter side through communities of practice and reflective practices, and the operations tier pairs supporter functions with event-driven, individual reflective practices.)
programs designed to enhance their development and productivity.
Many question the intentions of the organization and why it is mak-
ing the investment, especially given what has occurred in corporations
over the last 20 years: Layoffs and scandals have riddled organizations
and hurt employee confidence in the credibility of employer programs.
Ravell showed us that using reflective practices during events pro-
duces accelerated change, driven by technological innovation, which
in turn, supports the development of the learning organization. It is
important at this level of operations to understand the narrow and
pragmatic nature of the way workers think and learn. The way operations personnel are evaluated is also a factor: they are judged against specific performance criteria.
The most complex, yet combined, learning methods relate to the
middle management layers. Line managers, within these layers, are
engrossed in a double-sided learning infrastructure. On one side, they
need to communicate and share with executives what they perceive to
be the “overall” issues of the organization. Thus, they need to learn
using an organizational learning approach, which is less dependent
on event-driven learning and uses reflective practice. Line managers
must, along with their senior colleagues, be able to see the business
from a more proactive perspective and use social-oriented methods
if they hope to influence executives. Details of events are more of an
assumed responsibility to them than a preferred way of interacting. In
other words, most executives would rather interface with line manag-
ers on how they can improve overall operations efficiently and effec-
tively, as opposed to dealing with them on a micro, event-by-event
basis. The assumption, then, is that line managers are expected to deal
with the details of their operations, unless there are serious problems
that require the attention of executives; such problems are usually cor-
related to failures in the line manager’s operations.
On the other side are the daily relationships and responsibilities
managers face for their business units. They need to incorporate more
individual-based learning techniques that support reflective practices
within their operations to assist in the personal development of their
staff. The middle management tier described in Figure 5.1 is shown
at a summary level and needs to be further described. Figure 5.2 pro-
vides a more detailed analysis based on the three types of middle man-
agers described. The figure shows the ratio of organizational learning
to individual learning based on manager type. The more senior the
manager, the more learning is based on systems and social processes.
Knowledge Management
There is an increasing recognition that the competitive advantage of
organizations depends on their “ability to create, transfer, utilize, and
protect difficult-to-imitate knowledge assets” (Teece, 2001, p. 125).
Indeed, according to Bertels and Savage (1998), the dominant logic
of the industrial era requires an understanding of how to break the
learning barrier to comprehending the information era. While we
have developed powerful solutions to change internal processes and
organizational structures, most organizations have failed to address
the cultural dimensions of the information era. Organizational
knowledge creation is a result of organizational learning through stra-
tegic processes. Nonaka and Takeuchi (1995) define organizational
knowledge as “the capability of a company as a whole to create new
knowledge, disseminate it throughout the organization, and embody
it in products, services, and systems” (p. 3). Nonaka and Takeuchi use
the steps shown in Figure 5.3 to assess the value and chain of events
surrounding the valuation of organizational knowledge.
Figure 5.2 Organizational/system versus individual learning by middle manager level. (The figure shows learning shifting from highly individual-based at the supervisor level to highly organizational/system-based at the director level, with managers falling in between.)
Figure 5.3 Nonaka and Takeuchi steps to organizational knowledge: knowledge creation, leading to continuous innovation, leading to competitive advantage.
If we view the Figure 5.3 processes as leading to competitive advan-
tage, we may ask how technology affects the chain of actions that
Nonaka and Takeuchi (1995) identify. Without violating the model,
we may insert technology and observe the effects it has on each step,
as shown in Figure 5.4.
According to Nonaka and Takeuchi (1995), to create new knowl-
edge means to re-create the company, and everyone in it, in an ongo-
ing process that requires personal and organizational self-renewal.
That is, knowledge creation is the responsibility of everyone in the
organization. The viability of this definition, however, must be ques-
tioned. Can organizations develop personnel who will adhere to such
parameters, and under what conditions will senior management sup-
port such an endeavor?
Again, technology has a remarkable role to play in substantiat-
ing the need for knowledge management. First, executives are still
challenged to understand how to deal with emerging technologies and whether their organizations are capable
of using them effectively and efficiently. Knowledge management
provides a way for the organization to learn how technology will be
used to support innovation and competitive advantage. Second, IT
departments need to understand how they can best operate within
the larger scope of the organization—they are often searching for a
true mission that contains measurable outcomes, as defined by the
entire organization, including senior management. Third, both execu-
tives and IT staff agree that understanding the uses of technology is a
continuous process that should not be utilized solely in a reactionary
Figure 5.4 Nonaka and Takeuchi organizational knowledge with technology extension:
Knowledge creation: Technology provides more dynamic shifts in knowledge, thus accelerating the number of knowledge-creation events that can occur.
Continuous innovation: Innovations are accelerated because of the dynamic nature of events and the time required to respond; therefore, continuous innovation procedures are more significant to have in each department in order to respond to technological opportunities on an ongoing basis.
Competitive advantage: Technology has generated more global competition, and competitive advantages that depend on technological innovation are more common.
and event-driven way. Finally, most employees accept the fact that
technology is a major component of their lives at work and at home,
that technology signifies change, and that participating in knowledge
creation is an important role for them.
Again, we can see that technology provides the initiator for
understanding how organizational learning is important for com-
petitive advantage. The combination of IT and other organizational
departments, when operating within the processes outlined in ROD,
can significantly enhance learning and competitive advantage. To
expand on this point, I now focus on the literature specifically relat-
ing to tacit knowledge and its important role in knowledge man-
agement. Scholars theorize that knowledge management is the ability to
transfer individual tacit knowledge into explicit knowledge. Kulkki
and Kosonen (2001) define tacit knowledge as an experience-based
type of knowledge and skill and as the individual capacity to give
intuitive forms to new things; that is, to anticipate and preconcep-
tualize the future. Technology, by its very definition and form of
being, requires this anticipation and preconceptualization. Indeed,
it provides the perfect educational opportunity in which to practice
the transformation of tacit into explicit knowledge. Tacit knowledge
is an asset, and having individual dynamic abilities to work with
such knowledge commands a “higher premium when rapid organic
growth is enabled by technology” (Teece, 2001, p. 140). Thus,
knowledge management is likely to be greater when technological
opportunity is richer.
Because evaluating emerging technologies requires the ability to
look into the future, it also requires that individuals translate valu-
able tacit knowledge, and creatively see how these opportunities are
to be judged if implemented. Examples of applicable tacit knowledge
in this process are here extracted from Kulkki and Kosonen (2001):
• Cultural and social history
• Problem-solving modes
• Orientation to risks and uncertainties
• Worldview organizing principles
• Horizons of expectations
I approach each of these forms of tacit knowledge from the per-
spective of the components of ROD as shown in Table 5.1.
It is not my intention to suggest that all technologies should be, or
can be, used to generate competitive advantage. To this extent, some
technologies may indeed get rejected because they cannot assist the
organization in terms of strategic value and competitive advantage. As
Teece (2001) states, “Information transfer is not knowledge transfer and
information management is not knowledge management, although the
former can assist the latter. Individuals and organizations can suffer
from information overload” (p. 129). While this is a significant issue for
many firms, the ability to have an organization that can select, interpret,
Table 5.1 Mapping Tacit Knowledge to Responsive Organizational Dynamism

Cultural and social history
Strategic integration: How the IT department and other departments translate emerging technologies into their existing processes and organization.
Cultural assimilation: Technology opportunities may require organizational and structural changes to transfer tacit knowledge to explicit knowledge.

Problem-solving modes
Strategic integration: Individual reflective practices that assist in determining how specific technologies can be useful and how they can be applied.
Cultural assimilation: Utilization of tacit knowledge to evaluate probabilities for success.

Orientation to risks and uncertainties
Strategic integration: Technology offers many risks and uncertainties. All new technologies may not be valid for the organization.
Cultural assimilation: Tacit knowledge is a valuable component to fully understand realities, risks, and uncertainties.

Worldviews
Strategic integration: Technology has global effects and changes market boundaries that cross business cultures. It requires tacit knowledge to understand existing dispositions on how others work together.
Cultural assimilation: Review how technology affects the dynamics of operations.

Organizing principles
Strategic integration: How will new technologies actually be integrated? What are the organizational challenges to “rolling out” products and to implementation timelines? What positions are needed, and who in the organization might be best qualified to fill new responsibilities?
Cultural assimilation: Identify limitations of the organization; that is, tacit knowledge versus explicit knowledge realities.

Horizons of expectations
Strategic integration: Individual limitations in the tacit domain that may hinder or support whether a technology can be strategically integrated into the organization.
and integrate information is a valuable part of knowledge management.
Furthermore, advances in IT have propelled much of the excitement
surrounding knowledge management. It is important to recognize that
learning organizations, reflective practices, and communities of prac-
tice all participate in creating new organizational knowledge. This is
why knowledge management is so important. Knowledge must be built
on its own terms, which requires intensive and laborious interactions
among members of the organization.
Change Management
Because technology requires that organizations accelerate their
actions, it is necessary to examine how ROD corresponds to theories
in organizational change. Burke (2002) states that most organiza-
tional change is evolutionary; however, he defines two distinct types
of change: planned versus unplanned and revolutionary versus evolu-
tionary. Burke also suggests that external environmental changes
are more rapid today and that most organizations “are playing catch
up.” Many rapid changes to the external environment can be attrib-
uted to emerging technologies, which have accelerated the divide
between what an organization does and what it needs to do to remain
competitive. This is the situation that creates the need for ROD.
The catching-up process becomes more difficult because the amount
of change required is only increasing given ever-newer technologies.
Burke (2002) suggests that this catching up will likely require planned
and revolutionary change. Such change can be mapped onto much of
my work at Ravell. Certainly, change was required; I planned it, and
change had to occur. However, the creation of a learning organiza-
tion, using many of the organizational learning theories addressed
in Chapter 4, supports the eventual establishment of an operating
organization that can deal with unplanned and evolutionary change.
When using technology as the reason for change, it is then important
that the components of ROD be integrated with theories of organi-
zational change.
History has shown that most organizational change is not success-
ful in providing its intended outcomes, because of cultural lock-in.
Cultural lock-in is defined by Foster and Kaplan (2001) as the inability
of an organization to change its corporate culture even when there
are clear market threats. Based on their definition, then, technology
may not be able to change the way an organization behaves, even
when there are obvious competitive advantages to doing so. My con-
cern with Foster and Kaplan’s conclusion is whether individuals truly
understand exactly how their organizations are being affected—or are
we to assume that they do understand? In other words, is there a pro-
cess to ensure that employees understand the impact of not changing?
I believe that ROD provides the infrastructure required to resolve
this dilemma by establishing the processes that can support ongoing
unplanned and evolutionary change.
To best show the relationship of ROD to organizational change
theory, I use Burke’s (2002) six major points in assisting change in
organizations:
1. Understanding the external environment: What are competitors
and customers’ expectations? This is certainly an issue, specif-
ically when tracking whether expected technologies are made
available in the client–vendor relationship. But, more critical
is the process of how emerging technologies, brought about
through external channels, are evaluated and put into produc-
tion; that is, having a process in place. Strategic integration of
ROD is the infrastructure that needs to facilitate the moni-
toring and management of the external environment.
2. Evaluation of the inside of the organization: This directly relates
to technology and how it can be best utilized to improve
internal operations. While evaluation may also relate to a
restructuring of an organization’s mission, technology is often
an important driver for why a mission needs to be changed
(e.g., expanding a market due to e-commerce capabilities).
3. Readiness of the organization: The question here is not whether
to change but how fast the organization can change to address
technological innovations. The ROD arc provides the steps
necessary to create organizations that can sustain change as a
way of operation, blending strategic integration with cultural
assimilation. The maturation of learning, moving toward system-based learning, also supports the creation of infrastruc-
tures that are vitally prepared for changes from emerging
technologies.
4. Cultural change as inevitable: Cultural assimilation essentially
demands that organizations must dynamically assimilate new
technologies and be prepared to evolve their cultures. Such
evolution must be accelerated and be systemic within business
units, to be able to respond effectively to the rate of change
created by technological innovations.
5. Making the case for change: It is often difficult to explain why
change is inevitable. Much of the need for change can be sup-
ported using the reflective practices implemented at Ravell.
However, such acceptance is directly related to the process of
time. Major events can assist in establishing the many needs
for change, as discussed by Burke (2002).
6. Sustaining change: Perhaps the strongest part of ROD is its
ability to create a process that is evolutionary and systemic. It
focuses on driving change to every aspect of the organization
and provides organizational learning constructs to address
each level of operation. It addresses what Burke (2002) calls
the “prelaunch, launch, postlaunch, and sustaining,” in the
important sequences of organizational change (p. 286).
Another important aspect of change management is leadership.
Leadership takes many forms and has multiple definitions. Technology
plays an interesting role in how leadership can be presented to orga-
nizations, especially in terms of the management style of leadership,
or what Eisenhardt and Bourgeois (1988) have termed “power cen-
tralization.” Their study examines high-velocity environments in the
microcomputer industry during the late 1980s. By high velocity, they
refer to “those environments in which there is a rapid and discon-
tinuous change in demand, competitors, technology, or regulation, so
that information is often inaccurate, unavailable, or obsolete” (p. 738).
During the period of their study, the microcomputer industry was
undergoing substantial technological change, including the introduc-
tion of many new competitors. As it turns out, the concept of high
velocity is becoming more the norm today given the way organizations
find themselves needing to operate in constant fluxes of velocity. The
term power centralization is defined as the amount of decision-making
control wielded by the CEO. Eisenhardt and Bourgeois’s study finds
that the more the CEO engages in power-centralized leadership,
the greater the degree of politics, which has a negative impact on the
strategic performance of the firms examined. This finding suggests
that the less democratic the leadership is in high-velocity environ-
ments, the less productive the organization will be. Indeed, the study
found that when individuals engaged in team learning, political ten-
sion was reduced, and the performance of the firms improved.
The structure of ROD provides the means of avoiding the high-
velocity problems discovered by the Eisenhardt and Bourgeois (1988)
study. This is because ROD allows for the development of more indi-
vidual learning, as well as system thinking, across the executive ranks
of the business. If technology is to continue to establish such high
velocities, firms need to examine the Eisenhardt and Bourgeois study
for its relevance to everyday operations. They also need to use orga-
nizational learning theories as a basis for establishing leadership that
can empower employees to operate in an accelerated and unpredict-
able environment.
Change Management for IT Organizations
While change management theories address a broad population in
organizations, there is a need to create a more IT-specific approach to
address the unique needs of this group. Lientz and Rea (2004) estab-
lish five specific goals for IT change managers:
1. Gain support for change from employees and non-IT
managers.
2. Implement change along with measurements for the work so that
the results of the change are clearly determined.
3. Implement a new culture of collaboration in which employees
share more information and work more in teams.
4. Raise the level of awareness of the technology process and
work so that there is less of a tendency for reversion.
5. Implement an ongoing measurement process for the work to
detect any problems.
Lientz and Rea’s (2004) position is that when a new culture is
instilled in IT departments, it is particularly important that it should
not require massive management intervention. IT people need to be
self-motivated to keep up with the myriad accelerated changes in the
world of technology. These changes occur inside IT in two critical
areas. The first relates to the technology itself. For example, how do
IT personnel keep up with new versions of hardware and software?
Many times, these changes come in the form of hardware (often
called system) and software upgrades from vendors, who require the upgrades as a condition of maintaining support contracts. The ongoing self-management
of how such upgrades and changes will ultimately affect the rest
of the organization is a major challenge and one that is difficult to
manage top-down. The second area is the impact of new or emerg-
ing technologies on business strategy. The challenge is to develop IT
personnel who can transform their technical knowledge into busi-
ness knowledge and, as discussed, take their tacit knowledge and
convert it into explicit, strategic knowledge. Further understanding
of the key risks to the components of these accelerated changes is
provided as follows:
System and software version control: IT personnel must continue
to track and upgrade new releases and understand the impact
of product enhancements. Some product-related enhance-
ments have no bearing on strategic use; they essentially fix
problems in the system or software. On the other hand, some
new releases offer new features and functions that need to be
communicated to both IT and business managers.
Existing legacy systems: Many of these systems cannot support
the current needs of the business. This often forces IT staff to
figure out how to create what is called “workarounds” (quick
fixes) to these systems. This can be problematic given that
workarounds might require system changes or modifications
to existing software. The risk of these changes, both short and
long term, needs to be discussed between user and IT staff
communities of practice.
Software packages (off-the-shelf software): Since the 1990s, the use
of preprogrammed third-party software packages has become
a preferred mode of software use among users. However,
many of these packages can be inflexible and do not support
the exact processes required by business users. IT personnel
need to address users’ false expectations about what software
packages can and cannot do.
System or software changes: Replacement of systems or software
applications is rarely 100% complete. Most often, remnants of
old systems will remain. IT personnel can at times be insensi-
tive to the lack of a complete replacement.
Project completion: IT personnel often misevaluate when their
involvement is finished. Projects are rarely finished when the
software is installed and training completed. IT staff tend to
move on to other projects and tasks and lose focus on the like-
lihood that there will be problems discovered or last-minute
requests made by business users.
Technical knowledge: IT staff members need to keep their techni-
cal skills up to date. If this is not done, emerging technolo-
gies may not be evaluated properly as there may be a lack of
technical ability inside the organization to map new technical
developments onto strategic advantage.
Pleasing users: While pleasing business users appears to be a
good thing, it can also present serious problems with respect
to IT projects. What users want, and what they need, may
not be the same. IT staff members need to judge when they
might need assistance from business and IT management
because users may be unfairly requesting things that are not
feasible within the constraints of a project. Thus, IT staff must
have the ability to articulate what the system can do and what
might be advisable. These issues tend to occur when certain
business users want new systems to behave like old ones.
Documentation: This, traditionally, is prepared by IT staff and
contains jargon that can confuse business users. Furthermore,
written procedures prepared by IT staff members do not con-
sider the entire user experience and process.
Training: This is often carried out by IT staff and is restricted
to covering system issues, as opposed to the business realities
surrounding when, how, and why things are done.
These issues essentially define key risks to the success of imple-
menting technology projects. Much of this book, thus far, has focused
on the process of organizational learning from an infrastructure per-
spective. However, the implementation component of technology
possesses new risks to successfully creating an organization that can
126 INFORMATION TECHNOLOGY
learn within the needs of ROD. These risks, from the issues enumer-
ated, along with those discussed by Lientz and Rea (2004) are sum-
marized as follows:
Business user involvement: Continuous involvement from busi-
ness users is necessary. Unfortunately, during the life of a proj-
ect there are so many human interfaces between IT staff and
business users that it is unrealistic to attempt to control these
communications through tight management procedures.
Requirements definition and scope: These relate to the process
by which IT personnel work with business users to deter-
mine exactly what software and systems need to accomplish.
Determining requirements is a process, not a predetermined
list that business users will necessarily have available to
them. The discourse that occurs in conversations is critical to
whether such communities are capable of developing require-
ments that are unambiguous in terms of expected outcomes.
Business rules: These rules have a great effect on how the organi-
zation handles data and transactions. The difference between
requirements and business rules is subtle. Specifically, busi-
ness rules, unlike requirements, are not necessarily related to
processes or events of the business. As such, the determina-
tion of business rules cannot be made by reviewing proce-
dures; for example, the rule that all account numbers must be numeric.
Documentation and training materials: IT staff members need to
interact with business users and establish joint processes that
foster the development of documentation and training that
best fit user needs and business processes.
Data conversion: New systems and applications require that data
from legacy systems be converted into the new formats. This
process is called data mapping; IT staff and key business users
review each data field to ensure that the proper data are rep-
resented correctly in the new system. IT staff members should
not be doing this process without user involvement.
Process measurement: Organizations typically perform a post-
completion review after the system or software application
is installed. Unfortunately, this process measurement should
occur during and after project completion.
IT change management poses some unique challenges to imple-
menting organizational learning, mostly because managers cannot
conceivably be available for all of the risks identified. Furthermore,
the very nature of new technologies requires that IT staff mem-
bers develop the ability to self-manage more of their daily functions
and interactions, particularly with other staff members outside the
IT department. The need for self-development is even more critical
because of the existence of technological dynamism, which focuses
on dynamic and unpredictable transactions that often must be han-
dled directly by IT staff members and not their managers. Finally,
because so many risks during technology projects require business
user interfaces, non-IT staff members also need to develop better and
more efficient self-management than they are accustomed to doing.
Technological dynamism, then, has established another need for
change management theory. This need relates to the implementation
of self-development methods. Indeed, part of the reason for the lack
of success of IT projects can be attributed to the inability of the core
IT and business staff to perform in a more dynamic way. Historically,
more management cannot provide the necessary learning and reduc-
tion of risk.
The idea of self-development became popular in the early 1980s as
an approach to the training and education of managers and managers-
to-be. Thus, the focus of management self-development is to increase
the ability and willingness of managers to take responsibility for
themselves, particularly for their own learning (Pedler et al., 1988).
I believe that management self-development theory can be applied to
nonmanagers, or to staff members, who need to practice self-manage-
ment skills that can assist them in transitioning to operating under
the conditions of technological dynamism.
Management self-development draws on the idea that many peo-
ple emphasize the need for learner centeredness. This is an impor-
tant concept in that it ties self-development theory to organizational
learning, particularly to the work of Chris Argyris and Malcolm
Knowles. The concept of learner centeredness holds that individuals
must take prime responsibility for their own learning: when and how
to learn. The teacher (or manager) is assigned the task of facilitator—a
role that fosters guidance as opposed to direct initiation of learning.
In many ways, a facilitator can be seen as a mentor whose role it is to
guide an individual through various levels of learning and individual
development.
What makes self-development techniques so attractive is that
learners work on actual tasks and then reflect on their own efforts.
The methods of reflective practice theory, therefore, are applicable
and can be integrated with self-development practices. Although self-
development places the focus on the individual’s own efforts, manag-
ers still have responsibilities to mentor, coach, and counsel their staff.
This support network allows staff to receive appropriate feedback and
guidance. In many ways, self-development relates to the professional
process of apprenticeship but differs from it in that the worker may not
aspire to become the manager but may wish simply to develop better
management skills. Workers are expected to make mistakes and to be
guided through a process that helps them reflect and improve. This is
why self-development can be seen as a management issue as opposed
to just a learning theory.
A mentor or coach can be a supervisor, line manager, director, or
an outside consultant. The bottom line is that technological dyna-
mism requires staff members who can provide self- management
to cope with constant project changes and risks. These individu-
als must be able to learn, be self-aware of what they do not know,
and possess enough confidence to initiate the required learning
and assistance that they need to be successful (Pedler et al., 1988).
Self-development methods, like other techniques, have risks.
Most notable is the initial decrement in performance followed by
a slow increment as workers become more comfortable with the
process and learn from their mistakes. However, staff members
must be given support and time to allow this process to occur;
self-development is a trial-and-error method founded on the basis
of mastery learning (i.e., learning from one’s mistakes). Thus, the
notion of self-development is both continuous and discontinuous
and must be implemented in a series of phases, each having unique
outcomes and maturity. The concept of self-development is also
consistent with the ROD arc, in which early phases of maturation
require more individual learning, particularly reflective practices.
Self-development, in effect, becomes a method of indirect man-
agement to assist in personal transformation. This personal trans-
formation will inevitably better prepare individuals to participate
in group- and organizational-level learning at later stages of
maturation.
The first phase of establishing a self-development program is to
create a “learning-to-learn” process. Teaching individuals to learn is a
fundamental need before implementing self-development techniques.
Mumford (1988) defines learning to learn as
1. Helping staff to understand the stages of the learning process
and the pitfalls to not learning
2. Helping staff to find their own preferences to learning
3. Assisting staff in understanding their present learning prefer-
ences and how to deal with, and overcome, learning weaknesses
4. Helping staff to build on their learning experience and apply
it to their current challenges in their job
The first phase of self-development clearly embraces the Kolb
(1999) Learning Style Inventory and the applied individual learn-
ing wheel that were introduced in Chapter 4. Thus, all staff members
should be provided with both of these learning wheels, made aware
of their natural learning strengths and weaknesses, and provided with
exercises to help them overcome their limitations. Most important is
that the Kolb system will make staff aware of their shortfalls with
learning. The applied individual learning wheel will provide a per-
spective on how individuals can link generic learning preferences into
organizational learning needs to support ROD.
The second phase of self-development is to establish a formal learn-
ing program in which staff members
1. Are responsible for their own learning, coordinated with a
mentor or coach
2. Have the right to determine how they will meet their own
learning needs, within available resources, time frames, and
set outcomes
3. Are responsible for evaluating and assessing their progress
with their learning
In parallel, staff coaches or mentors
1. Have the responsibility to frame the learning objectives so
that they are consistent with agreed-on individual weaknesses
2. Are responsible for providing access and support for staff
3. Must determine the extent of their involvement with mentor-
ing and their commitment to assisting staff members achieve
stated outcomes
4. Are ultimately responsible for the evaluation of individual’s
progress and success
This program must also have a formal process and structure.
According to Mossman and Stewart (1988), formal programs, called
self-managed learning (SML), need the following organization and
materials:
1. Staff members should work in groups as opposed to on their
own. This is a good opportunity to intermix IT and non-
IT staff with similar issues and objectives. The size of these
groups is (typically) from four to six members. Groups should
meet every two to three weeks and should develop what are
known as learning contracts. Learning contracts specifically
state what the individual and management have agreed on.
Essentially, the structure of self-development allows staff
members to experience communities of practice, which by
their very nature, will also introduce them to group learning
and system-level thinking.
2. Mentors or coaches should preside over a group as opposed to
presiding over just one individual. There are two benefits to
doing this: (1) There are simply economies of scale for which
managers cannot cover staff on an individual basis, and (2)
facilitating a group with similar objectives benefits interac-
tion among the members. Coaches obviously need to play an
important role in defining the structure of the sessions, in
offering ideas about how to begin the self-development pro-
cess, and in providing general support.
3. Staff members need to have workbooks, films, courses,
study guides, books, and specialists in the organization,
all of which learners can use to help them accomplish their
goals.
4. Typically, learning contracts will state the assessment meth-
ods. However, assessment should not be limited only to indi-
viduals but also should include group accomplishments.
An SML program should be designed to ensure that the learning
program for staff members represents a commitment by management
to a formal process that can assist in the improvement of project teams.
The third phase of self-development is evaluation. This process is a
mixture of individual and group assessments from phase II, coupled
with assessments from actual practice results. These are results from
proven outcomes during normal workday operations. To garner the
appropriate practice evaluation, mentors and coaches must be involved
in monitoring results and noting the progress on specific events that
occur. For example, if a new version of software is implemented, we
will want to know if IT staff and business users worked together to
determine how and when it should be implemented. These results
need to be formally communicated back to the learning groups. This
process needs to be continued on an ongoing basis to sustain the
effects of change management. Figure 5.5 represents the flow of the
three phases of the process.
The process for self-development provides an important approach
in assisting staff to perform better under the conditions of technologi-
cal dynamism. It is one thing to teach reflective practice; it is another
[Figure 5.5 Phases of self-development. Phase 1: establish learning-to-learn objectives (learning styles inventory). Phase 2: create formal learning program (individual learning contracts; self-managed learning program; communities of practice for IT and non-IT staff). Phase 3: implement evaluation (individual and group assessment; monitor operations for measurable outcomes; make necessary changes to self-development learning).]
to get staff members to learn how to think in a manner that takes into
consideration the many risks that have plagued systems and software
projects for decades. While the role of management continues to play
a major part in getting things done within strategic objectives, self-
development can provide a strong learning method that can foster
sustained bottom-up management, which is missing in most learning
organizations.
The Ravell case study provides some concrete evidence on how
self-development techniques can indeed get results. Because of the
time pressures at Ravell, I was not able to invest in the learning-to-
learn component at the start of the process. However, I used informal
methods to determine the learning preferences of the staff. This can
be accomplished through interviews in which staff responses can pro-
vide a qualitative basis for evaluating how specific personnel prefer to
learn. This helped me to formulate a specific training program that
involved group meetings with IT and non-IT-oriented groups.
In effect, phase II at Ravell had two communities. The first com-
munity was the IT staff. We met each week to review progress and
to set short-term objectives of what the community of IT wanted to
accomplish. I acted as a facilitator, and although I was in a power
position as their manager, I did not use my position unless there were
clear signs of resistance in the team (which there were in specific situ-
ations). The second community was formed with various line manager
departments. This is where I formed “dotted-line” reporting struc-
tures, which required IT staff members also to join other commu-
nities of practice. This proved to be an invaluable strategy because
it brought IT and business users together and formed the links that
eventually allowed IT staff members to begin to learn and to form
relationships with the user community, which fostered reflective
thinking and transformation.
As stated, there are setbacks at the start of any self-development
program, and the experience at Ravell was no exception. Initially,
IT staff members had difficulty understanding what was expected
of them; they did not immediately perceive the learning program as
an opportunity for their professional growth. Ongoing, motivated
discourse in and outside of the IT community helped achieve
measurable increments of self-developmental growth.
Furthermore, I found it necessary to integrate individual coaching
sessions with IT staff. While group sessions were useful, they were
not a substitute for individual discussions, which at times allowed
IT staff members to personally discuss their concerns and learning
requirements. I found the process to be ultimately valuable, and I
maintained the role of coach, as opposed to that of a manager who
tells IT staff members what to do in every instance. I knew that
direct management alone would never allow learning to develop.
Eventually, self-development through discourse will foster identity
development. Such was the case at Ravell, where both user and IT
groups eventually came together to form specific and interactive com-
munities of practice. This helped form a clearer identity for IT staff
members, and they began to develop the ability to address the many
project risk issues that I defined in this chapter. Most important for
the organization was that Ravell phase I built the foundation for later
phases that required more group and system thinking among the IT
ranks.
Evaluation of the performance at Ravell (phase III of the self-
development process) was actually easier than expected, suggesting
that if the first two phases are successful, evaluation will naturally be
easy to determine. As reflective thinking became more evident in the
group, it was easier to see the growth in transformative behavior; the
IT groups became more proactive and critical by themselves, without
necessarily needing my input. In fact, my participation fell into more
of a supporter role; I was asked to participate when the group felt
it needed me to perform a specific task. Evaluation based on perfor-
mance was also easier to determine, mainly because we had formed
interdepartmental communities and because of the relationships I
established with line managers.
Another important decision we made, and one that nurtured our
evaluation capabilities, was to have line managers regularly join
our IT staff meetings. As a result, feedback on actual results was
always open for discussion.
Viewing self-development in the scope of organizational learning
and management techniques provides an important support method
for later development in system thinking. The Ravell experience did
just that, as the self-development process inevitably laid the foun-
dation for more sophisticated organizational learning, required as a
business matures under ROD.
Social Networks and Information Technology
The expansion of social networks, through the use of technological
innovations, has substantially changed the way information flows in
and out of a business community. Some companies, particularly in the
financial services communities, have attempted to “lock out” social
network capabilities. These attempts are ways for organizations to
control, as opposed to change, behavior. Historically, such controls
to enforce compliance have not worked. This is particularly relevant
because of the emergence of a younger generation of workers who use
social networking tools as a regular way to communicate and carry out
discourse. Indeed, social networking has become the main vehicle for
social discourse both inside and outside organizations. There are those
who feel that the end of confidentiality may be on the horizon. This
is not to suggest that technology executives give up on security—we
all know this would be ludicrous. On the other hand, the increasing
pressure to “open” the Web will inevitably become too significant to
ignore. Thus, the technology executive of the future must be prepared
to provide desired social and professional networks to their employees
while figuring out how to minimize risk—certainly not an easy objec-
tive. Organizations will need to provide the necessary learning tech-
niques to help employees understand the limits of what can be done.
We must remember that organizations, governments, and busi-
nesses have never been successful at controlling the flow of information
to any population to or from any specific interest group—inevitably,
information flows through. As stated by Cross and Thomas (2009),
“The network perspective could trigger new approaches to organiza-
tion design at a time when environmental and competitive conditions
seem to be exhausting conventional wisdom” (p. 186). Most important
is the understanding that multinational organizations need to think
globally and nationally at the same time. To do this, employees must
transform their behavior and how they interact. Controlling access
does not address this concern; it only makes communication more
difficult and therefore does not provide a solution. Controls typically
manifest themselves in the form of new processes and procedures. I
often see technology executives proclaiming the need to change pro-
cesses in the name of security without really understanding that they
are not providing a solution, but rather, fostering new procedures that
will allow individuals to evade the new security measures. As Cross and
Thomas (2009) point out, “Formal structures often overlook the fact
that every formal organization has in its shadow an informal or ‘ invis-
ible’ organization” (p. 1). Instead, technology executives concerned
with security need to focus on new organizational design to assist
businesses in becoming “social network ready.” ROD must then be extended
to allow for the expansion of social network integration, including,
but not limited to, such products as LinkedIn, Facebook, and Twitter.
It may also be necessary to create new internal network infrastruc-
tures that specifically cater to social network communication.
Many software application companies have learned that compat-
ibility in an open systems environment is a key factor for success-
ful deployment of an enterprise-wide application solution. Thus, all
applications developed within or for an organization need to have
compatibility with the common and popular social network products.
This popularity is not static, but rather, a constant process of deter-
mining which products will become important social networks that
the company may want to leverage. We see social networks having
such an impact within the consumer environment—or what we can
consider to be the “market.” I explained in my definition of ROD that
it is the acceleration of market changes—or the changing relationship
between a buyer and seller—that dictates the successes and failures of
businesses. That said, technology executives must focus their attention
on how such networks will require their organizations to embrace
them. Obviously, this change carries risks. Adapting too early could
be overreacting to market hype, while lagging could mean late entry.
The challenge, then, for today’s technology leaders is to create
dynamic, yet functional, social networks that allow businesses to
compete while maintaining the controls they must have to protect
themselves. The IT organization must concentrate on how to provide
the infrastructure that allows these dynamic connections to be made
without overcontrol. The first mission for the technology executive is
to negotiate this challenge by working with the senior management
of the organization to reach consensus on the risk factors. The issues
typically involve the processes, behavior patterns, and risks shown in
Figure 5.6.
Ultimately, the technology executive must provide a new road map
that promotes interagency and cross-customer collaboration in a way
that will assist the organization to attain a ROD culture. Social net-
works are here to stay and will continue to necessitate 24/7 access for
everyone. This inevitably raises salient issues relating to the manage-
ment structure within businesses and how best to manage them.
In Chapter 2, I defined the IT dilemma in a number of contexts.
During an interview, a chief executive raised an interesting issue that
relates to the subject: “My direct reports have been complaining that
because of all this technology that they cannot get away from—that
their days never seem to end.” I responded to this CEO by asking,
[Figure 5.6 Social network management issues: business processes, aspired behavior patterns, and risks.
1. Business process: Design a social network that allows participants to respond dynamically to customer and business needs. Aspired behavior: users understand the inherent limits to what can be communicated outside the organization, limit personal transactions, and use judgment when foreign e-mails are forwarded. Risk: users cannot properly determine the ethics of behavior and will not take the necessary precautions to avoid exposing the organization to outside security breaches.
2. Business process: Discern which critical functions are required for the social network to work effectively and maintain the firm’s competitive positioning. Aspired behavior: users are active and form strategic groups (communities of practice) that define needs on a regular basis and work closely with IT and senior management. Risk: users cannot keep up with changes in social networks, and it is impossible to track individual needs and behaviors.
3. Business process: Provide a network design that can be scaled as needs change within the budget limitations of the organization. Aspired behavior: the organization understands that hard budgets for social networking may not be feasible; rather, network needs are dynamic, and costs must be assessed dynamically within the appropriate operating teams. Risk: all organizations operate within budget limitations; large organizations find it difficult to govern dynamically, and smaller organizations cannot afford the personnel necessary to manage dynamically.
4. Business process: Create a social network that “flattens” the organization so that all levels are accessible. Aspired behavior: particularly large organizations need a network that gives people better access to departments, talent, and management; In Search of Excellence (Peters & Waterman, 1982) was the first effort to present the value of a “flatter” organizational structure, and social networks provide the infrastructure to make this a reality. Risk: with access come the challenges of responding to all who connect to the system; the organization needs to provide the correct etiquette for how individuals respond dynamically without creating anarchy.]
“Why are they e-mailing and calling you? Is it possible that tech-
nology has exposed a problem that has always existed?” The CEO
seemed surprised at my response and said, “What do you mean?”
Again, I responded by suggesting that technology allowed access,
but perhaps, that was not really the problem. In my opinion, the real
problem was a weakness in management or organizational structure.
I argued that good managers build organizations that should handle
the questions that were the subject of these executives’ complaints.
Perhaps the real problem was that the organization or management
was not handling day-to-day issues. This case supports my thesis that
technology dynamism requires reevaluation of how the organization
operates and stresses the need to understand the cultural assimilation
abilities of dealing with change.
Another interesting aspect of social networks is the emergence of
otherwise invisible participants. Technology-driven networks have
allowed individuals to emerge not only because of the access determi-
nant but also because of statistics. Let me be specific. Network traffic
can easily be tracked, as can individual access. Even with limited his-
tory, organizations are discovering the valued members of their com-
panies simply by seeing who is active and why. This should not suggest
that social networks are spy networks. Indeed, organizations need to
provide learning techniques to guide how access is tracked and to
highlight the value that it brings to a business. As with other issues,
the technology executive must align with other units and individuals;
the following are some examples:
• Human resources (HR): This department has specific needs
that can align effectively with the entire social network.
Obviously, there are compliance issues that limit what can
be done over a network. Unfortunately, this is an area that
requires reassessment: In general, governance and controls do
not drive an organization to adopt ROD. There are other fac-
tors related to the HR function. First is the assimilation of
new employees and the new talents that they might bring to
the network. Second is the challenge of adapting to ongoing
change within the network. Third is the loss of knowledge from
those who leave the organization yet may still want to partici-
pate socially within it (friends of the company).
• Gender: Face-to-face meetings have always shown differences
in participation by gender. Men tend to dominate meetings
and the positions they hold in an organization. However, the
advent of social virtual networks has begun to show a shift
in the ways women participate and hold leadership positions
among their peers. In an article in Business Week (May 19,
2008), Auren Hoffman reports that women dominate social
network traffic. This may result in seeing more women-centric
communication. The question, then, is whether the expan-
sion of social networks will give rise to more women in senior
management positions.
• Marketing: The phenomenon of social networking has allowed
for the creation of more targeted connectivity; that is, the abil-
ity to connect with specific clients in special ways. Marketing
departments are undergoing an extraordinary transformation
in the way they target and connect with prospective custom-
ers. The technology executive is essentially at the center of
designing networks that provide customizable responses and
facilitate complex matrix structures. Having such abilities
could be the differentiator between success and failure for
many organizations.
One can see that the expansion of social networks is likely to have
both good and bad effects. Thus far, in this section I have discussed the
good. The bad relates to the expansion of what seems to be an unlim-
ited network. How does one manage such expansion? The answer lies
within the concept of alignment. Alignment has always been critical
to attain organizational effectiveness. The heart of alignment is deal-
ing with cultural values, goals, and processes that are key to meet
strategic objectives (Cross & Thomas, 2009). While the social net-
work acts to expose these issues, it does not necessarily offer solutions
to these differences. Thus, the challenge for the technology executive
of today is to balance the power of social networks while providing
direction on how to deal with alignment and control—not an easy
task but clearly an opportunity for leadership. The following chapters
offer some methods to address the challenges discussed in this chap-
ter, and the opportunities they provide for technology executives.
6
Organizational Transformation and the Balanced Scorecard
Introduction
The purpose of this chapter is to examine the nature of organiza-
tional transformation, how it occurs, and how it can be measured.
Aldrich (2001) defines organizational transformation along three
possible dimensions: changes in goals, boundaries, and activities.
According to Aldrich, transformations “must involve a qualita-
tive break with routines and a shift to new kinds of competencies
that challenge existing organizational knowledge” (p. 163). He
warns us that many changes in organizations disguise themselves
as transformative but are not. Thus, focusing on the qualifications
of authentic or substantial transformation is key to understanding
whether it has truly occurred in an organization. Technology, as
with any independent variable, may or may not have the capacity to
instigate organizational transformation. Therefore, it is important
to integrate transformation theory with responsive organizational
dynamism (ROD). In this way, the measurable outcomes of orga-
nizational learning and technology can be assessed in organizations
that implement ROD. Most important in this regard is that organizational transformation, along with knowledge creation, be directly correlated to the results of implementing organizational learning.
That is, the results of using organizational learning techniques must
result in organizational transformation.
Organizational transformation is significant for three key reasons:
1. Organizations that cannot change will fundamentally be at
risk against competitors, especially in a quickly changing
market.
2. If the organization cannot evolve, it will persist in its norms
and be unwilling to change unless forced to do so.
3. If the community population is forced to change and is con-
strained in its evolutionary path, it is likely that it will not be
able to transform and thus, will need to be replaced.
Aldrich (2001) establishes three dimensions of organizational
transformation. By examining them, we can apply technology-
specific changes and determine within each dimension what consti-
tutes authentic organizational transformation.
1. Goals: There are two types of goal-related transformations: (a)
change in the market or target population of the organiza-
tion; (b) the overall goal of the organization itself changes. I
have already observed that technology can affect the mission
of an organization, often because it establishes new market
niches (or changes them). Changed mission statements also
inevitably modify goals and objectives.
2. Boundaries: Organizational boundaries transform when there
is expansion or contraction. Technology has historically
expanded domains by opening up new markets that could
not otherwise be reached without technological innovation.
E-business is an example of a transformation brought about
by an emerging technology. Of course, business can contract
as a result of not assimilating a technology; technology also
can create organizational transformation.
3. Activity systems: Activity systems define the way things are
done. They include the processing culture, such as behavioral roles. Changes in roles and responsibilities alone do not necessarily represent organizational transformation unless they are accompanied by cultural shifts in behavior. The
cultural assimilation component of ROD provides a method
with which to facilitate transformations that are unpredict-
able yet evolutionary. Sometimes, transformations in activity systems deriving from technological innovations can be categorized by the depth and breadth of their impact on other units. For example, a decision could be made to use
technology as part of a total quality management (TQM)
effort. Thus, activity transformations can be indirect and
need to be evaluated based on multiple and simultaneous
events.
Aldrich’s (2001) concept of organizational transformation bears
on the issue of frequency of change. In general, he concludes that
the changes that follow a regular cycle are part of normal evolution
and “flow of organizational life” (p. 169) and should not be treated as
transformations. Technology, on the other hand, presents an inter-
esting case in that it can be perceived as normal in its persistence
and regularity of change while being unpredictable in its dynamism.
However, Aldrich’s definition of transformation poses an interesting
issue for determining transformations resulting from technological
innovations. Specifically, under what conditions is a technological
innovation considered to have a transformative effect on the organi-
zation? And, when is it to be considered as part of regular change? I
refer to Figure 6.1, first presented in Chapter 3 on driver and sup-
porter life cycles to respond to this question.
The flows in this cycle can be used as the method to determine
technological events that are normal change agents versus transforma-
tive ones. To understand this point, one should view all driver-related
technologies as transformational agents because they, by definition,
affect strategic innovation and are approved based on return on
investment (ROI). Aldrich’s (2001) “normal ebb and flows” represent the “mini-loops” that are new enhancements or subtechnologies,
which are part of normal everyday changes necessary to mature a
technological innovation. Thus, driver variables that result from mini-loops would not be considered transformational agents of change.
Figure 6.1 Driver-to-supporter life cycle. (Cycle: technology driver, evaluation cycle, driver maturation, support status, replacement or outsource, economies of scale; with mini-loop technology enhancements.)
It is important to recognize that Aldrich’s (2001) definition of
organizational transformation should not be confused with theories
of transformative learning. As West (1996) proclaims, “The goal of
organizational learning is to transform the organization” (p. 54). The
study of transformative learning has been relevant to adult education,
and has focused on individual, as opposed to organizational, devel-
opment and learning. Thus, transformative learning has been better
integrated in individual learning and reflective practice theories than
in organizational ones. While these modes of learning are related to
the overall learning in organizations, they should not be confused
with organizations that are attempting to realize their performance
objectives.
Yorks and Marsick (2000) offer two strategies that can produce
transformative learning for individuals, groups, or organizations:
action learning and collaborative inquiry. I covered action science in
Chapter 4, particularly reflective practices, as key interventions to fos-
ter both individual and group evolution of learning, specifically in
reference to how to manage ROD. Aspects of collaborative inquiry
are applied to later stages of maturation and to more senior levels of
management based on systems-level learning. As Yorks and Marsick
(2000) state, “For the most part the political dimensions of how the
organization functions is off limits, as are discussions of larger social
consequences” (p. 274).
Technological innovations provide acceleration factors and foster
the need for ROD. Technology also furnishes the potential tangible
and measurable outcomes necessary to normalize Yorks and Marsick’s
(2000) framework for transformative learning theory into organiza-
tional contexts as follows:
1. Technology, specifically e-business, has created a critical need
for organizations to engage with clients and individuals in a
new interactive context. This kind of discourse has established
accelerated needs, such as understanding the magnitude of
alternative courses of action between customer and vendor.
The building of sophisticated intranets (internal Internets) and
their evolution to assimilate with other Internet operations
has also fueled the need for learning to occur more often than
before and at the organizational level.
Because technology can produce measurable outcomes,
individuals are faced with accelerated reflections about the
cultural impact of their own behaviors. This is directly related
to the implementation of the cultural assimilation component
of ROD, by which individuals determine how their behaviors
are affected by emerging technologies.
2. Early in the process of implementing strategic integration,
reflective practices are critical for event-driven technology
projects. These practices force individuals to continually reex-
amine their existing meaning perspectives (specifically, their
views and habits of mind). Individual reflection in, on, and to
practice will evolve to system-level group and organizational
learning contexts, as shown in the ROD arc.
3. The process of moving from individual to system-level learn-
ing during technology maturation is strengthened by the
learners’ abilities to comprehend why historical events have
influenced their existing habits of mind.
4. The combination of strategic integration and cultural assimi-
lation lays the foundation for organizational transformation
to occur. Technology provides an appropriate blend of being
both strategic and organizational in nature, thus allow-
ing learners to confront their prior actions and develop new
practices.
Aldrich (2001) also provides an interesting set of explanations for
why it is necessary to recognize the evolutionary aspect of organiza-
tional transformations. I have extended them to operate within the
context of ROD, as follows:
Variation: Defined as “change from current routines and competencies and change in organizational forms” (Aldrich, 2001,
p. 22). Technology provides perhaps the greatest amount of
variation in routines and thereby establishes the need for
something to manage it: ROD. The higher the frequency of
variation, the greater the chance that organizational transfor-
mation can occur. Variation is directly correlated to cultural
assimilation.
Selection: This is the process of determining whether to use a
technology variation. Selections can be affected by external
(outside the organization) and internal (inside the organi-
zation) factors, such as changes in market segments or new
business missions, respectively. The process of selection can be
related to the strategic integration component of ROD.
Retention: Selected variations are retained or preserved by the
organization. Retention is a key way of validating whether
organizational transformation has occurred. As Aldrich
states: “Transformations are completed when knowledge
required for reproducing the new form is embodied in a com-
munity of practice” (p. 171).
Because of the importance of knowledge creation as the basis of
transformation, communities of practice are the fundamental struc-
tures of organizational learning to support organizational transforma-
tion. Aldrich (2001) also goes beyond learning; he includes policies,
programs, and networks as parts of the organizational transformative
process. Figure 6.2 shows Aldrich’s evolutionary process and its relationship to ROD components.
Thus, we see from Figure 6.2 the relationships between the pro-
cesses of creating organizational transformation, the stages required
to reach it, the ROD components in each stage, and the correspond-
ing organizational learning method that is needed. Notice that the
mapping of organizational learning methods onto Aldrich’s (2001)
scheme for organizational transformation can be related to the ROD
arc. It shows us that as we get closer to retention, organizational learn-
ing evolves from an individual technique to a system/organizational
learning perspective. Aldrich’s model is consistent with my driver-versus-supporter concept. He notes, “When the new form becomes a taken-for-granted aspect of everyday life in the organization, its legitimacy is assumed” (p. 175).
Hence, the assimilation of new technologies cannot be consid-
ered transformative until it behaves as a supporter. Only then can we
determine that the technology has changed organizational biases and
norms. The driver and supporter life cycle, extended to include this important relationship, is shown in Figure 6.3.
Figure 6.2 Stages of organizational transformation and ROD. (Stages: technology variation, selection, retention. Variation: strategic integration assesses the value of the technology; cultural assimilation assesses the extent of what to implement and determines effects on structure. Selection: strategic integration determines which technologies best fit corporate needs and provide the highest ROI. Retention: validation of organizational transformation; technology has provided strategic outcomes and modified structures and processes. Corresponding organizational learning methods: individual reflective practices, group-based reflective practices, social discourse using communities of practice, communities of practice and knowledge management.)
Figure 6.3 Organizational transformation in the driver-to-supporter life cycle. (The driver-to-supporter cycle of Figure 6.1, overlaid with the learning progression: individual reflective practice, group-based reflective practice, communities of practice, knowledge management, organizational transformation.)
Methods of Ongoing Evaluation
If we define organizational transformation as the retention of knowl-
edge within the body of communities of practice, the question to be
answered is how this retention actually is determined in practice.
Transformations are often partial or in some phase of completion, meaning that the transformation is incomplete and needs to continue through further phases.
Indeed, cultural assimilation does not occur immediately, but rather,
over periods of transition. Much of the literature on organizational
transformation does not address the practical aspects of evaluation
from this perspective. This lack of information is particularly prob-
lematic with respect to technology, since so much of how technology
is implemented relates to phased steps that rarely happen in one major
event. Thus, it is important to have some method of ongoing evalua-
tion to determine the extent of transformation that has occurred and
which organizational learning methods need to be applied to help
continue the process toward complete transformation.
Aldrich’s (2001) retention can also be misleading. We know that
organizational transformation is an ongoing process, especially as
advocated in ROD. It is probable that transformations continue and
move from one aspect of importance to another, so a completed trans-
formation may never exist. Another way of viewing this concept is to
treat transformations as event milestones. Individuals and communi-
ties of practice are able to track where they are in the learning process.
It also fits into the phased approach of technology implementation.
Furthermore, the notion of phases allows for integration of organiza-
tional transformation concepts with stage and development theories.
With the acceptance of this concept, there needs to be a method or
model that can help organizations define and track such phases of
transformation. Such a model would also allow for mapping outcomes
onto targeted business strategies. Another way of understanding the
importance of validating organizational transformation is to recognize
its uniqueness, since most companies fail to execute their strategies.
The method that can be applied to the validation of organizational
transformation is a management tool called the balanced scorecard.
The balanced scorecard was introduced by Kaplan and Norton (2001)
in the early 1990s as a tool to solve measurement problems. The ability
of an organization to develop and operationalize its intangible assets
has become an increasingly critical component of success. As I
have already expressed regarding the work of Lucas (1999), financial
measurement may not be capable of capturing all IT value. This is
particularly true in knowledge-based theories. The balanced score-
card can be used as a solution for measuring outcomes that are not
always financial and tangible. Furthermore, the balanced scorecard
is a “living” document that can be modified as certain objectives or
measurements require change. This is a critical advantage because, as
I have demonstrated, technology projects often change in scope and in
objectives as a result of internal and external factors.
The ultimate value, then, of the balanced scorecard, in this con-
text, is to provide a means for evaluating transformation not only for
measuring completion against set targets but also for defining how
expected transformations map onto the strategic objectives of the
organization. In effect, it is the ability of the organization to execute
its strategy. Before explaining the details of how a balanced scorecard
can be applied specifically to ROD, I offer Figure 6.4, which shows
exactly where the scorecard fits into the overall picture of transitioning emerging technologies into concrete strategic benefit.
Figure 6.4 Balanced scorecard. (Strategy sits at the center of the balanced scorecard, surrounded by five principles: mobilize change through executive leadership; translate the strategy to operational terms; make strategy a continual process; align the organization to the strategy; make strategy everyone’s job.) (From Kaplan, R.S., & Norton, D.P., The Strategy-Focused Organization, Harvard University Press, Cambridge, MA, 2001.)
The generic objectives of a balanced scorecard are designed to cre-
ate a strategy-focused organization. Thus, all of the objectives and
measurements should be derived from the vision and strategy of the
organization (Kaplan & Norton, 2001). These measurements are based on the fundamental principles of any strategically focused organization: alignment and focus. Kaplan and Norton define these
principles as the core of the balanced scorecard:
1. Translate the strategy to operational terms: This principle
includes two major components that allow an organization to
define its strategy from a cause-and-effect perspective using
a strategy map and scorecard. Thus, the strategy map and its
corresponding balanced scorecard provide the basic measure-
ment system.
2. Align the organization to the strategy: Kaplan and Norton
define this principle as favoring synergies among organiza-
tional departments that allow communities of practice to have
a shared view, and common understanding of their roles.
3. Make strategy everyone’s everyday job: This principle supports the notion of a learning organization that requires everyone’s
participation, from the chief executive officer (CEO) to cleri-
cal levels. To accomplish this mission, the members of the
organization must be aware of business strategy; individuals
may need “personal” scorecards and a matching reward sys-
tem for accomplishing the strategy.
4. Make strategy a continual process: This process requires the
linking of important, yet fundamental, components, includ-
ing organizational learning, budgeting, management reviews,
and a process of adaptation. Much of this principle falls into
the areas of learning organization theories that link learning
and strategy in ongoing perpetual cycles.
5. Mobilize change through executive leadership: This principle
stresses the need for a strategy-focused organization that
incorporates the involvement of senior management and can
mobilize the organization and provide sponsorship to the
overall process.
Using the core balanced scorecard schematic, I have modified it to
operate with technology and ROD, as shown in Figure 6.5.
1. Evaluation of technology: The first step is to have an infrastruc-
ture that can determine how technology fits into a specific
strategy. Once this is targeted, the evaluation team needs to
define it in operational terms. This principle requires the stra-
tegic integration component of ROD.
2. Align technology with business strategy: Once technology is
evaluated, it must be integrated into the business strategy.
This involves ascertaining whether the addition of technology
will change the current business strategy. This principle is also
connected to the strategic integration component of ROD.
3. Make technology projects part of communities of practice: Affected
communities need to be strategically aware of the project.
Organizational structures must determine how they distrib-
ute rewards and objectives across departments. This principle
requires the cultural assimilation component of ROD.
4. Phased-in technology implementation: Short- and long-term
project objectives are based on driver and supporter life cycles.
This will allow organizational transformation phases to be linked to implementation milestones. This principle maps onto the cultural assimilation component of ROD.
5. Executive interface: CEO and senior managers act as executive sponsors and project champions. Communities of practice and their common “threads” need to be defined, including middle management and operations personnel, so that top-down, middle-up-down, and bottom-up information flows can occur.
Figure 6.5 Balanced scorecard ROD. (Responsive organizational dynamism strategy sits at the center, surrounded by: executive interfaces; evaluation of technology; align technology with business strategy; make technology projects part of communities of practice; phase technology implementation.)
The balanced scorecard ultimately provides a framework to view
strategy from four different measures:
1. Financial: ROI and risk continue to be important components
of strategic evaluation.
2. Customer: This involves the strategy for creating value for the customers of the organization.
3. Internal business processes: This relates to the business processes that provide both customer satisfaction and operational efficiency.
4. Learning and growth: This encompasses the priorities and infrastructure to support organizational transformation through ROD.
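As an illustrative aside, these four perspectives can be thought of as groupings of target-versus-actual measures. The sketch below is a hypothetical Python model, not something the author provides; all measure names and figures are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    """A single scorecard measurement with a target and an actual value."""
    name: str
    target: float
    actual: float = 0.0

    def met(self) -> bool:
        # A measure is satisfied once the actual value reaches its target.
        return self.actual >= self.target

@dataclass
class Scorecard:
    """Groups measures under the four balanced scorecard perspectives."""
    perspectives: dict = field(default_factory=dict)

    def add(self, perspective: str, measure: Measure) -> None:
        self.perspectives.setdefault(perspective, []).append(measure)

    def unmet(self) -> list:
        # Names of measures that have not yet reached their targets.
        return [m.name
                for measures in self.perspectives.values()
                for m in measures if not m.met()]

# Hypothetical measures under the four perspectives named in the text.
card = Scorecard()
card.add("Financial", Measure("project ROI (%)", target=12.0, actual=9.5))
card.add("Customer", Measure("satisfaction score", target=4.0, actual=4.2))
card.add("Internal business processes",
         Measure("orders processed per day", target=500, actual=510))
card.add("Learning and growth", Measure("staff trained (%)", target=80, actual=60))

print(card.unmet())  # ['project ROI (%)', 'staff trained (%)']
```

Even this toy structure makes the book's point concrete: the scorecard is a "living" document in which nonfinancial measures sit beside financial ones and can be revised as objectives change.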
The generic balanced scorecard framework needs to be extended to
address technology and ROD. I propose the following adjustments:
1. Financial: Requires the inclusion of indirect benefits from
technology, particularly as Lucas (1999) specifies, in nonmon-
etary methods of evaluating ROI. Risk must also be factored
in, based on specific issues for each technology project.
2. Customer: Technology-based products are integrated with
customer needs and provide direct customer package inter-
faces. Further, web systems that use the Internet are depen-
dent on consumer use. As such, technology can modify
organizational strategy because of its direct effect on the cus-
tomer interface.
3. Internal business processes: Technology requires business process reengineering (BPR), which is the process of reevaluating existing internal norms and behaviors before designing a
new system. This new evaluation process addresses customers,
operational efficiencies, and cost.
4. Learning and growth: Organizational learning techniques,
under the umbrella of ROD, need to be applied on an ongo-
ing and evolutionary basis. Progress needs to be linked to the
ROD arc.
The major portion of the balanced scorecard strategy is in its initial
design; that is, in translating the strategy or, as in the ROD scorecard,
the evaluation of technology. During this phase, a strategy map and
actual balanced scorecards are created. This process should begin by
designing a balanced scorecard that articulates the business strategy.
Remember, every organization needs to build a strategy that is unique
and based on its evaluation of the external and internal situation (Olve
et al., 2003). To clarify the definition of this strategy, it is easier to
consider drawing the scorecard initially in the form of a strategy map.
A generic strategy map essentially defines the components of each
perspective, showing specific strategies within each one, as shown in
Figure 6.6.
Figure 6.6 Strategy map. (A generic map of four perspectives: financial, customer, process, and learning and growth. Example strategies include stronger finances, improve profitability, more satisfied customers, increase customer base, establish new markets, increase customer service, increase efficiency, improve technology, and improve staff skills.) (From Olve, N., et al., Making Scorecards Actionable: Balancing Strategy and Control, Wiley, New York, 2003.)
We can apply the generic strategy map to an actual case study,
Ravell phase I, as shown in Figure 6.7.
Recall that Ravell phase I created a learning organization using
reflective practices and action science. Much of the organization
transformation at Ravell was accelerated by a major event— the relo-
cation of the company. The move was part of a strategic decision for
the organization, specifically the economies of scale for rental expense
and an opportunity to retire old computers and replace them with
a much needed state-of-the-art network. Furthermore, there was a
grave need to replace old legacy applications that were incapable of
operating on the new equipment and were also not providing the
competitive advantage that the company sought. In using the strategy
map, a balanced scorecard can be developed containing the specific
outcomes to achieve the overall mission. The balanced scorecard is
shown in Figure 6.8.
The Ravell balanced scorecard has an additional column that defines
the expected organizational transformation from ROD. This model
addresses the issue of whether a change is truly a transformation. This
method also provides a systematic process to forecast, understand, and
present what technology initiatives will ultimately change in the strategic integration and cultural assimilation components of ROD.
Figure 6.7 Technology strategy map. (Four perspectives: financial, users, process, and learning and growth. Strategies include improve return on project investments, reduce technology overhead, more satisfied users, increase user IT support, provide accurate and timely information, improved systems, new technology products, new ways of staff interaction, and establish new organization structure.)
There are two other important factors embedded in this modified
balanced scorecard technique. First, scorecards can be designed at
varying levels of detail.
Figure 6.8 Ravell phase I balanced scorecard:
Financial. Measurable outcomes: improve returns on project investments; reduce technology overhead costs. Strategic objectives: combine IT expenses with relocation and capitalize the entire expense; integrate the new telephone system with computer network expenses; leverage engineering and communications expenses with technology; retire old equipment from financial statements. Organizational transformation: the combination of expenses requires formation of new communities of practice, which include finance, engineering, and IT.
Users. Measurable outcomes: more satisfied users; increase user IT support. Strategic objectives: increase access to central applications; integrate IT within other departments to improve dynamic customer support requirements; provide new products to replace the old e-mail system and make standard applications available to all users; establish help desk personnel. Organizational transformation: the process of supporting users requires IT staff to embrace reflective practices; user relationships are formed through new communities of practice and cultural assimilation with the user community; a new culture at Ravell is established.
Process. Measurable outcomes: provide accurate and timely information; improved systems. Strategic objectives: improve decision support for improved reporting and strategic marketing; upgrade new internal systems, including customer relationship management (CRM), general ledger, and rights and royalties; investigate new voice-messaging technology to improve integration of e-mail and telephone systems. Organizational transformation: strategic integration occurs through increased discourse and language among communities of practice engaged in making the relocation successful; new knowledge is created and needs knowledge management.
Learning and growth. Measurable outcomes: new technology products; new ways of staff interaction; establish new organization structure. Strategic objectives: physically relocate IT staff across departments; modify the IT reporting structure with a “dotted line” to business units. Organizational transformation: IT becomes more critically reflective and understands the value of its participation in the learning organization; IT staff seeks to know less and to understand the view of the “other.”
Thus, two more balanced scorecards could
be developed that reflect the organizational transformations that
occurred in Ravell phases II and III, or the three phases could
be summarized as one large balanced scorecard or some combina-
tion of summary and detail together. Second, the scorecard can
be modified to reflect unexpected changes during implementa-
tion of a technology. These changes could be related to a shift-
ing mission statement or to external changes in the market that
require a change in business strategy. Most important, though,
are the expected outcomes and transformations that occur during
the course of a project. Essentially, it is difficult to predict how
organizations will actually react to changes during an IT project
and transform.
The balanced scorecard provides a checklist and tracking system
that is structured and sustainable— but not perfect. Indeed, many of the outcomes from the three phases of Ravell were unexpected or not exactly what I had anticipated. The salient issue here is that the scorecard allows an organization to understand when such unexpected changes
have occurred. When this does happen, organizations need to have
an infrastructure and a structured system to examine what a change
in their mission, strategy, or expectations means to all of the com-
ponents of the project. This can be described as a “rippling effect,” in
which one change can instigate others, affecting many other parts of
the whole. Thus, the balanced scorecard, particularly using a strat-
egy map, allows practitioners to reconcile how changes will affect the
entire plan.
Another important component of the balanced scorecard, and the
reason why I use it as the measurement model for outcomes, is its
applicability to organizational learning. In particular, the learning
and growth perspective shows how the balanced scorecard ensures
that learning and strategy are linked in organizational development
efforts.
Implementing balanced scorecards is another critical part of the project: who does the work, what are the roles, and who has responsibility for operating the scorecards? While many companies
use consultants to guide them, it is important to recognize that bal-
anced scorecards reflect the unique features and functions of the com-
pany. As such, the rank and file need to be involved with the design
and support of balanced scorecards.
Every business unit that has a scorecard needs to have someone
assigned to it, someone accountable for it. A special task force may
often be required to launch the training for staff and to agree on how
the scorecard should be designed and supported. It is advisable that the
scorecard be implemented using some application software and made
available over a network. This provides a number of benefits: it reduces paper and local files that might get lost or not be secured; it allows for easy “roll-up” of multiple scorecards to a summary level; and access via the Internet (using an external secured hookup) allows the scorecard to be maintained from multiple locations. This is particularly attractive for staff members and managers who travel.
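The “roll-up” of unit scorecards to a summary level can be illustrated with a small aggregation sketch. This is an assumed Python illustration with invented unit names and figures; real scorecard software is, of course, far richer:

```python
# Each business unit reports (target, actual) pairs per measure; the
# roll-up sums them across units into one summary-level scorecard.
unit_scorecards = {
    "Sales":   {"users supported": (200, 180), "tickets resolved": (90, 95)},
    "Finance": {"users supported": (50, 55),   "tickets resolved": (40, 38)},
    "IT":      {"users supported": (400, 420), "tickets resolved": (300, 290)},
}

def roll_up(cards):
    """Aggregate per-unit (target, actual) pairs into one total per measure."""
    summary = {}
    for measures in cards.values():
        for name, (target, actual) in measures.items():
            t, a = summary.get(name, (0, 0))
            summary[name] = (t + target, a + actual)
    return summary

for name, (target, actual) in roll_up(unit_scorecards).items():
    status = "on target" if actual >= target else "below target"
    print(f"{name}: {actual} of {target} ({status})")
```

Note that a unit can miss its own target while the summary still meets the total, and vice versa, which is one reason the text recommends keeping scorecards at multiple levels of detail rather than only a summary.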
According to Olve et al. (2003), there are four primary responsi-
bilities that can support balanced scorecards:
1. Business stakeholders: These are typically senior managers
who are responsible for the group that is using the score-
card. These individuals are advocates of using scorecards and
require compliance if deemed necessary. Stakeholders use
scorecards to help them manage the life cycle of a technology
implementation.
2. Scorecard designers: These individuals are responsible for the
“look and feel” of the scorecard as well as its content. To some
extent, the designers set standards for appearance, text, and
terminology. In certain situations, the scorecard designers
have dual roles as project managers. Their use of scorecards
helps them understand how the technology will operate.
3. Information providers: These people collect, measure, and
report on the data in the balanced scorecard. This function
can be implemented with personnel on the business unit level
or from a central services department. Reporting informa-
tion often requires support from IT staff, so it makes sense to
have someone from IT handle this responsibility. Information
providers use the scorecard to perform the measurement of
project performance and the handling of data.
4. Learning pilots: These individuals link the scorecard to organizational
learning. This is particularly important when measuring organizational
transformation and individual development.
The size and complexity of an organization will ultimately deter-
mine the exact configuration of roles and responsibilities that are
needed to implement balanced scorecards. Perhaps the most appli-
cable variables are:
Competence: Having individuals who are knowledgeable about
the business and its processes, as well as knowledgeable
about IT.
Availability: Individuals must be made available and appropriately
accommodated in the budget. Balanced scorecards that
do not have sufficient staffing will fail.
Executive management support: As with most technology projects,
there needs to be a project advocate at the executive level.
Enthusiasm: Implementation of balanced scorecards requires a
certain energy and excitement level from the staff and their
management. This is one of those intangible, yet invaluable,
variables.
Balanced Scorecards and Discourse
In Chapter 4, I discussed the importance of language and discourse
in organizational learning. Balanced scorecards require ongoing dia-
logues that need to occur at various levels and between different com-
munities of practice. Therefore, it is important to integrate language
and discourse and communities of practice theory with balanced
scorecard strategy. The target areas are as follows:
• Developing strategy maps
• Validating links across balanced scorecard perspectives
• Setting milestones
• Analyzing results
• Evaluating organizational transformation
Figure 6.9 indicates a community of practice relationship that
exists at a company. Each of these three levels was connected by a
concept I called “common threads of communication.” This model can
be extended to include the balanced scorecard.
The first level of discourse occurs at the executive community
of practice. The executive management team needs to agree on the
specific business strategy that will be used as the basis of the mis-
sion statement for the balanced scorecard. This requires conversations
and meetings that engage the CEO, executive board members (when
deemed applicable), and executive managers, like the chief operat-
ing officer (COO), chief financial officer (CFO), chief information
officer (CIO), and so on. Each of these individuals needs to represent
his or her specific area of responsibility and influence from an execu-
tive perspective. The important concept is that the balanced scorecard
mission and strategy should be a shared vision and responsibility for
the executive management team as a whole. To accomplish this task,
the executive team needs to be instructed on how the balanced score-
card operates and on its potential for accomplishing organizational
transformation that leads to strategic performance. Ultimately, the
discourse must lead to a discussion of the four balanced scorecard
perspectives: financial, customer, process, and learning and growth.
Figure 6.9 Community of practice “threads.” (Diagram: the executive community of practice, the operations management community of practice, and the implementation community of practice, connected by new ideas and adjustments flowing between levels as a result of discourse.)

From a middle management level, the balanced scorecard allows
for a measurable model to be used as the basis of discourse with
executives. For example, the strategy map can be the vehicle for
conducting meaningful conversations on how to transform execu-
tive-level thinking and meaning into a more operationally focused
strategy. Furthermore, the scorecard outlines the intended outcomes
for strategy and organizational learning and transformation.
The concept of using the balanced scorecard as a method with
which to balance thinking and meaning across communities of prac-
tice extends to the operational level as well. Indeed, the challenge of
making the transition from thinking and meaning at the executive
level of operations is complicated, especially since these communi-
ties rarely speak the same language. The measurable outcomes section
of the scorecard provides the concrete layer of outcomes that opera-
tions staff tend to embrace. At the same time, this section provides
corresponding strategic impact and organizational changes needed to
satisfy business strategies set by management.
An alternative method of fostering the needed forms of discourse is to
create multiple-tiered balanced scorecards designed to fit the language
of each community of practice, as shown in Figure 6.10. The diagram
in Figure 6.10 shows that each community can maintain its own lan-
guage and methods while establishing “common threads” to foster a
transition of thinking and meaning between it and other communi-
ties. The common threads from this perspective look at communica-
tion at the organizational/group level, as opposed to the individual
level. This relates to my discussion in Chapter 4, which identified
individual methods of improving personal learning and development
within the organization. This suggests that each balanced scorecard
must embrace language that is common to any two communities to
establish a working and learning relationship— in fact, this common
language is the relationship.
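As a hypothetical sketch of the multiple-tiered scorecards described above (none of these measures come from the text), each community of practice keeps its own scorecard in its own language, while the “common threads” are precisely the measures two adjacent tiers share:

```python
# Each tier's scorecard: measure name -> current value (hypothetical data).
executive = {"revenue growth": 7.0, "market share": 12.0, "project ROI": 9.5}
management = {"project ROI": 9.5, "milestone adherence": 88.0}
operational = {"milestone adherence": 88.0, "defect rate": 2.1}

def common_threads(upper, lower):
    """Measures shared by two adjacent tiers: the common language
    that constitutes the working relationship between them."""
    return sorted(set(upper) & set(lower))

print(common_threads(executive, management))    # ['project ROI']
print(common_threads(management, operational))  # ['milestone adherence']
```

The design point mirrors the text: each tier retains its own measures (executives never see “defect rate” directly), and the intersection between adjacent scorecards is where cross-community discourse occurs.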
Knowledge Creation, Culture, and Strategy
Balanced scorecards have been used as a measurement of knowledge
creation. Knowledge creation, especially in technology, has signifi-
cant meaning, specifically in the relationship between data and infor-
mation. Understanding the sequence between these two is interesting.
We know that organizations, through their use of software
applications, inevitably store data in repositories called databases.
The information stored in these databases can be accessed by many
different software applications across the organization. Accessing
multiple databases and integrating them across business units creates
further valuable information. Indeed, the definition of information
is “organized data.” These organized data are usually stored in data
infrastructures called data warehouses or data marts, where the infor-
mation can be queried and reported on to assist managers in their
decision-making processes. We see, in the Ravell balanced scorecard,
that decision-support systems were actually one of the strategic objec-
tives for the process perspective.
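The sequence described above, from raw data in separate databases to organized information that supports decisions, can be sketched as follows. This is an illustrative toy example only; the source systems, customers, and the “at risk” query are all hypothetical:

```python
# Raw records from two business-unit databases (hypothetical data).
sales_db = [
    {"customer": "Acme", "amount": 1200},
    {"customer": "Baker", "amount": 300},
]
support_db = [
    {"customer": "Acme", "open_tickets": 4},
    {"customer": "Baker", "open_tickets": 0},
]

def integrate(sales, support):
    """Join the two sources on customer: organizing data across
    business units into information, as a data warehouse would."""
    tickets = {row["customer"]: row["open_tickets"] for row in support}
    return {
        row["customer"]: {"revenue": row["amount"],
                          "open_tickets": tickets.get(row["customer"], 0)}
        for row in sales
    }

warehouse = integrate(sales_db, support_db)

# A simple decision-support query: high-revenue customers with open tickets.
at_risk = [c for c, v in warehouse.items()
           if v["revenue"] > 1000 and v["open_tickets"] > 0]
print(at_risk)  # ['Acme']
```

The point of the sketch is the definitional one in the text: neither source table alone answers the manager’s question; only the integrated, organized view does.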
Figure 6.10 Community of practice “common threads.” (Diagram: an organization-level balanced scorecard linked to executive-level, management-level, and operational-level balanced scorecards through common discourse threads.)
Unfortunately, information does not ensure new knowledge cre-
ation. New knowledge can only be created by individuals who evolve
in their roles and responsibilities. Individuals, by participating in
groups and communities of practice, can foster the creation of new
organizational knowledge. However, to change or evolve one’s behavior,
there must be individual or organizational transformation. This
means that knowledge is linked to organizational transformation. The
process to institutionalize organizational transformation is dependent
on management interventions at various levels. Management needs to
concentrate on knowledge management and change management and
to act as a catalyst and advocate for the successful implementation of
organizational learning techniques. These techniques are necessary to
address the unique needs of ROD.
Ultimately, the process must be linked to business strategy. ROD
changes the culture of an organization, through the process of cul-
tural assimilation. Thus, there is an ongoing need to reestablish align-
ment between culture and strategy, with culture altered to fit new
strategy, or strategy first, then culture (Pietersen, 2002). We see this
as a recurring theme, particularly from the case studies, that busi-
ness strategy must drive organizational behavior, even when technol-
ogy acts as a dynamic variable. Pietersen identifies what he called six
myths of corporate culture:
1. Corporate culture is vague and mysterious.
2. Corporate culture and strategy are separate and distinct
things.
3. The first step in renewing our company should be defining our
values.
4. Culture cannot be measured or rewarded.
5. Our leaders must communicate what our culture is.
6. Our culture is the one constant that never changes.
In response to these myths, Pietersen (2002) establishes four basic
rules of success for creating a starting point for the balance between
culture and strategy:
1. Company values should directly support strategic priorities.
2. They should be described as behaviors.
3. They should be simple and specific.
4. They should be arrived at through a process of enrollment
(motivation).
Once business synergy is created, sustaining the relationship
becomes an ongoing challenge. According to Pietersen (2002), this
must be accomplished by continual alignment, measurement, set-
ting examples, and a reward system for desired behaviors. To lead
change, organizations must create compelling statements of the case
for change, communicate constantly and honestly with their employ-
ees, maximize participation, remove ongoing resistance in the ranks,
and generate some wins. The balanced scorecard system provides the
mechanism to address the culture– strategy relationship while main-
taining an important link to organizational learning and ROD. These
linkages are critical because of the behavior of technology. Sustaining
the relationship between culture and strategy is simply more critical
with technology as the variable of change.
Ultimately, the importance of the balanced scorecard is that it
forces an understanding that everything in an organization is con-
nected to some form of business strategy. Strategy calls for change,
which requires organizational transformation.
Mission: To accelerate investment in technology during the relocation
of the company for reasons of economies of scale and competitive
advantage.
7

Virtual Teams and Outsourcing
Introduction
Much has been written and published about virtual teams. Most
define virtual teams as those that are geographically dispersed,
although others state that virtual teams are those that primarily inter-
act electronically. Technology has been the main driver of the growth
of virtual teams. In fact, technology organizations, due mostly to the
advent of competitive outsourcing abroad, have pushed information
technology (IT) teams to learn how to manage across geographical
locations, in such countries as India, China, Brazil, Ireland, and many
others. These countries are not only physically remote but also present
barriers of culture and language. These barriers often impede
communications about project status and reduce the likelihood of
delivering a project on time and within forecasted budgets.
Despite these major challenges, outsourcing remains attractive due
to the associated cost savings and talent supply. These two advantages
are closely associated. Consider the migration of IT talent that began
with the growth of India in providing cheap and educated talent. The
promise of cost savings caused many IT development departments to
begin using more India-based firms. The ensuing decline in IT jobs
in the United States resulted in fewer students entering IT curricu-
lums at U.S. universities for fear that they would not be able to find
work. Thus began a cycle of lost jobs in the United States and further
demand for talent abroad. Now, technology organizations are faced
with the fact that they must learn to manage virtually because the tal-
ent they need is far away.
From an IT perspective, successful outsourcing depends on effec-
tive use of virtual teams. However, the converse is not true; that is,
virtual teams do not necessarily imply outsourcing. Virtual teams can
be made up of workers anywhere, even those in the United States
who are working from a distance rather than reporting to an office
for work. A growing number of employees in the United States want
more personal flexibility; in response, many companies are allow-
ing employees to work from home more often— and have found the
experience most productive. This type of virtual team management
generally follows a hybrid model, with employees working at home
most of the time but reporting to the office for critical meetings, an
arrangement that dramatically helps with communication and allows
management to have quality checkpoints.
This chapter addresses virtual teams working both within the
United States and on an outsource basis and provides readers with
an understanding of when and how to consider outsource partners.
Chapter topics include management considerations, dealing with
multiple locations, contract administration, and in-house alternatives.
Most important, this chapter examines organizational learning as a
critical component of success in using virtual teams. Although the
advent of virtual teams creates another level of complexity for design-
ing and maintaining learning organizations, organizational learning
approaches represent a formidable solution to the growing dilemma of
how teams work, especially those that are 100% virtual.
Most failures in virtual management are caused by poor communi-
cation. From an organizational learning perspective, we would define
this as differences in meaning making— stemming mostly from cul-
tural differences in the meaning of words and differing behavioral
norms. There is also no question that time zone differences play a role
in certain malfunctions of teams, but the core issues remain commu-
nication related.
As stated, concerning the Ravell case study, cultural transformation
is slow to occur and often happens in small intervals. In many virtual
team settings, team members may never do more than communicate
via e-mail. As an example, I had a client who was outsourcing
production in China. One day, they received an e-mail stating, “We cannot
do business with you.” Of course, the management team was confused
and worried, seeking to understand why the business arrangement
was ending without any formal discussions of the problem. A trans-
lator in China was hired to help clarify the dilemma. As it turned
out, the statement was meant to suggest that the company needed
to provide more business— more work, that is. The way the Chinese
communicated that need was different from the Western interpre-
tation. This is just a small example of what can happen without a
well-thought-out organizational learning scheme. That is, individuals
need to develop more reflective abilities to comprehend the meaning
of words before they take action, especially in virtual environments
across multiple cultures. The development of such abilities— the
continual need for organizations to respond effectively to dynamic
changes brought about by technology, in this case e-mail— is consistent
with my theory of responsive organizational dynamism (ROD).
The e-mail established a new dynamic of communication. Think how
often specifications and product requirements are changing and need
virtual teams to somehow come together and agree on how to get the
work done— or think they agree.
Prior research and case studies provide tools and procedures as ways
to improve productivity and quality of virtual team operations. While
such processes and methodologies are helpful, they will not necessar-
ily ensure the successful outcomes that IT operations seek unless they
also change. Specifically, new processes alone are not sufficient or a
substitute for learning how to better communicate and make mean-
ing in a virtual context. Individuals must learn how to develop new
behaviors when working virtually. We must also remember that vir-
tual team operations are not limited to IT staffs. Business users often
need to be involved as they would in any project, particularly when
users are needed to validate requirements and test the product.
Status of Virtual Teams
The consensus tells us that virtual teams render results. According to
Bazarova and Walther (2009), “Virtual groups— whose members
communicate primarily or entirely via email, computer conferencing, chat,
or voice— have become a common feature of twenty-first century
organizations” (p. 252). Lipnack and Stamps (2000) state that virtual
teams will become the accepted way to work and will likely reshape
the work world. While this prediction seems accurate, there has also
been evidence of negative attribution or judgment about problems that
arise in virtual team performance. Thus, it is important to understand
how virtual teams need to be managed and how realistic expectations
of such teams might be formed. So, while organizations understand
the need for virtual teams, they are not necessarily happy with proj-
ect results. Most of the disappointment relates to a lack of individual
development that, coupled with updated processes, would help change
the mindset of how people need to communicate.
Management Considerations
Attribution theory “describes how people typically generate
explanations for outcomes and actions— their own and others” (Bazarova &
Walther, 2009, p. 153). This theory explains certain behavior patterns
that have manifested during dysfunctional problems occurring in man-
aging virtual teams. Virtual teams are especially vulnerable to such
problems because their limited interactions can lead to members not
having accurate information about one another. Members of virtual
teams can easily develop perceptions of each other’s motives that are
inaccurate or distorted by differing cultural norms. Research also shows
us that virtual team members typically attribute failures to external
factors and successes to internal factors. Problems are blamed on the
virtual or outside members for not being available or accountable to the
physical community. Such attributions then tend to reinforce the view
that virtual teams are problematic because of their very nature. This then
establishes the dilemma of virtual teams and organizations: their
use will continue to increase and dominate workplace structures and
yet will present challenges to organizations that do not want to change.
The lack of support to change will be substantiated during failures in
expected outcomes. Some of the failures, however, can and should be
attributed to distance. As Olson and Olson (2000) state: “Distance
will persist as an important element of human experience” (p. 172). So,
despite the advent of technology, it is important not to ignore the social
needs that teams must satisfy to be effective.
Dealing with Multiple Locations
Perhaps the greatest difficulty in implementing virtual teams is the
reality that they span multiple locations. More often than not, these
locations are in different time zones and span multiple cultures. To
properly understand the complexity of interactions, it makes sense to revisit
the organizational learning tools discussed in prior chapters. Perhaps
another way of viewing virtual teams and their effects on organizational
learning is to perceive them as another dimension— a dimension that
is similar to multiple layers in a spreadsheet. This notion means that
virtual teams do not upset the prior relationship between technology as
a variable and the organization, viewed from a two-dimensional
perspective; rather, they add depth to how technology affects this
relationship in a third dimension. Figure 7.1
reflects how this dimension should be perceived.
Figure 7.1 The three-dimensional ROD. (Diagram: technology as an independent variable creates a physical organizational dynamism dimension, a virtual organizational dynamism dimension, and a virtual acceleration dimension (the acceleration of events that require different infrastructures and organizational processes), which together, through strategic integration and cultural assimilation, make up total organizational dynamism.)

In other words, the study of virtual teams should be viewed as
a subset of the study of organizations. When we talk about workplace
activities, we need to address issues at the component level. In
this example, the components are the physical organization and the
virtual organization. The two together make up the superset or the
entire organization. To be fruitful, any discussion of virtual organiza-
tions must be grounded in the context of the entire organization and
address the complete topic of workplace learning and transformation.
In Chapter 4, I discussed organizational learning in communities of
practice (COP). In this section, I expand that discussion to include
virtual organizational structures.
The growing use of virtual teams may facilitate the complete inte-
gration of IT and non-IT workers. The ability to connect from various
locations using technology itself has the potential to expand COP.
But, as discussed in Chapter 4, it also presents new challenges, most
of which relate to the transient nature of members, who tend to par-
ticipate on more of a subject or transactional basis, rather than being
permanent members of a group. Table 7.1 reflects some of the key
differences between physical and virtual teams.
Table 7.1 Operating Differences between Traditional and Virtual Teams

• Traditional teams tend to have fixed participation and members; virtual team membership shifts based on topics and needs.
• Traditional team members tend to be from the same organization; virtual team members can include people from outside the organization (clients and collaborators).
• Traditional team members are 100% dedicated; virtual team members are assigned to multiple teams.
• Traditional team members are collocated geographically and by organization; virtual team members are distributed geographically and by organization.
• Traditional teams tend to have a fixed term of membership, that is, start and stop dates; virtual teams are reconfigured dynamically and may never terminate.
• Traditional teams tend to have one overall manager; virtual teams have multiple reporting relationships with different parts of the organization at different times.
• Traditional teamwork is physical and practiced in face-to-face interactions; virtual teamwork is basically social.
• Traditional engagement is often during group events and can often be hierarchical in nature; in virtual teams, individual engagement is inseparable from empowerment.

There has been much discussion about whether every employee is
suited to perform effectively in a virtual community. The consensus is
that effective virtual team members need to be self-motivated, able to
work independently, and able to communicate clearly and in a positive
way. However, given that many workers lack some or all of these
skills, it seems impractical to declare that workers who do not meet
these criteria should be denied the opportunity to work in virtual
teams. A more productive approach might be to encourage workers to
recognize that they must adapt to changing work environments at the
risk of becoming marginal in their organizations.
To better understand this issue, I extended the COP matrix,
presented in Chapter 4, to include virtual team considerations in
Table 7.2.
Item 7 in Table 7.2 links the study of knowledge management with
COP. Managing knowledge in virtual communities within an orga-
nization has become associated directly with the ability of a firm to
sustain competitive advantage. Indeed, Peddibhotla and Subramani
(2008) state that “ virtual communities are not only recognized as
important contributors to both the development of social networks
among individuals but also towards individual performance and firm
performance” (p. 229). However, technology-enabled facilities and
support, while providing a repository for better documentation, also
create challenges in maintaining such knowledge. The process of how
information might become explicit has also dramatically changed
with the advent of virtual team communications. For example, much
technology-related documentation evolves from bottom-up sources,
rather than the traditional top-down process. In effect, virtual com-
munities share knowledge more on a peer-to-peer basis or through
mutual consensus of the members. As a result, virtual communities
have historically failed to meet expectations, particularly those of
management, because managers tend to be uninvolved in communi-
cation. While physical teams can meet with management more often
before making decisions, virtual teams have no such contact available.
To better understand the complexities of knowledge management and
virtual teams, Sabherwal and Becerra-Fernandez (2005) expand on
Nonaka’s (1994) work on knowledge management, which outlined
four modes of knowledge creation: externalization, internalization,
combination, and socialization. Each of these modes is defined and
discussed next.
Table 7.2 Communities of Practice: Virtual Team Extensions

Step 1. Understanding strategic knowledge needs: What knowledge is critical to success.
  Technology extension: Understanding how technology affects strategic knowledge and what specific technological knowledge is critical to success.
  Virtual extension: Understanding how to integrate multiple visions of strategic knowledge and where it can be found across the organization.

Step 2. Engaging practice domains: Where people form communities of practice to engage in and identify with.
  Technology extension: Technology identifies groups based on business-related benefits, requiring domains to work together toward measurable results.
  Virtual extension: Virtual domains are more dynamic and can be formed for specific purposes and then reconfigured based on practice needs of subjects discussed.

Step 3. Developing communities: How to help key communities reach their full potential.
  Technology extension: Technologies have life cycles that require communities to continue; treats the life cycle as a supporter for attaining maturation and full potential.
  Virtual extension: Communities can be reallocated to participate in multiple objectives. Domains of discussion have no limits to reach organizational needs.

Step 4. Working the boundaries: How to link communities to form broader learning systems.
  Technology extension: Technology life cycles require new boundaries to be formed. This will link other communities that were previously outside of discussions and thus expand input into technology innovations.
  Virtual extension: Virtual abilities allow for customer interfaces, vendors, and other interested parties to join the community.

Step 5. Fostering a sense of belonging: How to engage people’s identities and sense of belonging.
  Technology extension: The process of integrating communities: IT and other organizational units will create new evolving cultures that foster belonging as well as new social identities.
  Virtual extension: Communities establish belonging in a virtual way. Identities are established more on content of discussion than on physical attributes of members.

Step 6. Running the business: How to integrate communities of practice into running the business of the organization.
  Technology extension: Cultural assimilation provides new organizational structures that are necessary to operate communities of practice and to support new technological innovations.
  Virtual extension: The organization functions more as a virtual community or team, being more agile to demands of the business, and interactions may not always include all members.

(Continued)

Externalization

Externalization is the process of converting or translating tacit
knowledge (undocumented knowledge) into explicit forms. The problem with
this concept is whether individuals really understand what they know
and how it might affect organizational knowledge. Virtual communi-
ties have further challenges in that the repository of tacit information
can be found in myriad storage facilities, namely, audit trails of e-mail
communications. While Sabherwal and Becerra-Fernandez (2005)
suggest that technology may indeed assist in providing the infrastruc-
ture to access such information, the reality is that the challenge is not
one of process but rather of thinking and doing. That is, it is more a
matter of unlearning existing ways of thinking and doing and moving into
new modes of using knowledge that is abundantly available.
Internalization
Internalization is a reversal of externalization: It is the process of
transferring explicit knowledge into tacit knowledge— or individual-
ized learning. The individual thus makes the explicit process into his
or her own stabilized thinking system so that it becomes intuitive
in operation. The value of virtual team interactions is that they can
provide more authentic evidence of why explicit knowledge is valu-
able to the individual. Virtual systems simply can provide more people
who find such knowledge useful, and such individuals, coming from
a more peer relationship, can understand why their procedures can be
internalized and become part of the self.
Table 7.2 (Continued) Communities of Practice: Virtual Team Extensions

Step 7. Applying, assessing, reflecting, renewing: How to deploy knowledge strategy through waves of organizational transformation.
  Technology extension: The active process of dealing with multiple new technologies that accelerates the deployment of knowledge strategy. Emerging technologies increase the need for organizational transformation.
  Virtual extension: Virtual systems allow for more knowledge strategy because of the ability to deploy information and procedures. Tacit knowledge is easier to transform to explicit forms.

Combination

Combination allows individuals to integrate their physical processes
with virtual requirements. The association, particularly in a global
environment, allows virtual team members to integrate new explicit
forms into their own, not by replacing their beliefs, but rather, by
establishing new hybrid knowledge systems. This is particularly
advantageous across multiple cultures and business systems in coun-
tries that hold different and, possibly complementary, knowledge
about how things can get done. Nonaka’s (1994) concept of combination
requires participants in the community to be at later stages of
multiplicity— suggesting that this form can only be successful among
certain levels or positions of learners.
Socialization
As Nonaka (1994) notes, individuals learn by observation, imitation,
and practice. The very expansion of conversations via technology
can provide a social network in which individuals can learn simply
through discourse. Discourse, as I discussed in Chapter 4, is the basis
of successful implementations of COP. The challenge in virtual social
networks is the difficulty participants have in assessing the authentic-
ity of the information provided by those in the community.
The four modes of knowledge management formulated by Nonaka
(1994) need to be expanded to embrace the complexities of virtual
team COPs. Most of the adjustments are predicated on the team’s
ability to deal with the three fundamental factors of ROD that I
introduced in this book: acceleration, dynamism, and unpredictability.
The application of these three factors of ROD to Nonaka’s
four modes is discussed next.
Externalization Dynamism
The externalization mode must be dynamic and ongoing with little
ability to forecast the longevity of any tacit-to-explicit formulation.
In other words, tacit-to-explicit change may occur daily but may
only operate effectively for a short period due to additional changes
brought on by technology dynamism. This means that members in
a community must continually challenge themselves to revisit pre-
vious tacit processes and acknowledge the need to reformulate their
tacit systems. Thus, transformation from tacit knowledge to explicit
knowledge can be a daily challenge for COP virtual organizations.
173 Virtual Teams and Outsourcing
Internalization Dynamism
This process of internalizing explicit forms requires careful reflection. Given differences in cultures and the acceleration of business change, individualized learning that creates new tacit abilities may not operate the same way in different firm settings. It may be necessary
to adopt multiple processes depending on the environment in which
tacit operations are being performed. As stated, what might work in
China may not work in Brazil, for example. Tacit behavior is culture
oriented, so multiple and simultaneous versions must be respected
and practiced. Further expansion of internalization is a virtual team’ s
understanding of how such tacit behaviors change over time due to
the acceleration of new business challenges.
Combination Dynamism
I believe the combination dynamism mode is the most important com-
ponent of virtual team formation. Any combination or hybrid model
requires a mature self, as specified in my maturity arcs discussed in
Chapter 4. This means that individuals in virtual teams may need to
be operating at a later stage of maturity to deal with the complexities
of changing dispositions. Members of COPs must be observed, and a
determination of readiness must be made for such new structures to
develop in a virtual world. Thus, COP members need training; the lack
of such training might explain why so many virtual teams have had
disappointing results. Readiness for virtual team participation depends
on a certain level of relativistic thinking. To be successful, virtual team
members must be able to see themselves outside their own world and
have the ability to understand the importance of what “others” need.
This position suggests that individuals need to be tested to determine the extent to which they are ready for such challenges. Organizational learning techniques remain a valid method for developing workers who can cope
with the dynamic changes that occur in virtual team organizations.
Socialization Dynamism
Socialization challenges the virtual team members’ abilities to
understand the meaning of words and requires critical reflection
of its constituents. ROD requires that virtual teams be agile and,
especially, that they be responsive to the emotions of others in the
community. This may require individuals to understand another
member’ s maturity. Thus, virtual team members need to be able
to understand why another member is behaving as he or she is or
reacting in a dualistic manner. Assessment in a virtual collabora-
tion becomes a necessity, especially given the unpredictability of
technology-based projects.
In Table 5.1, I showed how tacit knowledge is mapped to ROD.
Table 7.3 further extends this mapping to include virtual teams.
The requirements support research findings that knowledge man-
agement in a virtual context has significant factors that must be
addressed to improve its success. These factors include management
commitment, resource availability, modification of work practices,
marketing of the initiative, training, and facilitation of cultural dif-
ferences (Peddibhotla & Subramani, 2008).
The following are some action steps that organizations need to take
to address these factors:
1. The executive team needs to advocate commitment and support for virtual teams. The chief information officer (CIO)
and his or her counterparts need to provide teams with the
“sponsorship” that the organization will need to endure setbacks until the virtual organization becomes fully integrated
into the learning organization. This commitment can be
accomplished via multiple actions, including, but not limited
to, a kickoff meeting with staff, status reports to virtual teams
on successes and setbacks, e-mails and memos on new vir-
tual formations, and a general update on the effort, perhaps
on a quarterly basis. This approach allows the organization to
understand the evolution of the effort and know that virtual
teams are an important direction for the firm.
2. There should be training and practice sessions with collocated
groups that allow teams to voice their concerns and receive
direction on how best to proceed. Practice sessions should
focus on team member responsibilities and advocating their
ownership of responsibility. These sessions should cover les-
sons learned from actual experiences, so that groups can learn
Table 7.3 Tacit Knowledge and Virtual Teams

TACIT KNOWLEDGE | STRATEGIC INTEGRATION | STRATEGIC INTEGRATION VIRTUAL | CULTURAL ASSIMILATION | CULTURAL ASSIMILATION VIRTUAL

Cultural and social history
Physical: How the IT department and other departments translate emerging technologies into their existing processes and organization.
Virtual: How can virtual and nonvirtual departments translate emerging technologies into their projects across multiple locations and cultures?

Problem-solving modes
Physical: Individual reflective practices that assist in determining how specific technologies can be useful and how they can be applied; utilization of tacit knowledge to evaluate probabilities for success.
Virtual: Individual reflective practices and intercultural communications needed to determine how tacit knowledge should be applied to specific group and project needs.
Physical: Technology opportunities may require organizational and structural changes to transfer tacit knowledge to explicit knowledge.
Virtual: Technological opportunities may require configuration of virtual communities of practice and explicit knowledge.

Orientation to risks and uncertainties
Physical: Technology offers many risks and uncertainties. All new technologies may not be valid for the organization. Tacit knowledge is a valuable component to fully understand realities, risks, and uncertainties.
Virtual: Technology risks and uncertainties need to be assessed by multiple virtual and physical teams to determine how technologies will operate across multiple locations and cultures.

Worldviews
Physical: Technology has global effects and changes market boundaries that cross business cultures; it requires tacit knowledge to understand existing dispositions on how others work together. Reviews how technology affects the dynamics of operations.
Virtual: Market boundaries are more dynamic across virtual teams that operate to solve cross-cultural and business problems. Worldviews are more the norm than the exception.

Organizing principles
Physical: How will new technologies actually be integrated? What are the organizational challenges to “rolling out” products, and to implementation timelines? What positions are needed, and who in the organization might be best qualified to fill new responsibilities? Identify limitations of the organization; that is, tacit knowledge versus explicit knowledge realities.
Virtual: What are the dynamic needs of the virtual team to handle new technologies on projects? What are the new roles and responsibilities of virtual team members? Determine what tacit and explicit knowledge will be used to make decisions.

Horizons of expectation
Physical: Individual limitations in the tacit domain that may hinder or support whether a technology can be strategically integrated into the organization.
Virtual: Individuals within the virtual community need to understand the limitations on strategic uses of technology. This may vary across cultures.
from others. Training should set the goals and establish the
criteria for how virtual teams interact in the firm. This should
include the application software and repositories that are in
place and the procedures for keeping information and knowl-
edge current.
3. External reminders should be practiced so that virtual teams
do not become lax and develop bad habits since no one is
monitoring or measuring success. Providing documented
processes, perhaps a balanced scorecard or International
Organization for Standardization (ISO) 9000-type proce-
dures and measurements, is a good practice for monitoring
compliance.
Dealing with Multiple Locations and Outsourcing
Virtual organizations are often a given in outsourcing environments,
especially those that are offshore. Offshore outsourcing also means
that communications originate in multiple locations. The first step in
dealing with multiple locations is finding ways to deal with different
time zones. Project management can become more complicated when
team meetings occur at obscure times for certain members of the
community. Dealing with unanticipated problems can be more chal-
lenging when assembling the entire team may not be feasible because
of time differences. The second challenge in running organizations
in multiple locations is culture. Differing cultural norms can espe-
cially cause problems during off-hour virtual sessions. For example,
European work culture does not often support having meetings out-
side work hours. In some countries, work hours may be regulated by
the government or powerful unions.
A further complication in outsourcing is that the virtual team
members may be employed by different companies. For instance,
part of the community may include a vendor who has assigned staff
resources to the effort. Thus, these outsourced team members belong
to the community of the project yet also work for another organiza-
tion. The relationship between an outside consultant and the internal
team is not straightforward and varies among projects. For example,
some outsourced technical resources may be permanently assigned to
the project, so while they actually work for another firm, they behave
and take daily direction as if they were an employee of the focal busi-
ness. Yet, in other relationships, outsourced resources work closely
under the auspices of the outsourced “project manager,” who acts
as a buffer between the firm and the vendor. Such COP formations
vary. Still other outsourcing arrangements involve team members the
firm does not actually know unless outsourced staff is called in to
solve a problem. This situation exists when organizations outsource
complete systems, so that the expectation is based more on the results
than on the interaction. Notwithstanding the arrangement or level of integration, a COP must exist. Its behavior varies in participation across all three of these examples, but in each case the virtual relationship is driven more by dynamic business events than by preplanned activities.
If we look closely at COP approaches to operations, it is neces-
sary to create an extension of dynamism in a virtual team commu-
nity. The extension reflects the reliance on dynamic transactions,
which creates temporary team formations based on demographic
similarity needs. This means that virtual teams will often be formed
based on specific interests of people within the same departments.
Table 7.4 shows the expansion of dynamism in a virtual setting
of COPs.
Thus, the advent of modern-day IT outsourcing has complicated
the way COPs function. IT outsourcing has simultaneously brought
attention to the importance of COP and knowledge management
in general. It also further supports the reality of technology dyna-
mism as more of a norm in human communication in the twenty-
first century.
Revisiting Social Discourse
In Chapter 4, I covered the importance of social discourse and the use
of language as a distinct component of how technology changes COP.
That section introduced three components that linked talk and action,
according to the schema of Grant et al. (1998): Identity, skills and
emotion. Figure 7.2 shows this relationship again. The expansion of
virtual team communications further emphasizes the importance of
discourse and the need to rethink how these three components relate
to each other in a virtual context.
Identity
I spoke about the “cultures of practice” that emerge due to the expansion of contacts from technology capacities. This certainly holds true with virtual
teams. However, identities can be transactional: an individual may be a member of multiple COP environments and have different identities in each. This fact emphasizes the multitasking aspect of the linear development modules discussed throughout
this book. Ultimately, social discourse will dynamically change based
on the COP to which an individual belongs, and that individual needs
to be able to “inventory” these multiple roles and responsibilities.
Such roles and responsibilities themselves will transform, due to the
dynamic nature of technology-driven projects. Individuals will thus
have multiple identities and must be able to manage those identities
across different COPs and in different contexts within those COPs.
Table 7.4 COP Virtual Dynamism

COP PHYSICAL SOCIAL SETTINGS | COP VIRTUAL DYNAMISMS

Physical: There is shared pursuit of interest accomplished through group meetings.
Virtual: Interest in discussion is based more on dynamic transactions and remote needs to satisfy specific personal needs.

Physical: Creation of the “community” is typically established within the same, or similar, departments.
Virtual: The notion of permanency is deemphasized. Specific objectives based on the needs of the group will establish the community.

Physical: Demographic similarity is a strong contributor to selection of community members.
Virtual: Demographic similarity has little to do with community selection. Selection is based more on subject-matter expertise.

Physical: Situated learning is often accomplished by assisting members to help develop others. Learning occurs within a framework of social participation.
Virtual: Situated learning to help others has less focus. It may not be seen as the purpose or responsibility of virtual team members. Social participation has a more concrete perspective.

Physical: Community needs to assess technology dynamism using ROD in more physical environments requiring a formal infrastructure.
Virtual: Community is less identifiable from a physical perspective. ROD must be accomplished by members who have special interests at the subject level as opposed to the group level.

Physical: COP works well with cultural assimilation of formal work groups where participants are clearly identified.
Virtual: Cultural assimilation in virtual settings is more transaction based. Assimilation can be a limited reality during the time of the transaction to ensure success of outcomes.

Physical: COP can be used for realignment of work departments based on similar needs.
Virtual: COP in a virtual environment creates temporary realignments, based on similar needs during the process.

Physical: COP supports continual learning and dealing with unplanned action.
Virtual: COPs are continually reconfigured and do not have permanency of group size or interest.
This requires individual maturity: members must be able to cope with the “other” and understand the relativistic nature of multiple cultures and the way discourse transforms into action.
Skills
I mentioned the importance of persuasion as a skill to transform talk
into action. Having the ability to persuade others across virtual teams
is critical. Often, skills are misrepresented as technical abilities that
give people a rite of passage. Across multiple cultures, individuals
in teams must be able to recognize norms and understand how to
communicate with others to get tangible results on their projects. It is
difficult to make such determinations about individuals that one has
never met face to face. Furthermore, virtual meetings may not provide the necessary background required to properly understand a person’s skill sets, both “hard” and “soft.” The soft skills analysis is more important as the individual’s technical credentials become assumed.
We see such assumptions when individuals transition into manage-
ment positions. Ascertaining technical knowledge at the staff level is
easier, almost like an inventory analysis of technical requirements.
Figure 7.2 Grant’s schema of the relationship between talk and action: conversational activity and conversational content link identity, skills, and emotions to action.
However, assessing an individual’s soft skills is much more challenging. Virtual teams will need to create more complex and broadened inventories of their team’s skill sets, as well as establish better criteria on how to measure soft skills. Soft skills will require individuals to have better “multicultural” abilities, so that team members can be
Emotion
Like persuasion, emotion involves an individual’s ability to motivate
others and to create positive energy. Many of those who successfully
use emotion are more likely to have done so in a physical context than
a virtual one. Transferring positive emotion in a virtual world can
be analogous to what organizations experienced in the e-commerce
world, in which organizations needed to rebrand themselves across
the Web in such a way that their image was reflected virtually to
their customers. Marketing had to be accomplished without exposure
to the buyer during purchase decisions. Virtual COPs are similar: the representation must be what the individual takes away, without seeing the results physically. This certainly offers a new dimension for
managing teams. This means that the development requirements for
virtual members must include advanced abstract thinking so that the
individual can better forecast virtual team reactions to what will be
said, as opposed to reacting while the conversation is being conducted
or thinking about what to do after virtual meetings.
In Chapter 4, I presented Marshak’s (1998) work on the types of talk that lead to action: tool-talk, frame-talk, and mythopoetic-talk.
Virtual teams require modification to the sequence of talk; that is,
the use of talk is altered. Let us first look at Figure 7.3, representing Marshak’s model. To be effective, virtual teams must follow this
sequence from the outside inward. That is, the virtual team must focus
on mythopoetic-talk in the center as opposed to an outer ring. This
means that ideogenic issues must precede interpretation in a virtual
world. Thus, tool-talk, which in the physical world lies at the center of the types of talk, is now moved to the outside rectangle. In other words,
instrumental actions lag those of ideology and interpretation. This is
restructured in Figure 7.4.
Mythopoetic-talk is at the foundation of grounding ideas in a vir-
tual COP. It would only make sense that a COP-driven talk requires
ideogenic behavior before migrating to instrumental outcomes.
Remember that ideogenic talk allows for concepts of intuition and ideas for concrete application, which are especially relevant among multiple cultures and societies. So, we again see that virtual teams require changes
in the sequence of how learning occurs. This change in sequence
places more emphasis on the need for an individual to be more developmentally mature with respect to thinking, handling differences, and thinking abstractly. This new “abstract individual” must be able
to reflect before action and reflect in action to be functionally compe-
tent in virtual team participation.
Because ROD is relevant, it is important to determine how virtual
teams affect the ROD maturity arc first presented in Figure 4.10 and
redisplayed in Figure 7.5.
Figure 7.3 Marshak’s types of talk: mythopoetic-talk (ideogenic), frame-talk (interpretive), and tool-talk (instrumental).

Figure 7.4 Virtual team depiction of Marshak’s types of talk: mythopoetic-talk (ideogenic), frame-talk (interpretive), and tool-talk (instrumental).
Figure 7.5 Responsive organizational dynamism arc model. [The arc maps the stages of individual and organizational learning (operational knowledge; department/unit view as other; integrated disposition; stable operations; organizational leadership) against sector variables, including organizational learning constructs, management level, strategic integration, and cultural assimilation.]

Figure 7.6 Virtual team extension to the ROD arc. Changes are shown in italics. [The virtual extension modifies the arc of Figure 7.5: reflective practices become virtual and group based using virtual communities of practice; organizational changes are never fully completed and may be in temporary operation; new or modified COP member positions may be permanent or transitional based on project needs; cultural assimilation becomes more transaction based as different members join or leave the COP; and organizational learning at the executive level incorporates virtual team needs using more dynamic COP structures and more situational knowledge management.]
Figure 7.6 represents the virtual team extension to the ROD arc.
The changes to the cells are shown in italics. Note that there are no
changes to operational knowledge because this stage focuses solely
on self-knowledge learned from authoritative sources. However, as
the individual matures, there is greater need to deal with uncer-
tainty. This includes the uncertainty that conditions in a COP may
be temporary, and thus knowledge may need to vary from meeting to
meeting. Furthermore, while operational realities may be more trans-
actional, it does not necessarily mean that adopted changes are not
permanent. Most important is the reality that permanence in general
may no longer be a characteristic of how the organization operates;
this further emphasizes ROD as a way of life. As a result of this
extreme complexity in operations, there is an accelerated requirement
for executives to become involved earlier in the development process.
Specifically, by stage two (department/unit view of the other), execu-
tives must be engaged in virtual team management considerations.
Ultimately, the virtual team ROD arc demonstrates that vir-
tual teams are more complex and therefore need members who are
more mature to ensure the success of outsourcing and other virtual
constructs. It also explains why virtual teams have struggled, likely
because their members are not ready for the complex participation
necessary for adequate outcomes.
We must also remember that maturity growth is likely not parallel
in its linear progression. This was previously shown in Figure 4.12.
This arc demonstrates the challenge managers face in gauging
the readiness of their staff to cope with virtual team engagement.
On the other hand, the model also provides an effective measure-
ment schema that can be used to determine where members should
be deployed and their required roles and responsibilities. Finally, the
model allows management to prepare staff for the training and devel-
opment they need as part of the organizational learning approach to
dealing with ROD.
8
Synergistic Union of IT and Organizational Learning
Introduction
This chapter presents case studies that demonstrate how information
technology (IT) and organizational learning occur in the real corpo-
rate world. It examines the actual processes of how technological and
organizational learning can be implemented in an organization and
what management perspectives can support its growth so that responsive organizational dynamism can be formed and developed.
I will demonstrate these important synergies through three case stud-
ies that will show how the components of responsive organizational
dynamism, strategic integration and cultural assimilation, actually
operate in practice.
Siemens AG
The first case study offers a perspective from the chief informa-
tion officer (CIO). The CIO of Siemens of the Americas at the
time of this study was Dana Deasy, and his role was to introduce
and expand the use of e-business across 20 discrete businesses. The
Siemens Corporation worldwide network was composed of over 150
diverse sets of businesses, including transportation, healthcare, and
telecommunications. Deasy’ s mission was to create a common road
map across different businesses and cultures. What makes this case
so distinct from others is that each business is highly decentralized
under the umbrella of the Siemens Corporation. Furthermore, each
company has its own mission; the companies have never been asked
to come together and discuss common issues with regard to technol-
ogy. That is, each business focused on itself as opposed to the entire
organization. Deasy had to deal with two sectors of scope and hence,
two levels of learning: the Americas as a region and the global firm
internationally.
The challenge was to introduce a new e-business strategy from the
top-down in each business in the Americas and then to integrate it
with the global firm. Ultimately, the mission was to review what each
business was doing in e-business and to determine whether there was
an opportunity to consolidate efforts into a common direction.
IT was, for the most part, viewed as a back-office operation, handling services of the company as a support function as opposed to
thinking about ways to drive business strategy. In terms of IT report-
ing, most CIOs reported directly to the chief financial officer (CFO).
While some IT executives view this as a disadvantage because CFOs
are typically too focused on financial issues, Deasy felt that a focus on
cost containment was fine as long as the CIO had access to the chief
executive officer (CEO) and others who ultimately drove business
strategy. So, the real challenge was to ensure that CIOs had access to
the various strategic boards that existed at Siemens.
What are the challenges in transforming an organization the size
of Siemens? The most important issue was the need to educate CIOs
on the importance of their role with respect to the business as opposed
to the technology. As Deasy stated in an interview, “Business must
come first and we need to remind our CIOs that all technology issues
must refer back to the benefits it brings to the business.” The question
then is how to implement this kind of learning.
Perhaps the best way to understand how Siemens approached this
dilemma is to understand Deasy’s role as a corporate CIO. The reality is that there was no alternative but to create his position. What drove Siemens to this realization, according to Deasy, was fear: fear of losing competitive edge in this area, fear that the company was behind the competition and that smaller firms would begin to obtain more market share. Indeed, the growth
of e-business occurred during the dot-com era, and there were huge
pressures to respond to new business opportunities brought about by
emerging technologies, specifically the Internet. It was, therefore, a
lack of an internal capacity, such as responsive organizational dyna-
mism, that stimulated the need for senior management to get involved
and provide a catalyst for change.
Synergistic Union of IT
The first aspect of Siemens’s approach can be correlated to the
strategic integration component of responsive organizational dyna-
mism. We see that Siemens was concerned about whether technology
was properly being integrated in strategic discussions. It established
the Deasy role as a catalyst to begin determining the way technol-
ogy needed to be incorporated within the strategic dimension of the
business. This process cannot occur without executive assistance, so
evolutionary learning must first be initiated by senior management.
Unfortunately, Deasy realized early on that he needed a central pro-
cess to allow over 25 CIOs in the Americas to interact regularly. This
was important to understand the collective needs of the community
and to pave the way for the joining of technology and strategic inte-
gration from a more global perspective. Deasy established an infra-
structure to support open discourse by forming CIO forums, similar
to communities of practice, in which CIOs came together to discuss
common challenges, share strategies, and have workshops on the
ways technology could help the business. Most important at these
forums was the goal of consolidating their ideas and their common
challenges.
There are numerous discussions regarding the common problems that organizations face with IT expenditures, specifically the approach to their valuation and return on investment (ROI). While
there are a number of paper-related formulas that financial executives
use (e.g., percentage of gross revenues within an industry), Deasy uti-
lized learning theories, specifically, communities of practice, to foster
more thinking and learning about what was valuable to Siemens, as
opposed to using formulas that might not address important indi-
rect benefits from technology. In effect, Deasy promoted learning
among a relatively small but important group of CIOs who needed
to better understand the importance of strategic innovation and the
value it could bring to the overall business mission. Furthermore,
these forums provided a place where CIOs could develop their own
community—a community that allowed its members to openly par-
ticipate in strategic discourse that could help transform the organiza-
tion. It was also a place to understand the tacit knowledge of the CIO
organization and to use the knowledge of the CIOs to summarize
common practices and share them among the other members of the
community.
Information Technology
Most of the CIOs at Siemens found it challenging to understand
how their jobs were to be integrated into business strategy. Indeed,
this is not a surprise. In Chapter 1, I discuss the feedback from my
research on CEO evaluation of technology; I found that there were few
IT executives who were actually involved in business strategy. Thus,
the organization sought to create an advocate in the form of a centralized corporate headquarters that could provide assistance as opposed
to forcing compliance. That is, it sought a structure with which to
foster organizational learning concepts and develop an approach to
create a more collective effort that would result in global direction for
IT strategic integration.
To establish credibility among the CIO community, Deasy needed
to ensure that the CIOs of each individual company were able to inter-
act with board-level executives. In the case of Siemens, this board is called the President’s Council. The President’s Council holds regular meetings in which each president attends and receives presentations on ideas about the regional businesses. Furthermore, there are quarterly
CFO meetings as well, where CIOs can participate in understand-
ing the financial implications of their IT investments. At the same
time, these meetings provided the very exposure to the executive team
that CIOs needed. Finally, Deasy established a CIO advisory board
comprised of CIOs who actually vote on the common strategic issues
and thus manage the overall direction of technology at Siemens. Each
of these groups established different types of communities of practice
that focused on a specific aspect of technology. The groups were geared
to create better discourse and working relationships among these communities to, ultimately, improve Siemens’s competitive advantage.
The three communities of practice at work in the Siemens model—
executive, finance, and technology—suggest that having only one gen-
eral community of practice to address technology issues may be too
limiting. Thus, theories related to communities of practice may need
to be expanded to create discourse among multiple communities. This
might be somewhat unique for IT, not in that there is a need for mul-
tiple communities, but that the same individuals must have an identity
in each community. This shows the complexity of the CIO role today
in the ability to articulate technology to different types and tiers of
management. Figure 8.1 shows the interrelationships among the CIO
communities of practice at Siemens.
Another way to represent these communities of practice is to view
them as part of a process composed of three operating levels. Each level
represents a different strategic role of management that is responsible
for a unique component of discourse and for the authorization of uses
of technology. Therefore, if the three different communities of prac-
tice are viewed strategically, each component could be constructed as
a process leading to overall organizational cooperation, learning, and
strategic integration as follows:
Tier 1: CIO Advisory Board: This community discusses issues of technology standards, operations, communications, and initiatives that reflect technology-specific areas. Such issues are seen as CIO specific and only need this community’s agreement and justification. However, issues or initiatives that require financial approval, such as those that may not yet be budgeted or approved, need to be discussed with group CFOs. Proposals to executive management—that is, the President’s Council—also need prior approval from the CFOs.
Figure 8.1 Interrelationships among CIO communities of practice at Siemens. The figure links the corporate CIO of the Americas to three communities of practice: the President’s Council (presidents from each company; regular meetings are designed for discussion of common issues on business strategy, and corporate CIOs can use this forum to present new proposals on emerging technologies and seek approval for their plans and vision); the CFO quarterly meetings (CFOs from each company; discussions relate to how strategies can be implemented with respect to ROI, and CIOs need to understand IT costs, both direct and indirect); and the CIO advisory board (CIOs from each company; the forum is designed to openly discuss common challenges, agree on technology initiatives, foster a more united community, and build on shared knowledge across businesses).
Tier 2: CFO Quarterly: CFOs discuss new emerging technologies and ascertain their related costs and benefits (ROI). Those technologies that are already budgeted can be approved based on agreed ROI scenarios. Proposals for new technology projects are approved in terms of their financial viability and are prepared for further discussion at the President’s Council.
Tier 3: President’s Council: Proposals for new technology projects and initiatives are discussed with a focus on their strategic implications for the business and their expected outcomes.
Deasy realized that he needed to create a common connection
among these three communities. While he depended on the initia-
tives of others, he coordinated where these CIO initiatives needed to
be presented, based on their area of responsibility.
Graphically, this can be shown as a linear progression of commu-
nity-based discussions and approvals, as in Figure 8.2.
The common thread to all three tiers is the corporate CIO. Deasy
was active in each community; however, his specific activities within
each community of practice were different. CIOs needed to establish peer relationships with other CIOs, share their tacit knowledge, and contribute ideas that could be useful to other Siemens companies.
Thus, CIOs needed to transform their personal views of technology
and expand them to a group-level perspective. Their challenge was
to learn how to share concepts and how to understand new ones that
emanated at the CIO advisory board level. From this perspective,
they could create the link between the local strategic issues and those
discussed at the regional and global levels, as shown in Figure 8.3.
Using this infrastructure, Siemens’s organizational learning in technology occurred at two levels of knowledge management. The first is represented by Deasy’s position, which effectively represents a top-down structure to initiate the learning process. The second is the tiers of communities of practice when viewed hierarchically. This view
reflects a more bottom-up learning strategy, with technological oppor-
tunities initiated by a community of regional, company CIOs, each
representing the specific interests of their companies or specific lines
of business. This view can also be structured as an evolutionary cycle
in which top-down management is used to initiate organizational
learning from the bottom-up, the bottom, in this case, represented by
193sYnerGIstIC unIon oF It
local operating company CIOs. This means that the CIO is seen rela-
tively, in this case, as the lower of the senior management population.
Figure 8.4 depicts the CIO as this “ senior lower level.”
From this frame of reference, the CIO represents the bottom-up
approach to the support of organizational learning by addressing the
technology dilemma created by technological dynamism—specifically,
in this case, e-business strategy.
The role of IT in marketing and e-business was another important
factor in Siemens’s model of organizational learning.

Figure 8.2 Siemens’s community-based links. Under corporate CIO oversight and management, proposals flow through three tiers. Tier 1, the CIO advisory board, handles local or pre-budgeted, technology-specific implementation issues; items requiring financial approval move up to Tier 2. Tier 2, the CFO quarterly meetings, addresses budgeted but not yet approved implementations, with projects approved within budget constraints; items requiring strategic approval move up to Tier 3. Tier 3, the President’s Council, reviews proposals based on strategy and corporate direction and approves them for implementation, including financial commitment.

Figure 8.3 Siemens’s local-to-global links. The CIO advisory board addresses technology issues related to sharing across businesses, or issues for discussion that require consensus among the CIO population; the company president addresses company-specific strategic issues regarding how technology affects specific corporate goals and objectives; and the company CFO addresses financial implications and direct reporting at the company level.

Figure 8.4 The CIO as the “senior lower level.” The figure positions Dana Deasy alongside the president and executive management (the strategic senior management level) and the chief financial officer (the financial senior management level), with the local CIO at the senior lower level.

The technology strategy at Siemens was consistent with the overall objectives of the organization: to create a shared environment that complements each
business by creating the opportunity to utilize resources. This shared
environment became an opportunity for IT to lead the process and
become the main catalyst for change. I discuss this kind of support in
Chapter 5, in which I note that workers see technology as an accept-
able agent of change. Essentially, the CIOs were challenged with the
responsibility of rebranding their assets into clusters based on their
generic business areas, such as hospitals, medical interests, and com-
munications. The essence of this strategic driver was to use e-business
strategy to provide multiple offerings to the same customer base.
As with the Ravell case discussed in Chapter 1, the Siemens case
represents an organization that was attempting to identify the driver
component of IT. To create the driver component, it became necessary
for executive management to establish a corporate position (embodied
by Deasy) to lay out a plan for transformation, through learning and
through the use of many of the organizational learning theories pre-
sented in Chapter 4.
The Siemens challenge, then, was to transform its CIOs from being
back-office professionals to proactive technologists focused primarily
on learning to drive business strategy. That is not to say that back-office
issues became less important; they became, instead, responsibilities left
to the internal organizations of the local CIOs. However, back-office
issues can often become strategic problems, such as with the use of
e-mail. This is an example of a driver situation even though it still per-
tains to a support concern. That is, back-office technologies can indeed
be drivers, especially when new or emerging technologies are available.
As with any transition, the transformation of the CIO role was not
accomplished without difficulty. The ultimate message from executive
management to the CIO community was that it should fuse the vital
goals of the business with its technology initiatives. Siemens asked its
CIOs to think of new ways that technology could be used to drive
strategic innovations. It also required CIOs to change their behavior
by asking them to think more about business strategy.
The first decision that Deasy confronted was whether to change
the reporting structure of the CIO. Most CIOs at Siemens reported
directly to the CFO as opposed to the CEO. After careful thought,
Deasy felt that to whom the CIO reported was less important than giving CIOs access and exposure to the President’s Council meetings. It was Deasy’s perspective that only through exposure and experience could
CIOs be able to transform from back-office managers to strategic
planners. As such, CIO training was necessary to prepare them for
participation in communities of practice. Eventually, Siemens recog-
nized this need and, as a result, sponsored programs, usually lasting
one week, in which CIOs would be introduced to new thinking and
learning by using individual-based reflective practices. Thus, we see
an evolutionary approach, similar to that of the responsive organiza-
tional dynamism arc, presented in Chapter 4; that is, one that uses
both individual and organizational learning techniques.
Deasy also understood the importance of his relationship and role
with each of the three communities of practice. With respect to the
CEOs of each company, Deasy certainly had the freedom to pick up
the phone and speak with them directly. However, this was rarely a
realistic option as Deasy knew early on that he needed the trust and
cooperation of the local CIO to be successful. The community with
CEOs was then broadened to include CIOs and other senior manag-
ers. This was another way in which Deasy facilitated the interaction
and exposure of his CIOs to the executives at Siemens.
Disagreement among the communities can and does occur. Deasy believed in the “pushing-back” approach. This means that, inevitably, not everyone will agree to agree, and, at times, senior executives may
need to press on important strategic issues even though they are not
mutually in agreement with the community. However, while this type
of decision appears to be contrary to the process of learning embedded in communities of practice, it can be a productive and
acceptable part of the process. Therefore, while a democratic process
of learning is supported and preferred, someone in the CIO posi-
tion ultimately may need to make a decision when a community is
deadlocked.
The most important component of executive decision making is
that trust exists within the community. In an organizational learning
infrastructure, it is vital that senior management share in the value
proposition of learning with members of the community. In this way, members feel that they are involved and are a part of decision making, as opposed to feeling that they are part of a token effort that allows some level of participation. As Deasy stated, “I was not trying to create a corporate bureaucracy, but rather always representing myself as an ambassador for their interest; however, this does not
guarantee that I will always agree with them.” Disagreements, when managed properly, require patience, which can result in iterative discussions with members of the community before a consensus position is reached, if it is reached at all. Only after this iterative process is exhausted does a senior overarching decision need to be made. Deasy
attributed his success to his experience in field operations, similar to
those of his constituents. As a prior business-line CIO, he understood
the dilemma that many members of the community were facing.
Interestingly, because of his background, Deasy was able to “qualify” as a true member of the CIO community of practice. This fact establishes an important part of knowledge management and change
management—senior managers who attempt to create communities
of practice will be more effective when they share a similar back-
ground and history with the community that they hope to manage.
Furthermore, leaders of such communities must allow members to
act independently and not confuse that independence with autonomy.
Finally, managers of communities of practice are really champions
of their group and as such must ensure that the trust among mem-
bers remains strong. This suggests that CIO communities must first
undergo their own cultural assimilation to be prepared to integrate
with larger communities within the organization.
Another important part of Deasy’s role was managing the technology itself. This part of his job required strategic integration in that
his focus was more about uses of technology, as opposed to commu-
nity behavior or cultural assimilation. Another way of looking at this
issue is to consider the ways in which communities of practice actually
transform tacit knowledge and present it to senior management as
explicit knowledge. This explicit knowledge about uses of technology
must be presented in a strategic way and show the benefits for the
organization. The ways that technology can benefit a business often
reside within IT as tacit knowledge. Indeed, many senior manag-
ers often criticize IT managers for their inability to articulate what
they know and to describe it so that managers can understand what it
means to the business. Thus, IT managers need to practice transform-
ing their tacit knowledge about technology and presenting it effec-
tively, as it relates to business strategy.
Attempting to keep up with technology can be a daunting, if not
impossible, task. In some cases, Siemens allows outside consultants
to provide help on specific applications if there is not enough
expertise within the organization. The biggest challenge, however,
is not necessarily in keeping up with new technologies but rather, in
testing technologies to determine exactly the benefit they have on
the business. To address this dilemma, Deasy established the concept of “revalidation.” Specifically, approved technology projects
are reviewed every 90 days to determine whether they are indeed
providing the planned outcomes, whether new outcomes need
to be established, or whether the technology is no longer useful.
The concept of revalidation can be associated with my discussion in Chapter 3, which introduced the concept of “driver” aspects of
technology. This required that IT be given the ability to invest and
experiment with technology to fully maximize the evaluation of
IT in strategic integration. This was particularly useful to Deasy,
who needed to transform the culture at Siemens to one that rec-
ognized that not all approved technologies succeed. In addition,
he needed to dramatically alter the application development life
cycle and reengineer the process of how technology was evaluated
by IT and senior management. This challenge was significant in
that it had to be accepted by over 25 autonomous presidents, who
were more focused on short and precise outcomes from technology
investments.
Deasy was able to address the challenges that many presidents
had in understanding IT jargon, specifically as it related to ben-
efits of using technology. He engaged in an initiative to communi-
cate with non-IT executives by using a process called storyboarding.
Storyboarding is the process of creating prototypes that allow users to
actually see examples of technology and how it will look and operate.
Storyboarding tells a story and can quickly educate executives without
being intimidating. Deasy’s process of revalidation had its own unique life cycle at Siemens:
1. Create excitement through animation. What would Siemens be like if…?
2. Evaluate the way the technology would be supported.
3. Recognize implementation considerations about how the
technology as a business driver is consistent with what the
organization is doing and experiencing.
4. Technology is reviewed every 90 days by the CIO advisory
board after experimental use with customers and presented to
the president’ s council on an as-needed basis.
5. Establish responsive organizational dynamism with cultural assimilation; that is, recognize the instability of technology and that there are no guarantees of planned outcomes. Instead, encourage business units to understand the concept of “forever prototyping.”
Thus, Siemens was faced with the challenge of cultural assimi-
lation, which required dramatic changes in thinking and business
life cycles. This process resembles Bradley and Nolan’s (1998) Sense and Respond—the ongoing sensing of technology opportunities and responding to them dynamically. This process disturbs traditional and
existing organizational value chains and therefore represents the need
for a cultural shift in thinking and doing. Deasy, using technology as
the change variable, began the process of reinventing the operation of
many traditional value chains.
Siemens provides us with an interesting case study for responsive
organizational dynamism because it had so many diverse companies
(in over 190 countries) and over 425,000 employees. As such, Siemens
represents an excellent structure to examine the importance of cul-
tural assimilation. Deasy, as a corporate CIO, had a counterpart in
Asia/Australia. Both corporate CIOs reported to a global CIO in
Germany, the home office of Siemens. There was also a topic-centered
CIO responsible for global security and application-specific planning
software. This position also reported directly to the global CIO. There
were also regional and local CIOs, who focused on specific geographical areas and vertical lines of business, as well as operating company CIOs. This organization is shown in Figure 8.5.
Deasy’ s operation represents one portion (although the most
quickly changing and growing) of Siemens worldwide. Thus, the issue
of globalization is critical for technologies that are scalable beyond
regional operating domains. Standardization and evaluations of tech-
nology often need to be ascertained at the global level and as a result
introduce new complexities relating to cultural differences in business
methods and general thinking processes. Specifically, what works in
one country may not work the same way in another. Some of these
matters can be legally based (e.g., licensing of software or assumptions
about whether a technology is legally justified). To a large extent, solv-
ing legal matters relating to technology is easier than cultural ones.
Cultural assimilation matters about technology typically occur
in global organizations with respect to acceptability of operational
norms from one country to another. This becomes a particularly dif-
ficult situation when international firms attempt to justify standards.
At Siemens, Deasy introduced three “standards” of technology that defined how it could be used across cultures and communities of practice:
1. Corporate services: These are technologies that are required to be used by the business units. There are central service charges for their use as well.
2. Mandatory services: Everyone must comply with using a particular type of application; that is, mandatory software based on a specific type of application. For example, if you use a Web browser, it must be Internet Explorer.
3. Optional: These are technologies related to a specific business and used only within a local domain. There may be a preferred solution, but IT is not required to use it.
This matrix of standards allows for a culture to utilize technologies
that are specific to its business needs, when justified. Standards at
Siemens are determined by a series of steering committees, starting
at the regional level, that meet two to three times annually.

Figure 8.5 Siemens’s CIO organization. The Siemens global CIO, based in Germany, oversees a topic-centered CIO, the corporate CIO for the Americas (Deasy), and the corporate CIO for Asia/Australia; regional CIOs and operating company CIOs report within each corporate CIO’s structure.

Without
question, implementing standards across cultures is, as Deasy phrased it, “a constant wrestling match which might need to change by the time a standard is actually reached.” This is why strategic integration is so important, given the reality that technology cannot always
be controlled or determined at senior levels. Organizations must be
able to dynamically integrate technology changes parallel to business
changes.
Deasy’s longer-term mission was to provide a community of CIOs who could combine the business and technology challenges. It was his initial vision that the CIO of the future would be more involved than before with marketing and value chain creation. He felt that “the CIO community needed to be detached from its technology-specific issues or they would never be a credible business partner.” It was his intent to establish organizational learning initiatives that helped CIOs “seize and succeed,” to essentially help senior management by creating vision and excitement, by establishing best practices, and by learning better ways to communicate through open discourse in communities of practice.
Three years after his initial work, I reviewed the progress that
Deasy had made at Siemens. Interestingly, most of his initiatives
had been implemented and were maturing—except for the role of
e-business strategy. I discovered, after this period, that the orga-
nization thought that e-business was an IT responsibility. As such, they expected that the CIOs would be able to determine the best business strategy. This was a mistake; the CIO could not establish strategy but rather needed to react to the strategies set forth
by senior management. This means that the CIO was not able to
really establish stand-alone strategies as drivers based on technology
alone. CIOs needed, as Deasy stated, “to be a participant with the business strategist and to replace this was inappropriate.” This raises
a number of questions:
1. Did this occur because CIOs at Siemens do not have the edu-
cation and skills to drive aspects of business strategy?
2. Did the change in economy and the downfall of the dot-coms
create a negative feeling toward technology as a business
driver?
3. Are CEOs not cognizant enough about uses of technology,
and do they need better education and skills to better under-
stand the role of technology?
4. Is the number of communities of practice across the organi-
zation integrated enough so that IT can effectively commu-
nicate and form new cultures that can adapt to the changes
brought about by emerging technologies?
5. Is there too much impatience with the evolution of tech-
nology? Does its assimilation in an organization the size of
Siemens simply take too long to appreciate and realize the
returns from investments in technology?
I believe that all of these questions apply, to some extent, and are
part of the challenges that lie ahead at Siemens. The company has now
initiated a series of educational seminars designed to provide more
business training for CIOs, which further emphasizes the importance
of focusing on business strategy as opposed to just technology. It could also mean the eventual establishment of a new “breed” of CIOs who
are better educated in business strategy. However, it is inappropriate
for non-IT managers to expect that the CIOs will be able to handle
strategy by themselves; they must disconnect e-business as solely being
about technology. The results at Siemens only serve to strengthen the
concept that responsive organizational dynamism requires that cul-
tural assimilation occur within all the entities of a company.
Aftermath
Dana Deasy left Siemens a few years after this case study was com-
pleted. During that time, the executive team at Siemens realized that
the CIO alone could not provide business strategy or react quickly
enough to market needs. Rather, such strategy required the integra-
tion of all aspects of the organization, with the CIO only one part of
the team to determine strategic shifts that lead or use components of
technology. Thus, the executives realized that they needed to become
much better versed in technology so that they also could engage in
strategic conversations. This does not suggest that executives needed
technology training per se, but that they do need training that allows
them to comment intelligently on technology issues. What is the best
way to accomplish this goal? The answer is through short seminars
that can provide executives with terminology and familiarize them
with the processes their decisions will affect. The case also raised the
question of whether a new wave of executives would inevitably be
required to move the organization forward to compete more effec-
tively. While these initiatives appear to make sense, they still need to
address the fundamental challenges posed by technology dynamism
and the need to develop an organization that is positioned to respond
(i.e., responsive organizational dynamism). We know from the results
of the Ravell case that executives cannot be excluded. However, the
case also showed that all levels of the organization need to be involved.
Therefore, the move to responsive organizational dynamism requires
a reinvention of the way individuals work, think, and operate across
multiple tiers of management and organizational business units. This
challenge will continue to be a difficult but achievable objective of
large multinational companies.
ICAP
This second case study focuses on a financial organization called ICAP,
a leading money and securities broker. When software development
exceeded 40% of IT activities, ICAP knew it was time to recognize
IT as more than just technical support. Stephen McDermott provided
the leadership, leaving his role as CEO of the Americas at ICAP to
become CEO of the Electronic Trading Community (ETC), a new
entity focused solely on software development. This IT community
needed to be integrated with a traditional business model that was
undergoing significant change due to emerging technologies, in this
specific case, the movement from voice to electronic trading systems.
This case study reflects many aspects of the operation of responsive
organizational dynamism. From the strategic integration perspec-
tive, ICAP needed to understand the ways electronic trading could
ultimately affect business strategy. For example, would it replace all
voice-related business interactions, specifically voice trading? Second,
what would be the effect on its culture, particularly with respect to the
way the business needed to be organizationally structured? This study
focuses on the role of the CEO as a pioneer in reexamining his own
biases, which favored an old-line business process, and for developing
the realization needed to manage a major change in business strategy and
organizational philosophy. Indeed, as McDermott stated, “It was the
challenge of operating at the top, yet learning from the bottom.” This
sentiment essentially reflects the reality of a management dilemma.
Could a CEO who, without question, had substantial knowledge of
securities trading, learn to lead a technology-driven operation, for
which he had little knowledge and experience?
To better understand the impact of technology on the business of
ICAP, it is important to have some background information. Since
1975, the use of technology at ICAP was limited to operations of
the back-office type. Brokers (the front-end or sales force of a trading
business) communicated with customers via telephone. As such,
processing transactions was always limited to the time necessary
to manually disseminate prices and trading activity over the phone
to a securities trader. However, by 1997 a number of technological
advancements, particularly with the proliferation of Internet-based
communication and the increased bandwidth available, enabled bro-
kers and dealers to communicate bidirectionally. The result was that
every aspect of the trade process could now be streamlined, includ-
ing the ability for the trader to enter orders directly into the brokers’
trading systems. Until these technological advancements and the availability
of capital in the mid-1990s, it had been difficult to invest in computer
operations. Specifically, the barriers to investing in technology had
been high: developing proprietary trading systems and deploying a
private network were costly. The market of available products was
thin, populated by relatively small vendors offering little more than a
concept rather than an integrated product that could do what a company
like ICAP needed to maintain its competitive position.
The existing system, called the ICAP Trading Network application,
was far from a trading system that could compete against the newer
emerging technologies. The goal was to develop a new trading sys-
tem that would establish an electronic link between the back-office
systems of ICAP and its clients. The system would need to be simple
to use, as the traders were not necessarily technology literate. It would
need to be robust, include features specific to the markets,
and be easily installed and distributed. In addition, as ICAP decided
to fund the entire project, it would have to be cost-effective and not
burden the other areas of the business. As competitive systems were
already being introduced, the new system needed to be operational
within three to six months for ICAP to remain competitive.
McDermott recognized that designing a new product would require
that IT developers and business subject-matter experts learn to work together.
As a result of this realization, a representative from the operation was
selected to see if a third-party developer could modify an existing
product. After exploring and evaluating responses, the search team
concluded that no off-the-shelf solution was available that could meet
the critical timing needs of the business at an acceptable cost.
However, during the period when IT and the business users worked
together, these groups came to realize that the core components of
its own trading system could be modified and used to build the new
system. This realization resulted from discussions between IT and the
business users that promoted organizational learning. This process
resembles the situation in the Ravell study, in which I concluded that
specific events could accelerate organizational learning and actually
provide an opportunity to embed the process in the normal discourse
of an organization. I also concluded that such learning starts with
individual reflective practices, and understanding how both factions,
in this case, IT and the business community, can help each other in a
common cause. In the case of Ravell, it was an important relocation
of the business that promoted integration between IT and the busi-
ness community. At ICAP, the common cause was about maintaining
competitive advantage.
The project to develop the new electronic trading application was
approved in August 1999, and the ETC was formed. The new entity
included an IT staff and selected members from the business commu-
nity, who moved over to the new group. Thus, because of technologi-
cal dynamism, it was determined that the creation of a new product
established the need for a new business entity that would form its
own strategic integration and cultural assimilation. An initial test of
the new product took place in November, and it successfully executed
the first electronic trade via the Internet. In addition to its design
responsibility, ETC was responsible for marketing, installing, and
training clients on the use of the product. The product went live in
February 2000. Since its introduction, the ETC product has been
modified to accommodate 59 different fixed-income products, serving
more than 1,000 users worldwide in multiple languages.
While the software launch was successful, McDermott’s role was
a challenge, from coordinating the short- and long-term goals of
ETC with the traditional business models of ICAP to shifting from
management of a global financial enterprise to management of an IT
community. The ICAP case study examines the experiences and per-
ceptions one year after the launch of the new entity.
The first and most daunting result, after a year of operations, was
the significant growth of technology use in the business. McDermott
noted that electronic trading had initially been about 40% of operations
and had grown to over 60%. He stated that ETC had
become, without question, the single most important component of
the ICAP international business focus. The growth of electronic trad-
ing created an accelerated need for transformation within ICAP and
its related businesses. This transformation essentially changed the
balance between voice or traditional trading and electronic trading.
McDermott found himself responsible for much of this transforma-
tion and was initially concerned whether he had the technical exper-
tise to manage it.
McDermott admitted that as a chief executive of the traditional
ICAP business, he was conservative and questioned the practicality
and value of many IT investments. He often turned down requests
for more funding and looked at technology as more of a supporter of
the business. As I explain in Chapter 3, IT as a supporter will always
be managed based on efficiencies and cost controls. McDermott’s
view was consistent with this position. In many ways, it was ironic
that he became the CEO of the electronic component of the business.
Like many CEOs, McDermott initially had the wrong impression of
the Internet. Originally looking at it as a “big threat,” he eventually
realized from the experience that the Internet was just another way
of communicating with his clients and that its largest contribution
was that it could be done more cost-effectively, thus leading to higher
profits.
One of the more difficult challenges for McDermott was develop-
ing the mission for ETC. At the time of the launch of the new product,
this mission was unclear. With the assistance of IT and the business
community, the mission of ETC developed dynamically; the business
first sought to protect itself from outside competition. Companies like
IBM, Microsoft, and others might attempt to
invade the business market of ICAP. Thus, it is important that ETC
continues to produce a quality product and keep its competitive edge
over more limited competitors that are software-based organizations
only. The concept of a dynamic mission can be correlated to the fun-
damental principles of responsive organizational dynamism. In fact, it
seems rather obvious that organizations dealing with emerging tech-
nologies might need to modify their missions to parallel the acceler-
ated changes brought about by technological innovation. We certainly
see this case with ICAP, for which the market conditions became
volatile because of emerging electronic trading capacities. Why, then,
is it so difficult for organizations to realize that changing or modify-
ing their missions should not be considered that unusual? Perhaps the
approach of ICAP in starting a completely separate entity was correct.
However, it is interesting that this new organization was operating
without a consistent and concrete mission.
Another important concept that developed at ETC was that
technology was more of a commodity and that content (i.e., the dif-
ferent services offered to clientele) was more important. Indeed, as
McDermott often stated, “I assume that the technology works, the
real issue is the way you intend to implement it; I want to see a com-
pany’ s business plan first.” Furthermore, ETC began to understand
that technology could be used to leverage ICAP businesses in areas
that they had never been able to consider before the advent of the
technology and the new product. McDermott knew that this was
a time, as Deasy often stated, to “seize and succeed” the moment.
McDermott also realized that organizational learning practices were
critical for ideas to come from within the staff. He was careful not
to require staff to immediately present a formal new initiative, but
he allowed them to naturally develop a plan as the process became
mature. That is one of the reasons that ETC uses the word community
in its name. As he expressed it to me during a conversation:
Now that is not my mandate to grow into other areas of opportunity, my
initial responsibility is always to protect our businesses. However, I will
not let opportunities go by which can help the business grow, especially
things that we could never do as a voice broker. It has been very exciting
and I can see ICAP becoming a considerably larger company than we
have been historically because of our investment in technology.
McDermott also was challenged to learn what his role would be as
a chief executive of a software technology organization. In the early
stages, he was insecure about his job because for the first time he
knew less than his workers about the business. Perhaps this provides
organizational learning practitioners with guidance on the best way
of getting the CEO engaged in the transformative process; that is,
getting the CEO to understand his or her role in an area in which,
typically, he or she does not have expertise. McDermott represented
an executive who reached that position coming up through the ranks.
Therefore, much of his day-to-day management was based on his
knowledge of the business—a business that he felt he knew as well as
anyone. With technology, and its effect as technological dynamism,
CEOs face more challenges, not only because they need to manage
an area they may know little about but also because of the dynamic
aspects of technology and the way it causes unpredictable and acceler-
ated change. McDermott realized this and focused his attention on
discovering what his role needed to be in this new business. There
was no question in McDermott’s mind that he needed to know more
about technology, although he also recognized that management was
the fundamental responsibility he would have with this new entity:
[Although] I was insecure at the beginning I started to realize that it does
not take a genius to do my job. Management is management, and whether
you manage a securities brokering firm or you manage a deli or manage
a group of supermarkets or an IT or an electronic company, it is really
about management, and that is what I am finding out now. So, whether
I am the right person to bring ETC to the next level is irrelevant at this
time. What is more important is that I have the skills that are necessary to
manage the business issues as opposed to the technological ones.
However, McDermott did have to make some significant changes
to operate in a technology-based environment. ETC was now des-
tined to become a global organization. As a result, McDermott had
to create three senior executive positions to manage each of the three
major geographic areas of operation: North America, Europe, and
Asia. He went from having many direct reports to having just a
few. He needed four or five key managers. He needed to learn to trust
that they were the right people, people who had the ability to nurture
the parts of each of their respective divisions. “What it leaves now is
being a true CEO,” he stated, “and that means picking your people,
delegating the responsibility and accepting that they know the business.”
Thus, we see technological dynamism actually realigning the
reporting structure and social discourse of the company.
My presentation in previous chapters focused on helping orga-
nizations transform and change. Most important in organizational
learning theories is the resistance to change that most workers have,
particularly when existing cultural norms are threatened. ICAP was
no exception to the challenges of change management. The most sig-
nificant threat at ICAP was the fear that the traditional voice bro-
ker was endangered. McDermott understood this fear factor and
presented electronic trading not as a replacement but rather as a
supplement to the voice broker. There was no question that certain
areas of the business lent themselves more to electronic trading;
however, there were others that would never go electronic, or at
least not predominantly so. Principles of responsive organizational
dynamism suggest that accelerated change becomes part of the stra-
tegic and cultural structure of an organization. We see both of these
components at work in this case.
Strategically, ICAP was faced with a surge in business opportuni-
ties that were happening at an accelerated pace and were, for the most
part, unplanned. The business was
feeling its way through its own development, and its CEO was pro-
viding management guidance, as opposed to specific solutions. ICAP
represents a high-velocity organization similar to those researched by
Eisenhardt and Bourgeois (1988), and supports their findings that a
democratic, less power-centralized management structure enhances
the performance of such a firm. From a cultural assimilation perspec-
tive, the strategic decisions are changing the culture and requiring new
structures and alignments. Such changes are bound to cause fears.
As a result of recognizing the inevitable changes that were becom-
ing realities, McDermott reviewed the roles and responsibilities of
his employees on the brokering side of the business. After careful
analysis, he realized that he could divide the brokers into three dif-
ferent divisions, which he branded as A, B, and C brokers. The A
brokers were those who were fixed on the relational aspect of their
jobs, so voice interaction was the only part of their work world. Such
individuals could do things in the voice world that electronic means
could not reach. They were personal experts, if you will, who could
deal with clients requiring a human voice. Thus, the A broker would
exist as long as the broker wanted to work—and would always be
needed because a population of clients wants personal support over
the phone. This is similar to resistance to the Internet: some portion
of the population will never use e-commerce because they prefer a
live person. The B broker was called the
hybrid broker—an individual who could use both voice and electronic
means. Most important, these brokers were used to “convert” voice-based
clients into electronic ones. As McDermott explained:
Every day I see a different electronic system that someone is trying to
sell in the marketplace. Some of these new technologies are attempting
to solve problems that do not exist. I have found that successful systems
address the content more than the technology. Having a relationship for
many of our customers is more important. And we can migrate those
relationships from voice to electronic or some sort of a hybrid combi-
nation. The B brokers will end up with servicing some combination of
these relationships or migrate themselves to the electronic system. So, I
believe they have nothing to fear.
The C brokers, on the other hand, represented the more average
voice brokers who would probably not have a future within the busi-
ness. They would be replaced by electronic trading because they did
not bring the personal specialization of the A broker. The plight of
the C broker did raise an important issue about change management
and technological dynamism: Change will cause disruption, which
can lead to the elimination of jobs. This only further supported the
fears that workers had when faced with dynamic environments. For
McDermott, this change would need to be openly discussed with the
community, especially for the A and B brokers, who in essence would
continue to play an important role in the future of the business. C
brokers needed to be counseled so that they could appropriately seek
alternate career plans. Thus, honesty brings forth trust, which inevi-
tably fosters the growth of organizational learning. Another perspec-
tive was that the A and B brokers understood the need for change
and recognized that not everyone could adapt to new cultures driven
by strategic integration, so they understood why the C broker was
eliminated.
In Chapter 2, I discussed the dilemma of IT as a “marginalized”
component of an organization. This case study provides an opportunity
to understand how the traditional IT staff at ICAP made the transi-
tion into the new company—a company in which they represented a
direct part of its success. As noted, ICAP considered the IT depart-
ment as a back-office support function. In the new organization, it
represented the nucleus or the base of all products and careers. Hence,
McDermott expected ETC employees to be technology proficient. No
longer were IT people just coders or hardware specialists—he saw tech-
nology people as lawyers, traders, and other businesspeople. He related
technology proficiency in a similar way to how his business used to
view a master’s degree in business administration (MBA) in the late 1980s. This issue
provides further support for the cultural assimilation component of
responsive organizational dynamism. We see a situation in which the
distinction between who is and is not a technology person is beginning
to dwindle in importance. While there is still a clear need for expertise
and specialization, the organization as a whole has started the process
of educating itself on the ways in which technology affects every aspect
of its corporate mission, operations, and career development.
ICAP has not been immune to the challenges that have faced most
technology-driven organizations. As discussed in Chapter 2, IT proj-
ects typically face many problems in terms of their ability to complete
projects on time and within budget. ICAP was also challenged with
this dilemma. Indeed, ICAP had no formal process but focused on
the criterion of meeting the delivery date as the single most important
issue. As a result, McDermott was attempting to instill a new culture
committed to the importance of what he called the “real date of delivery.”
It was a challenge to change an existing culture that had difficulty
with providing accurate dates for delivery. As McDermott suggested:
I am learning that technology people know that there is no way that they
can deliver an order in the time requested, but they do not want to disap-
point us. I find that technology people are a different breed from the peo-
ple that I normally work with. Brokers are people looking for immediate
gratification and satisfaction. Technology people, on the other hand, are
always dedicated to the project regardless of its time commitment.
McDermott was striving to attain a mix or blend of the traditional
culture with the technology culture and create a new hybrid organiza-
tion capable of developing realistic goals and target dates. This process
of attainment mirrors the results from the Ravell case, which resulted
in the formation of a new hybrid culture after IT and business staff
members were able to assimilate one another and find common needs
and uses for technology and the business.
McDermott also understood his role as a leader in the new orga-
nization. He realized early on that technology people are what he
called more “individualistic”; that is, they seemingly were reluctant to
take on responsibility for other people. They seemed, as McDermott
observed, “to have greater pleasure in designing and creating something
and they love solving problems.” This was different from what
CEOs experienced with MBAs, who were taught more to lead a group
as opposed to being taught to solve specific problems. Yet, the integra-
tion of both approaches can lead to important accomplishments that
may not be reachable while IT and non-IT are separated by depart-
mental barriers.
Ultimately, the cultural differences and the way they are managed
lead to issues surrounding the basis of judging new technologies for
future marketing consideration. McDermott understood that this was
a work in progress. He felt strongly that the issue was not technology,
but that it was the plan for using technology competitively. In other
words, McDermott was interested in the business model for the tech-
nology that defined its benefits to the business strategically. As he put
it, “Tell me how you are going to make money, tell me what you can
do for me to make my life easier. That is what I am looking at!” While
McDermott felt that many people were surprised by his response, he
believed its reality was taken too much for granted. During the dot-
com era, too many investors and businesses assumed that technologi-
cal innovation would somehow lead to multiples of earnings—that
simply did not happen. Essentially, McDermott realized that good
technology was available in many places and that the best technology
is not necessarily the one that will provide businesses with the highest
levels of success.
Judging new technologies based on the quality of the business plan
is an effective method of emphasizing the importance of why the
entire organization needs to participate and understand technology.
This inevitably leads to questions about the method by which return
on investment (ROI) is, or should be, measured. The actual measurement of ROI for ICAP
was remarkably simple yet effective. There were four methods of
determining ROI. The first and most significant was whether the
technology would increase volume of trades along the different prod-
uct lines. The second was the amount in dollars of the securities being
traded. That is, did technology provide a means for clients to do larger
dollar trades? The third factor could be an increase in the actual num-
ber of clients using the electronic system. The fourth might be allevi-
ating existing bottlenecks in the voice trading process, whether it was
a legal issue or the advantage provided by having electronic means.
We see here that some of the ROI factors are direct and monetary. As
expected, the first and second methods were direct monetary ways
to see the return for investing in electronic trading systems.
However, as Lucas (1999) reminds us, many benefits derived from IT
investments are indirect, and some are impossible to measure. We see
this with the third and fourth methods. Increasing the number of cli-
ents indirectly suggested more revenue, but did not guarantee it. An
even more abstract benefit was the improvement of throughput, what
is typically known as improved efficiency in operations.
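The four ROI factors above can be summarized as a simple checklist. The sketch below is purely illustrative; the case does not report ICAP using any such tool, and the function and parameter names are assumptions introduced here:

```python
# Hypothetical sketch of the four ROI factors described above (not an actual
# ICAP tool; the function and parameter names are illustrative assumptions).
# The first two factors are direct and monetary; the last two are indirect.

def evaluate_roi(trade_volume_growth_pct, trade_dollar_growth_pct,
                 new_electronic_clients, bottlenecks_alleviated):
    """Return which of the four ROI factors an investment satisfies."""
    return {
        "increased trade volume (direct)": trade_volume_growth_pct > 0,
        "larger dollar trades (direct)": trade_dollar_growth_pct > 0,
        "more electronic clients (indirect)": new_electronic_clients > 0,
        "bottlenecks alleviated (indirect)": bottlenecks_alleviated > 0,
    }

# Example: volume up 12%, trade size up 5%, 40 new clients, 2 bottlenecks removed
factors = evaluate_roi(12.0, 5.0, 40, 2)
print(sum(factors.values()), "of 4 ROI factors satisfied")
```

Note that the two indirect factors only flag potential rather than guaranteed revenue, which is consistent with Lucas’s (1999) point that many IT benefits are indirect or unmeasurable.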
While all of the accomplishments of ICAP and McDermott seem
straightforward, they were not accomplished without challenges;
perhaps the most significant was the approach, determination, and
commitment that were needed by the executive team. This chal-
lenge is often neglected in the literature on organizational learning.
Specifically, the executive board of ETC needed to understand what
was necessary in terms of funding to appropriately invest in the future
of technology. To do that, they needed to comprehend what e-business
was about and why it was important for a global business to make seri-
ous investments in it to survive. In this context, then, the executive
board needed to learn about technology as well and found themselves
in a rather difficult position. During this period, McDermott called
in an outside consultant who could provide a neutral and objective
opinion. Most important was to define the issue in lay terms so that
board members could correlate it with their traditional business mod-
els. Ultimately, the learning consisted of understanding that technol-
ogy and e-commerce were about expanding into more markets, ones
that ICAP could not reach using traditional approaches. There was a
realization that ICAP was too focused on its existing client base, as
opposed to reaching out for new ones—and there was also the reverse
reality that a competitor would figure out a strategy to reach out to the
client base of ICAP. What is also implied in expanding one’s client
base is going outside one’s existing product offerings.
This had to be carefully planned as ICAP did not want to venture
outside what it was—an intermediary brokering service. So, expan-
sion needed to be carefully planned and discussed first among the
executive members, then presented as a challenge to the senior man-
agement, and so on.
This process required some modifications to the organizational
learning process proposed by such scholars as Nonaka and Takeuchi
(1995). Specifically, their models of knowledge management do not
typically include executive boards; thus, boards are not considered a
part of the learning organization. The ICAP case study exposes the fact
that this exclusion can be a serious limitation, especially with respect
to the creation of responsive organizational dynamism. In previous
chapters, I presented a number of management models that could be
used to assist in developing and sustaining organizational learning.
They focused fundamentally on the concept of whether such manage-
ment should be top-down, bottom-up, or, as Nonaka and Takeuchi
suggest, “middle-up-down.” I laid out my case for a combination of
all of them in a specific order and process that could maximize each
approach. However, none of these models really incorporates the out-
side executive boards that have been challenged to truly understand
what technology is about, their approach to management, and what
their overall participation should be in organizational learning.
Perhaps the most significant historical involvement of executive
boards was with the Year 2000 (Y2K) event. With this event, executive
boards mandated that their organizations address the potential tech-
nology crisis at the turn of the century. My CEO interviews verified
that, if anything, the Y2K crisis served to educate executive boards by
forcing them to focus on the issue. Boards became unusually involved
with the rest of the organization because independent accounting firms,
as outside objective consultants, were able to expose the risks for not
addressing the problem. The handling of e-commerce by ICAP was
in many ways similar but also suggests that executive boards should
not always wait for a crisis to occur before they get involved. They also
must be an important component of organizational learning, particu-
larly in responsive organizational dynamism. While organizational
learning fosters the involvement of the entire community or workers,
it also needs advocates and supporters who control funding. In the case
of ICAP, organizational learning processes without the participation
of the executive board ultimately would not have been successful. The
experience of ICAP also suggests that this educational and learning
process may need to come from independent and objective sources,
which integrates another component of organizational learning that
has not been effectively addressed: the role of outside consultants as a
part of a community of practice. Figure 8.6 depicts the addition of the
ICAP ETC executive board and outside consultants in the organiza-
tional learning management process.
The sequential activities that occurred among the different communi-
ties are shown in Table 8.1. While Table 8.1 shows the sequential steps
necessary to complete a transformation toward strategic integration and
cultural assimilation, the process is also very iterative. Specifically, this
means that organizations do not seamlessly move from one stage to
another without setbacks. Thus, transformation depends heavily on dis-
course as the main driver for ultimate organizational evolution.
Figure 8.7 shows a somewhat messier depiction of organizational
learning under the auspices of ROD. The changes brought on by
dynamic interactions foster top-down, middle-up-down, and bottom-
up knowledge management techniques—all occurring simultane-
ously. This level of complex discourse creates a number of overlapping
communities of practice that have similar, yet unique, objectives in
learning. These communities of practice overlap at certain levels as
shown in Figure 8.8.
As stated, organizational learning at the executive levels tends to
be ignored in the literature. At ICAP, an important community of
practice emerged that created a language discourse essential to its
overall success in dealing with technological dynamism, brought on
by technological innovation in electronic communications. Language
was critical at this level; ICAP is a U.K.-based organization and as
such has an international board. As McDermott explained:
As you know, from travelling anywhere around the world, cultures are
different. And even the main office for our company, ICAP in England,
and even with the English, we are separated by a common language, as
we often say. There is a very, very different culture everywhere in the
world. I will tell you that information technology in our company is
separated from electronic trading—there is a difference.
Thus, McDermott’s challenge was to establish a community
that could reach consensus not only on strategic issues but also
Figure 8.6 ICAP ETC management tiers. [The figure depicts the ICAP ETC executive board, independent consultants, the CEO, senior management, middle management, and operations, together with the flows among them: advisement on e-commerce business opportunities; changes to the corporate mission, financial commitments, and approval of major organizational changes such as forming ETC; senior management strategy and organization for creating the new corporate entity; middle-management implementation schedules and personnel changes developed through group discourse; operations-level reflective practice and tacit knowledge of how to implement changes; and buy-in that modifies strategic and implementation plans based on discourse and knowledge of the business.]
217sYnerGIstIC unIon oF It
Table 8.1 ICAP—Steps to Transformation
STEP LEARNING ENTITY(S) LEARNING ACTIVITY
1 CEO Americas Initiates discourse at board level on
approaches to expanding electronic trading
business
2 Executive board Decides to create separate corporate entity
ETC to allow for the establishment of a new
culture
3 Outside consultant E-commerce discourse, ways in which to
expand the domain of the business
4 Executive board Discussion of corporate realignment of
mission, goals, and objectives
5 CEO/senior management Establishes strategic direction with senior
management
6 Senior management/middle
management
Meet to discuss and negotiate details of the
procedures to implement
7 Middle management/operations
communities
Meet with operations communities to discuss
impact on day-to-day processes and
procedures
Discourse initiated
Discourse on
“how” to
implement
Rollout of new
organization and
strategies
Interactive discussions
Questions and responsesExecutive
board
CEO
Americas
Senior
management
Middle
management
Operations
Operations adjustments
based on reflective practices
Adjustments as a result of
discourse with operations
community
New ideas and
adjustments
Objective advice
and education
Objective advice
and education
Meetings and
discussions on
day-to-day
operations
Consultants
Figure 8.7 ICAP—responsive organizational dynamism.
218 InForMAtIon teChnoloGY
on the very nomenclature applied to how technology was defined
and procedures adopted among the international organizations
within ICAP. That is why outside consultation could be effective
as it provided independent and objective input that could foster
the integration of culture-based concepts of technology, strategy,
and ROI. Key to understanding the role of executive communi-
ties of practice is their overall importance to organizational learn-
ing growth. Very often we have heard, “ Can we create productive
discourse if the executive team cannot discuss and agree on issues
themselves?” Effectively, ICAP created this community to ensure
consistency among all the levels within the business. Consistent
with the responsive organizational dynamism arc, learning in this
community was at the “ system” or organizational level, as opposed
to being based on specific events like Y2K. These concerns had
a broader context, and they affected both short- and long-term
issues of business strategy and culture.
Another community of practice was the operations manage-
ment team, which was the community responsible for transform-
ing strategy into a realistic plan of strategic implementation. This
team consisted of three levels (Figure 8.9). We see in this commu-
nity of practice that the CEO was common to both this commu-
nity and the executive community of practice. His participation in
both provided the consistency and discourse that pointed to three
valuable components:
1. The CEO could accurately communicate decisions reached at
the board level to the operations management team.
2. The operations team could provide important input and suggestions
to the CEO, who could then provide this information to the executive
community.
3. The CEO interacted in different ways between the two communities
of practice. This was critical because the way things were discussed,
the language used, and the processes of consensus were different in
each community.

Figure 8.8 (ICAP—community of practice) depicts the executive
community: consultants provide objective advice and education to the
CEO Americas and the executive board, the CEO initiates discourse, and
the board engages in interactive discussions with questions and
responses.
The operations management community was not at the detailed
level of implementation; rather, it was at the conceptual one. It needed
to embrace the strategic and cultural outcomes discussed at the execu-
tive community, suggest modifications if applicable, and eventually
reach consensus within the community and with the executive team.
The operations management community, because of its conceptual
perspectives, used more organizational learning methods as opposed
to individual techniques. However, because of their relationship
with operations personnel, they did participate in individual reflec-
tive practices. Notwithstanding their conceptual nature, event-driven
issues were important for discussion. That is why middle management
needed to be part of this community, for without their input, concep-
tual foundations for implementing change may very well have flaws.
Figure 8.9 (ICAP—community of practice interfaces) depicts the
operations management community: the CEO Americas and senior
management exchange new ideas and adjustments, while middle management
conducts discourse on “how” to implement, rolls out the new
organization and strategies, and makes adjustments as a result of
discourse with the operations community.
Middle management participated to represent the concrete pieces
and the realities for modifications to conceptual arguments. As such,
middle managers could indirectly affect the executive board commu-
nity since their input could require change in the operations man-
agement community, which in turn could foster the need for change
requests back to the board. This process provides the very essence of
why communities of practice need to work together, especially with
the dynamic changes that can occur from technological innovations.
The third community of practice at ICAP was at the operations
or implementation tier. It consisted of the community of staff that
needed to transition conceptual plans into concrete realities. To ensure
that conceptual ideas of implementation balanced with the concrete
events that needed to occur operationally, middle managers needed
to be part of both the operations management and implementation
communities, as shown in Figure 8.10.
Because of the transitory nature of this community, it was important
that both organizational learning and individual learning occurred
simultaneously. Thus, it was the responsibility of middle managers
to provide the transition of organizational-based ideas to the event
and concrete level so that individuals understood what it ultimately
meant to the operations team. As one would expect, this level oper-
ated on individual attainment, yet through the creation of a commu-
nity of practice, ICAP could get its operations members to begin to
think more at the conceptual level. This provided management with
the opportunity to discuss conceptual and system-level ideas with
operations personnel.

Figure 8.10 (middle-management community of practice at ICAP) depicts
middle management and operations holding meetings and discussions on
day-to-day operations, with operations adjustments made based on
reflective practices.

Operations personnel could review them and,
under a managed and controlled process, could reach consensus. That
is, changes required by the implementation community could be rep-
resented to the operations management community through middle
management. If middle management could, through discourse and
language, reach consensus with the operations management commu-
nity, then the CEO could bring them forth to the executive commu-
nity for further discussion. We can see this common thread concept
among communities of practice as a logical process among tiers of
operations and management and one that can foster learning matura-
tion, as identified in the responsive organizational dynamism arc. This
is graphically shown in Figure 8.11.
Figure 8.11 shows the relationships among the three communi-
ties of practice at ICAP and how they interacted, especially through
upward feedback using common threads of communication. Thus,
multiple communities needed to be linked via common individuals
to maintain threads of communication necessary to support
responsive organizational dynamism and learning across
organizational boundaries.

Figure 8.11 (ICAP—COP common threads) depicts the three communities
joined by common members: the executive community of practice
(executive board, consultants, and CEO Americas); the operations
management community of practice (CEO Americas, senior management,
and middle management, exchanging new ideas and adjustments); and the
implementation community of practice (middle management and
operations, making adjustments as a result of discourse with the
operations community).
Another important observation is the absence of independent con-
sultants from the operations management and implementation com-
munities of practice. This does not suggest that consultants were not
needed or used by these communities. The independent consultant
in the executive community provides organizational-level learning, as
opposed to the consultant who is, for example, a specialist in database
design or training.
This case study provides an example of how an international firm
dealt with the effects of technology on its business. The CEO, Stephen
McDermott in this case, played an important role, using many forms
of responsive organizational dynamism, in managing the organiza-
tion through a transformation. His experience fostered the realiza-
tion that CEOs and their boards need to reinvent themselves on an
ongoing basis. Most important, this case study identified the number
of communities of practice that needed to participate in organiza-
tional transformation. The CEO continued to have an important role;
in many ways, McDermott offered some interesting advice for other
chief executives to consider:
1. The perfect time may or may not exist to deal with changes
brought on by technology. The CEO may need to just “dive in”
and serve as a catalyst for change.
2. Stay on course with the fundamentals of business and do not
believe everything everyone tells you; make sure your busi-
ness model is solid.
3. Trust that your abilities to deal with technology issues are no
different from managing any other business issue.
As a result of the commitment and the process for adapting tech-
nology at ICAP, it has realized many benefits, such as the following:
• Protection of tacit knowledge: By incorporating the existing
trading system, ICAP was able to retain the years of expe-
rience and expertise of its people. As a result, ICAP devel-
oped an electronic system that better served the needs of
broker users; this ability gave it an advantage over competitor
systems.
• Integrated use: The combination of the new system and its
compatibility with other ICAP legacy systems enabled the
organization to continue to service the core business while
increasing access for new clients. This resulted in a reduction
of costs and an increase in its user base.
• Transformation of tacit knowledge to explicit product knowledge:
By providing an infrastructure of learning and strategic inte-
gration, ICAP was able to bridge a wide range of its employ-
ees’ product knowledge, particularly of those outside IT with
a specific understanding of trading system design, and to
transform their tacit knowledge into explicit value that was
used to build on to the existing trading systems.
• Flexibility: Because multiple communities of practice were
formed, IT and non-IT cultures were able to assimilate. As a
result, ICAP was able to reduce its overall development time
and retain the functionality necessary for a hybrid voice and
electronic trading system.
• Expansion: Because of the assimilation of cultures, ICAP
was able to leverage its expertise so that the design of the
electronic system allowed it to be used with other third-party
trading systems. For example, it brought together another
trading system from ICAP in Europe and enabled concurrent
development in the United States and the United Kingdom.
• Evolution: By incorporating existing technology, ICAP continued
to support the core business and gradually introduced
new enhancements and features to serve all of its entities.
• Knowledge creation: By developing the system internally,
ICAP was able to increase its tacit knowledge base and stay
current with new trends in the industry.
ICAP went on to evolve its organization as a result of its adop-
tion of technology and its implementation of responsive organiza-
tional dynamism. The company reinvented itself again. McDermott
became the chief operating officer (COO) for three business units
in the Americas, all distinct business lines yet linked by their
integrated technologies and assimilated cultures. In addition, ICAP
purchased a competitor electronic trading product and assimilated
these combined technologies into a new organization. Business
revenues rose at that time from $350 million to over $1 billion four
years later. The company also had more than 2,800 staff members
and operated from 21 offices worldwide. Much has been attributed
to ICAP’s investment in electronic trading systems and other
emerging technologies.
Five Years Later
I returned to meet with Stephen McDermott almost five years after
our original case study. Many of the predictions about how technol-
ogy would affect the business had indeed become reality. In 2010,
technology at ICAP had become the dominant component of the
business. The C brokers had all but disappeared, with the organiza-
tion now consisting of two distinct divisions: voice brokers and elec-
tronic brokers. The company continued to expand by acquiring other
smaller competitors in the technology space. The electronic division
now comprised three distinct units from these acquisitions, with
ETC just one of those units. In effect, the expansion led to more
specialization and leveraging of technology to capture larger parts of
various markets.
Perhaps the unseen reality was how quickly technology became a
commodity. As McDermott said to me, “Everybody (our competitors)
can do it; it’s now all about your business strategy.” While the
importance of strategy was always part of McDermott’s position,
the transition from product value to market strategy was much more
transformative on the organization’s design and how it approached
the market. For example, the additional regulatory controls on voice
brokering actually forced many brokers to move to an electronic
interface, which reduced liability between the buyer and the broker.
McDermott also emphasized how “technology has created overnight
businesses,” forcing the organization to understand how technology
could provide new competitive advantages that otherwise did not
exist. Today, 50% of the trading dollars, some $2 trillion, occurs over
electronic technology-driven platforms. Undoubtedly, these dynamic
changes, brought on by technological dynamism, continue to chal-
lenge ICAP on how they strategically integrate new opportunities
and how the organization must adapt culturally with changes in indi-
vidual roles and responsibilities.
HTC
HTC (a pseudoacronym) is a company that provides creative business
services and solutions. The case study involving HTC demonstrates
that changes can occur when technology reports to the appropriate
level in an organization. This case study offers the example of a com-
pany with a CEO who became an important catalyst in the successful
vitalization of IT. HTC is a company of approximately 700 employees
across 16 offices. The case involves studying the use of a new
application that directly affected some 200 staff members.
The company was faced with the challenge of providing accurate
billable time records to its clients. Initial client billings were based on
project estimates, which then needed to be reconciled with actual work
performed. This case turned out to be more complex than expected.
Estimates typically represented the amount of work to which a client
agreed. Underspending the budget agreed to by the client, however,
could lead to lost revenue opportunities for the firm. For example, if
a project was estimated at 20 hours, but the actual work took only
15, then most clients would seek an additional five hours of service
because they had already budgeted the total 20 hours. If the recon-
ciliation between hours budgeted and hours worked was significantly
delayed, clients might lose their window of opportunity to spend the
remaining five hours (in the example situation). Thus, the incapacity
to provide timely reporting of this information resulted in the actual
loss of revenue, as well as upset clients. If clients did not spend their
allocated budget, they stood to lose the amount of the unused portion
in their future budget allocations. Furthermore, clients had expecta-
tions that vendors were capable of providing accurate reporting, espe-
cially given that present-day technology could automate the recording
and reporting of this information. Finally, in times of a tight econ-
omy, businesses tend to manage expenditures more closely and insist
on more accurate record keeping than at other times.
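The reconciliation arithmetic in the example above can be sketched in a few lines of code. This is a hypothetical illustration only; the book does not describe the actual billing software, and the function name and structure are assumptions:

```python
def reconcile(estimated_hours: float, actual_hours: float) -> dict:
    """Compare hours a client budgeted against hours actually worked.

    'remaining' is budget the client can still spend; reporting it
    promptly preserves the revenue opportunity described in the case.
    """
    diff = estimated_hours - actual_hours
    return {
        "estimated": estimated_hours,
        "actual": actual_hours,
        "remaining": max(diff, 0.0),   # unused budget the client can still use
        "overrun": max(-diff, 0.0),    # hours worked beyond the estimate
    }

# The example from the text: a 20-hour estimate with 15 hours worked
report = reconcile(20.0, 15.0)
print(report["remaining"])  # 5.0 hours the client would seek to use
```

The point of the sketch is only that timeliness, not complexity, was the obstacle: the calculation itself is trivial once accurate time records exist.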
The objective at HTC was to transform its services to better meet
the evolving changes of its clients’ business requirements. While the
requirement for a more timely and accurate billing system seems
straightforward, it proved a greater challenge to implement than it
first seemed.
The first obstacle for HTC to overcome was the clash between this
new requirement and the existing ethos, or culture of the business.
HTC provided creative services; 200 of its staff members were artisti-
cally oriented and were uncomfortable with focusing on time-based
service tracking; they were typically engrossed in the creative per-
formance required by their clients. Although it would seem a simple
request to track time and enter it each day, this projected change in
business norms became a significant barrier to its actual implemen-
tation. Project managers became concerned that reporting require-
ments would adversely affect performance, and thus, inevitably hurt
the business. Efforts to use blunt force—do it or find another job—
were not considered a good long-term solution. Instead, the company
needed to seek a way to require the change while demonstrating the
value of focusing on time management.
Many senior managers had thought of meeting with key users to
help determine a workable solution, but they were cognizant of the
fact that such interactive processes with the staff do not always lead to
agreement on a dependable method of handling the problem. This is a
common concern among managers and researchers working in orga-
nizational behavior. While organizational learning theorists advocate
this mediating, interactive approach, it may not render the desired
results in time and can even backfire if staff members are not genu-
inely willing to solve the problem or if they attempt to make it seem
too difficult or a bad idea. The intervention of the CEO of HTC,
together with a change in time-reporting methods directly involving
IT, made a significant difference in overcoming the obstacle.
IT History at HTC
When I first interviewed the CEO, I found that she had little direct
interaction with the activities of the IT department. IT reported to
the CFO, as in many companies, because it was seen as an opera-
tional support department. However, the CEO subsequently became
aware of certain shortfalls associated with IT and with its report-
ing structure. First, the IT department was not particularly liked
by other departments. Second, the department seemed incapable of
implementing software solutions that could directly help the busi-
ness. Third, the CFO did not possess the creativity beyond account-
ing functions to provide the necessary leadership needed to steer the
activities of IT in a more fruitful direction. As a result, the CEO
decided that the IT department should report directly to her. She was
also concerned that IT needed a more senior manager and hired a new
chief technology officer (CTO).
Interactions of the CEO
My research involving 40 chief executives showed that many execu-
tives are unsure about what role they need to take with their chief IT
managers. However, the CEO of HTC took on the responsibility to
provide the financial support to get the project under way. First, the
CEO made it clear that a solution was necessary and that appropri-
ate funds would be furnished to get the project done. Second, the
new CTO was empowered to assess the needs of the business and the
staff, and to present a feasible solution for both business and cultural
adaptation needs.
The CEO was determined to help transform the creative-artistic
service business into one that would embrace the kinds of controls that
were becoming increasingly necessary to support clients. Addressing
the existing lag in collecting time records from employees, which
directly affected billing revenue, seemed like the logical first step for
engaging the IT department in the design and implementation of new
operating procedures and cultural behavior.
Because middle managers were focused on providing services to
their clients, they were less concerned with the collection of time
sheets. This need was a low priority for the creative workers of the firm.
Human resources (HR) had been involved in attempting to address
the problem, but their efforts had failed. Much of this difficulty was
attributed to an avoidance by middle managers of giving ultimatums
as a solution; that is, simply demanding that workers comply. Instead,
management subsequently became interested in a middle-ground
approach that could possibly help departments realize the need to
change and to help determine what the solution might be. The ini-
tial thinking of the CEO was to see if specialized technology could
be built that would (1) provide efficiency to the process of recording
time, and (2) create a form of controls that would require some level
of compliance.
With the involvement of the CEO, the embattled IT depart-
ment was given the authority to determine what technology could
be employed to help the situation. The existing application that had
been developed by the IT department did not provide the kind of ease
of use and access that was needed by operations. Previous attempts
to develop a new system, without the intervention of the CEO, had
failed for a number of reasons. Management did not envision the
potential solution that software was capable of delivering; it was
not motivated to secure the requisite budget support; and no one was
in a position to champion the effort and allocate the needed budget.
Ultimately, management was simply not convinced of the importance of
providing a better solution.
The Process
The new CTO determined that there was a technological solution
that could provide greater application flexibility, while maintaining its
necessary integrity, through the use of the existing e-mail system. The
application would require staff to enter their project time spent before
signing on to the e-mail system. While this procedure might be seen
as a punishment, it became the middle-ground solution for securing
compliance without dramatically dictating policy. There was initial
rejection of the procedure by some of the line managers, but it was
with the assistance of the CEO, who provided the necessary support
and enforcement, that the new procedure took hold. This enforcement
became crucial when certain groups asked to be excluded from the
process. The CEO made it clear that all departments were expected
to comply.
The application was developed in three months and went into pilot
implementation. The timely delivery of the application by the IT
department gave IT its first successful program implementation and
helped change the general view of IT among its company colleagues.
It was the first occasion in which IT had a leadership role in guiding
the company to a major behavioral transformation. Another positive
outcome that resulted from the transition occurred in the way that
resistance to change was managed by the CTO. Simply put, the creative
staff was not open to a structured solution. The CTO’s response
was to implement a warning system instead of immediately disallow-
ing e-mail access. This procedure was an important concession as it
allowed staff and management to deal with the transition, to meet
them halfway.
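The warning-first gate described above might look something like the following sketch. It is hypothetical: the case study gives no implementation details, and the grace-period length, function name, and return values are assumptions for illustration:

```python
from datetime import date

GRACE_DAYS = 3  # assumed length of the warning period (not specified in the case)

def email_access_check(last_entry: date, today: date) -> str:
    """Gate e-mail sign-on on up-to-date project-time entry.

    Instead of disallowing access immediately, issue warnings during a
    grace period -- the CTO's middle-ground concession in the case.
    """
    days_missing = (today - last_entry).days - 1  # entries are due through yesterday
    if days_missing <= 0:
        return "ok"
    if days_missing <= GRACE_DAYS:
        return f"warning: {days_missing} day(s) of time entry outstanding"
    return "blocked: enter outstanding time before signing on"

today = date(2010, 6, 10)
print(email_access_check(date(2010, 6, 9), today))  # ok
print(email_access_check(date(2010, 6, 7), today))  # warning: 2 day(s) ...
print(email_access_check(date(2010, 6, 1), today))  # blocked: ...
```

The design choice the sketch captures is the escalation from warning to block, which let staff adjust to the new norm rather than being locked out on day one.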
Transformation from the Transition
After the pilot period, the application was implemented firm-wide.
The results of this new practice have created an interesting internal
transformation: IT is now intimately engaged in working on new
enhancements to the time-recording system. For instance, a “digital
dashboard” is now used to measure performance against estimates.
More important, however, are the results of the new application. The
firm has shown substantial increases in revenue because its new time-
recording system enabled it to discover numerous areas in which it
was underbilling its clients. Its clients, on the other hand, are happier
to receive billing statements that can demonstrate more accurately
than before just how time was spent on their projects. Hence, the
IT-implemented solution proved beneficial not only to the client but
also to the firm.
Notwithstanding the ultimate value of utilizing appropriate tech-
nology and producing measurable outcomes, IT has also been able
to assist in developing and establishing a new culture in the firm.
Staff members are now more mindful and have a greater sense of cor-
porate-norm responsibility than they did before. They have a clearer
understanding of the impact that recording their time will have and of
how this step ultimately contributes to the well-being of the business.
Furthermore, the positive results of the new system have increased
attention on IT spending. The CEO and other managers seek new
ways in which technology can be made to help them; this mindset has
been stressed further down to operating departments. The methods
of IT evaluation have also evolved. There is now a greater clarification
of technology benefits, a better articulation of technology problems,
less trial and error, and more time spent on understanding how to use
the technology better.
Another important result from this project has been the cascad-
ing effect of the financial impact. The increased profits have required
greater infrastructure capacity. A new department was created with
five new business managers whose responsibility it is to analyze and
interpret the time reports so that line managers, in turn, can think of
ways to generate greater profit through increased services. The project,
in essence, has merged the creative performance of the firm with new
business initiatives, resulting in a higher ROI.
In analyzing the HTC case study, we see many organizational
learning techniques that were required to form a new community
that could assimilate multiple cultures. However, while the organiza-
tion saw the need, it could not create a process without an advocate.
This champion was the CEO, who had the ability to make the salient
organizational changes and act as a catalyst for the natural processes
that HTC hoped to achieve. This case also provides direction on the
importance of having the right resource to lead IT. At HTC, this
person carried the title of CTO; in actuality, the title itself has
little bearing on the overall role and responsibilities that were
needed. At HTC, it
became more apparent to the CEO that she had the wrong individual
running the technology management of her firm. Only the CEO in
this situation was able to foster the initial steps necessary to start what
turned out to be a more democratic evolution of using technology in
the business.
Companies that adapt to technological dynamism find that the
existing leadership and infrastructure may need to be enhanced or
replaced as well as reorganized, particularly in terms of reporting
structure. This case supports the notion that strategic integration may
indeed create the need for more cultural assimilation. One question
to ask is why the CEO waited so long to make the changes. This was
not a situation of a new CTO who inherited resources. Indeed, the
former CTO was part of her regime. We must remember that CEOs
typically concentrate on driving revenue. They hope that what are
considered “ back-end” support issues will be handled by other senior
managers. Furthermore, support structures are measured differently
and from a specific frame of reference. I have found that CEOs inter-
vene in supporter departments only when there are major complaints
that threaten productivity, customer support, sales, and so on. The
other threat is cost, so CEOs will seek to make supporter departments
more efficient. These activities are consistent with my earlier findings
regarding the measurement and role of supporter departments.
In the case of HTC, the CEO became more involved because of
the customer service problems, which inevitably threatened revenues.
On her review of the situation, she recognized three major flaws in
the operation:
• The CFO was not in a position to lead the organizational
changes necessary to assimilate a creative-based department.
• Technology established a new strategy (strategic integration),
which necessitated certain behavioral changes within the
organization (cultural assimilation). The creative department
was also key to making the organizational transition possible.
• The current CTO did not have the management and business
skills that were necessary to facilitate the integration of IT
with the rest of the organization.
HTC provides us with an interesting case of what we have defined
as responsive organizational dynamism, and it bears some parallels to
the Ravell study. First, as at Ravell, the learning process was triggered
by a major event. Second, the CTO did not dictate assimilation but
rather provided facilitation and support. Third, unlike at Ravell, the
CEO of the organization was the critical driver in initiating the
project. Because
of the CEO’s particular involvement, organizational learning started
at the top and was thus system oriented. At the same time, the CTO
understood that individual event-driven learning using reflective
practices was critical to accomplish organizational transformation. In
essence, the CTO was the intermediary between organizational-level
and individual-level learning. Figure 8.12 depicts this relationship.
Five Years Later
HTC has been challenged because of the massive changes that adver-
tising companies have faced over this timeframe, particularly with
the difficulty of finding new advertising revenue sources for their
clients. The CEO has remained active in technology matters, and
there has also been turnover in the CTO role at the company. The
CEO has been challenged to find the right fit—a person who can
understand not only the technology but also the advertising business.
With media companies taking over much of the advertising space, the
CEO clearly recognizes the need to have a technology-driven market
strategy. Most important is the dilemma of how to transform what
was once a “paper” advertising business to what has become a
lower-cost media market. “Advertising companies need to do more
business just to keep the same revenue stream, and that is a big
challenge in today’s volatile market,” the CEO stated. The
time-recording system has gone through other changes to provide what
are known as value-added services, not necessarily tied to time
effort but rather to the value of the output itself.
The experience at HTC shows the importance of executive participation, not just sponsorship. Many technology projects have assumed that executive sponsorship alone suffices; it is clear to me that this assumption is obsolete. If the CEO at HTC had not become involved in the problem five years ago, the organization would not be in a position to embrace the newest technology dynamism affecting the industry. So, the lessons learned from this case, as well as from the Ravell case, are that all levels of the organization must be involved and that executives must not be treated as exempt. Responsive organizational dynamism, and the use of organizational learning methods to develop staff, remain key concepts for adapting to market changes and ensuring economic survival.
[Figure: the CEO drives organizational- and system-level learning; the CTO, as learning facilitator, bridges organizational- and individual-level learning; middle management and creative operations engage in individual learning.]
Figure 8.12 HTC—Role of the CTO as an intermediary.
233 Synergistic Union of IT
Summary
This chapter has provided three case studies that show the ways
technology and organizational learning operate and lead to results
through performance. The Siemens example provided us with an
opportunity to see a technology executive formulate relationships,
form multiple communities of practice, and create an infrastructure
to support responsive organizational dynamism. This case demonstrates how IT can handle technology as new information and, through the formation of communities of practice, generate new knowledge that leads to organizational transformation and performance.
The case study regarding ICAP again shows why technology, as an independent variable, provides an opportunity, if taken, for an international firm to move into a new competitive space and improve its competitive advantage. ICAP succeeded only because it understood the need for organizational learning, communities of practice, and the important role of the CEO in facilitating change. We also saw why independent consultants and executive boards need to participate. ICAP symbolizes the ways in which technology can change organizational structures and cultural formations. Such changes are at the very heart of why we need to understand responsive organizational dynamism. The creation of a new firm, ETC, shows us the importance of these changes. Finally, it provides us with an example of how technology can come to the forefront of an organization and become the major driver of performance.
HTC, on the other hand, described two additional features of how
responsive organizational dynamism can change internal processes
that lead to direct returns. The CEO, as in the ICAP case, played an
important, yet different, role. This case showed that the CTO could
also be used to facilitate organizational learning, becoming the nego-
tiator and coordinator between the CEO, IT department, and cre-
ative user departments.
All three of these cases reflect the importance of recognizing that
most technology information exists outside the organization and needs
to be integrated into existing cultures. This result is consistent with the
findings of Probst et al. (1998), which show that long-term sustained
competitive advantage must include the “incorporation and integration
of information available outside the borders of the company” (p. 247).
The reality is that technology, as an independent and outside variable,
challenges organizations in their abilities to absorb external informa-
tion, assimilate it into their cultures, and inevitably apply it to their
commercial activities as a function of their existing knowledge base.
These case studies show that knowledge is most often not created solely by individuals. It is through communities of practice that knowledge makes its way into the very routines of the
organization. Indeed, organizational learning must focus on the
transformation of individual skills into organizational processes that
generate measurable outcomes. Probst et al. (1998) also show that the development of organizational knowledge is mediated via multiple levels. Walsh (1995) further supports Probst et al.’s findings that there are three structures of knowledge development in an organization.
The first is at the individual level; interpretation is fostered through
reflective practices that eventually lead to personal transformation and
increased individual knowledge. The second structure is at the group
level; individual knowledge of the group is combined into a consen-
sus, leading to a shared belief system. The third structure resides at the
organizational level; knowledge emanates from the shared beliefs and
the consensus of the groups, which creates organizational knowledge.
It is important to recognize, however, that organizational knowledge
is not established or created by combining individual knowledge. This
is a common error, particularly among organizational learning prac-
titioners. Organizational knowledge must be accomplished through
social discourse and common language interactions so that knowl-
edge can be a consensus among the communities of practice.
Each of the case studies supported the formation of tiers of learning
and knowledge. The individuals in these cases all created multiple lay-
ers that led to structures similar to those suggested by scholars. What
makes these cases so valuable is that technology represented the exter-
nal knowledge. Technological dynamism forced the multiple struc-
tures from individual-based learning to organizational-level learning,
and the unique interactions among the communities in each example
generated knowledge leading to measurable performance outcomes.
Thus, as Probst and Büchel (1996, p. 245) conclude, “Organizational
learning is an increase in organizational knowledge base, which leads
to the enhancement of problem-solving potential of a company.”
However, these case studies also provide important information
about the process of the interactions. Many tiered structures tend
to be viewed as a sequential process. I have presented theories sug-
gesting that knowledge management is conditioned either from the
top-down, middle-up-down, or bottom-up. It has been my posi-
tion that none of these processes should be seen as set procedures
or methodologies. In each of these cases, as well as in the Ravell
case, the flow of knowledge occurs differently and, in some ways,
uniquely to the culture and setting of the organization. This suggests
that each organization must derive its own process, adhering more to
the concept of learning, management, and outcomes, as opposed to a
standard system of how and when they need to be applied. Table 8.2
summarizes the different approaches of organizational learning of
the three case studies.
Such is the challenge of leaders who aspire to create the learning
organization. Technology plays an important role because, in reality,
it tests the very notions of organizational learning theories. It also creates many opportunities to measure organizational learning and its impact on performance. Indeed, technology is the variable that provides the most opportunity to instill organizational learning and knowledge management in a global community.
Table 8.2 Summary of Organizational Learning Approaches

Knowledge management participation
  Siemens: CIO as middle-up-down
  ICAP/ETC: Top-down from CEO and bottom-up from operations
  HTC: Top-down from CEO and middle-up-down from CTO

Communities of practice
  Siemens: President’s Council; CFO; CIO advisory board
  ICAP/ETC: Executive board; operations; management; implementation
  HTC: CEO/CTO; CTO operations

Participating entities
  Siemens: Presidents; CFOs; global CIO; corporate CIOs; regional CIOs; operating CIOs; central CIOs
  ICAP/ETC: Executive board; outside consultants; CEO; senior management; middle management; operations
  HTC: CEO; CTO; middle management; creative operations

Common thread
  Siemens: Corporate CIO
  ICAP/ETC: CEO; senior management; middle management
  HTC: CTO
The case studies also provided an understanding of the
transformational process and the complexities of the relationships
between the different learning levels. It is not a single entity that
allows a company to be competitive but the combination of knowl-
edge at each of the different tiers. The knowledge that exists through-
out a company is typically composed of three components: processes,
technology, and organization (Kanevsky & Housel, 1998). I find that, of these three components, technology is more variable than the others and, as stated many times in this book, changes in a dynamic and unpredictable fashion (a condition I have called technological dynamism).
Furthermore, the technology component has direct effects on the
other two. What does this mean? Essentially, technology is at the
core of organizational learning and knowledge creation.
This chapter has shown the different ways in which technology has
been valued and how, through organizational learning, tacit knowl-
edge is transformed into explicit knowledge and used for competitive
advantage. We have seen that not all of this value creation can be directly attributed to technology; in fact, this is rarely the case. Most value derived from technology is indirect, and management must recognize it if outcomes are to be maximized. Two of the case studies
looked at the varying roles and responsibilities of the CEO. I believe
their involvement was critical. Indeed, the conclusions reached from the Ravell case lend further support to the view that the absence of the CEO will limit results. Furthermore, the CEO was crucial to sustaining
organizational learning and the responsive organizational dynamism
infrastructure.
Much has been written about the need to link learning to knowl-
edge and knowledge to performance. This process can sometimes be
referred to as a value chain. Kanevsky and Housel (1998) created what they call a “learning-knowledge-value spiral,” comprising six specific steps to creating value from learning and, ultimately, changing product or process descriptions, as shown in Figure 8.13.
I have modified Figure 8.13 to include “technology”; that is, how technology affects learning, learning affects knowledge, and so on.
Table 8.3 is a matrix that reflects the specific results, in each phase,
for the three case studies.
Table 8.3 reflects the ultimate contribution that technology made
to the learning-knowledge-value chain. I have also notated the ROI
Table 8.3 IT Contribution to the Learning-Knowledge-Value Chain

Siemens
  Technology: E-business
  Learning: Communities of practice
  Generated knowledge: Consensus across multiple communities on how to relate tacit knowledge about technology to strategic business processes
  Process: 90-day “reinvention” life-cycle method
  Product: Consolidated e-commerce Web sites providing consistency of product and service offerings
  Value: Leveraging of same clients; providing multiple product offerings to same client base. ROI: Indirect

ICAP
  Technology: Electronic trading
  Learning: CEO/executive committee; group learning; multiple communities of practice
  Generated knowledge: Ability to provide and integrate business and technology knowledge to create new product
  Process: Establish new company, ETC, to support cultural assimilation and evolution; leveraging independent consultants
  Product: Electronic trading; created most competitive product in the financial industry
  Value: Infiltration into new markets. ROI: Direct

HTC
  Technology: E-mail
  Learning: CEO at organizational level; individual learning using reflective practices
  Generated knowledge: Understanding how to integrate IT department with creative management group
  Process: Establish new procedures for using e-mail to record client billable hours
  Product: New client billing system
  Value: Clients happy; more competitive; additional revenues. ROI: Direct
generated from each investment. It is interesting that two of the three
cases generated identifiable direct revenue streams from their invest-
ment in technology.
This chapter has laid the foundation for Chapter 9, which focuses
on the ways IT can maximize its relationship with the community
and contribute to organizational learning. To accomplish this objec-
tive, IT must begin to establish best practices.
[Figure: a cycle in which learning generates knowledge, knowledge generates value, and value drives changes in product/process descriptions, shaped by market, competition, product, and process.]
Figure 8.13 The learning-knowledge-value cycle. (From Kanevsky, V., et al. (Eds.), Knowing in Firms: Understanding, Managing and Measuring Knowledge, Sage, London, 1998, pp. 240–252.)
9
Forming a Cyber Security Culture
Introduction
Much has been written regarding the importance of how companies
deal with cyber threats. While most organizations have focused on
the technical ramifications of how to avoid being compromised, few
have invested in how senior management needs to make security a
priority. This chapter discusses the salient issues that executives must
address and how to develop a strategy to deal with the various types
of cyber attack that could devastate the reputation and revenues of any
business or organization. The response to the cyber dilemma requires
evolving institutional behavior patterns using organizational learning
concepts.
History
From a historical perspective we have seen an interesting evolution
of the types and acceleration of attacks on business entities. Prior to
1990, few organizations were concerned with information security
except for the government, military, banks and credit card companies.
In 1994, with the birth of the commercial Internet, a higher volume of attacks occurred; this led, by 1997, to commercial firewalls designed to counter the growing wave of malware. In 2001, the first nation-state-sponsored attacks emerged. By 2013, however, attacks had reached greater complexity, as seen in the Target credit card breach, Home Depot’s compromise of its payment system, and JP Morgan’s exposure that affected 76 million customers and seven million businesses. These events resulted in an escalation of fear, particularly in the areas of sabotage, theft of intellectual property, and stealing of money. Figure 9.1 shows the changing pace of cyber security.
[Figure: timeline of events in the evolution of cyber attacks and commercialized cyber security.
Pre-1990: Few groups concerned with information security except government, military, banks, and credit card companies.
1994: Birth of the commercial Internet; higher volume of attacks. Netscape develops secure sockets layer encryption, commercially available security software, to secure online transactions.
1997: Firewalls and malware.
2000: Y2K.
2001: 9/11; nation state-sponsored attacks emerge in a meaningful way.
2013: Booz Allen employee and NSA contractor Edward Snowden steals and leaks details of several top-secret U.S. and British government mass surveillance programs to the press.
2013–2014: Target has 70 million customers’ credit cards breached.
2014: Home Depot suffers a 6-month breach of its payment system affecting more than 53 million credit and debit cards; JP Morgan Chase suffers a breach affecting 76 million customers and seven million businesses. Increased number of attacks with greater complexity; fear factor escalates. Sabotage, theft of intellectual property, and money become a constant threat; daily headline risk is a new reality.]
Figure 9.1 The changing pace of cyber security. (From Russell Reynolds Associates 2014 presentation.)
The conventional wisdom among cyber experts is that no business can be made compromise-proof. Thus, leaders need to realize that there must be (1) other ways, beyond just deploying new defensive software, to ward off attacks, and (2) internal and external strategies to
deal with an attack when it occurs. These challenges in cyber security
management can be categorized into three fundamental components:
• Learning how to educate and present to the board of directors
• Creating new and evolving security cultures
• Understanding what it means organizationally to be
compromised
Each of these components is summarized below.
Talking to the Board
Board members need to understand the possible cyber attack expo-
sures of the business. They certainly need regular communication
from those executives responsible for protecting the organization.
Seasoned security executives can articulate the positive processes that are in place without projecting overconfidence, since there is always a risk of being compromised. That is, while there may be exposures, C-level managers should not hit the panic button and scare the
board. Typically, fear only instills a lack of confidence by the board in
the organization’s leadership. Most important is to always relate security to business objectives and, above all, avoid “tech” terms during
meetings. Another important topic of discussion is how third-party
vendors are being managed. Indeed, so many breaches have been
caused by a lack of oversight of legacy applications that are controlled
by third-party vendors. Finally, managers should always compare the
state of security with that of the company’s competitors.
Establishing a Security Culture
The predominant exposure to a cyber attack often comes from careless behaviors of the organization’s employees. The first step toward avoiding poor employee cyber behaviors is to have regular communication with staff and establish a set of best practices that will clearly protect the
business. However, mandating conformance is difficult, and research has consistently shown that evolutionary culture change is best accomplished through relationship building, leadership by influence (as opposed to power-centralized management), and, ultimately, a presence at most staff meetings. Individual leadership remains the most important variable when transforming the behaviors and practices of any organization.
Understanding What It Means to Be Compromised
Every organization should have a plan of what to do when security
is breached. The first step in the plan is to develop a “ risk” culture.
What this simply means is that an organization cannot maximize
protection of all parts of its systems equally. Therefore, some parts of a
company’s system might be more protected against cyber attacks than
others. For example, organizations should maximize the protection
of key company scientific and technical data first. Control of network
access will likely vary depending on the type of exposure that might
result from a breach. Another approach is to develop consistent best
practices among all contractors and suppliers and to track the move-
ment of these third parties (e.g., if they are merged/sold, disrupted
in service, or even breached indirectly). Finally, technology execu-
tives should pay close attention to Cloud computing alternatives and
develop ongoing reviews of possible threat exposures in these third-
party service architectures.
Cyber Security Dynamism and Responsive Organizational Dynamism
The new events and interactions brought about by cyber security
threats can be related to the symptoms of the dynamism that has
been the basis of ROD discussed earlier in this book. Here, however,
the digital world manifests itself in a similar dynamism that I will
call cyber dynamism.
Managing cyber dynamism, therefore, is a way of managing the negative effects of a particular technology threat. As in ROD, cyber strategic integration and cyber cultural assimilation remain distinct categories that present themselves in response to cyber dynamism.
Figure 9.2 shows the components of cyber ROD.
Cyber Strategic Integration
Cyber strategic integration is a process that firms need to use to address the business impact of cyber attacks on their organizational processes. Complications posed by cyber dynamism, addressed via the process of strategic integration, occur when several new cyber attacks overlap and create a myriad of problems in various phases of an organization’s ability to operate. Cyber attacks can also affect consumer confidence, which in turn
hurts a business’s ability to attract new orders. Furthermore, the problem
can be compounded by reductions in productivity, which are complicated
to track and to represent to management. Thus, it is important that orga-
nizations find ways to develop strategies to deal with cyber threats such as:
1. How to reduce occurrences by instituting aggressive organizational structures that review existing exposures in systems.
[Figure: cyber attacks, as an independent variable, create organizational dynamism, whose symptoms and implications require firms to formulate risk-related strategies; this in turn requires cyber strategic integration and cyber cultural assimilation.]
Figure 9.2 Cyber responsive organizational dynamism. (From Langer, A., Information Technology and Organizational Learning: Managing Behavioral Change through Technology and Education, CRC Press, Boca Raton, FL, 2011.)
2. What new threats exist, which may require ongoing research
and collaborations with third-party strategic alliances?
3. What new processes might be needed to combat new cyber
dynamisms based on new threat capabilities?
4. How to create systems architectures that can recover when a cyber breach occurs.
In order to realize these objectives, executives must be able to
• Create dynamic internal processes that can function on a daily basis to deal with understanding the potential impact of new cyber attacks on each local department within the business, that is, to provide for change at the grassroots level of the organization.
• Monitor cyber risk investments and determine modifications
to the current life cycle of idea-to-reality.
• Address the weaknesses in the organization in terms of how
to deal with new threats, should they occur, and how to better
protect the key business operations.
• Provide a mechanism that both enables the organization to
deal with accelerated change caused by cyber threats and that
integrates them into a new cycle of processing and handling
change.
• Establish an integrated approach that ties cyber risk accountability to other measurable outcomes, using methods acceptable to the organization.
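The last point can be made concrete with a minimal risk-register sketch, tying each exposure to an accountable owner and a measurable outcome. The field names and the likelihood-times-impact scoring scale are illustrative assumptions of mine, not prescriptions from the text:

```python
from dataclasses import dataclass

@dataclass
class CyberRiskEntry:
    """One row in a hypothetical cyber risk register (illustrative schema)."""
    risk: str            # short description of the exposure
    owner: str           # accountable executive or department
    likelihood: int      # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int          # 1 (minor) .. 5 (severe)        -- assumed scale
    outcome_metric: str  # the measurable outcome the risk is tied to

    def score(self) -> int:
        # Simple likelihood x impact product, a common qualitative scheme.
        return self.likelihood * self.impact

register = [
    CyberRiskEntry("Unpatched legacy billing app", "CISO", 4, 5,
                   "percentage of legacy apps behind current patch level"),
    CyberRiskEntry("Third-party vendor breach", "Procurement", 3, 4,
                   "number of vendors with completed security reviews"),
]

# Rank risks so executive attention goes to the highest-scoring exposures first.
for entry in sorted(register, key=CyberRiskEntry.score, reverse=True):
    print(f"{entry.score():>2}  {entry.risk}  (owner: {entry.owner})")
```

The point of the sketch is the accountability link: each record names both an owner and a metric, so risk reviews can report progress in measurable terms rather than generalities.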
The combination of evolving cyber threats with accelerated and
changing consumer demands has also created a business revolution that
best defines the imperative of the strategic integration component of
cyber ROD. Without action directed toward new strategic integration
focused on cyber security, organizations will lose competitive advan-
tage, which will ultimately affect profits. Most experts see the danger
of breaches from cyber attacks as the mechanism that will ultimately
require the integrated business processes to be realigned, thus provid-
ing value to consumers and modifying the customer-vendor relationship. The driving force behind this realignment emanates from cyber dynamisms, which serve as the principal accelerator of the change in transactions across all businesses.
Cyber Cultural Assimilation
Cyber cultural assimilation is a process that addresses the organiza-
tional aspects of how the security department is internally organized,
its relationship with IT, and how it is integrated within the organiza-
tion as a whole. As with technology dynamism, cyber dynamism is not limited to cyber strategic issues, but extends to cultural ones as well. A
cyber culture is one that can respond to emerging cyber attacks, in
an optimally informed way, and one that understands the impact on
business performance and reputation.
The acceleration factors of cyber attacks require more dynamic
activity within and among departments, which cannot be accom-
plished through discrete communications between groups. Instead,
the need for diverse groups to engage in more integrated discourse
and to share varying levels of cyber security knowledge, as well as
business-end perspectives, requires new organizational structures that
will give birth to a new and evolving business social culture.
In order to facilitate cyber cultural assimilation, organizations must help their staffs become more comfortable with a digital world that continues to be compromised by outside threats. The first question becomes
one of finding the best structure to support a broad assimilation of
knowledge about any given cyber threat. The second is about how that
knowledge can best be utilized by the organization to develop both
risk efforts and attack resilience. Business managers therefore need
to consider cyber security and include the cyber staff in all decision-
making processes. Specifically, cyber assimilation must become fun-
damental to the cultural evolution.
While many scholars and managers suggest the need to have a specific entity responsible for cyber security governance, one placed within the organization’s operating structure, such an approach creates a fundamental problem. It does not allow staff and
managers the opportunity to assimilate cyber security-driven change
and understand how to design a culture that can operate under ROD.
In other words, the issue of governance is misinterpreted as a problem
of structural positioning or hierarchy when it is really one of cultural
assimilation. As a result, many business solutions to cyber security
issues often lean toward the prescriptive instead of the analytical in
addressing the real problem.
Summary
This section has made the argument that organizations need to excel
in providing both strategic and cultural initiatives to reduce exposure
to cyber threats and ultimate security breaches. Executives must design
their workforce to meet the accelerated threats brought on by cyber
dynamisms. Organizations today need to adapt their staff to operate
under the auspices of ROD by creating processes that can determine
the strategic exposure of new emerging cyber threats and by establish-
ing a culture that is more “defense ready.” Most executives across indus-
tries recognize that cyber security has become one of the most powerful
variables to maintaining and expanding company markets.
Organizational Learning and Application Development
Behavioral change, leading to a more resilient cyber culture, is just
one of the challenges in maximizing protection in organizations.
Another important factor is how to design more resilient applications
that are better equipped to protect against threats; that is, design decisions must weigh exposure against risk. The general consensus is that no system can be 100% protected and that this requires
important decisions when analysts are designing applications and sys-
tems. Indeed, security access is not just limited to getting into the sys-
tem, but applies to the individual application level as well. How then
do analysts participate in the process of designing secure applications
through good design? We know that many cyber security architec-
tures are designed from the office of the chief information security
officer (CISO), a new and emerging role in organizations. The CISO
role, often independent of the chief information officer (CIO), became
significant as a result of the early threats from the Internet, the 9/11 attacks, and, most recently, the large number of system compromises experienced by companies such as JP Morgan Chase, SONY, Home Depot, and Target, to name just a few.
The challenge of cyber security reaches well beyond just archi-
tecture. It must address third-party vendor products that are part of
the supply chain of automation used by firms, not to mention access
to legacy applications that likely do not have the necessary security controls built into the architecture of these older, less resilient technologies. This
challenge has established the need for an enterprise cyber security solution that addresses the needs of the entire organization. This approach would then target third-party vendor design and compliance. Thus, cyber security architecture requires integration with a firm’s Software
Development Life Cycle (SDLC), particularly within steps that include
strategic design, engineering, and operations. The objective is to use a
framework that works with all of these components.
Cyber Security Risk
When designing against cyber security attacks, as stated above, there
is no 100% protection assurance. Thus, risks must be factored into
the decision-making process. A number of security experts often ask
business executives the question, “How much security do you want,
and what are you willing to spend to achieve that security?”
Certainly, we see a much higher tolerance for increased cost, given the prominence of recent company compromises. This section provides guidance on how to determine appropriate security risks.
Security risk is typically discussed in the form of threats. Threats
can be categorized as presented by Schoenfield (2015):
1. Threat agent: Where is the threat coming from, and who is
making the attack?
2. Threat goals: What does the agent hope to gain?
3. Threat capability: What threat methodology, or type of approach, is the agent likely to use?
4. Threat work factor: How much effort is the agent willing to
put in to get into the system?
5. Threat risk tolerance: What legal chances is the agent willing
to take to achieve his or her goals?
Table 9.1 is shown as a guideline.
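Schoenfield’s five attributes lend themselves to a simple structured record. The sketch below captures the cyber-criminal profile from Table 9.1 as data; the `design_priority` triage rule is my own illustrative assumption, not part of Schoenfield’s model:

```python
from dataclasses import dataclass

@dataclass
class ThreatProfile:
    """Schoenfield's five threat attributes as a structured record."""
    agent: str           # who is making the attack
    goals: str           # what the agent hopes to gain
    capability: str      # methodology or type of approach
    work_factor: str     # effort the agent is willing to invest
    risk_tolerance: str  # legal chances the agent will take

# The cyber-criminal row from Table 9.1, expressed as data.
cyber_criminal = ThreatProfile(
    agent="Cyber criminals",
    goals="Financial",
    capability="Known and proven methods",
    work_factor="Low to medium",
    risk_tolerance="Low",
)

def design_priority(threat: ThreatProfile) -> str:
    """Toy triage rule (an assumption, not from the text): low-work-factor
    agents are deterred by raising attack cost, so harden easy paths first."""
    if threat.work_factor.lower().startswith("low"):
        return "Harden common entry points (sign-in, password storage) first."
    return "Assume persistence; invest in detection and recovery."

print(design_priority(cyber_criminal))
```

Encoding the profile this way lets analysts compare threats side by side and keep the rationale for design decisions auditable.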
The threat, with its associated risks and work factors, provides important input to the security design, especially at the application design level. Application security measures in design typically include:
1. The user interface (sign in screen, access to specific parts of
the application).
2. Command-line interface (interactivity) in online systems.
3. Inter-application communications. How data and password
information are passed, and stored, among applications across
systems.
Risk Responsibility
Schoenfield (2015) suggests that someone in the organization is
assigned the role of the “ risk owner.” There may be many risk owners
and, as a result, this role could have complex effects on the way sys-
tems are designed. For example, the top risk owner in most organiza-
tions today is associated with the CISO. However, many firms also
employ a chief risk officer (CRO). This role’s responsibilities vary.
But risk analysis at the application design level requires different
governance. Application security risk needs involvement from the
business and the consumer and needs to be integrated within the risk
standards of the firm. Specifically, multiple levels of security often
require users to reenter secure information. While this may maximize
safety, it can negatively impact the user experience and the robust-
ness of the system interface in general. Performance can obviously
also be sacrificed, given the multiple layers of validation. There is no
quick answer to this dilemma other than the reality that more secu-
rity checkpoints will reduce user and consumer satisfaction unless
cyber security algorithms become more invisible and sophisticated.
However, even this approach would likely reduce protection. As with
all analyst design challenges, the IT team, business users, and now
the consumer must all be part of the decisions on how much security
is required.
As my colleague at Columbia University, Steven Bellovin, states in his new book, Thinking Security, security is about a mindset. This mindset, to me, relates to how we establish security cultures that can
Table 9.1 Threat Analysis

THREAT AGENT | GOALS | RISK TOLERANCE | WORK FACTOR | METHODS
Cyber criminals | Financial | Low | Low to medium | Known and proven

Source: Schoenfield, B.S.E., Securing Systems: Applied Security Architecture and Threat Models, CRC Press, Boca Raton, FL, 2015.
249 Forming a Cyber Security Culture
enable the analyst to define organizational security as it relates to new
and existing systems. If we get the analyst position to participate in
setting security goals in our applications, some key questions accord-
ing to Bellovin (2015) are:
1. What are the economics to protect systems?
2. What is the best protection you can get for the amount of
money you want to spend?
3. Can you save more lives by spending that money?
4. What should you protect?
5. Can you estimate what it will take to protect your assets?
6. Should you protect the network or the host?
7. Is your Cloud secure enough?
8. Do you guess at the likelihood and cost of a penetration?
9. How do you evaluate your assets?
10. Are you thinking like the enemy?
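Bellovin's first two questions are often approached with the standard annualized loss expectancy (ALE) calculation from security economics. The sketch below is my illustration, not from the text, and the dollar figures are hypothetical:

```python
def annualized_loss_expectancy(single_loss: float, occurrences_per_year: float) -> float:
    """ALE = single loss expectancy x annualized rate of occurrence."""
    return single_loss * occurrences_per_year

# Hypothetical numbers: a breach costs $200,000 and is expected once every 4 years.
ale = annualized_loss_expectancy(200_000, 0.25)
control_cost = 30_000  # hypothetical yearly cost of the proposed protection

# A control is economically justified only if it costs less than the loss it prevents.
print(ale, ale > control_cost)  # → 50000.0 True
```

The same comparison, repeated per threat in Table 9.1, helps answer what to protect and how much protection the budget can rationally buy.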
The key to analysis and design in cyber security is recognizing that
it is dynamic; the attackers are adaptive and somewhat unpredictable.
This dynamism requires constant architectural change, accompanied
with increased complexity of how systems become compromised.
Thus, analysts must be involved at the conceptual model, which
includes business definitions, business processes and enterprise stan-
dards. However, the analysts must also be engaged with the logical
design, which comprises two sub-models:
1. Logical architecture : Depicts the relationships of different data
domains and functionalities required to manage each type of
information in the system.
2. Component model: Reflects each of the sub-models and applications that provide various functions in the system. The component model may also include third-party vendor products that interface with the system. The component model coincides, in many ways, with the process of decomposition.
In summary, the ROD interface with cyber security is more complex than many managers believe. Security is relative, not absolute, and thus leaders must be closely aligned with how internal cultures must evolve with changing environments.
Driver /Supporter Implications
Security has traditionally been viewed as a support function in most
organizations, particularly when it is managed by IT staff. However,
the recent developments in cyber threats suggest, as with other aspects
of technology, that security too has a driver side.
To excel in the role of security driver, leaders must:
• Benchmark capabilities, budgets, and staffing levels.
• Align even more closely with users and business partners.
• Have close relationships with third parties.
• Extend responsibilities to include the growing challenges in
the mobile workforce.
• Manage virtualized environments and third-party ecosystems.
• Find and/or develop cyber security talent and human capital.
• Have a strategy to integrate millennials with baby boomer
and Gen X managers.
10
Digital Transformation and Changes in Consumer Behavior
Introduction
Digital transformation is one of the most significant activities of the
early twenty-first century. Digital transformation is defined as “the changes associated with the applications of digital technology in all aspects of human society” (Stolterman & Fors, 2004, p. 689). From a
business perspective, digital transformation enables organizations to
implement new types of innovations and to rethink business processes
that can take advantage of technology. From this perspective, digital
transformation involves a type of reengineering, but one that is not
limited to rethinking just how systems work together, but rather, that
extends to the entire business itself. Some see digital transformation
as the elimination of paper in organizations. Others see it as revamp-
ing a business to meet the demands of a digital economy. This chapter
provides a link between digital transformation and what I call “digital reengineering.” To explain this better, think of process reengineering as the generation that brought together systems in the way that they talked to one another; that is, the integration of legacy systems with newer applications that used more robust software.
The advent of digital transformation requires the entire organization
to meet the digital demands of their consumers. For some companies, the
consumer is another company (B2B, or business-to-business), that is, the
consumer is a provider to another company that inevitably supports a con-
sumer. For other businesses, their consumer is indeed the ultimate buyer.
I will discuss the differences in these two types of consumer concepts later
in this chapter. What is important from an IT perspective is that reengineering is no longer limited to just the needs of the internal user, but rather extends to the needs of the business's consumer as well. So, systems must change,
as necessary, with the changes in consumer behavior. The challenge with
doing this, of course, is that consumer needs are harder to obtain and
understand, and can differ significantly among groups, depending on
variables, such as ethnicity, age, and gender, to name just a few.
As a result, IT managers need to interact with the consumer more
directly and in partnership with their business colleagues. The con-
sumer represents a new type of user for IT staff. The consumer, in
effect, is the buyer of the organization's products and services. The
challenge becomes how to get IT more engaged with the buyer com-
munity, which could require IT to be engaged in multiple parts of the business that deal with the consumer. Below are six approaches,
which are not mutually exclusive of each other:
1. Sales/Marketing: These individuals sell to the company's buyers. Thus, they have a good sense of what customers are looking for, what things they like about the business, and what they dislike. The power of the sales and marketing team is their ability to drive realistic requirements that directly impact revenue opportunities. The limitation of this resource is that it still relies on an internal perspective of the consumer; that is, how the sales and marketing staff perceive the consumer's needs.
2. Third-party market analysis/reporting: There are outside resources available that examine and report on market trends
within various industry sectors. Such organizations typically
have massive databases of information and, using various
search and analysis tools, can provide a better understand-
ing of the behavior patterns of an organization’ s consumers.
These third parties can also provide reports that show how the
organization stacks up against its competition and why con-
sumers may be choosing alternative products. Unfortunately,
if the data is inaccurate it likely will result in false generaliza-
tions about consumer behavior, so it is critical that IT digital
leaders ensure proper review of the data integrity.
3. Predictive analytics: This is a hot topic in today's competitive
landscape for businesses. Predictive analytics is the process
of feeding off large data sets (big data) and predicting future
253 Digital Transformation
behavior patterns. Predictive analytics approaches are usually
handled internally with assistance from third-party products
or consulting services. The limitation is one of risk: the risk that the prediction does not occur as planned.
4. Consumer support departments: Internal teams and external vendors (outsourced managed services) have a good pulse on consumer preferences because they interact with them. More specifically, these departments respond to questions, handle problems, and get feedback from consumers on a regular basis. These support departments typically depend on applications to help the buyer. As a result, they are an excellent resource for identifying, on an up-to-date basis, what the system does not provide consumers. Unfortunately, consumer support organizations limit their needs to what they experience as opposed to what might be future trends of their consumers.
5. Surveys: IT and the business can design surveys (question-
naires) and send them to consumers for feedback. Using
surveys can be of significant value in that the questions can
target specific issues that the organization wants to address.
Survey design and administration can be handled by third-
party firms, which may have an advantage in that the ques-
tions are being forwarded from an independent source and
one that does not identify the interested company. On the
other hand, this might be considered a negative; it all
depends on what the organization is seeking to obtain from
the buyer.
6. Focus groups: This approach is similar to the use of a survey.
Focus groups are commonly used to understand consumer
behavior patterns and preferences. They are often conducted
by outside firms. The differences between the focus group and a survey are that (1) surveys are heavily quantitative and use scoring mechanisms (Likert scales) to evaluate outcomes; consumers sometimes may misinterpret the questions, resulting in distorted feedback; and (2) focus groups are more qualitative and allow IT digital leaders to engage with the consumer in two-way dialogues.
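As one hedged illustration of the predictive analytics approach in item 3, a minimal trend forecast can be fit to historical consumer data. The data series, function name, and least-squares method below are my own, chosen purely for illustration; real predictive analytics work uses far richer models and data:

```python
# Quarterly purchase counts for a product (hypothetical data).
history = [120, 135, 150, 168, 181]

def forecast_next(values):
    """Fit a least-squares line to the series and extrapolate one step."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # predicted value for the next period

print(round(forecast_next(history), 1))  # → 197.3
```

The risk noted in the text is visible even here: the forecast assumes the past trend continues, which the market is under no obligation to honor.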
Figure 10.1 reflects a graphic depiction of the sources for under-
standing consumer behaviors and needs.
Table 10.1 further articulates the methods and deliverables that IT
digital leaders should consider when developing system strategies.
Requirements without Users and without Input
Could it be possible to develop digital strategies and requirements for
a system without user input or even consumer opinions? Could this be
a reality for future design of strategic systems?
Perhaps we need to take a step back historically and think about
trends that have changed the competitive landscape. Digital trans-
formation may indeed be the most powerful agent of change in the
history of business.
Figure 10.1 Sources for understanding consumer behavior: product requirements from sales/marketing (staff competitive analysis); surveys (internal/external targeted consumers); consumer support departments (internal support groups, third-party call centers, shared-services organization); third-party studies and databases (trends, data analysts, predictive analytics); and focus groups (internal/external consumer sessions).
Table 10.1 Langer's Methods and Deliverables for Assessing Consumer Needs

SOURCE | METHOD | ANALYST'S DELIVERABLES
Sales/Marketing | Interviews | Should be conducted in a similar way to typical end-user interviews. Work closely with senior sales staff. Set up interviews with key business stakeholders.
Sales/Marketing | Win/loss sales reviews | Review the results of sales efforts. Many firms hold formal win/loss review meetings that may convey important limitations of current applications and system capabilities.
Third-Party Databases | Document report reviews | Obtain summaries of the trends in consumer behavior and pinpoint shortfalls that might exist in current applications and systems.
Third-Party Databases | Data analysis | Perform targeted analytics on databases to uncover trends not readily conveyed in available reports.
Third-Party Databases | Predictive analytics | Interrogate data by using analytic formulas that may enable predictive trends in consumer behavior.
Support Department | Interviews | Interview key support department personnel (internal and third party) to identify possible application deficiencies.
Support Department | Data/reports | Review call logs and recorded calls between consumers and support personnel to expose possible system deficiencies.
Surveys | Internal and external questionnaires | Work with internal departments to determine application issues when they support consumers. Use similar surveys with select populations of customers to validate and fine-tune internal survey results. Use similar surveys targeted to consumers who are not customers and compare results. Differences between the existing customer base and non-customers may expose new trends in consumer needs.
Focus Groups | Internal and external sessions | Internal focus groups can be facilitated by marketing personnel. Survey results that had unexpected or mixed feedback can be reviewed. Internal attendees should come from operations management and sales. External focus groups should be facilitated by a third-party vendor and held at independent sites. Discussions with customers should be compared with internal focus group results. Consumer focus groups should be facilitated by professional third-party firms.

We have seen large companies lose their edge. IBM's fall as the leading technology firm in the 1990s is an excellent example, when Microsoft overtook them. Yet Google was able to take the lead away from Microsoft, particularly in relation to analytical consumer computing. And what about the comeback Apple made with its new array of smart phone-related products? The question is: why and how do
these shifts in competitive advantage occur so quickly?
Technology continues to generate change, and that change is typically referred to today as a “digital disruption.” The challenge in disruption is the inability to predict what consumers want and need; furthermore, the consumer may not know! The challenge,
then, is for IT digital leaders to forecast the changes that are
brought about by technology disruptions. So, digital transforma-
tion is more about predicting consumer behavior and providing
new products and services, which we hope consumers will want.
This is a significant challenge for IT leaders, of course, given that
the profession was built on the notion that good specifications
accurately depicted what users want. Langer (1997) originally defined this as the “Concept of the Logical Equivalent.” So, we may have created an oxymoron: how do we develop systems that
the user cannot specify? Furthermore, requirements that depict
consumer behavior are now further complicated by the globaliza-
tion of business. Which consumer behavior are we attempting to
satisfy and across what societal cultural norms? The reality is that
new software applications will need to be built with some uncer-
tainty. That is, some business rules may be vague and risks will
need to be part of the process of system functionality. To see an
example of designing systems based on uncertainty, we need only
to analyze the evolution of the electronic spreadsheet. The first
electronic spreadsheet, called VisiCalc, was introduced by a com-
pany called VisiCorp. It was designed for the Apple II and eventu-
ally the IBM personal computer. The electronic spreadsheet was
not designed based on consumer input per se, rather on perceived
needs by visionary designers who saw a need for a generic calcula-
tor and mathematical worksheet. VisiCorp took a risk by offer-
ing a product to the market that consumers would find useful. Of
course, history shows that it was a very good risk. The electronic spreadsheet, which is now dominated by Microsoft's Excel product, has gone through multiple product generations. The inventors of the electronic spreadsheet had a vision and the market responded favorably. Although VisiCorp's vision of the market need was correct, the first version was hardly a 100% accurate reflection of what consumers would want in a spreadsheet. For example, additional features, such
as a database interface, three-dimensional spreadsheets to support
budgeting and forward referencing, are all examples of responses
from consumers that resulted in new product enhancements.
Allen and Morton (1994) established an excellent graphic depiction of the relationship between technology advancements and market needs (Figure 10.2).
Figure 10.2 shows an interesting life cycle of how product innovations
relate to the creation of new products and services. The diagram reflects
that innovations can occur as a result of new technology capabilities or
inventions that establish new markets— like the electronic spreadsheet.
On the other hand, the market can demand more features and functions, and technology organizations or developers need to respond, as with the upgrades made over the years to spreadsheet applications. Responding to market needs is what most organizations have practiced over the past
60 years, usually working with their end user populations (those internal
users that supported the actual consumer). The digital revolution, however, is placing more emphasis on “generic” applications that resemble the object paradigm (one that requires applications to be able to fit into any business application). This trend will drive new and more advanced object-driven applications. These applications will reside in a more robust object-functioning library that can dynamically link these modules together to form specific applications that can support multiple consumer devices (what is now being called the “Internet of Things”).
Another useful approach to dealing with consumer preferences is Porter's Five Forces Framework. Porter's framework consists of the following five components:
1. Competitors: What is the number of competitors in the market and what is the organization's position within the market?
Figure 10.2 Technology, innovation, and market needs.
2. New entrants: What companies can come into the organization's space and provide competition?
3. Substitutes: What products or services can replace what you do?
4. Buyers: What alternatives do buyers have? How close and tight is the relationship between the buyer and seller?
5. Suppliers: What is the number of suppliers that are available, which can affect the relationship with the buyer and also determine price levels?
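The five forces above can be recorded and compared in a simple scoring sheet. The sketch below is hypothetical; the 1-to-5 scale and the numbers are my own, for illustration only:

```python
# Score each of Porter's five forces from 1 (weak pressure) to 5 (strong pressure).
# Hypothetical scores for an imaginary firm, for illustration only.
forces = {
    "competitors": 4,
    "new_entrants": 3,
    "substitutes": 2,
    "buyers": 5,
    "suppliers": 2,
}

# A simple average gives a rough read of overall competitive pressure,
# and the maximum flags where the analyst should focus first.
pressure = sum(forces.values()) / len(forces)
strongest = max(forces, key=forces.get)
print(f"overall pressure {pressure:.1f}, strongest force: {strongest}")
# → overall pressure 3.2, strongest force: buyers
```

Even this crude tally makes the framework actionable: the strongest force points the analyst toward the input sources listed later in Table 10.2.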
Porter's framework is graphically depicted in Figure 10.3.
Cadle et al. (2014) provide an approach to using Porter's model as part of the analysis and design process. Their approach is integrated with Langer's Analysis Consumer Methods in Table 10.2.
Concepts of the S-Curve and Digital
Transformation Analysis and Design
Digital transformation will also be associated with the behavior of the
S-curve. The S-curve has been a long-standing economic graph that
depicts the life cycle of a product or service. The S-curve is shown in
Figure 10.4.
Figure 10.3 Porter's Five Forces Framework.
The left and lower portion of the S-curve represents a growing
market opportunity that is likely volatile and exists where demand
exceeds supply. As a result, the market opportunity is large and prices
for the product are high. Thus, businesses should seek to capture as much market share as possible at this time, before competitors catch up.
This requires the business to take more risk and assumes that the mar-
ket will continue to demand the product. The shape of the S-curve
suggests the life of this opportunity (the length of the x-axis repre-
sents the lifespan of the product).
As the market approaches the center of the S-curve,
demand begins to equal supply. Prices start to drop and the market, in
general, becomes less volatile and more predictable. The drop in price
reflects the presence of more competitors. As a product or service
approaches the top of the S, supply begins to exceed demand. Prices
begin to fall and the market is said to have reached maturity. The
uniqueness of the product or service is now approaching commodity status.
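The S-curve described above is commonly modeled with a logistic function. The sketch below is my illustration of the three phases just described; the capacity, rate, and midpoint parameters are hypothetical, not from the text:

```python
import math

def s_curve(t: float, capacity: float = 100.0, rate: float = 1.0, midpoint: float = 5.0) -> float:
    """Logistic curve: cumulative market adoption of a product at time t."""
    return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

# Early phase (t=0): adoption is small but growing fast; demand exceeds supply.
# Midpoint (t=5): demand roughly equals supply; growth is at its peak rate.
# Top (t=10): adoption flattens near capacity; the product nears commodity status.
for t in (0, 5, 10):
    print(t, round(s_curve(t), 1))
```

Reading a product's position on such a curve is what lets the analyst match requirement sources and risk levels, as Table 10.3 does.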
Table 10.2 Langer's Analysis Consumer Methods

PORTER'S FIVE FORCES | CADLE ET AL.'S APPROACH | LANGER'S SOURCES OF INPUT
Industry competitors | How strong is your market share? | Third-party market studies
New entrants | New threats | Third-party market studies; surveys and focus groups
Suppliers | Price sensitivity and closeness of relationship | Consumer support and end-user departments
Buyers | Alternative choices and brand equity | Sales/marketing team
Substitutes | Consumer alternatives | Surveys and focus groups; sales and marketing team; third-party studies
Figure 10.4 The S-curve.
Typically, suppliers will attempt to produce new features and func-
tions to extend the life of the curve, as shown in Figure 10.5.
Establishing a new S-curve, then, extends the competitive life of
the product or service. Once the top of the S-curve is reached, the
product or service has reached the commodity level, where supply is
much greater than demand. Here, the product or service has likely
reached the end of its useful competitive life and should either be
replaced with a new solution or considered for outsourcing to a third party who can deliver the product at a very low price.
Langer’ s Driver/Supporter depicts the life cycle of any application
or product, as shown in Figure 10.6.
Organizational Learning and the S-Curve
When designing a new application or system, the status of that
product’ s S-curve should be carefully correlated to the source of the
Figure 10.5 Extended S-curve.
Figure 10.6 Langer's driver/supporter life cycle (technology driver, evaluation cycle, driver maturation, support status, economies of scale, and replacement or outsourcing, with mini-loop technology enhancements).
requirements. Table 10.3 reflects the corresponding market sources
and associated risk factors relating to the dependability of requirements based on the state of the consumer's market. Leaders engaged
in this process obviously need to have an abstract perspective to sup-
port a visionary and risk-oriented strategy. Table 10.3 includes the
associated complexity of staff needed to deal with each period in the
S-curve.
Communities of Practice
As stated in Chapter 4, Communities of Practice (COP) have been
traditionally used as a method of bringing together people in orga-
nizations with similar talents, responsibilities and/or interests. Such
communities can be effectively used to obtain valuable information
about the way things work and what is required to run business opera-
tions. Getting such information strongly correlates to the challenges of
obtaining dependable information from the consumer market. I dis-
cussed the use of surveys and focus groups earlier in this chapter, but
COP is an alternative approach to bringing together similar types of
consumers grouped by their interests and needs. In digital transforma-
tion we find yet another means of obtaining requirements by engaging
in, and contributing to, the practices of specific consumer communities.
This means that working with COP offers another way of developing
relations with consumers to better understand their needs. Using this
Table 10.3 S-Curve, Application Requirement Sources, and Risk

S-CURVE STATUS | ANALYSIS INPUT SOURCE | RISK FACTOR
Early S-curve | Consumer | High; market volatility and uncertainty.
High S-curve | Consumer | Lower; market is less uncertain as the product becomes more mature.
High S-curve | End users | Medium; business users have experience with consumers and can provide reasonable requirements.
Crest of the S-curve | End users | Low; business users have more experience as the product becomes mature.
Crest of the S-curve | Consumer | High; might consider new features and functions to keep the product more competitive. Attempt to establish a new S-curve.
End of S-curve | End users | None; seek to replace the product or consider a third-party product to replace what is now a legacy application. Also think of outsourcing the application.
approach inside an organization, as we saw in Chapter 4, provides a
means of better learning about issues by using a sustained method of
remaining interconnected with specific business user groups, which can
define what the organization really knows and contributes to the busi-
ness that is typically not documented. IT digital leaders need to become
engaged in learning if they are to truly understand what is needed to
develop more effective and accurate software applications.
It seems logical that COP can provide the mechanism to assist IT
digital leaders with an understanding of how business users and con-
sumers behave and interact. Indeed, the analyst can target the behavior
of the community and its need to consider what new organizational
structures can better support emerging technologies. I have, in many
ways, already established and presented what should be called the “community of IT digital leaders” and its need to understand how to
restructure, in order to meet the needs of the digital economy. This new
era does not lend itself to the traditional approaches to IT strategy, but
rather to a more risk-based process that can deal with the realignment
of business operations integrated with different consumer relationships.
The relationship, then, between COP and digital transformation is
significant, given that future IT applications will heavily rely on informal inputs. While there may be attempts to computerize knowledge using predictive analytics software and big data, these tools will not capture all of the risk-associated behaviors of users and consumers. That is, a “structured” approach to creating predictive behavior reporting is typically difficult to establish and maintain. Ultimately, the dynamism of digital transformation creates too many uncertainties about how organizations will react to digital change variables for sophisticated automated applications to handle alone. So, COP, along
with these predictive analytics applications, provides a more thorough
umbrella of how to deal with the ongoing and unpredictable interac-
tions established by emerging digital technologies.
The IT Leader in the Digital Transformation Era
When we discuss the digital world and its multitude of effects on how
business is conducted, one must ask how this impacts the profession
of IT Leader. This section attempts to address the perceived evolution
of the role.
1. The IT leader must become more innovative. While the
business has the problem of keeping up with changes in
their markets, IT needs to provide more solutions. Many
of these solutions will not be absolute and likely will have
short shelf lives. Risk is fundamental. As a result, IT lead-
ers must truly become “ business” leaders by exploring new
ideas from the outside and continually considering how
to implement the needs of the company's consumers. As
a result, the business analyst will emerge as an idea bro-
ker (Robertson & Robertson, 2012) by constantly pursuing
external ideas and transforming them into automated and
competitive solutions. These ideas will have a failure rate,
which means that companies will need to produce more
applications than they will inevitably implement. This will
certainly require organizations to spend more on software
development.
2. Quality requirements will be even more complex. In order to keep in equilibrium with the S-curve, the balance between quality and production will be a constant negotiation.
Because applications will have shorter life cycles and there
is pressure to provide competitive solutions, products will need to sense market needs and respond to them more quickly. As
a result, fixes and enhancements to applications will become
more inherent in the development cycle after products go
live in the market. Thus, the object paradigm will become
even more fundamental to better software development
because it provides more readily tested reusable applications
and routines.
3. Dynamic interaction among users and business teams will
require the creation of multiple layers of communities of prac-
tice. Organizations involved in this dynamic process must
have autonomy and purpose (Narayan, 2015).
4. Application analysis, design, and development must be treated
and managed as a living process; that is, it never ends until the
product is obsolete (supporter end). So, products must con-
tinually develop to maturity.
5. Organizations should never outsource a driver technology
until it reaches supporter status.
How Technology Disrupts Firms and Industries
The world economy is transforming rapidly from an analogue to a
digital-based, technology-driven society. This transformation requires businesses to move from a transactional relationship to one that is “interactional” (Ernst & Young, 2012). However, this analogue-to-digital transformation, while essential for a business to survive in the
twenty-first century, is difficult to accomplish. Langer's (2011) theory of responsive organizational dynamism (ROD), as discussed earlier in this book, is modified to show that successful adaptation of new digital technologies, called digital dynamism, requires cultural assimilation by the people who comprise the organization.
Dynamism and Digital Disruption
The effects of digital dynamism can also be defined as a form of disruption, or what is now being referred to as digital disruption.
Specifically, the big question facing many enterprises is how they can anticipate the unexpected threats brought on by technological advances that can devastate their business. There are typically two
disruption factors:
1. A new approach to providing products and services to the
consumer.
2. A strategy not previously feasible, now made possible using
new technological capabilities.
Indeed, disruption occurs when a new approach meets the right
conditions. Because technology shortens the time it takes to reach
consumers, the changes are occurring at an accelerated and exponen-
tial pace. As an example, the table below shows the significant accel-
eration of the time it takes to reach 50 million consumers:
Radio 38 years
Television 13 years
Internet 4 years
Facebook 3.5 years
Twitter 9 months
Instagram 6 months
Pokémon GO 19 days
The speed at which we can accelerate change has an inverse effect on the length of time the effect lasts. We use the S-curve to show how
digital disruption shortens the competitive life of new products and
services. Figure 10.7 represents how the S-curve is shrinking along
the x-axis, which measures the length or time period of the product/
service life.
Figure 10.7 essentially reflects that the life of a product or service is
shrinking, thus enterprises have less time to capture a market oppor-
tunity and far less time to enjoy the length of its competitive suc-
cess. As a result, business leaders are facing a world that is changing
at an accelerating rate and trying to cope with understanding how
new waves of “ disruptive” technologies will affect their business.
Ultimately, digital disruption shifts the way competitive forces deliver
services, requires change in the way operations are managed and mea-
sured, and shortens the life of any given product or service success.
Critical Components of a “Digital” Organization
A study conducted by Westerman et al. (2014), who interviewed 157
executives in fifty large companies, found four capabilities that were
key to successful digital transformation:
1. A unified digital platform: Integration of the organization's
data and processes across its department silos is critical. One
reason why web-based companies gain advantage over tradi-
tional competitors is their ability to use analytics and customer
personalization from central and integrated sources. Thus,
the first step toward a successful digital transformation is for
companies to invest in establishing central repositories of data
and common applications that can access the information.
Figure 10.7 The shrinking S-curve.
This centralization of digital data is key to competing globally
since firms must be able to move data to multiple locations
and use that data in different contexts.
2. Solution delivery: Many traditional IT departments are not
geared to integrating new processes into their legacy operations.
A number of firms have addressed this problem by
establishing independent “innovation centers” designed to
initiate new digital ideas that are more customer-solution
oriented. These centers typically focus on how new mobile and
social media technologies can be launched without disturb-
ing the core technology systems that support the enterprise.
Some of these initiatives include partnerships with high-tech
vendors; however, a number of executives have shown concern
that such alliances might result in dependencies because of
the lack of knowledge inside the organization.
3. Analytics capabilities: Companies need to ensure that their
data can be used for predictive analytics purposes. Predictive
analytics provides actors with a better understanding of their
consumers’ behaviors and allows them to formulate strategies
ahead of their competitors. Companies that better integrate
data from their transactional systems can make more “informed
and better decisions” and formulate strategies that take
advantage of customer preferences and thus turn them into
business opportunities. An example is an insurance company
initiative that concentrates on products that meet customer
trends determined by examining historical transactions
across various divisions of the business. Analytics also helps
organizations to develop risk models that can assist them in
formulating accurate portfolios.
4. Business and IT integration: While the integration of the IT
department with the business has been discussed for decades,
few companies have achieved the desired outcome (Langer,
2016). The need for digital transformation has now made this
integration essential for success and for avoiding becoming a
victim of disruption. True IT and business integration means
more than combining processes and decision making; it requires
the actual movement of personnel into business units
so they can be culturally assimilated (Langer & Yorks, 2013).
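The analytics capability described in point 3 can be sketched in a few lines. The records, field names, and products below are hypothetical, invented purely for illustration; the sketch only shows the shape of the idea echoed in the insurance example: mining transactions across business divisions for product trends and for customers who are candidates for cross-division offers.

```python
from collections import Counter

# Hypothetical transactional records drawn from different business divisions
transactions = [
    {"customer": "C1", "division": "auto", "product": "collision-plus"},
    {"customer": "C1", "division": "home", "product": "flood-rider"},
    {"customer": "C2", "division": "auto", "product": "collision-plus"},
    {"customer": "C3", "division": "auto", "product": "collision-plus"},
    {"customer": "C3", "division": "life", "product": "term-20"},
]

def product_trends(records):
    """Rank products by purchase frequency across all divisions."""
    return Counter(r["product"] for r in records).most_common()

def cross_division_customers(records):
    """Customers active in more than one division: candidates for bundled offers."""
    divisions = {}
    for r in records:
        divisions.setdefault(r["customer"], set()).add(r["division"])
    return sorted(c for c, d in divisions.items() if len(d) > 1)

print(product_trends(transactions)[0])         # ('collision-plus', 3)
print(cross_division_customers(transactions))  # ['C1', 'C3']
```

Real predictive-analytics initiatives would of course model behavior statistically rather than by counting, but the precondition is the same one the study identifies: transactional data integrated across divisional silos.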
Assimilating Digital Technology Operationally and Culturally
When considering how to design an organization structure that
can implement digital technologies, firms must concentrate on how
to culturally assimilate a new architecture. The importance of the
architecture first affects the strategic integration component of
ROD. Indeed, the actor-oriented architecture must be designed to
be agile enough to react to increased changes in market demands.
The consumerization of technology, defined as changes in technology
brought on by increased consumer knowledge of how digital
assets can reduce costs and increase competitive advantage, has
created a continual reduction in the life of any new competitive
product or service. Thus, consumerization has increased what
Eisenhardt and Bourgeois (1988) define as “high-velocity” market
conditions.
This dilemma drives the challenge of how organizations will cope
with and avoid the negative effects of digital disruption. There are four
overall components that appear to be critical to insulating a firm from
disruption:
1. Companies must recognize that speed and comfort of service
can be more important than cost alone. Our experience is
that enterprises that offer multiple choices, allowing consumers
to choose from varying levels of service options, are
more competitive. The more personal the service option, the
higher the cost. Examples can be seen in the airline industry,
where passengers have options for better seats at a higher
price, or in a new option being offered by entertainment parks
that now provide shorter wait times for higher-paying
customers. These two examples match the price with
a desired service, and firms that do not offer creative pricing
options are prime targets for disruption.
2. Empower your workforce to try new ideas without excessive
controls. Companies are finding that many young employees have
new service ideas but are blocked from trying them because
of the “old guard” in their management reporting lines. Line
managers need to be educated on how to allow their staffs to
quickly enact new processes, even though some of them may
not be effective.
3. Allow employees and customers to have a choice of devices.
Traditionally, IT departments have sought to create environments
where employees adhere to standard hardware and software
structures. Indeed, standard structures make it easier for IT
to support internal users and provide better security across
systems. However, as technology has evolved, the relationship
between hardware and software, especially in mobile devices,
has become more specialized. For example, Apple smartphones
have proprietary hardware architectures that in many
cases require different versions of application software, as well
as different security considerations, than their major competitor,
Samsung. With the consumerization of technology, IT
departments must now support multiple devices because
both their customers and employees are free to select them.
Therefore, it is important to allow staff to freely integrate
company applications with their personal device choices.
4. Similar to (3), organizations that force staff to adhere to strict
processes and support structures are exposed to digital disruption.
Organizational structures that rely on technological
innovation must be able to integrate new digital opportuni-
ties seamlessly into their current production and support pro-
cesses. Specifically, this means having the ability to be agile
enough to provide services using different digital capabilities
and from different geographical locations.
Conclusion
This chapter has examined a number of different and complex aspects
of digital transformation: its effects on how organizations are structured
and on how they need to compete to survive in the future. The
technology executive is, by default, the key person to lead these digital
transformation initiatives because of the technical requirements
that are at the center of successfully completing these projects. As
such, these executives must also focus on their own transformation as
leaders, which allows them to help form the strategic goals needed to meet
the dynamic changes in consumer behavior.
11
Integrating Generation Y Employees to Accelerate Competitive Advantage
Introduction
This chapter focuses on Gen Y employees, who are also known as
“digital natives” and “millennials.” Gen Y employees possess the
attributes to assist companies in transforming their workforce to
meet the accelerated change in the competitive landscape. Most
executives across industries recognize that digital technologies are
the most powerful variable for maintaining and expanding company
markets. Gen Y employees provide a natural fit for dealing with
emerging digital technologies. However, success with integrating
Gen Y employees is contingent upon baby boomer and Gen X management
adopting new leadership philosophies and procedures suited
to meet the expectations and needs of these new workers. Ignoring
the unique needs of Gen Y employees will likely result in an incongruent
organization that suffers high turnover of young employees,
who will seek more entrepreneurial environments.
I established in Chapter 10 that digital transformation is at the
core of change and competitive survival in the twenty-first century.
Chapter 10 did not, however, address the changes in personnel that
are quickly becoming major issues at today’s global firms. While I
offered changes to organizational structures, I did not address the
mixture of different generations that form the fabric of any typical
organization. This chapter is designed to discuss how these multiple
generations need to “learn” how to work together to form productive
and effective organizations that can compete in the digital economy.
Furthermore, this chapter will address how access to human capital
will change in the future and the different types of relationships
that individuals will have with employers. For example, the “gig”
economy will use non-traditional outside workers who will provide
sources of talent for shorter-term employment needs. Indeed, the
gig economy will require HR and IT leaders to form new and intricate
employee relationships.
As discussed in Chapter 10, companies need to transform their
business from an analogue one to one that uses digital technologies. Such
transformation requires moving from a transactional relationship
with customers to one that is more “interactional” (Ernst &
Young, 2012). Completing an analogue-to-digital transformation,
while essential for a business to survive in the twenty-first century,
is difficult to accomplish. Responsive organizational dynamism
(ROD) showed us that successful adaptation of new digital tech-
nologies requires strategic integration and cultural assimilation of
the people that comprise the organization. As stated earlier, these
components of ROD can be categorized as the essential roles and
responsibilities of the organization that are necessary to utilize
new technological inventions that can strategically be integrated
within a business entity. The purpose here is to explore why Gen
Y employees need to be integrated with baby boomers and Gen X
staff to effectively enhance the success of digital transformation
initiatives.
The Employment Challenge in the Digital Era
Capgemini and MIT (2013) research shows that organizations need
new operating models to meet the demands of a digital-driven era.
Digital tools have provided leaders with ways to connect at an unprec-
edented scale. Digital technology has allowed companies to invade
other spaces previously protected by a business’s “asset specificities”
(Tushman & Anderson, 1997), which are defined as advantages
enjoyed by companies because of their location, product access, and
delivery capabilities. Digital technologies allow those specificities to
be neutralized and thus change the previous competitive balances
among market players. Furthermore, digital technology acceler-
ates this process, meaning that changes in market share occur very
quickly. The research offers five key indicators that support successful
digital transformation in a firm:
1. A company’s strategic vision is only as effective as the people
behind it. Thus, winning the minds of all levels of the organization
is required.
2. To become digital is to be digital. Companies must have a
“one-team culture” and raise their employees’ digital IQ.
3. A company must address the scarcity of talented resources
and look more to using Gen Y individuals because they have
a more natural adaptation to take on the challenges of digital
transformation.
4. Resistant managers are impediments to progress and can
actually stop digital transformation.
5. Digital leadership starts at the top.
As stated in Chapter 10, Eisenhardt and Bourgeois (1988) first
defined dynamically changing markets as “high-velocity.” Their
research showed that high-velocity conditions existed in the technology
industry during the early 1980s in Silicon Valley in the United
States. They found that competitive advantage was highly dependent
on the quality of the people who worked at those firms. Specifically, they
concluded that workers who were capable of dealing with change and
less subjected to a centralized, totalitarian management structure outperformed
those in more traditional, hierarchical organizational
structures. While “high-velocity” conditions were unusual during the
1980s, digital disruption in the twenty-first century has made them a market norm.
The combination of evolving digital business drivers with acceler-
ated and changing customer demands has created a business revolution
that best defines the imperative of the strategic integration component
of ROD. The changing and accelerated way businesses deal with their
customers and vendors requires a new strategic integration to become
a reality, rather than remain a concept without action. Most experts
see digital technology as the mechanism that will require business
realignment to create new customer experiences. The driving force
behind this realignment emanates from digital technologies, which
serve as the principal accelerator of the change in transactions across
all business units. The general need to optimize human resources
forces organizations to rethink and to realign business processes in
order to gain access to new business markets, which weakens
the existing “asset specificities” of the once-dominant market leaders.
Gen Y Population Attributes
Gen Y, or digital natives, are those people who are accustomed to the
attributes of living in a digital world and are 18–35 years old. Gen Y
employees are more comfortable with accelerated life changes, par-
ticularly change brought on by new technologies. Such individuals,
according to a number of commercial and academic research studies
(Johnson Controls, 2010; Capgemini, 2013; Cisco, 2012; Saxena &
Jain, 2012), have attributes and expectations in the workplace that
support environments that are flexible, offer mobility, and provide
collaborative and unconventional relationships. Specifically, millen-
nial workers
• want access to dedicated team spaces where they can have
emotional engagements in a socialized atmosphere;
• require their own space; that is, they are not supportive of a
“hoteling” existence in which they do not have a permanent office or
workspace;
• need a flexible life/work balance;
• prefer a workplace that supports formal and informal collab-
orative engagement.
Research has further confirmed that 79% of Gen Y workers pre-
fer mobile jobs, 40% want to drive to work, and female millennials
need more flexibility at work than their male counterparts. As a result
of this data, businesses will need to compete to recruit and develop
skilled Gen Y workers who now represent 25% of the workforce. In
India, while Gen Y represents more than 50% of the working popula-
tion, the required talent needed by businesses is extremely scarce.
Advantages of Employing Millennials to Support Digital Transformation
As stated, Gen Y adults appear to have many identities and capabilities
that fit well in a digital-driven business world. Indeed, Gen Y peo-
ple are consumers, colleagues, employees, managers, and innovators
(Johnson Controls, 2010). They possess attributes that align with the
requirements to be an entrepreneur: a person with technology savvy
and creativity, someone who works well in a mobile environment, and
someone nonconformist enough to drive change in an organization. Thus,
the presence of Gen Y personnel can help organizations to restrat-
egize their competitive position and to retain key talent (Saxena &
Jain, 2012). Furthermore, Gen Y brings a more impressive array of
academic credentials than their predecessors.
Most important is Gen Y’s ability to deal better with market
change, which inevitably affects organizational change. That is, the
digital world market will constantly require changes in organizational
structure to accommodate its consumers’ needs. A major reason for Gen
Y’s willingness to change is its natural alignment with a company’s
customers. Swadzba (2010) posits that we are approaching the end of
what he called the “work era” and moving into a new age based on
consumption. Millennials are more apt to see the value of their jobs
from their own consumption needs. Thus, they see employment as
an act of consumption (Jonas & Kortenius, 2014). Gen Y employees
therefore allow employers to acquire the necessary talent that can lead
to better consumer reputation, reduced turnover of resources and, ulti-
mately, increased customer satisfaction (Bakanauskiené et al., 2011).
Yet another advantage of Gen Y employees is their ability to transform
organizations that operate on a departmental basis into ones
based more on function, an essential requirement in a digital economy.
Integration of Gen Y with Baby Boomers and Gen X
The prediction is that 76 million baby boomers (born 1946– 1964)
and Gen X workers (born 1965– 1984) will be retiring over the next
15 years. The question for many corporate talent executives is how to
manage the transition in a major multigenerational workforce. Baby
boomers alone still inhabit the most powerful leadership positions in
the world. Currently, the average age of CEOs is 56, and 65% of all
corporate leaders are baby boomers. Essentially, corporations need to
produce career paths that will be attractive to millennials. Thus, the
older generation needs to
• Acknowledge that some of their preconceived perceptions of current
work ethics are simply not relevant in today’s complex
environments.
• Allow Gen Y employees to rise through the ranks to satisfy their
ambitions and sense of entitlement.
• Implement more flexible work schedules, offer telecommut-
ing, and develop a stronger focus on social responsibility.
• Support more advanced uses of technology, especially those
used by Gen Yers in their personal lives.
• Employ more mentors to help Gen Y employees to better
understand the reasons for existing constraints in the organi-
zations where they work.
• Provide more complex employee orientations, more timely
personnel reviews, and in general more frequent feedback
needed by Gen Y individuals.
• Establish programs that improve the verbal communication
skills of Gen Y workers, who are typically more comfortable
with nonverbal, text-based methods of communication.
• Implement more continual learning and rotational programs
that support a vertical growth path for younger employees.
In summary, it is up to the baby boomer and Gen X leaders to
modify their styles of management to fit the needs of their younger
Gen Y employees. The challenge to accomplish this objective is com-
plicated, given the wide variances on how these three generations
think, plan, take risks, and most important, learn.
Designing the Digital Enterprise
Zogby completed an interactive poll of 4,811 people on perceptions
of different generations. Forty-two percent of the respondents stated that baby
boomers would be remembered for their focus on consumerism and
self-indulgence. Gen Yers, on the other hand, are considered more self-interested,
entitled narcissists who want to spend all their time posting
“selfies” to Facebook. However, other facts offer an expanded
perception of these two generations, as shown in Table 11.1.
Research completed by Ernst and Young (2013) offers additional
comparisons among the three generations as follows:
1. Gen Y individuals are moving into management positions
faster due to retirements, lack of corporate succession plan-
ning, and their natural ability to use technology at work.
Table 11.2 shows percentage comparisons between 2008
and 2013.
The acceleration of growth to management positions among
Gen Y individuals can be further illuminated in Table 11.3 by
comparing the prior five-year period from 2003 to 2007.
2. While respondents to the survey felt Gen X were better
equipped to manage than Gen Y, the number of Gen Y managers
is expected to double by 2020 due to continued retirements.
Another interesting result of the research relates to
Gen Y expectations of their employers when they become
managers. Specifically, Gen Y managers expect (1) an opportunity
to have a mentor, (2) to receive sponsorship, (3) to have
more career-related experiences, and (4) to receive training to
build their professional skills.
3. Seventy-five percent of respondents who identified themselves
as managers agreed that managing multiple generations is
a significant challenge. This was attributed to different work
expectations and the lack of comfort with younger employees
managing older employees.
Table 11.4 provides additional differences among the three
generations:
Table 11.1 Baby Boomers versus Gen Y
Baby Boomers:
• Married later and had fewer children
• Spend lavishly
• More active and selfless
• Fought against social injustice, supported civil rights, and defied the Vietnam War
• Had more access to higher education
Gen Y:
• Not as aligned to political parties
• More civically engaged
• Socially active
• Cheerfully optimistic
• More concerned with quality of life than material gain
Table 11.2 Management Roles 2008–2013
Baby boomers (ages 49–67): 19%
Gen X (ages 33–48): 38%
Gen Y (ages 18–32): 87%
Table 11.3 Management Roles 2003–2007
Baby boomers (ages 49–67): 23%
Gen X (ages 33–48): 30%
Gen Y (ages 18–32): 12%
Assimilating Gen Y Talent from Underserved
and Socially Excluded Populations
The outsourcing of jobs outside of local communities to countries with
lower employment costs has continued to grow during the early part
of the twenty-first century. This phenomenon has led to significant
social and economic problems, especially in the United States and
in Western Europe as jobs continue to migrate to foreign countries
where there are lower labor costs and education systems that provide
more of the skills needed by corporations. Most impacted by the loss
of jobs have been the underserved or socially excluded Gen Y youth
populations. Indeed, the European average for young adult unemployment
(aged 15–25) in 2013 was nearly 25%, almost twice the
rate for their adult counterparts (Dolado, 2015). Much of the loss of
local jobs can be attributed to the expansion of the globalized economy,
which has been accelerated by continued technological advancements
(Wabike, 2014). Thus, the effects of technology gains have negatively
impacted efforts toward social inclusion and social equality.

Table 11.4 Baby Boomers, Gen X, and Gen Y Compared
Employment:
• Baby Boomers: Seek employment in large established companies that provide dependable employment.
• Gen X: Established companies are no longer a guarantee of lifetime employment; many jobs begin to go offshore.
• Gen Y: Seek multiple experiences, with heavy emphasis on social good and global experiences; re-evaluation of offshoring strategies.
Promotion:
• Baby Boomers: The process of promotion is well defined, hierarchical, and structured, eventually leading to promotion and higher earnings (the concept of waiting your turn).
• Gen X: The process of promotion is still hierarchical, but based more on skills and individual accomplishments; a master’s degree is now preferred for many promotions.
• Gen Y: Less patience with hierarchical promotion policies; more reliance on predictive analytics as the basis for decision making.
Education:
• Baby Boomers: Undergraduate degree preferred but not mandatory.
• Gen X: Undergraduate degree required for most professional job opportunities; more focus on specific skills.
• Gen Y: Multiple strategies developed to meet shortages of talent; higher education is expensive, and concerns increase about the value of graduate knowledge and abilities.
Career path:
• Baby Boomers: Plan a career preferably with one company and retire; acceptance of a gradual process of growth that was slow to change, with successful employees assimilated into existing organizational structures by following the rules.
• Gen X: Employees begin to change jobs more often, given growth in the technology industry and opportunities to increase compensation and accelerate promotion by switching jobs.
• Gen Y: Emergence of a “gig” economy and the rise of multiple employment relationships.
Entrepreneurism:
• Baby Boomers: Entrepreneurism was seen as an external option for individuals desiring wealth and independence and willing to take risks.
• Gen X: Corporate executives’ compensation dramatically increases, no longer requiring starting businesses as the basis for wealth.
• Gen Y: Entrepreneurism promoted in higher education as the basis for economic growth, given the loss of jobs in the U.S.
Langer, in 2003, established an organization called Workforce
Opportunity Services (WOS), as a means of utilizing a form of action
research using adult development theory to solve employment problems
caused by outsourcing. Langer’ s approach is based on the belief that
socially excluded youth can be trained and prepared for jobs in areas such
as information technology that would typically be outsourced to lower
labor markets. WOS has developed a talent-finding model that has suc-
cessfully placed over 1400 young individuals in such jobs. Results of over
12 years of operation and research have shown that talented youth in
disadvantaged communities do exist and that such talent can economi-
cally and socially contribute to companies (Langer, 2013). The following
section describes the Langer Workforce Maturity Arc (LWMA), pres-
ents data on its effectiveness as a transformative learning instrument,
and discusses how the model can be used as an effective way of recruit-
ing Gen Y talent from underserved and socially excluded populations.
Langer Workforce Maturity Arc
The Langer Workforce Maturity Arc (LWMA) was developed to help
evaluate socially excluded youth preparation to succeed in the workplace.
The LWMA, initially known as the Inner-City Workplace Literacy Arc:
charts the progression of underserved or ‘excluded’ individuals along
defined stages of development in workplace culture and skills in relation
to multiple dimensions of workplace literacy such as cognitive growth
and self-reflection. When one is mapped in relation to the other (workplace
culture in relation to stages of literacy assimilation), an Arc is
created. LWMA traces the assimilation of workplace norms, a form of
individual development. (Langer, 2003: 18)
The LWMA addresses one of the major challenges confronting an
organization’s HR group: finding talent from diverse local populations
that can successfully respond to evolving business norms, especially those
related to electronic and digital technologies. The LWMA provides a
method for measuring the assimilation of workplace cultural norms and
thus, can be used to meet the mounting demands of an increasingly
global, dynamic, and multicultural workplace. Furthermore, if organi-
zations are to attain acceptable quality of work from diverse employees,
assimilation of socially or economically excluded populations must be
evaluated based on (1) whether and how individuals adopt workplace cultural
norms, and (2) how they become integrated into the business (Langer,
2003). Understanding the relationship between workplace assimilation
and its development can provide important information on how
to secure the work ethic, dignity, solidarity, culture, cognition, and
self-esteem of individuals from disadvantaged communities, and their
salient contributions to the digital age.
Theoretical Constructs of the LWMA
The LWMA encompasses sectors of workplace literacy and stages of
literacy development, and the arc charts business acculturation requirements
as they pertain to disadvantaged young adult learners. The
relationship between workplace assimilation and literacy is a chal-
lenging subject. A specific form of literacy can be defined as a social
practice that requires specific skills and knowledge (Rassool, 1999).
In this instance, workplace literacy addresses the effects of workplace
practices and culture on the social experiences of people in their work-
day, as well as their everyday lives. We need to better understand how
individual literacy in the workplace, which subordinates individuality
to the demands of an organization, is formulated for diverse groups
(Newman, 1999). Most important are the ways in which one learns
how to behave effectively in the workplace— the knowledge, skill, and
attitude sets required by business generally, as well as by a specific
organization. This is particularly important in disadvantaged commu-
nities, which are marginalized from the experiences of more affluent
communities in terms of access to high-quality education, informa-
tion technologies, job opportunities, and workplace socialization. For
example, Friedman et al. (2014) postulate that the active involvement
of parents in the lives of their children greatly impacts a student’s
chances of success. It is the absence of this activism that contributes
to a system of social exclusion of youth. Prior to determining what
directions to pursue in educational pedagogies and infrastructures, it
is necessary to understand what workplace literacy requirements are
present and how they can be developed for disadvantaged youth in the
absence of the active support from families and friends.
The LWMA assesses individual development in six distinct sectors
of workplace literacy:
1. Cognition: Knowledge and skills required to learn and complete
job duties in the business world, including computational
skills; ability to read, comprehend, and retain written infor-
mation quickly; remembering and executing oral instructions;
and critically examining data.
2. Technology: An aptitude for operating various electronic and
digital technologies.
3. Business culture: Knowledge and practice of proper etiquette
in the workplace including dress codes, telephone and in-per-
son interactions, punctuality, completing work and meeting
deadlines, conflict resolution, deference and other protocols
associated with supervisors and hierarchies.
4. Socioeconomic values: Ability to articulate and act upon mainstream
business values, which shape the work ethic. Such
values include independent initiative, dedication, integrity, and
personal identification with career goals. Values are associated
with a person’s appreciation for intellectual life, cultural sensitivity
to others, and sensitivity for how others view their role
in the workplace. Individuals understand that they should
make decisions based on principles and evidence rather than
personal interests.
5. Community and ethnic solidarity: Commitment to the education
and professional advancement of persons in ethnic minority
groups and underserved communities. Individuals can use
their ethnicity to explore the liberating capacities offered in
the workplace without sacrificing their identity (i.e., they can
assimilate workplace norms without abandoning cultural,
ethnic, or self-defining principles and beliefs).
6. Self-esteem: The view that personal and professional success
work in tandem, and the belief in one’s capacity to succeed
in both arenas. This includes a devotion to learning and self-improvement.
Individuals with high self-esteem are reflective
about themselves and their potential in business. They accept
the realities of the business world in which they work and
can comfortably confirm their business disposition, independently
of others’ valuations.
Each stage in the course of an individual’s workplace development
reflects an underlying principle that guides the process of adopting
workplace norms and behavior. The LWMA is a classificatory scheme
that identifies progressive stages in the assimilated uses of workplace
literacy. It reflects the perspective that an effective workplace partici-
pant is able to move through increasingly complex levels of thinking
and to develop independence of thought and judgment (Knefelkamp,
1999). The profile of an individual who assimilates workplace norms
can be characterized in five developmental stages:
1. Concept recognition: The first stage represents the capacity to
learn, conceptualize, and articulate key issues related to the
six sectors of workplace literacy. Concept recognition provides
the basis for becoming adaptive to all workplace requirements.
2. Multiple workplace perspectives: This refers to the ability to
integrate points of view from different colleagues at various
levels of the workplace hierarchy. By using multiple perspec-
tives, the individual is in a position to augment his or her
workplace literacy.
3. Comprehension of business processes: Individuals increase their
understanding of workplace cooperation, competition, and
advancement as they build on their recognition of busi-
ness concepts and workplace perspectives. They increasingly
understand the organization as a system of interconnected
parts.
4. Workplace competence: As assimilation and competence increase,
the individual learns not only how to perform a particular
job adequately but also how to conduct himself or herself
professionally within the workplace and the larger business environment.
5. Professional independence: Individuals demonstrate the ability
to employ all sectors of workplace literacy to compete effec-
tively in corporate labor markets. They obtain more respon-
sible jobs through successful interviewing and workplace
performance and demonstrate leadership abilities, leading to
greater independence in career pursuits. Professionally inde-
pendent individuals are motivated and can use their skills for
creative purposes (Langer, 2009).
The LWMA is a rubric that charts an individual’s development
across the six sectors of workplace literacy. Each cell within the matrix
represents a particular stage of development relative to that sector of
workplace literacy, and each cell contains definitions that can be used
to identify where a particular individual stands in his or her develop-
ment of workplace literacy.
The LWMA and Action Research
While the LWMA serves as a framework for measuring growth,
the model also uses reflection-with-action methods, a component
of action research theory, as the primary vehicle for assisting young
adults to develop the necessary labor market skills to compete for a
job and inevitably achieve some level of professional independence
(that is, the ability to work for many employers because of achiev-
ing required market skills).

Stages of Workplace Literacy (the LWMA matrix)
Sectors of workplace literacy (rows): Cognition; Technology; Business
Culture; Socio-Economic Values; Community and Ethnic Solidarity;
Self-Esteem.
Stages of workplace literacy (columns): Concept Recognition; Multiple
Workplace Perspectives; Comprehension of Business Processes;
Workplace Competence; Professional Independence.

Reflection-with-action is used as a rubric
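Where a worked example helps, the LWMA grid lends itself to a small data structure. In the sketch below, the sector and stage names come from the matrix in the text, but the assessment rule (an individual's overall development is bounded by the sector that lags furthest behind) is a hypothetical illustration, not part of Langer's model.

```python
# Illustrative sketch of the LWMA as a sectors-by-stages grid.
# Sector and stage names are from the text; the assessment rule
# below is a hypothetical example, not part of Langer's model.

SECTORS = [
    "Cognition",
    "Technology",
    "Business Culture",
    "Socio-Economic Values",
    "Community and Ethnic Solidarity",
    "Self-Esteem",
]

STAGES = [
    "Concept Recognition",
    "Multiple Workplace Perspectives",
    "Comprehension of Business Processes",
    "Workplace Competence",
    "Professional Independence",
]

def assess(profile):
    """Return the least mature stage across all six sectors.

    `profile` maps each sector to a stage name; here we assume an
    individual's overall development is bounded by the sector that
    lags furthest behind.
    """
    indices = [STAGES.index(profile[sector]) for sector in SECTORS]
    return STAGES[min(indices)]

profile = {sector: "Workplace Competence" for sector in SECTORS}
profile["Technology"] = "Concept Recognition"
print(assess(profile))  # prints "Concept Recognition"
```

Each cell of the real matrix carries a prose definition; a fuller implementation would attach those definitions to (sector, stage) pairs rather than stage names alone.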
for a variety of methods, involving reflection in relation to learning
activities. Reflection has received a number of definitions from differ-
ent sources in the literature. Here, “reflection-with-action” carries the
resonance of Schön’s (1983) twin constructs: “reflection-on-action”
and “reflection-in-action,” which emphasize (respectively) reflec-
tion in retrospect and reflection to determine what actions to take in
the present or immediate future (Langer, 2003). Dewey (1933) and
Hullfish and Smith (1978) also suggest that the use of reflection sup-
ports an implied purpose. Their formulation suggests the possibility
of reflection that is future oriented, what we might call “reflection-to-
action.” These are methodological orientations covered by the rubric.
Reflection-with-action is critical to the educational and workplace
assimilation process of Gen Y. While many people reflect, it is in
being reflective that people bring about “an orientation to their every-
day lives” (Moon, 2000). The LWMA incorporates reflection-with-
action methods as fundamental strategies for facilitating development
and assimilation. These methods are also implemented interactively,
for example in mentoring, reflective learning journals, and group dis-
cussions. Indeed, as stated by De Jong (2014), “Social exclusion is
multi-dimensional, ranging from unemployment, barriers to educa-
tion and health care, and marginalized living circumstances” (p. 94).
Ultimately, teaching socially excluded youth to reflect-with-action is
the practice that will help them mature across the LWMA stages
and inevitably, achieve levels of inclusion in the labor market and in
citizenship.
Implications for New Pathways for Digital Talent
The salient implications of the LWMA, as a method of discover-
ing and managing disadvantaged Gen Y youth in communities,
can be categorized across three frames: demographic shifts in talent
resources, economic sustainability, and integration and trust among
vested local interest groups.
Demographic Shifts in Talent Resources
The LWMA can be used as a predictive analytic tool for capturing
and cultivating the abilities in the new generation of digital natives
from disadvantaged local communities. This young talent has the
advantage of more exposure to technologies, which senior workers
had to learn later in their careers. This puts them ahead of the curve
with respect to basic digital skills. Having the capacity to employ tal-
ent locally and provide incentives for these individuals to advance can
alleviate the significant strain placed on firms that suffer from high
turnover in outsourced positions. Investing in viable Gen Y under-
served youth can help firms close the skills gap that is prevalent in the
emerging labor force.
Economic Sustainability
As globalization ebbs and flows, cities need to establish themselves
as global centers, careful not to slip into market obsolescence, espe-
cially when facing difficulties in labor force supply chains. In order to
alleviate the difficulty in supplying industry-ready professionals to a
city only recently maturing into the IT-centric business world, firms
need to adapt to an “on-demand” gig approach. The value drawn from
this paradigm lies in its cyclical nature. By obtaining localized human
capital at a lower cost, firms can generate a fundable supply chain of
talent and diversity as markets change over time.
Integration and Trust
Porter and Kramer (2011) postulate that companies need to formulate
a new method of integrating business profits and societal responsi-
bilities. They state, “the solution lies in the principle of shared value,
which involves creating economic value in a way that also creates value
for society by addressing its needs and challenges” (p. 64). Porter and
Kramer suggest that companies need to alter corporate performance
to include social progress. The LWMA provides the mechanism, the-
ory, and measurement that is consistent with this direction and pro-
vides the vehicle that establishes a shared partnership of trust among
business, education, and community needs. Each of the interested
parties experiences progress toward its financial and social objectives.
Specifically, companies are able to attract diverse and socially excluded
local talent and have those constituents trained specifically for their
needs and for an economic return that fits their corporate models. As a result,
the community adds jobs, which reduces crime rates and increases tax
revenue. The funding corporation then establishes an ecosystem that
provides a shared value of performance that underserved and excluded
youth bring to the business.
Global Implications for Sources of Talent
The increasing social exclusion of Gen Y youth is a growing prob-
lem in almost every country. Questions remain about how to establish
systemic solutions that can create sustainable and scalable programs
that provide equity in access to education for this population. Such
access to education undoubtedly increases employability, which in
turn contributes to better citizenship for underserved youth.
Indeed, there is a widening gap between the “haves” and the “have-
nots” throughout the world. Firms can use tools like the LWMA to
provide a model that can improve educational attainment of under-
served youth by establishing skill-based certificates with universities,
coupled with a different employment-to-hire model. The results have
shown that students accelerate in these types of programs and ulti-
mately, find more success in labor market assimilation. The data sug-
gests that traditional degree programs that require full-time study at
university as the primary preparation for labor market employment
may not be the most appropriate approach to solving the growing
social inequality issue among youth.
Conclusion
This chapter has made the argument that Gen Y employees are “digi-
tal natives” who have the attributes to assist companies to transform
their workforce and meet the accelerated change in the competitive
landscape. Organizations today need to adapt their staff to operate
under the auspices of ROD by creating processes that can determine
the strategic value of new emerging technologies and establish a cul-
ture that is more “change ready.” Most executives across industries
recognize that digital technologies are the most powerful variable to
maintaining and expanding company markets.
Gen Y employees provide a natural fit for dealing with emerg-
ing digital technologies. However, success with integrating Gen Y
employees is contingent upon baby boomer and Gen X manage-
ment adopting new leadership philosophies and procedures that are
suited to meet the expectations and needs of millennials. Ignoring the
unique needs of Gen Y employees will likely result in an incongruent
organization that suffers high turnover of young employees who will
ultimately seek a more entrepreneurial environment. Firms should
consider investing in non-traditional Gen Y youth from underserved
and socially excluded populations as alternate sources of talent.
12
Toward Best Practices
Introduction
The previous chapters provided the foundation for the formation of
“best practices” to implement and sustain responsive organizational
dynamism (ROD). First, it is important to define what we mean by
best practices and specify which components comprise that definition.
Best practices are defined as generally accepted ways of doing spe-
cific functions or processes by a particular profession or industry. Best
practices, in the context of ROD, are a set of processes, behaviors, and
organizational structures that tend to provide successful foundations
to implement and sustain organizational learning. I defined respon-
sive organizational dynamism as the disposition of a company to
respond at the organizational level to the volatility of advancing tech-
nologies—ones that challenge the organization to manage a constant
state of dynamic and unpredictable change. Second, best practices are
those that need to be attributed to multiple communities of practice
as well as to the different professions or disciplines within a learning
organization.
However, these multiple tiers of best practices need to be integrated
and to operate with one another to be considered under the rubric.
Indeed, best practices contained solely within a discipline or com-
munity are limited in their ability to operate on an organization-wide
level. It is the objective of this chapter, therefore, to formulate a set of
distinctive yet integrated best practices that can establish and support
ROD through organizational learning. Each component of the set of
best practices needs to be accompanied by its own maturity arc, which
defines and describes the stages of development and the dimensions
that comprise best practices. Each stage defines a linear path of con-
tinued progress until a set of best practices is reached. In this way,
organizations can assess where they are in terms of best practices and
determine what they need to do to progress. Ultimately, each maturity
arc will represent a subset of the overall set of best practices for the
organization.
The discipline that lays the foundation for ROD is information
technology (IT). Therefore, the role of the chief IT executive needs to
be at the base of organizational best practices. As such, I start build-
ing the organizational best practices model with the chief IT execu-
tive at the core.
Chief IT Executive
I use the title “chief IT executive” to name the most senior IT indi-
vidual in an organization. Because of the lack of best practices in this
profession, a number of different titles are used to describe this job.
While these titles are distinct among themselves, I have found that
they are not consistently followed in organizations. However, it is
important to understand these titles and their distinctions, particu-
larly because an organizational learning practitioner will encounter
them in practice. These titles and roles are listed and discussed next:
Chief information officer (CIO): This individual is usually the most
senior IT executive in an organization, although not every
organization has such a person. The CIO is not necessarily
the most technical of people or even someone who has come
through the “ranks” of IT. Instead, this individual is consid-
ered an executive who understands how technology needs to
be integrated within the organization. CIOs typically have
other general IT executives and managers who report directly
to them. As shown in the Siemens case study, there can be a
number of alternate levels of CIOs, from corporate CIOs to
local CIOs of a company division. For the purposes of this
discussion, I look at the corporate CIO, who is considered part
of the senior executive management team. My research on
chief executive officer (CEO) perceptions of technology and
business strategy showed that only a small percentage of CIOs
report directly to the CEO of their organization, so it would
be incorrect to generalize that they report to the most senior
executive. In most cases, the CIO reports to the chief oper-
ating officer (COO) or the chief financial officer (CFO). As
stated, the role of the CIO is to manage information so that it
can be used for business needs and strategy. Technology, then,
is considered a valuable part of knowledge management from a
strategic perspective as opposed to just a technical one.
Chief technology officer (CTO): This individual, unlike the CIO, is
very much a senior technical person. The role of the CTO is to
ensure that the organization is using the best and most cost-
effective technology to achieve its goals. One could argue that
the CTO holds more of a research-and-development type of
position. In many organizations, the CTO reports directly to the
CIO and is seen as a component of the overall IT infrastructure.
However, some companies, like Ravell and HTC, only have a
CTO and view technology more from the technical perspective.
Chief knowledge officer (CKO) and chief digital officer (CDO): This
role derives from library management organizations because of
the relevance of the word knowledge and/or data. It also com-
petes somewhat with the CIO’s role when organizations view
technology from a perspective that relates more to knowledge.
In larger organizations, the CKO/CDO may report directly
to the CIO. In its purest role, the CKO/CDO is responsible
for developing an overall infrastructure for managing knowl-
edge, including intellectual capital, sharing of information,
and worker communication. Based on this description, the
CKO/CDO is not necessarily associated with technology but
is more often considered part of the technology infrastructure
due to the relevance of knowledge and data to technology.
To define best practices for this function, it is necessary to under-
stand the current information and statistics about what these people
do and how they do it. Most of the statistical data about the roles and
responsibilities of chief IT executives are reported under the auspices of
the CIO. According to an article by Jerry Gregoire in CIO magazine in
March 2002, 63% of IT executives held the title CIO, while 13% were
CTOs; there were few to no specific statistics available on the title of
CKO and CDO; however, the CDO role has become more relevant
over the past five years given the importance of social media and digital
transformations. This report further supported the claim that there is
limited use of the CKO title and function in organizations at this time.
From a structural point of view, 63% of IT organizations are cen-
trally structured, while 23% are decentralized with a central reporting
structure. However, 14% are decentralized without any central head-
quarters or reporting structure. From a spending perspective, orga-
nizations spend most of their budgets on integrating technology into
existing applications and daily processing (36% of budget). Twenty-
six percent is related to investments in emerging or new technologies,
24% is based on investing in e-commerce activities, and 24% is spent
on customer relationship management (CRM), which is defined as
applications that engage in assisting organizations to better under-
stand and support their customer base. Twenty-five percent is spent
on staff development and retention.
Compensation of IT chief executives still comes predominantly
from base salary, as opposed to bonus or equity positions with the
company. This suggests that their role is not generally viewed as top
management or partner-level in the business. This opinion was sup-
ported by the results of my CEO study, discussed in Chapter 2. The
issue of executive seniority can be determined by whether the chief
IT executive is corporate driven or business unit driven. This means
that some executives have corporate-wide responsibilities as opposed
to a specific area or business unit. The issue of where IT depart-
ments provide value to the organization was discussed in Chapter 3,
which showed that there are indeed different ways to manage and
structure the role of IT. However, in general, corporate IT execu-
tives are responsible for IT infrastructure, shared technology services,
and global technology architecture, while business unit CIOs con-
centrate on strategically understanding how to use applications and
processes to support their business units. This is graphically depicted
in Figure 12.1.
From a best practices perspective, the following list has historically
suggested what chief IT executives should be doing. The list empha-
sizes team building, coaching, motivating, and mentoring as tech-
niques for implementing these best practices.
Strategic thinking: Must understand the business strategy and
competitive landscape of the company to apply technology in
the most valuable way to the organization.
Industry expertise: Must have the ability to understand the
products and services that the company produces.
Create and manage change: Must have the ability to create
change, through technology, in the operating and business
processes of the organization to gain efficiency and competi-
tive advantage.
Communications: Must have the ability to communicate ideas,
to give direction, to listen, to negotiate, to persuade, and to
resolve conflicts. Executives must also be able to translate
technical information to those who are not technologically
literate or are outside IT and need to be comfortable speaking
in public forums and in front of other executives.
Relationship building: Must have the ability to interface with
peers, superiors, and customers, by establishing and main-
taining strong rapport, bond, and trust between individuals.
Business knowledge: Must have the ability to develop strong busi-
ness acumen and to have peripheral vision across all functional
areas of the business.
Technology proficiency: Must have the knowledge to identify
appropriate technologies that are the most pragmatic for the
business, can be delivered quickly at the lowest cost, produce
an impact on the bottom line (ROI), and have longevity.
Figure 12.1 Business-level versus corporate-level CIOs. (The figure
layers local applications, standard IT applications, shared IT technology
services, publicly available infrastructure such as the Internet and
portals, and the IT infrastructure, with business-level CIOs oriented
toward the application layers and corporate CIOs toward shared
services and infrastructure.)
Leadership: Must be visionary, inspirational, influential,
creative, fair, and open minded with individuals within and
outside the organization.
Management skills: Must have the ability to direct and supervise
people, projects, resources, budget, and vendors.
Hiring and retention: Must have the ability to recognize, culti-
vate, and retain IT talent.
While this list is not exhaustive, it provides a general perspec-
tive, one that appears generic; that is, many management positions
in an organization might contain similar requirements. A survey of
500 CIOs conducted by CIO magazine (March 2002) rated the top
three concerns among this community in order of importance:
1. Communications: 70%
2. Business understanding: 58%
3. Strategic thinking: 46%
What is interesting about this statistic is that only 10% of CIOs
identified technical proficiency as critical for their jobs. This find-
ing supports the notion that CIOs need to familiarize themselves
with business issues, as opposed to just technical ones. Furthermore,
the majority of a CIO’s time today has been recorded as spent com-
municating with other business executives (33%) and managing IT
staffs (28%). Other common activities reported in the survey were as
follows:
• Operating the baseline infrastructure and applications
• Acting as technology visionary
• Implementing IT portions of new business initiatives
• Designing infrastructure and managing infrastructure projects
• Allocating technology resources
• Measuring and communicating results
• Serving as the company spokesperson on IT-related matters
• Selecting and managing product and service providers
• Recruiting, retaining, and developing IT staff
• Participating in company and business unit strategy
development
These results further confirm that chief IT executives define best
practices based on understanding and supporting business strategy.
This survey also reported common barriers that chief IT executives
have to being successful. The overarching barrier that most IT execu-
tives face is the constant struggle between the business expectation to
drive change and improve processes, and the need to reduce costs and
complete projects faster. The detailed list of reported problems by rank
was as follows:
1. Lack of key staff, skill sets, and retention: 40%
2. Inadequate budgets and prioritizing: 37%
3. Shortage of time for strategic thinking: 31%
4. Volatile market conditions: 22%
5. Ineffective communications with users: 18%
6. Poor vendor support, service levels, and quality: 16%
7. Overwhelming pace of technological change: 14%
8. Disconnection with executive peers: 12%
9. Difficulty proving the value of IT: 10%
10. Counterproductive office politics: 6%
Chief IT executives also felt that their roles were ultimately influ-
enced by two leading factors: (1) changes in the nature and capabilities
of technology, and (2) changes in the business environment, includ-
ing marketplace, competitive, and regulatory pressures. This can be
graphically viewed in Figure 12.2.
Figure 12.2 has a striking similarity to Figure 3.1 outlining ROD.
That diagram represented technology as an independent variable cre-
ating the need for ROD, which is composed of strategic integration
and cultural assimilation, as shown in Figure 12.3.
Figure 12.3 shows many similarities to Figure 12.2. The difference
between these two diagrams defines what is missing from many best
practices: the inclusion of organizational learning practices that would
enable chief IT executives to better manage business and technology
issues. In effect, if organizational learning techniques were included,
they could reduce many barriers between business and IT. Thus, the
solution to providing best practices for the IT community rests with
the inclusion of organizational learning along the constructs of ROD.
Figure 12.2 Chief IT executives: factors influencing strategic options.
(The figure shows technology drivers and the business environment
converging on strategic options; per its labels, one imposes and the
other inspires those options.)

Figure 12.3 ROD and organizational learning techniques. (Per its
labels: technology creates symptoms and implications, an acceleration
of events that require different infrastructures and organizational
processes; this requires responsive organizational dynamism,
comprising strategic integration and cultural assimilation; organization
structures (system) and individual actions undergo a renegotiation of
relationship, underpinned by organizational learning techniques.)

The inclusion of organizational learning is crucial because the best
practices, as reported among the community of chief IT executives, have
not produced the performance outcomes sought by chief executives.
I refer to Chapter 2, in which I first defined the IT dilemma. While
many IT initiatives are credible, they often fall short of including
critical business issues. As a result, IT project goals are not completely
attained. This suggests that the problem is more related to the process
and details of how to better implement good ideas. As further sup-
port for this position, the Concours Group (an international executive
management consulting organization) published a list of emerging roles
and responsibilities that chief IT executives will need to undertake as
part of their jobs in the near future (Cash & Pearlson, 2004):
Shared services leader: More companies are moving to the shared
services model for corporate staff functions. CIOs’ experi-
ences may be invaluable in developing and managing these
organizations.
Executive account manager: More companies today are involv-
ing the CIO in the management of relationships between the
company and its customers.
Process leader: As companies move toward organizing around
major business processes, a CIO is in a good role to temporar-
ily lead this effort since applications and databases are among
the business resources that must be revamped to implement
process management.
Innovation leader: A CIO is starting to act as the innovation
leader of the corporation when a company is seeking to
achieve substantial improvements in process performance or
operational efficiencies, or to implement IT, since innovation
may center on the application of IT.
Supply chain executive: Purchasing, warehousing, and transpor-
tation are among the most information-intensive activities
undertaken by a business. As companies look to improve these
overall processes, the CIO may become the most knowledge-
able executive about the supply chain.
Information architect: Companies are recognizing the benefit of
a consolidated view of customers, vendors, employees, and so
on. CIOs are finding themselves taking on the leadership role
of information architect by cultivating commitment and con-
sensus around this challenging task.
Change leader: CIOs are playing an increasingly important role
in business change management. Their role is either direct
change leadership (developing new business models) or, more
often, indirect, driving the change process behind the scenes
(getting other leaders to think about new possibilities).
Business process outsourcing leader: CIOs tend to have some of
the most extensive experience in company outsourcing. This
makes them a logical internal consultant and management
practice leader in business process outsourcing.
These issues all suggest that the role of the chief IT executive is
growing and that the need for these executives to become better inte-
grated with the rest of their organizations is crucial for their success.
Much more relevant, though, is the need for ROD and the role that
the chief IT executive has as a member of the overall community. To
create best practices that embrace organizational learning and foster
ROD, a chief IT executive maturity arc needs to be developed that
includes the industry best practices presented here integrated with
organizational learning components.
The chief IT executive best practices arc is an instrument for assess-
ing the business maturity of chief IT executives. The arc may evaluate
a chief IT executive’s business leadership using a grid that measures
competencies ranging from essential knowledge in technology to
more complex uses of technology in critical business thinking. Thus,
the chief IT executive best practices arc provides executives with a
method of integrating technology knowledge and business by present-
ing a structured approach of self-assessment and defined milestones.
The model measures five principal facets of a technology executive:
cognitive, organization culture, management values, business ethics,
and executive presence. Each dimension or sector is measured in five
stages of maturation that guide the chief IT executive’s growth. The
first facet calls for becoming reflectively aware about one’s existing knowl-
edge of technology and what it can do for the organization. The second
calls for other centeredness, in which chief IT executives become aware
of the multiplicity of technology perspectives available (e.g., other busi-
ness views of how technology can benefit the organization). The third is
comprehension of the technology process, in which a chief IT executive can
begin to merge technology issues with business concepts and functions.
The fourth is stable technology integration, meaning that the chief
IT executive understands how technology can be used and is resilient
to nonauthentic sources of business knowledge. Stage 4 represents an
ongoing implementation of both technology and business concepts.
The fifth stage is technology leadership, in which chief IT executives
have reached a stage at which their judgment on using technology and
business is independent and can be used to self-educate from within.
Thus, as chief IT executives grow in knowledge of technology and
business, they can become increasingly more other centered, inte-
grated, stable, and autonomous with the way they use their business
minds and express their executive leadership and character.
Definitions of Maturity Stages and Dimension Variables
in the Chief IT Executive Best Practices Arc
Maturity Stages
1. Technology competence and recognition: This first stage repre-
sents the chief IT executive’s capacity to learn, conceptualize,
and articulate key issues relating to cognitive technological
skills, organization culture/etiquette, management value sys-
tems, business ethics, and executive presence needed to be a
successful chief IT executive in business.
2. Multiplicity of technology perspectives: This stage indicates the
chief IT executive’s ability to integrate multiple points of view
about technology from others in various levels of workplace
hierarchies. Using these new perspectives, the chief IT execu-
tive augments his or her skills with the technology necessary
for career success, expands his or her management value sys-
tem, is increasingly motivated to act ethically, and enhances
his or her executive presence.
3. Comprehension of technology process: Maturing chief IT executives
accumulate increased understanding of workplace cooperation,
competition, and advancement as they gain new cognitive skills
about technology and a facility with business culture/etiquette,
expand their management value system, perform business/
workplace actions to improve ethics about business and technol-
ogy, and develop effective levels of executive presence.
4. Stable technology integration: Chief IT executives achieve inte-
gration with the business community when they have levels
of cognitive and technological ability, organization etiquette/
culture, management values, business ethics, and execu-
tive presence appropriate for performing job duties not only
adequately but also competitively with peers and even higher-
ranking executives in the workplace hierarchy.
5. Technology leadership: Leadership is attained by the chief IT
executive when he or she can employ cognitive and tech-
nological skills, organization etiquette, management, a
sense of business ethics, and a sense of executive presence
to compete effectively for executive positions. This chief IT
executive is capable of obtaining increasingly executive-level
positions through successful communication and workplace
performance.
Performance Dimensions
1. Technology cognition: Concerns skills specifically related to
learning, applying, and creating resources in IT, including the
necessary knowledge of complex operations. This dimension essentially
establishes the CIO as technically proficient and forms a basis for
movement to more complex and mature stages of development.
2. Organizational culture: The knowledge and practice of proper
etiquette in organizational settings with regard to dress, telephone
and in-person interactions, punctuality, work completion, conflict
resolution, deference, and other protocols in workplace hierarchies.
3. Management values: Measures the individual’s ability to articulate
and act on mainstream organizational values credited with shaping the
work ethic: independent initiative, dedication, honesty, and personal
identification with career goals, based on the organization’s
philosophy of management protocol.
4. Business ethics: Reflects the individual’s commitment to the
education and professional advancement of other employees in
technology.
299 Toward Best Practices
5. Executive presence: Involves the chief IT executive’s view of the
role of an executive in business and the capacity to succeed in
tandem with other executives. Aspects include a devotion to learning
and self-improvement, self-evaluation, the ability to acknowledge and
resolve business conflicts, and resilience when faced with personal
and professional challenges.
Figure 12.4 shows a graphical view of the chief IT executive best
practices arc. Each cell in the arc provides the condition for assess-
ment. The complete arc is provided in Table 12.1.
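The arc's grid structure lends itself to a simple illustration. The sketch below is a hypothetical Python model, not part of the book: it encodes the five maturity stages and five performance dimensions named above, records which stage an executive has reached in each dimension, and (as one plausible reading of the arc) reports overall maturity as the lowest stage attained across dimensions.

```python
# Hypothetical sketch of the best practices arc as an assessment grid.
# Stage and dimension names come from the text; the scoring scheme
# (overall maturity = weakest dimension) is an illustrative assumption.

STAGES = [
    "Technology competence and recognition",
    "Multiplicity of technology perspectives",
    "Comprehension of technology process",
    "Stable technology integration",
    "Technology leadership",
]

DIMENSIONS = [
    "Technology cognition",
    "Organization culture",
    "Management values",
    "Business ethics",
    "Executive presence",
]

def assess(profile: dict) -> str:
    """Given the stage index (0-4) reached in each dimension, report
    overall maturity as the lowest stage attained across dimensions."""
    lowest = min(profile[d] for d in DIMENSIONS)
    return STAGES[lowest]

# Example: an executive strong in cognition but weak in executive presence.
profile = {
    "Technology cognition": 4,
    "Organization culture": 3,
    "Management values": 3,
    "Business ethics": 2,
    "Executive presence": 1,
}
print(assess(profile))  # overall maturity is limited by the weakest dimension
```

The book itself assesses each cell qualitatively rather than numerically; the minimum rule here is only one way to aggregate a profile into a single stage.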
Chief Executive Officer
When attempting to define CEO best practices, one is challenged by
the myriad of material that attempts to determine the broad, yet
important, role of the CEO. As with most best practices, they are
typically based on trends and percentages of what most CEOs do,
assuming, of course, that the companies they work for are successful.
That is, if their organization is successful, then their practices
must be as well. This type of associative thinking leads to what
scholars often term false generalizations. Indeed, these inadequate
methods lead to false judgments that foster business trends, which
are then misinterpreted as best practices. Such trends are better
defined as reputation, and after a period of time they usually become ineffective
[Figure 12.4 Chief IT executive best practices arc: conditions for
assessment. The arc charts the five developmental dimensions of
maturing (technology cognition, organization culture, management
values, business ethics, executive presence) against the five
maturity stages (technology competence and recognition, multiplicity
of technology perspectives, comprehension of technology process,
stable technology integration, technology leadership).]
Table 12.1 The Chief IT Executive Best Practices Arc

Technology cognition
Technology competence and recognition: Understands how technology
operates in business. Has mastered how systems are developed,
hardware interfaces, and the software development life cycle. Has
mastery of hardware, compilers, and run-time systems. Has core
competencies in distributed processing, database development,
object-oriented component architecture, and project management. Is
competent with main platform operating systems such as UNIX,
Windows, and Mac. Has the core ability to relate technology concepts
to other business experiences. Can also make decisions about what
technology is best suited for a particular project and organization.
Can be taught how to expand the use of technology and can apply it to
other business situations.
Multiplicity of technology perspectives: Understands that technology
can have multiple perspectives. Able to analyze which opinions about
business uses of technology are valid and which are invalid. Can
create objective ideas from multiple technology views without getting
stuck on individual biases. Able to identify and draw upon the
multiple perspectives available from business sources about
technology. Develops a discriminating ability with respect to the
choices available, exercising realistic and objective judgment, as
demonstrated by the applicability of the technology material drawn on
for a particular project or task and tied to functional, pragmatic
results.

Organization culture
Technology competence and recognition: Understands that technology
can be viewed by other organizations in different ways. Uses
technology as a medium of communication. Understands that certain
technological solutions, Web pages, and training methods may not fit
all business needs and preferences. Has the ability to recommend or
suggest technological solutions to suit other business needs and
preferences.
Multiplicity of technology perspectives: Seeks to use technology as a
vehicle to learn more about organization cultures and mindsets.
Strives to care about what others are communicating and embraces
these opinions. Tries to understand and respect technologies that
differ from his or her own. Understands the basic technological needs
of others.
(Continued)
Table 12.1 (Continued) The Chief IT Executive Best Practices Arc

Technology cognition
Comprehension of technology process: Has the ability to relate various
technical concepts and organize them with non-technical business
issues. Can operate with both automated and manual business
solutions. Can use technology to expand reasoning, logic, and what-if
scenarios. Can use the logic of computer programs to integrate the
elements of non-technological tasks and business problems. Can
discern the templates that technology has to offer in order to
approach everyday business problems. This involves the hypothetical
(inductive/deductive) logical business skill.
Stable technology integration: Knowledge of technology is concrete,
accurate, precise, broad, and resistant to interference from
non-authentic business sources. Able to resist proposed technology
that is not realistic, and to recover resiliently from it.
Technology leadership: Judgment in a multidimensional business world
rests on independent, critical discernment. Knowledge of and skills
in technology can be transferred and used to self-educate within and
outside of technology. Can use technology for creative purposes to
solve business challenges and integrate them with executive
management views.

Organization culture
Comprehension of technology process: Can deal with multiple
dimensions of criticism about technology. Can develop cooperative
relationships that are dynamic and based on written communication and
oral discourse. Able to create business relations outside of
technology departments. Has an appreciation of cyberspace as a
communication space, a place wide open to spontaneous dialogue and
give and take, rather than voyeuristic one-sidedness. Able to produce
in teamwork situations, rather than solely in isolation.
Stable technology integration: Loyalty and fidelity to relations in
multiple organizations. Commitment to criticism and acceptance of
multiple levels of distant and local business relationships. Able to
sustain non-traditional types of inputs from multiple sources.
Technology leadership: Can utilize and integrate multiple dimensions
of business solutions in a self-reliant way, developing alone if
necessary using other technical resources. Can dynamically select
types of interdependent and dependent organizational relationships.
Able to operate within multiple dimensions of business cultures,
which may demand self-reliance, independence of initiative, and
interactive communications.
(Continued)
Table 12.1 (Continued) The Chief IT Executive Best Practices Arc

Management values
Technology competence and recognition: Technology and cultural
sensitivity. Global communication, education, and workplace use of
technology can be problematic, subject to false generalizations and
preconceived notions. Awareness of assumptions about how technology
will be viewed by other organizations and of biases about types of
technology (Mac vs. PC).
Multiplicity of technology perspectives: Can appreciate the need to
obtain multiple sources of information and opinion. Accepts
multidimensional values in human character.

Business ethics
Technology competence and recognition: Uses technology with honesty
regarding privacy of access and information. Develops ethical
policies governing business uses of the Internet, research,
intellectual property rights, and plagiarism.
Multiplicity of technology perspectives: Uses information in a fair
way, comparing facts against equal sources of business information.
Shows compassion for business information for which sources are
limited because of inequality of technology access, and for sharing
information with other business units from a sense of inequality.

Executive presence
Technology competence and recognition: Has an accurate perception of
one’s own potential and capabilities in relation to technology in the
business: the technologically realizable executive self.
Multiplicity of technology perspectives: Understands how other
executives can view the self from virtual and multiple perspectives.
Has awareness of the construction of self that occurs in business.
Focuses on the views of other executives in multiple settings.
Understands that the self (through technology) is open to more fluid
constructions, able to incorporate diverse views in multiple settings.
(Continued)
and unpopular. We must also remember the human element of success;
certain individuals succeed based on natural instincts and talent,
hard work and drive, and so on. These components of success should
not be confused with theories that are scalable and replicable in
practice; that is what best practices need to accomplish.
This section focuses on the technology best practices of the CEO.
These best practices are based on my research as well as on other positions and
Table 12.1 (Continued) The Chief IT Executive Best Practices Arc

Management values
Comprehension of technology process: Can operate within multiple
dimensions of value systems and can prioritize multitasking events
that are consistent with value priorities. Able to assign value to
new and diverse technology alternatives, integrating them within a
system of pre-existing business and technology values.
Stable technology integration: Testing of value systems in new ways
due to technology is integrated with long-term values and goals for
business achievement. Some concepts are naturally persistent and
endure despite new arenas in the technological era.
Technology leadership: Use of technology and business is based on
formed principles as opposed to dynamic influences or impulses.
Formed principles establish the basis for navigating through, or
negotiating, the diversity of business influences and impulses.

Business ethics
Comprehension of technology process: Consistent values displayed in
multiple business communications, deliverables of content, and
dedication to authenticity. Maintains consistency in integrating
values within technology business issues.
Stable technology integration: Technology is a commitment in all
aspects of value systems, including agility in managing multiple
business commitments. Commitment to greater openness of mind to
altering traditional and non-technological methods.
Technology leadership: Technological creativity with self-defined
principles and beliefs. Risk-taking in technology-based ventures.
Utilizes technology to expand one’s arenas of business freedom,
exploring the business-liberating capacities of technology.

Executive presence
Comprehension of technology process: Operationalizes technology to
unify multiple components of the self and understands its appropriate
behaviors in varying executive situations.
Stable technology integration: Has regulated an identity of self from
a multiplicity of executive venues. Methods of business interaction
create positive value systems that generate confidence about
operating in multiple business communities.
Technology leadership: Acceptance of and belief in a multidimensional
business world of the self. Can determine comfortably the
authenticity of other executives and their view of the self. Can
confirm disposition independently of others’ valuations, both
internally and from other organization cultures. Beliefs direct and
control multidimensional executive growth.
facts that provide a defendable context of how and why they appear to be
effective. However, as with the chief IT executive model, best practices
cannot be attained without an arc that integrates mature organizational
learning and developmental theories. Many of the CEO best practices
reconcile with my interviews with CEOs and, in particular, with the two
CEO case studies (of ICAP and HTC) discussed in Chapter 8. Other
published definitions and support are referenced in my presentation.
In February 2002, Hackett Benchmarking, a part of Answerthink
Corporation, issued its best practices for IT. Its documentation stated:
In compiling its 2002 best practices trend data, Hackett evaluated the
effectiveness (quality and value) and efficiency (cost and productivity)
of the information technology function across five performance dimen-
sions: strategic alignment with the business; ability to partner with
internal and external customers; use of technology; organization; and
processes.*
The findings, as they apply to the CEO function, provide the fol-
lowing generalizations:
• There was an 85% increase in the number of CIOs who
reported directly to the CEO. This increase would suggest
that CEOs need to directly manage the CIO function because
of its importance to business strategy.
• CEOs supporting outsourcing did not receive the cost-cut-
ting results they had hoped for. In fact, most broke even. This
suggests that CEOs should not view outsourcing as a cost-
cutting measure, but rather foster its use if there are identifi-
able business benefits.
* Hackett Benchmarking has tracked the performance of nearly 2,000 complex,
global organizations and identified key differentiators between world-class and aver-
age companies, across a diverse set of industries. In addition to information tech-
nology, staff functions studied include finance, human resources, procurement, and
strategic decision making, among others. Study participants comprised 80% of the
Dow Jones Industrials, two-thirds of the Fortune 100, and 60% of the Dow Jones
Global Titans Index. Among the IT study participants are Agilent Technologies,
Alcoa, Capital One Financial Corporation, Honeywell International, Metropolitan
Life Insurance, SAP America, and TRW. (From PR Newswire, February 2002.)
• CEOs have found that IT organizations with centralized
operations save more money, have fewer help-line calls than
decentralized organizations, and do not sacrifice service
quality. This suggests that CEOs should consider less
business-specific support structures, especially when they
conduct their business at multiple locations.
• CEOs are increasingly depending on the CIO for advice
on business improvements using technology. As a result,
their view is that IT professionals need advanced business
degrees.
• CEOs should know that consistent use of IT standards has
enabled firms to trim IT development costs by 41% and to
reduce costs for end-user support and training operations
by 17%.
• CEOs need to increase support for risk management. Only
77% of average companies maintained disaster recovery plans.
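As a rough illustration of the cost figures cited above, the short calculation below (hypothetical Python, with invented baseline budgets) shows the scale of savings the survey's 41% and 17% reductions would imply:

```python
# Illustrative only: the baseline budgets are invented; the percentages
# are the survey's reported reductions from consistent use of IT standards.
dev_baseline = 10_000_000      # assumed annual IT development spend
support_baseline = 4_000_000   # assumed annual end-user support and training spend

dev_savings = dev_baseline * 0.41
support_savings = support_baseline * 0.17

print(f"Development savings: ${dev_savings:,.0f}")
print(f"Support/training savings: ${support_savings:,.0f}")
```

On these assumed budgets, standards would free roughly $4.1 million in development costs and $680,000 in support and training costs per year.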
As we can see, these generalizations are essentially based on what
CEOs are doing and what they have experienced. Unfortunately, this
survey addressed little about what CEOs know and exactly what their
role should be with respect to the overall management, participation,
and learning of technology. These “best practices” are particularly
lacking in the area of organizational learning and the abilities of
the firm to respond to changing conditions as opposed to searching
for general solutions. Let us look at each of these generalizations
and discuss what they lack in terms of organizational learning.
CIO Direct Reporting to the CEO
The fact that more CIOs are reporting directly to the CEO shows an
escalation of their importance. But what is more relevant as a best
practice is what that relationship is about. Some reports note how
often CEOs and CIOs meet; what matters more is the content of those
interactions. What should CEOs know, and how should they conduct
themselves? What management and learning techniques do they apply?
How do they measure results? My CEO interview research exposed the
fact that many CEOs simply did not know what they needed to do to
better manage the CIO and what they needed to know in general about
technology.
Outsourcing
Outsourcing can be a tricky endeavor. In Chapter 3, I introduced
the concept of technology as a driver and a supporter. I presented a
model that shows how emerging technologies are initially drivers and
need to be evaluated and measured using similar models embraced by
marketing-oriented communities. I then showed how, through matu-
ration, emerging technologies become supporters, behaving more as a
commodity within the organization. I explained that only then can a
technology be considered for outsourcing because supporter operations
are measured by their economies of scale, reduced costs, increased
productivity, or both (efficiency). Figure 12.5 shows that cycle.
Thus, what is missing from the survey information is the knowl-
edge of where such technologies were with respect to this technology
life cycle. Knowing this dramatically affects what the CEO should be
expecting and what organizational learning concepts and factors are
needed to maximize benefit to the organization.
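The driver-to-supporter progression described above can be sketched as a simple state model. The following hypothetical Python (the names and the gating rule are illustrative, not from the book) encodes the point that a technology becomes a candidate for outsourcing only once it has matured from driver to supporter status:

```python
from enum import Enum, auto

class Phase(Enum):
    """Stages of the driver-to-supporter life cycle (after Figure 12.5)."""
    DRIVER = auto()           # emerging technology, evaluated like a marketing investment
    EVALUATION = auto()       # evaluation cycle, with a mini loop of technology enhancements
    MATURATION = auto()       # driver maturation toward commodity behavior
    SUPPORTER = auto()        # support status, measured by efficiency
    REPLACE_OR_OUTSOURCE = auto()  # economies of scale justify replacement or outsourcing

def may_outsource(phase: Phase) -> bool:
    # Only supporter-stage technologies, measured by economies of scale,
    # reduced costs, and increased productivity, should be considered
    # for outsourcing.
    return phase in (Phase.SUPPORTER, Phase.REPLACE_OR_OUTSOURCE)

print(may_outsource(Phase.DRIVER))     # a driver is still judged on strategic value
print(may_outsource(Phase.SUPPORTER))  # a commodity operation can be outsourced
```

The gate captures the chapter's argument: asking "should we outsource?" is premature until one knows where the technology sits in this life cycle.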
Centralization versus Decentralization of IT
The entire question of how IT should or should not be organized
must be based on a business that implements ROD. ROD includes
the component called cultural assimilation, which provides a process,
[Figure 12.5 Driver-to-supporter life cycle: technology driver,
evaluation cycle, driver maturation, support status, and replacement
or outsource (economies of scale), with a mini loop of technology
enhancements during evaluation.]
using organizational learning, to help businesses determine the best
IT structure. To simply assume that centralizing operations saves
money is far too narrow; the organization may need to spend more
money on a specific technology in the short term to get long-term
benefits. Where is this included in the best practices formula? My
research has shown that more mature uses of technology in organiza-
tions require more decentralization of IT personnel within the busi-
ness units. The later stages of IT organizational structure at Ravell
supported this position.
CIO Needs Advanced Degrees
I am not sure that anyone could ever disagree with the value of
advanced degrees. Nevertheless, the survey failed to provide content
on what type of degree would be most appropriate. It also neglected to
address the issue of what may need to be learned at the undergradu-
ate level. Finally, what forms of education should be provided on the
job? What exactly are the shortfalls that CIOs need to know about
business? And, equally important is the consideration of what educa-
tion and learning is needed by CEOs and whether they should be so
dependent on advice from their CIOs.
Need for Standards
The need for standards is something that most organizations would
support. Yet, the Siemens case study showed us that too much con-
trol and standardization can prove ineffective. The Siemens model
allowed local organizations to use technology that was specific to
their business as long as it could be supported locally. The real
challenge is to have CEOs who understand the complexity of IT
standards. They also need to be cognizant that standards might be
limited by the structure of their specific organization, its
business, and its geographical locations.
Risk Management
The survey suggested that CEOs need to support risk management
because their backup recovery procedures may be inadequate. The
question is whether the problem stems from a lack of support or from
a lack of knowledge about the topic. Is this something that the chief
IT executive needs to know, or is it just about the CEO’s
unwillingness to spend enough funds? The best practices component of
risk management must be broader and answer these questions.
In contrast to the survey, we may consider a report issued by
Darwin Research (“A CRM Success Story,” November 1, 2002),
which cited the recommended best practices of Christopher Milliken,
CEO of Boise Cascade Office Products. He offered the kind of in-
depth view of best practices that I feel is needed to be consistent with
my research on ROD. Milliken participated in the implementation of
a large-scale CRM system needed to give his customers a good reason
to choose Boise. The project required an investment of more than $20
million. Its objective was to provide customers with better service. At
the time of the investment, Milliken had no idea what his ROI would
be, only that the project was necessary to distinguish Boise Cascade
from myriad competitors in the same industry.
After the successful implementation of the project, Milliken was
now in a position to offer his own thoughts about technology-related
best practices that a CEO might want to consider. He came up with
these six:
1. The CEO must commit to a technology project: Milliken was keen
to express the reasons why the CRM project was important;
he was intimately involved with its design, and made it clear
that he had to be consulted, should there be any delays in the
project schedule. KPMG (a major consulting firm) was also
hired as a consultant to help implement the schedule and was
held to the same level of excellence. What Milliken accom-
plished, significantly, was to show his interest in the project
and his willingness to stay involved at the executive level.
Milliken’s best practice here lies in his commitment, which
is consistent with that of McDermott from ICAP and the
CEO from HTC. They both realized, as Milliken did, that
the CEO must have an active role in the project and not just
allow the management team to get it done. Milliken, as did
McDermott and the CEO of HTC, issued specific perfor-
mance-related requirements to his employees and consultants.
His participation sent a valuable message: The CEO is part
of the supporting effort for the project and is also part of
the learning process of the organization. Indeed, the situa-
tion that Milliken faced and resolved (i.e., to jump in without
knowing the expected returns of the project) is exemplary of
the core tenets of ROD, which require the ability for an orga-
nization to operate with dynamic and unpredictable change
brought about by technology. In this case, the technology was
crucial to distinguishing Boise Cascade, in the same way that
electronic trading was for ICAP, and the billing system was
for HTC. Yet, all three of these situations required a certain
behavior and practice from the company CEO. Thus, the most important
best practice lies in the CEO’s commitment to, and learning within,
the learning organization format.
2. Think business first, then technology: To understand why a
technology is needed, there must first be a supporting business
plan; that is, the business plan must drive the technology or
support its use. This best practice concept is consistent with
my research. Indeed, Dana Deasy from Siemens realized it
after a three-year investment in e-business, and McDermott
clearly advocated the importance of a business plan over
embellished technology. Another interesting and important
result of the business plan was that it called for the creation
of a centralized CRM system. Therefore, it became necessary
to consolidate the separate business units at Boise into one
corporate entity— providing central support and focus. This
is another example of how ROD operates. The CRM project,
through a validation process in a business plan, provided the
strategic integration component of ROD. The strategy then
influenced cultural assimilation and required a reorganization
to implement the strategy or the new CRM system.
Furthermore, Boise Cascade allowed its staff to experiment in
the project, to make mistakes, without criticizing them. They
were, in effect, implementing the driver-related concepts of
technology. These driver concepts must be similar to the way
organizations support their marketing activities, by which
they accept a higher error ratio than when implementing a
supporter activity. The CEO wanted everyone to give it their
best and to learn from the experience. This position is a key
best practice for the CEO; it promotes organizational learn-
ing throughout the business.
3. Handcuff business and technology leaders to each other: Milliken
understood that technology projects often fail because of a
lack of communication between IT and other business enti-
ties. The project represented many of the IT dilemmas that I
discussed in Chapter 2, particularly relating to the new CRM
system and its integration with existing legacy applications
and, at the same time, creating a culture that could imple-
ment the business strategy. To address this, Milliken first
appointed a new CIO to foster better communication. He
also selected a joint project leader from the business side, thus
creating a joint project leadership team. What Milliken did
was to form a new community of practice that did not exist
before the project. The project, as with Ravell, represented
an event that fostered the creation of organizational learning
opportunities. As with ICAP, Milliken’ s company enlisted
the support of executive-level consultants to help finalize the
business plan and marketing strategy, as well as assist with
change management. What exactly did Milliken do that rep-
resents a best practice? From an organizational learning per-
spective, he created communities of practice between IT and
the business. That then is a true best practice for a CEO.
4. Get the show on the road: There was a not-to-be-questioned
deadline that was instituted by Milliken. As I noted in
Chapter 4, this type of management seems undemocratic,
but it should not be confused with being nonparticipatory.
Someone had to get this going and set expectations. In this
case, both IT and business users were set to make things
happen. Senior management endorsed the project and openly stated
that it represented what could be a one-time opportunity to “do
something of great magnitude” (Dragoon, 2002).
From a best practices perspective, this means that the CEO
can and should provide the leadership to get projects done
and that part of that leadership could be setting strategic
dates. However, CEOs should not confuse this leadership
with power-centralized management over IT-related projects.
Communities of practice still need to be the driving force for
inevitable success in ROD. Another important factor was Milliken’s
decision to create dual management over the project. Thus, Milliken
was able to create an environment that required discourse between IT
and the business.
5. Win over the masses for massive changes: As stated, the business
plan called for a reorganization of other business units. This
also required executives to rethink job descriptions and titles
in relation to new processes. It also eliminated six redundant
management-level jobs. Milliken engaged employees in a
massive “external-internal” marketing campaign. Employees
participated in ad campaigns, and brochures were created for
all staff. A video was also produced that defined the benefits
to Boise Cascade customers. In essence, Milliken was com-
mitted to communication and training. Similar to my experi-
ence at Ravell, not everyone is comfortable with change, and
resistance in the ranks is inevitable. As a result, the educa-
tion and training programs at Boise were not enough. What
was lacking was true organizational learning and knowledge
management. There are two best practices that were defined
from this experience. First, the CEO needs to engage in
actively showing the importance that technology has to the
organization, not only from an economic perspective, but also
from a staff development point of view. The second best prac-
tice comes from the example of what Boise Cascade did not
do enough of: provide organizational transformation through
knowledge management, reflective practices, and commu-
nities of practice. This suggests that CEOs need to better
understand and incorporate organizational learning concepts,
so that they can be the catalyst for change as they are in other
areas of the business. We saw support for this concept from
both ICAP and HTC, where the actions of the CEO came
from an organizational learning perspective.
6. Know that technology projects never end: ROD assumes, by
definition, that technology is a variable, albeit an insistent one.
Milliken’s experience further supported this notion, in that he
realized that Boise Cascade must continue to assess the impact of
the CRM application. Another way of saying this
is that the technology will continue to be viewed as a means to
transform the business on an ongoing basis. Indeed, Milliken
was planning to spend another $10 million on the next phase.
So, from a best practices perspective, CEOs must recognize
that technology investment never ends, but it moves to other
phases of maturation, similar to the driver/supporter life
cycle. Finally, the buy-in to this reality ensures the recogni-
tion of organizational dynamism.
Based on the case studies and research presented thus far in this
book, I can now formulate a list of 11 key planks that represent the
core of what constitutes a technology CEO’s set of best practices:
1. The chief IT executive should report directly to the CEO.
2. CEOs should be actively committed to technology on an
ongoing basis, as opposed to project-by-project involvement.
3. CEOs should be willing to be management catalysts to sup-
port new technology-driven projects. They, in effect, need to
sometimes play the role of technology champion.
4. CEOs should focus on business concepts and plans to drive
technology. In other words, technology should not drive the
business.
5. CEOs should use consultants to provide objective input to
emerging technology projects.
6. CEOs should establish organizational infrastructures that
foster the creation of communities of practice. They need
to create joint ownership of IT issues by fostering discourse
between IT, business managers, and staff.
7. CEOs may need to take control of certain aspects of tech-
nology investments, such as setting milestones and holding
management and staff to making critical project dates.
8. CEOs need to foster cultural assimilation, which may lead to
reorganization, since technology changes processes.
9. CEOs need to understand organizational learning and knowl-
edge management theories and participate in organizational
transformation.
10. CEOs need to understand how the technology life cycle
behaves, with specific attention to the transition from driver
activities to supporter functions. To that end, CEOs need to
understand the short- and long-term investments that need to
be made in technology.
11. CEOs should create organizations that can effectively oper-
ate within technological dynamism. This process will educate
management and staff to handle the dynamic and unpredict-
able effects of emerging technologies. It will also foster the
development of both middle-up-down and bottom-up man-
agement of technology.
The issue is now to provide a linear development model for CEOs
that enables them to measure where they are in relation to ROD and
the best practices outlined.
The CEO Best Practices Technology Arc
Similar to the chief IT executive arc, the CEO best practices arc is
an instrument for assessing the technology best practices of CEOs.
The arc evaluates a CEO's strategic uses of technology and leadership by using a grid that charts competencies ranging from conceptual
knowledge about technology to more complex uses of technology and
business and how they are integrated in strategic business planning.
As with all arc models, the CEO version measures five principal dimensions of a CEO's maturity with respect to business applications of technology: conceptual, structural, executive values, executive ethics, and executive leadership. Each dimension or sector is measured in five stages of maturation that guide the CEO's executive growth in managing technological dynamism. The first stage is reflective awareness of one's conceptual knowledge of technology and what it can do for the organization. The second is other-centeredness, by which CEOs become aware of the multiplicity of business uses of technology and the different views that can exist inside and
outside the organization. The third is integration of business use of
technology; a CEO can begin to combine how business plans foster
the need for technology. The fourth is implementation of business/
technology process, meaning that the CEO understands how busi-
ness applications and technology are used together and is resilient
to nonauthentic sources of emerging technologies. Stage four rep-
resents an ongoing commitment to implementing both technology
and business applications. The fifth refers to strategic uses of tech-
nology; CEOs have reached a stage at which their judgment on
using technology and business is independent and can be used to
self-educate. Thus, as CEOs grow in knowledge of business uses
of technology, they can become increasingly understanding of
the multiplicity of uses, can become more integrated in how they
conceptualize technology, can manage its implementation from an
executive position, and can apply strategies to support new applica-
tions of technology in the organization.
Definitions of Maturity Stages and Dimension Variables
in the CEO Technology Best Practices Arc
Maturity Stages
1. Conceptual knowledge of technology: This first stage represents
the CEO's capacity to learn, conceptualize, and articulate key
issues relating to business uses of technology, organizational
structures available, executive value methods, executive ethi-
cal issues surrounding technology, and leadership alternatives
that are needed to be successful with technology applications.
2. Multiplicity of business perspectives of technology: This stage
indicates the CEO's ability to integrate multiple points of
view from management, staff, and consultants about technol-
ogy applications in business. Using these new perspectives,
the CEO augments his or her conceptual skills with technol-
ogy, has an expanded view of what organizational structures
might work best, expands his or her executive values about
technology uses, is increasingly aware of the ethical dilemmas
with technology, and enhances his or her leadership abilities.
3. Integration of business uses of technology: Maturing CEOs accumulate increased understanding of how technology can support the business, provide more competitive advantage, and
have a more integrated understanding of how to use their
conceptual skills about technology, of the alternative organi-
zational structures available, of how to combine their business
executive value and ethical systems, and how to develop effec-
tive levels of executive leadership.
4. Implementation of business/technology process: CEOs achieve
integration when they can regularly apply their conceptual
knowledge of technology, organization structures, executive
values and ethics about technology, and executive leadership,
appropriate for performing their job duties, not only ade-
quately, but at a level that provides a competitive advantage
for the organization.
5. Strategic uses of technology: Leadership is attained by the
CEO when he or she can employ conceptual skills, develop
new organizational structures as necessary, establish new
values and ethics that are appropriate for the organization,
and create a sense of executive presence to lead the organiza-
tion strategically. This CEO is capable of having new vision
about how business and technology can be expanded into
new endeavors.
Performance Dimensions
1. Technology concepts: Concerns conceptual skills, specifically
related to understanding how technology can be used in the
business. This dimension essentially establishes the CEO as conceptually proficient with technology and forms a basis for movement to more complex and mature stages of business/
technology development.
2. Organizational structures: The knowledge of the alternative
organizational structures that can support the application
of emerging technology in corporate settings with regard to
roles, responsibilities, career paths, and organizational report-
ing alternatives.
3. Executive values: Measures the CEO's ability to articulate and
act on mainstream technological values credited with shaping
the work ethic: independent initiative, dedication, honesty,
and personal identification with career goals, based on the
philosophy of the management protocol of the organization.
4. Executive ethics: Reflects the CEO's commitment to the education and professional advancement of the behavior of the
organization as it relates to business uses of technology.
5. Executive leadership: Involves the CEO's view of the role of
an executive in business, and the capacity to succeed in tan-
dem with his or her organizational resources. Aspects include
a devotion to organizational learning and self-improvement,
self-evaluation, the ability to acknowledge and resolve busi-
ness/technology conflicts, and resilience when faced with per-
sonal and professional challenges.
Figure 12.6 shows a graphic view of the CEO technology best
practices arc. Each cell in the arc provides the condition for assess-
ment. The complete arc is provided in Table 12.2.
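The arc's grid structure lends itself to a simple assessment instrument. The following is an illustrative sketch only: the dimension and stage names come from the chapter, but the boolean cell-assessment scheme and the function name are assumptions, not the author's instrument.

```python
# Illustrative sketch: the CEO technology best practices arc as an
# assessment grid. Dimension and stage names are taken from the text;
# the boolean-assessment scheme itself is an assumption for illustration.

STAGES = [
    "Conceptual knowledge of technology",
    "Multiplicity of business perspectives of technology",
    "Integration of business uses of technology",
    "Implementation of business/technology process",
    "Strategic uses of technology",
]

DIMENSIONS = [
    "Technology concepts",
    "Organizational structures",
    "Executive values",
    "Executive ethics",
    "Executive leadership",
]

def maturity_stage(assessment, dimension):
    """Return how many consecutive stages (0-5) are satisfied for a dimension.

    `assessment` maps (dimension, stage) pairs to True/False. Maturity is
    treated as cumulative: counting stops at the first unmet stage.
    """
    stage = 0
    for s in STAGES:
        if assessment.get((dimension, s), False):
            stage += 1
        else:
            break
    return stage

# Example: a CEO who meets the first three cell conditions for
# executive leadership sits at stage 3 in that dimension.
example = {("Executive leadership", s): i < 3 for i, s in enumerate(STAGES)}
print(maturity_stage(example, "Executive leadership"))  # 3
```

Treating maturity as cumulative mirrors the arc's premise that each stage builds on the one before it; a cell met out of sequence does not advance the CEO's placement.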
Middle Management
Middle management, which comprises a number of tiers, is perhaps the level for which best practices are most challenging to define. In Chapter 3, I stratified the different types of positions that make up middle managers
into three tiers: directors, line managers, and supervisors. What is
most important at this point is to determine the set of technology
best practices for managers so that they can effectively operate under
ROD. That is, technology best practices must be designed to contain
the insights and skills for effective management of technology. This
must include
1. Working with IT personnel
2. Providing valuable input to the executive management team,
including the CEO
3. Participating in developing a technology strategy within
their business units
4. Effectively managing project resources, including technical
staff
5. Leading innovative groups in their departments
6. Incorporating technology into new products and services
7. Developing proactive methods of dealing with changes in
technology
8. Investigating how technology can improve competitive
advantage.
[Figure 12.6 CEO technology best practices arc: a grid charting the dimension skills (technology cognition, organizational structures, executive values, executive ethics, and executive leadership) against the developmental dimensions of maturing (conceptual knowledge of technology, multiplicity of business perspectives of technology, integration of business uses of technology, implementation of business/technology process, and strategic uses of technology).]
Table 12.2 CEO Technology Best Practices Arc—Detail

Technology Concept
Conceptual knowledge of technology: Understands concepts and definitions about technology and how it relates to business. Has conceptual knowledge of the software development life cycle. Understands high-level concepts about distributed processing, database development, and project management. Understands the definition and role of operating systems such as UNIX, Windows, and Mac. Has the ability to relate technology concepts to other business experiences. Understands that different technology may be required for a particular project and organization. Can conceptualize how to expand the use of technology and apply it to business situations.
Multiplicity of business perspectives of technology: Seeks to manage by appreciating that technology can have multiple perspectives. Able to manage a process that requires validation of different opinions about business uses of technology. Can manage the different objective ideas from multiple technology views without getting stuck on personal biases. Has an ability to identify and draw upon multiple perspectives available from business sources about technology, particularly from independent sources. Developing a discriminating ability to create an infrastructure that can operate with multiple views. Committed to creating an organization that can learn through realistic and objective judgment, as demonstrated by the applicability of the technology material drawn for a particular project or task and tied to business outcomes.

Organizational Structures
Conceptual knowledge of technology: Understands that technology can be viewed by other organizations in different ways and may need different organizational structures. Can use technology as a medium of communication. Understands that certain technologies may need to be managed differently and need specific types of structures and expertise. Has the ability to comprehend recommended/suggested technological solutions to suit business needs and preferences.
Multiplicity of business perspectives of technology: Seeks to manage technology as a vehicle to learn more about what alternative organization structures are available from others. Strives to create a learning organization that cares about what other staff perceive as solutions. Committed to cultural assimilation that can change the need to restructure the organization. Tries to understand and respect technologies that differ from what the organization is currently using. Understands that the organization has multiple and different technological needs.

(Continued)
Table 12.2 (Continued) CEO Technology Best Practices Arc—Detail

Technology Concept
Integration of business uses of technology: Creates an organization that has the ability to relate various technical concepts and organize them with nontechnical business issues. Can manage by operating with both automated and manual business solutions. Can use technology to expand business reasoning, logic, and what-if scenarios. Establishes business templates that allow technology to offer everyday business solutions. This involves the hypothetical (inductive/deductive) logical business issues.
Implementation of business/technology process: Organization's use of technology is concrete, accurate, and precise, broad and resistant to interference from nonauthentic technology business sources. Ability to resist or recover from faulty uses of technology that are not realistic without a supporting business plan.
Strategic uses of technology: Methods and judgment as a multidimensional CEO are independent and show critical discernment. Conceptual knowledge of technology can be transferred and can be used to self-educate within and outside of technology. Can use technology for creative purposes to create new business initiatives and integrate them with short- and long-term business goals.

Organizational Structures
Integration of business uses of technology: Can deal with multiple dimensions of criticism about how technology can be used in the organization. Can develop relationships (cooperative) that are dynamic and based on written communication and oral discourse about how business can drive technological investments. Ability to create new business relations using technology with new and existing customers. Has an appreciation of cyberspace as a new market—a place wide open to dialogue (spontaneous), to provide new opportunities for business growth.
Implementation of business/technology process: Commitment to open discussion of alternative opinions on technology and acceptance of varying types of structures to accommodate technology opportunities. Ability to sustain dynamic organizational structures. Can design new structures to integrate multiple dimensions of business and technology solutions.
Strategic uses of technology: Can dynamically manage different types of interdependent and dependent organizational relationships. Ability to manage within multiple dimensions of business cultures, which may demand self-reliance and confidence in independence of initiatives.

(Continued)
Table 12.2 (Continued) CEO Technology Best Practices Arc—Detail

Executive Values
Conceptual knowledge of technology: Understanding of technology and cultural differences. Conceptually understands that global communication, education, and workplace use of technology can be problematic—subject to false generalizations and preconceived notions. Management awareness of responsibilities to address assumptions about how technology will be viewed by other departments and customers.
Multiplicity of business perspectives of technology: Sets conditions that foster the need to obtain multiple sources of information and opinion about technology values. Propagates organizational acceptance that there can be multidimensional values in human character.

Executive Ethics
Conceptual knowledge of technology: Understands that there is a need to use technology with honesty regarding privacy of access and information. Supports the development of ethical policies governing business uses of the Internet, research, intellectual property rights, and plagiarism.
Multiplicity of business perspectives of technology: Committed to creating an organization that uses information in a fair way—comparison of facts against equal sources of business information. Understands, and is compassionate about the fact, that business and technology information may have different levels of knowledge access. Recognizes the need for sharing information with other business units from a sense of inequality.

Executive Leadership
Conceptual knowledge of technology: Conceptualizes the need to have a leadership role with respect to technology in the business—the business and technologically realizable executive self.
Multiplicity of business perspectives of technology: Understands how other executives can view technology leadership differently. Understands or has awareness of the construction of self that occurs when taking on the integration of technology in business operations. Focuses on views of other CEOs in multiple settings. Understands that the self (through technology) is open to more fluid constructions, able to incorporate diverse views in multiple technology settings.
(Continued)
As with CEO research, there are myriad best practices that have
been offered as a method of dealing with the subject of technology
management. Unfortunately, these practices usually are vague and
intermingle management levels and departments; that is, it is diffi-
cult to know whether the best practice is for the chief IT executive,
Table 12.2 (Continued) CEO Technology Best Practices Arc—Detail

Executive Values
Integration of business uses of technology: Can manage multiple dimensions of value systems and can prioritize multitasking events that are consistent with value priorities. Ability to assign value to new and diverse technology business alternatives—linking them to legacy systems and processes.
Implementation of business/technology process: Managing value systems in new ways because technology changes long-term values and goals for business objectives. Recognition that some concepts remain unchanged despite emerging technologies.
Strategic uses of technology: Management of technology and business is based on formed principles as opposed to dynamic influences or impulses. Formed executive principles establish the basis for navigating through or negotiating the diversity of business opportunities and impulses for investment in technologies.

Executive Ethics
Integration of business uses of technology: Consistent management values displayed on multiple business goals, mission, and dedication to authenticity. Maintains management consistency in combining values regarding technology issues.
Implementation of business/technology process: Business and technology are a commitment in all aspects of management value systems, including agility in managing multiple business commitments. Commitment to greater openness of mind to altering traditional and nontechnological management methods.
Strategic uses of technology: Technology management creativity with self-defined principles and beliefs. Risk taking in technology-based ventures. Utilizing technology to expand one's arenas of business development. Manages the business-liberating capacities of technology.

Executive Leadership
Integration of business uses of technology: Manages technology to unify multiple parts of the organization and understands how the process behaves in different business situations.
Implementation of business/technology process: Has developed an executive identity of self from a multiplicity of management venues. Method of management creates positive value systems that generate confidence about how multiple business communities need to operate.
Strategic uses of technology: Acceptance of and belief in a multidimensional business world of how to lead with technology. Can comfortably determine the authenticity of the organization's executives and their view of the self. Can confirm disposition on technology independently from others' valuations, both internally and from other organizations. Beliefs direct and control multidimensional leadership growth.
the CEO, or some other level of management. We know from the
research from Bolman and Deal (1997) that middle managers feel
torn by conflicting signals and pressures they get from both senior
management and the operations that report to them: “ They need to
understand the difference in taking risks and getting punished for
mistakes” (p. 27). According to Bolman and Deal (1997), best prac-
tices for middle managers need to cover the following areas:
1. Knowledge management
2. Alignment
3. Leadership and commitment
4. Organization
5. Human resources
6. Opportunity management
7. Leveraging
8. Performance assessment
Their study covered more than 400 companies in the eight areas
of concern. I extracted 10 middle management-related best practices
from their study results and concluded that middle managers need to
1. Understand how to take a strategy and implement it with
technology; that is, they need to create tactics for completing
the project.
2. Establish team-building measures for linking technology
with daily operations of the staff.
3. Foster the aggregation and collaboration of business unit
assets to form peer groups that can determine joint efforts for
implementing new technologies.
4. Stimulate their staffs using innovative strategies of value
propositions and reward systems.
5. Create multifunctional teams that can focus on particu-
lar aspects of how technology affects their specific area of
expertise.
6. Follow common project management practices so that mul-
titier and department projects can be globally reviewed by
senior management.
7. Form project teams that can respect and perform on an action
basis; that is, teams that are action oriented.
8. Understand how to communicate with, and use, IT staff on
projects.
9. Have a systematic process for gathering intelligence relating
to pertinent technology developments.
10. Understand that customers are the drivers for technology
tools provided by the organization.
On reviewing the different aspects of middle manager best practices
with technology research, it appears that there are two focal points:
(1) those best practices that address the needs of senior management,
the CIO, and the CEO; and (2) those that are geared toward the
management of the staffs who need to implement emerging technol-
ogy projects.
This makes sense, given that the middle manager, whether a director, line manager, or supervisor, needs
to deal with executive productivity-related issues as well as staff
implementation ones. They are, as Bolman and Deal (1997) state,
“ torn” by these two competing organizational requirements.
Table 12.3 represents the combined list of technology-based best
practices organized by executive best practices and implementation
best practices.
Table 12.3 exemplifies the challenge that middle managers
have in balancing their priorities. In accordance with the research,
the best practices mentioned are implemented using methods of
knowledge management, alignment, leadership and commitment,
human resources, opportunity management, leveraging, and per-
formance assessment. As with the other best practices, the middle
manager technology best practices are limited because they do not
address the specific needs of ROD, particularly organizational
learning theories (with the exception of knowledge management).
This shortfall is addressed by another developmental arc model that combines these theories with the preceding definitions of best practices.
The Middle Management Best Practices Technology Arc
The middle management best practices technology arc, as with others,
can be used to evaluate a middle manager's strategic and operational
uses of technology by using a grid that measures competencies rang-
ing from conceptual knowledge about technology to more complex
uses of technology and business operations.
The five principal dimensions defined by the arc determine the middle manager's maturity with business implementations of technology: cognitive, organizational interactions, management values, project ethics, and management presence. There are five stages of maturation
that guide the middle manager's growth. The first is becoming reflectively aware of one's existing knowledge of business technology
and how it can be implemented. The second is the recognition of the
Table 12.3 Middle Manager Executive and Implementation Best Practices

Executive-based middle manager best practices:
1. Provide valuable input to the executive management team, including the CEO.
2. Incorporate technology into new products and services.
3. Participate in developing a technology strategy within their business units.
4. Have proactive methods of dealing with changes in technology.
5. Focus on how technology can improve competitive advantage.
6. Have a systematic process for gathering intelligence relating to pertinent technology developments.
7. Understand that customers are the drivers for technology tools provided by the organization.

Implementation-based middle manager best practices:
1. Understand how to communicate with and use IT staff on projects.
2. Effectively manage project resources, including technical staff.
3. Lead innovative groups in their departments.
4. Understand how to take a strategy and implement it with technology; that is, create tactics for completing the project.
5. Establish team-building measures for linking technology with staff's daily operations.
6. Foster the aggregation and collaboration of business unit assets to form peer groups that can determine joint efforts for implementing new technologies.
7. Stimulate their staffs using innovative strategies of value propositions and reward systems.
8. Create multifunctional teams that can focus on particular aspects of how technology affects their specific area of expertise.
9. Follow common project management practices so that multitier and department projects can be globally reviewed by senior management.
10. Form project teams that can respect and perform on an action basis; that is, teams that are action oriented.
multiplicity of ways that technology can be implemented on projects
(e.g., other business views of how technology can benefit the organiza-
tion). The third is integration of business implementation with tech-
nology, in which a middle manager can begin to combine technology
issues with business concepts and functions on a project basis. The
fourth is stability of business/technology implementation, in which
the middle manager has integrated business/technology as a regu-
lar part of project implementations. The fifth is technology project
leadership, in which the middle manager can use their independent
judgment on how best to use technology and business on a project-by-
project basis. Thus, as middle managers grow in knowledge of tech-
nology and business projects, they can become increasingly open to new methods of implementation and, eventually, autonomous in the way they implement projects and provide leadership.
Definitions of Maturity Stages and Dimension Variables
in the Middle Manager Best Practices Arc
Maturity Stages
1. Technology implementation competence and recognition: This
first stage represents the middle manager's capacity to learn,
conceptualize, and articulate key issues relating to cogni-
tive business technological skills, organizational interactions,
management value systems, project management ethics, and
management presence.
2. Multiplicity of business implementation of technology: Indicates
the middle manager's ability to integrate multiple points of
view during technical project implementations. Using these
new perspectives, the middle manager augments his or her
skills with business implementation with technology career
advancement, expands his or her management value system,
is increasingly motivated to act ethically during projects, and
enhances his or her management presence.
3. Integration of business implementation of technology: Maturing
middle managers accumulate increased understand-
ing of how business and technology operate together and
affect one another. They gain new cognitive skills about
technology and a facility with how the organization needs
to interact, expand their management value system, perform
business/technology actions to improve ethics about busi-
ness and technology, and develop effective levels of manage-
ment presence.
4. Stability of business/technology implementation: Middle manag-
ers achieve stable integration when they implement projects
using their cognitive and technological ability; have organi-
zation interactions with operations; have management values
with their superiors, peers, and subordinates; possess project
ethics; and have the management presence appropriate for
performing job duties, not only adequately, but also competi-
tively (with peers and higher-ranking executives in the orga-
nization hierarchy).
5. Technology project leadership: Leadership is attained by the
middle manager when he or she can employ cognitive and
technological skills, organization interactions, management, a
sense of business ethics, and a sense of management presence
to compete effectively for executive positions. This middle
manager is capable of obtaining increasingly executive-level
positions through successful interviewing and organization
performance.
Performance Dimensions
1. Business technology cognition: Pertains to skills specifically
related to learning, applying, and creating resources in busi-
ness and technology, which include the necessary knowledge
of complex operations. This dimension essentially establishes
the middle manager as “ operationally” proficient with tech-
nology and forms a basis for movement to more complex and
mature stages of development when managing technology
projects.
2. Organizational interactions: This focuses on the middle manager's knowledge and practice of proper relationships and
management interactions during technology projects. This
pertains to in-person interactions, punctuality of staff, work
completion, conflict resolution, deference, and other protocols
in technology projects.
3. Management values: Measures the middle manager's ability
to articulate and act on mainstream corporate values credited
with shaping technology project work ethic: independent ini-
tiative, dedication, honesty, and personal identification with
technology project goals, based on the philosophy of manage-
ment protocol of the organization.
4. Project ethics: Reflects the middle manager's commitment to
the education and professional advancement of other persons
in technology and in other departments.
5. Management presence: Involves the middle manager's view
of the role of a project-based manager during a technology
project implementation and the capacity to succeed in tandem
with other projects. Aspects include a devotion to learning
and self-improvement, self-evaluation, the ability to acknowl-
edge and resolve business conflicts, and resilience when faced
with personal and professional challenges during technology
implementations.
Figure 12.7 shows a graphic view of the middle management tech-
nology best practices arc. Each cell in the arc provides the condi-
tion for assessment. The complete arc is provided in Table 12.4. The
challenge of the middle management best practices arc is whether
to emphasize executive management concepts (more organizationally
intended) or event-driven concepts (project oriented). This arc focuses
on project implementation factors and deals with best practices that
can balance executive pressures with implementation realities. I sug-
gest that senior middle managers, at the director level, who do not
participate in implementation, set their best practices based on the CEO maturity arc. Indeed, a separate arc for upper management would contain too many overlapping cells.
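Because the middle management arc shares the same grid shape as the other arcs, an assessment can be summarized per dimension and then rolled up. The sketch below is illustrative only: the dimension names follow the text, but taking the overall placement as the minimum across dimensions is an interpretive assumption, not the author's stated method.

```python
# Illustrative sketch: summarizing a middle manager's arc assessment
# into a profile. Dimension names follow the text; the roll-up rule
# (overall placement = minimum across dimensions) is an assumption.

MM_DIMENSIONS = [
    "Business technology cognition",
    "Organizational interactions",
    "Management values",
    "Project ethics",
    "Management presence",
]

def profile(stage_reached):
    """Given the stage (1-5) reached in each dimension, return the
    per-dimension profile and an overall placement, taken here as the
    minimum across dimensions (growth gated by the least mature one)."""
    per_dim = {d: stage_reached.get(d, 0) for d in MM_DIMENSIONS}
    return per_dim, min(per_dim.values())

# Example: strong cognition but weak project ethics caps the overall
# placement at stage 2 under this roll-up rule.
per_dim, overall = profile({
    "Business technology cognition": 4,
    "Organizational interactions": 3,
    "Management values": 3,
    "Project ethics": 2,
    "Management presence": 3,
})
print(overall)  # 2
```

The minimum rule reflects the balancing problem described above: a manager torn between executive and implementation pressures advances only as far as the weakest dimension allows.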
Summary
The formation of best practices to implement and sustain ROD is a
complex task. It involves combining traditional best practice methods
(i.e., what seems to work for proven organizations and individuals)
[Figure 12.7 Middle management technology best practices arc: a grid charting the dimension skills (business technology, organizational, management values, project ethics, and management presence) against the developmental dimensions of maturing (technology implementation, multiplicity of business, integration of business, stability of business/technology, and technology project).]
Table 12.4 Middle Management Technology Best Practices Arc—Detail

Business Technology Cognition
Technology implementation competence and recognition: Understands how technology operates during projects. Has conceptual knowledge about hardware interfaces and the software development life cycle. Has the core ability to relate technology concepts to other business experiences. Can also participate in the decisions about what technology is best suited for a particular project. Can be taught how to expand the use of technology and can apply it to other business situations.
Multiplicity of business implementation of technology: Understands that technology projects can have multiple perspectives on how to implement them. Able to analyze valid versus invalid opinions about business uses of technology. Can create objective ideas from multiple technology views without getting stuck on individual biases. An ability to identify and draw upon multiple perspectives available from project sources about technology. Developing a discriminating ability with respect to choices available. Realistic and objective judgment, as demonstrated by the applicability of the technology material drawn for a particular project or task and tied to functional/pragmatic outcomes.

Organizational Interactions
Technology implementation competence and recognition: Understands that technology projects require the opinions of other departments and staff in multiple ways. Understands that certain technological solutions and training methods may not fit all project needs and preferences of the business. Has the ability to recommend/suggest alternative technological solutions to suit other business and project needs and preferences.
Multiplicity of business implementation of technology: Seeks to use technology projects as a vehicle to learn more about organization interactions and mindsets. Strives to care about what others are communicating and embraces these opinions on a project basis. Tries to understand and respect technologies that differ from their own. Understands the basic technological project needs of others.
(Continued)
Information Technology
Table 12.4 (Continued) Middle Management Technology Best Practices Arc—Detail

Dimension Variable: Business Technology Cognition (continued)

Integration of Business Implementation of Technology: Has the ability to relate various technical project concepts and organize them with non-technical business issues. Can operate with both business and technical solutions. Can use technology to expand reasoning, logic, and what-if scenarios. Ability to discern the templates that technology has to offer in order to approach everyday technology project problems. This involves hypothetical (inductive/deductive) logical business and technology skills.

Stability of Business/Technology Implementation: Knowledge of technology projects is concrete, accurate, precise, broad, and resistant to interference from non-authentic business and technical project sources. Ability to resist or recover from proposed technology that is not realistic, and to recover resiliently.

Technology Project Leadership: Methods and judgment in multidimensional technology projects are independent and use critical discernment. Operational knowledge of technology and project management skills can be transferred and used to self-educate within and outside of technology. Can use technology for creative purposes to solve business and project challenges and integrate with executive management views.

Dimension Variable: Organizational Interactions (continued)

Integration of Business Implementation of Technology: Can deal with multiple dimensions of criticism about technology-based projects. Can develop cooperative relationships that are dynamic and based on discourse. Ability to create project relations with IT, other departments, and customers. Has an appreciation of project communication: fostering open, spontaneous dialogue and give-and-take rather than voyeuristic one-sidedness about the project. Ability to produce in teamwork situations, rather than solely in isolation.

Stability of Business/Technology Implementation: Loyalty and fidelity to relations in multiple organizations. Commitment to criticism and acceptance of multiple levels of IT and business relationships. Ability to sustain non-traditional types of inputs from multiple sources during projects.

Technology Project Leadership: Can utilize and integrate multiple dimensions of project solutions in a self-reliant way, developing alone if necessary using other technical and non-technical resources. Can dynamically select types of interdependent and dependent organizational relationships. Ability to operate within multiple dimensions of business cultures, which may demand self-reliance, independence of initiative, and interactive communications during project implementations.
Table 12.4 (Continued) Middle Management Technology Best Practices Arc—Detail

Dimension Variable: Management Values

Technology Implementation Competence and Recognition: Technology and cultural sensitivity during project implementations. Global communication, education, and project use of technology can be problematic, subject to false generalizations and preconceived notions. Awareness of assumptions about how technology will be viewed by other departments and staff, and of biases about the types of technology used (Mac vs. PC).

Multiplicity of Business Implementation of Technology: Can appreciate the need to obtain multiple sources of information and opinions during project implementations. The acceptance of multidimensional values in human character as a value during project design and completion.

Dimension Variable: Project Ethics

Technology Implementation Competence and Recognition: Uses technology on the project with honesty regarding privacy of access and information. Development of ethical policies governing project uses of the Internet, research, intellectual property rights, and plagiarism.

Multiplicity of Business Implementation of Technology: The use of information in a fair way: comparison of facts against equal sources of project information. Compassion for differences in project information where sources are limited because of inequality of technology access. Compassion for sharing information with other business units from a sense of inequality.

Dimension Variable: Management Presence

Technology Implementation Competence and Recognition: Has an accurate perception of one's own potential and capabilities in relation to technology projects: the technologically realizable manager.

Multiplicity of Business Implementation of Technology: Understands how other managers can view the self from virtual and multiple perspectives. Understands or has awareness of the construction of self that occurs in projects. Understands the views of other executives and managers in multiple project settings. Understands that the self (through technology projects) is open to more fluid constructions, able to incorporate diverse views in multiple settings.
with developmental theory on individual maturation. The combina-
tion of these two components provides the missing organizational
learning piece that supports the attainment of ROD. Another way
of comprehending this concept is to view the ROD arc as the over-
arching or top-level model. The other maturity arcs and best practices
Table 12.4 (Continued) Middle Management Technology Best Practices Arc—Detail

Dimension Variable: Management Values (continued)

Integration of Business Implementation of Technology: Can operate a project within multiple dimensions of value systems and can prioritize multitasking events that are consistent with value priorities. Ability to assign value to new and diverse technology project alternatives, integrating them within a system of pre-existing business and technology project implementation values.

Stability of Business/Technology Implementation: Testing technology value systems in new ways during the project implementation is integrated with long-term values and goals for business achievement. Some project concepts are naturally persistent and endure despite new arenas in the technological era.

Technology Project Leadership: Use of technology and business during project implementation is based on formed principles as opposed to dynamic influences or impulses. Formed principles establish the basis for navigating through, or negotiating, the diversity of business influences and impulses during the project.

Dimension Variable: Project Ethics (continued)

Integration of Business Implementation of Technology: Consistent values displayed in multiple project communications, deliverables of content, and dedication to authenticity. Maintains consistency in integrating values within technology business issues during project implementation.

Stability of Business/Technology Implementation: Technology is a commitment in all aspects of value systems, including agility in managing multiple project commitments. Commitment to greater openness of mind to altering traditional and non-technological methods on project implementations.

Technology Project Leadership: Technological project creativity with self-defined principles and beliefs. Risk-taking in technology-based projects. Utilizing technology to expand one's arenas of project freedom. Exploring the project management liberating capacities of technology.

Dimension Variable: Management Presence (continued)

Integration of Business Implementation of Technology: Operationalizes technology projects to unify multiple components of the self and understands the appropriate behaviors in varying management situations.

Stability of Business/Technology Implementation: Has regulated an identity of self from a multiplicity of management venues. Method of project interaction creates positive value systems that generate confidence about operating in multiple organizational communities.

Technology Project Leadership: Can comfortably determine the authenticity of other managers and their view of the self. Can confirm project-related disposition independently from others' valuations, both internally and from other department cultures. Has direct beliefs and controls multidimensional management growth.
represent the major communities of practice that are the subsets of
that model. This is graphically depicted in Table 12.5.
Thus, the challenge is to create and sustain each community and, at
the same time, establish synergies that allow them to operate together.
This is the organizational climate created at ICAP, where the execu-
tive board, senior and middle managers, and operations personnel all
formed their own subcommunities; at the same time, all had the abil-
ity for both downward and upward communication. In summary, this
particular model relies on key management interfaces that are needed
to support ROD.
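The maturity-arc idea above lends itself to a simple assessment structure. The sketch below is purely illustrative and is not from the text: the stage and dimension names follow the middle management arc of Table 12.4, and the rule that a manager's overall arc position is limited by the least mature dimension is my assumption, not the author's.

```python
from dataclasses import dataclass, field

# The five stages of the middle management arc, in order of
# increasing maturity (per Table 12.4).
STAGES = [
    "technology implementation competence and recognition",
    "multiplicity of business implementation of technology",
    "integration of business implementation of technology",
    "stability of business/technology implementation",
    "technology project leadership",
]

# The dimension variables assessed along the arc (per Table 12.4).
DIMENSIONS = [
    "business technology cognition",
    "organizational interactions",
    "management values",
    "project ethics",
    "management presence",
]

@dataclass
class ArcAssessment:
    """Records, per dimension, the highest stage a manager has reached."""
    reached: dict = field(default_factory=dict)  # dimension -> stage index

    def record(self, dimension: str, stage: str) -> None:
        self.reached[dimension] = STAGES.index(stage)

    def overall_stage(self) -> str:
        # Assumption: overall position is capped by the least mature
        # dimension; unassessed dimensions default to the first stage.
        lowest = min(self.reached.get(d, 0) for d in DIMENSIONS)
        return STAGES[lowest]

a = ArcAssessment()
for d in DIMENSIONS:
    a.record(d, STAGES[2])
a.record("project ethics", STAGES[1])
print(a.overall_stage())  # -> multiplicity of business implementation of technology
```

A tool like this would only be as good as the qualitative judgments fed into it; the arc descriptions in Table 12.4 are the actual assessment criteria.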
Ethics and Maturity
The word ethics is defined in many different ways. Reynolds (2007)
defines ethics as “a set of rules that establishes the boundaries of generally accepted behaviour” (p. 3). Ethics can also mean conforming to
social norms and rules, which can be challenged by deviant behaviors
of “others.” Still other groups construct ethics as a moral code that a
community agrees to uphold. Ethics often map to our values— like
integrity and loyalty to others. What is ethical for one person may not
be ethical for another. This issue frames yet another question: How
does ethics relate to leadership, specifically leadership in technology?
Ethics became a heightened issue after the Enron scandal in the
United States. The scandal had a huge effect on the IT industry
because it resulted in Congress enacting the Sarbanes-Oxley (SOX)
Act, which placed significant audit trail requirements on document-
ing processes. Most of these processes existed in automated applica-
tions; thus, IT was required to comply with the rules and regulations
that the SOX Act mandated. Implementing the SOX Act became an
immense challenge for IT organizations mostly because the rules of
compliance were vague.
Most would agree that ethics are a critical attribute for any leader.
The challenge is how to teach it. The SOX Act “teaches” ethics by
establishing governance by control— control of unethical behavior
through catching deviants. However, history has shown us that devi-
ants are not cured by laws and punishment; rather, they are simply
contained. Unfortunately, containment does not eliminate or cure
unethical behavior. Furthermore, deviants tend to find new ways to
Table 12.5 ROD and Best Practices Arcs

Underlying best practices:
• Strategic thinking
• Industry expertise
• Change management
• Communications
• Business knowledge
• Technology proficiency
• Hiring and retention
• Innovation and outsourcing leadership
• Information architect

Underlying best practices:
• Committed to technology
• Technology catalyst and champion
• Business first, then technology
• Use consultants for objective input
• Support communities of practice
• Set project milestones
• Foster cultural assimilation
• Understand organizational learning
• Understand technology life cycle
• Have chief IT exec report directly
• Support organizational dynamism

Underlying best practices (management-based):
• Interact with executive management
• Incorporate technology into new products
• Use technology for competitive advantage
• Process for evaluating new technologies
• Understand driver role of customers

Underlying best practices (implementation-based):
• Utilization of IT staff on projects
• Leading innovative groups
• Effectively managing project resources
• Strategic use of technology
• Establish team-building measures
• Foster aggregation and collaboration
• Stimulate staff with value propositions
• Create multi-functional teams
• Support common project management practices
• Form action-oriented teams
bypass controls and get around the system in time. On the other hand,
educators more often see the solution as transforming behavior of the
individual; that is, ethics can only be taught if the individual realizes
its value. Value in ethical behavior becomes a systemic transforma-
tion when the individual believes in its self-realization. Being ethical
is then aligned with self-actualization and adult maturity. So, ethics
can be aligned with maturity in the same ways that the maturity arcs
presented were mapped to leadership. Why is this so important for
IT leaders? The SOX Act answered this question because it clearly
identified the IT function as the most critical component of com-
pliance. Unethical behavior in technology-based systems can damage
the greater good, which places a big responsibility on the IT function.
I would suggest that IT ethics and leadership are very much linked.
It is a very important responsibility for technology executives to pro-
vide direction to their firms on how technology and ethics are inte-
grated and how they can transform individuals to value conformance
without the overuse of governing controls. Firms must use organi-
zational learning tools as the vehicle to promote such conformance
through changes in behavior. Unfortunately, many executives, includ-
ing those in IT, practice governance much more than influence. I am
not suggesting the elimination of controls, but rather, that leadership
should depend less on governance and more on effecting behavioral
change. In other words, the key to developing strong ethics within an
IT organization is leadership, not governance. An important compo-
nent of leadership is the ability to influence the behaviors of others
(without exerting control or power). The real power of leadership is to
use influence to effect ethical behavior as opposed to demanding it.
How do we create ethical IT organizations? Further, how can a
technology executive provide the necessary strategy and influence to
accomplish firm-wide ethical transformation? The first strategy, for a
number of reasons, should be to create an ethical IT organization as
the model:
1. The technology executive has control over that organization.
2. Most IT ethical problems today emanate from technology per-
sonnel because of their unusual access to data and information.
3. IT is positioned to lead the direction, since it is its area of
expertise.
So, IT can set the example for technology-related ethics for the
entire organization by establishing its own level of compliance by a
“way of being,” as opposed to a way of being managed. Often, this
way of being can evolve into a code of behavior that can become the
cultural “ code” of the organization itself. This code of ethics should
address and be limited to such IT-related issues as:
• Privacy: Because of their access to transactions over the
Internet, IT professionals must respect the privacy of infor-
mation of others. Their code of ethics should go beyond just
e-mail transactions to include access to personal data that
may be stored on desktops or data files.
• Confidentiality: This differs from privacy because the data are
available to IT in the normal transactions of business. That
is, the data are captured or used in the development of an
application. IT personnel need to keep such information con-
fidential at all times— not only for the employees of the firm
but also for clients and vendors.
• Moral responsibility: IT needs to protect the organization from
outside abuses or questionable transactions coming into and
leaving the company. Protection can also include blocking
access to certain websites that are dangerous or inappropriate.
This practice should not be regarded as a control, but rather,
as a moral responsibility of any employee. Of course, there
needs to be careful objectivity in how the moral code is actu-
ally executed when a problem is identified.
• Theft: Removing information that belongs to someone else
can be construed as a form of theft. Theft should always be
regarded as an offense punishable by law— that is, above and
beyond rules and regulations of the company.
These are only examples of areas in which an ethical code might
be applied. Such a code must be implemented in IT as a framework
for how people are employed and as a basis for promotion. Again,
governance plays an important part because unfortunately there will
always be individuals who violate ethics. What we need are organiza-
tions that promote and defend ethics to the greatest possible extent.
This way of being is consistent with the core definition of a learn-
ing organization in that ethics must inevitably be part of the fabric
of the culture and evolved within it. With IT serving as a model,
the technology executive can act as the champion for implementation
company-wide. This chapter has shown that ethics are intrinsically
linked to maturity. Indeed, every arc contained an ethical dimension. Perhaps if such ethical practices existed
at Enron, the “learning organization” there could have stopped the
abuses.
13

Conclusion
Introduction
This book has explored many conceptual aspects of information tech-
nology (IT) and organizational learning and how they can be utilized
together to help firms compete in a rapidly changing world. Case stud-
ies were presented to show how these concepts, and the theories they
derive from, could be implemented into practice. It is most important,
however, to remember that each organization is unique and that the
implementation of organizational learning methods must therefore
be tailored to the particular dynamics at play in a given organiza-
tion. Hence, there can be no boilerplate methodology for the strategic
employment of technology; such an approach could never guarantee
maximum benefit to the organization. My position involves employ-
ing various organizational learning methods that must be carefully
chosen and implemented, based on the projected target audience and
on the particular stage of growth of the organization and its mature
use of technology.
In my study of chief executive officer (CEO) perceptions of IT,
I found that the role of IT was not generally understood in most of the
organizations I surveyed, especially at the CEO level. There appear
to be inconsistent reporting structures within the IT organization,
and there is a lack of IT-related discussion at the strategic and senior
executive levels. Furthermore, most executives are not satisfied with
IT performance, and while most agree that technology should play a
larger role in marketing, few have been able to accomplish this. The
general dilemma has involved an inability to integrate technology
effectively into the workplace.
Certainly, a principal target of this book is to answer the question
of what chief IT executives need to do and in what directions their
roles need to evolve regarding IT. Other concerns center on general
organizational issues surrounding who IT people are, where they
report, and how they should be evaluated. IT must also provide better
leadership with respect to guiding a company through the challenges
of unproven technologies. While technology behaves dynamically, we
still need processes that can validate its applicability to the organiza-
tion. Another way of viewing this is to accept the idea that certain
technologies need to be rejected because of their inappropriateness to
drive strategy.
IT is unique in that it is often viewed from a project perspective: what is required to deliver technology, and the cultural impact technology has on the organization. Because of the pressure to see measurable outcomes, IT tends to be measured by project deliverables. From a
project perspective, IT staff members typically take on the role of
project managers, which requires them to communicate with multiple
business units and management layers. They need to establish shorter
project life cycles and respond to sudden changes to the requirements.
No longer does a traditional project life cycle with static dates and
deliverables work with the fast-paced businesses of today. Rather,
these projects are living and breathing entities that must be in balance
with what is occurring in the business at all times. Most important is
that project measurable outcomes must be defined and seen in balance
with expected organizational transformations.
I began my explanation of the role of technology by establishing
it as a dynamic variable, which I termed technological dynamism.
Responsive organizational dynamism (ROD) represents my attempt
to think through a range of responses to the problems posed by tech-
nological dynamism, which is an environment of dynamic and unpre-
dictable change resulting from the advent of innovative technologies.
This change can no longer be managed by a group of executives or
managers; it is simply too complex, affecting every component of a
business. A unilateral approach does not work; the problem requires
an environmental approach. The question is how to create an orga-
nization that can respond to the variability of technologies in such
a way that its responses become part of its everyday language and
discourse. This technological state of affairs is urgent for two major
reasons. First, technology not only is an accelerator of change but also
requires accelerated business responses. Organizations cannot wait
for a committee to be formed or long bureaucratic processes to act.
Second, the market is unforgiving when it comes to missing business
opportunities. Every opportunity missed, due to lack of responding in
a timely fashion, can cost an organization its livelihood and future. As
stated by Johansen et al. (1995):
The global marketplace requires constant product innovation, quick
delivery to market, and a large number of choices for the consumer, all
of which are forcing us to rethink the way we structure our business
organizations to compete. Indeed, many businesses are finding their
traditional structure cumbersome— the way they work is more of an
obstacle than help in taking advantage of global opportunities. (p. 1)
While ROD is the overarching approach for a firm that can perform
in a dynamic and unpredictable environment, there are two major
components to that approach that I raised for further consideration. I
discussed how technology, as a variable, is unique in that it affects two
areas of any organization. The first is the technology itself and how it
operates with business strategy. I called this the strategic integration
component of responsive organizational dynamism. The challenge
here is to have organizations create processes that can formally and
informally determine the benefit of new and emerging technologies
on an ongoing basis. The second component is cultural assimilation,
which is about managing the cultural and structural changes that are
required when new strategies are adopted by the organization.
Creating an environment of ROD requires processes that can fos-
ter individual and organizational-level thinking, learning, and trans-
formation. Organizational learning techniques best fit the need as
they contain the core capabilities to assist organizations in reinvent-
ing themselves as necessary, and to build an organization that can
evolve with technology, as opposed to one that needs to be reorga-
nized. I have presented many organizational learning concepts and
modified them to provide specific remedies to the challenges required
to create responsive organizational dynamism. I have also presented
the complex vectors that determine which learning theory should be
applied and integrated with others, so that every aspect of individual
and organizational evolution can be supported. I chose to use the term
vector to describe this force or influence because of the different ways
in which these learning methods can help in creating and sustaining
firm-wide responses to technological dynamism.
Perhaps the most important learning process among these is that
of linear development leading to maturation. My use of maturity arcs
provides a framework for the development and integration of mod-
els that can measure where individuals and organizations are in their
trajectory toward the integration of emerging technologies in their
business strategies. These maturity arcs provide a basis for how to mea-
sure where the organization is, what types of organizational learning
methods to consider, and what outcomes to expect. Indeed, providing
measurable outcomes in the form of strategic performance is the very
reason why executives should consider embracing this model.
I also discussed a number of methods to manage organizational
learning, modifying theories of knowledge management and change
management, so that they specifically addressed the unique aspects
of change brought about by new technologies. I looked at how the
CEO needs to become more knowledgeable about technology, and,
based on case studies and research, I provided sets of best practices to
suggest that staff members cannot become part of a learning organi-
zation without the participation of the CEO and his or her executive
committees. On the other hand, I investigated the interesting work
of Nonaka and Takeuchi (1995) and their middle-up-down theory
of middle management. I modified Nonaka and Takeuchi’s idea by
complicating the strata that can be used to define the middle, and I
established three tiers of middle management and integrated them
into organizational learning theories. Finally, I used the Ravell case
study to show how operations personnel continue to play an impor-
tant role in organizational learning, and how the maturity arc can be
used to transform individual learning practices into less event-driven
learning at the organizational level. I formulated best practices for
each of these three major organizational structures, along with corre-
sponding maturity arcs to lay the foundation of what each community
needs to do to properly participate in the transformations indicated
for responsive organizational dynamism. To this end, I proposed cer-
tain road maps that, if followed, could provide the mechanisms that
lead to the kind of organizational transformation that is empowered
to handle the challenges of new technologies. This process is sum-
marized in Figure 13.1.
I have taken a strong position regarding the debate over whether
learning occurs best on the individual level or at the system-orga-
nizational one, particularly as learning affects the establishing and
sustaining of responsive organizational dynamism. My response to
this debate is “yes”: yes, in the sense that both are very much needed
and part of a process that leads to a structured way of maturing the
use of organizational learning by an organization to improve strategic
performance. I believe the Ravell case study provides an example of
how learning maturation operates in a dynamic environment. We see
that operations personnel tend to rely on event-driven and individual-
based reflective practices before being able to think at an organiza-
tional level. My prior research (Langer, 2002) on reflective practices
clearly shows that many adults do not necessarily know how to reflect.
Figure 13.1 Technology “road map.” [The figure shows the “technology” variable and technological dynamism creating requirements for organizational change, which are addressed through responsive organizational dynamism (strategic integration and cultural assimilation), supported by organizational learning and leading to strategic performance.]
The important work of Argyris and Schön (1996) on introducing and
sustaining individual learning, specifically using double-loop learn-
ing, should be used when implementing an organizational learning
program. Ravell also showed us that time is an important factor for
individual development and that political factions are part of that pro-
cess. With patience and an ongoing program, group learning activi-
ties can be introduced to operations personnel, thereby supporting the
kind of system-level thinking proposed by Senge (1990).
A critical part of organizational learning, in particular the nec-
essary steps to establish a learning organization, is the formation of
communities of practice. Communities of practice, in all of the case
studies, were the cornerstones in the transition from individual-based
learning to group learning. Communities of practice begin the matu-
ration process of getting organizations to change to learning based
on organizational goals. This is critical for ROD because technology
requires planning and vision that are consistent with business strat-
egy. While much of the literature integrates the notion of communi-
ties of practice with knowledge management, I expanded its use and
defined the community as the single most important organizational
structure for dealing with emerging technologies. The reason for this
is the very challenge facing IT organizations today: to be able to inte-
grate their efforts across business units. This has been proven to be the
most difficult challenge for the chief information officers (CIOs) of
today. This was further supported by the Siemens AG case study, in
which Dana Deasy, the corporate CIO of the Americas, provided a
detailed picture of the complex world of a CIO in a global firm, with
over 400,000 employees. Yet, it is the creation of multiple layers of
communities of practice that enables firms to create what I call “common threads” of communication. Thus, the linkage across communi-
ties of practice is a central theme of this book, providing guidance and
education to organizations to establish processes that support their
evolution in a responsive way.
The key word that I have used here is evolution. In the past, infor-
mation traveled much slower, and there was more time to interpret
its impact on the organization. Today, the travel time is shrinking;
therefore, evolution can and should occur at a quicker pace. Indeed,
organizational evolution is intertwined with the dynamics of com-
munity legitimization (Aldrich, 2001). Technological development
for a particular population has widespread consequences for the rest
of the organization. In these cases, technological innovations repre-
sent a form of collective learning that is different from direct learning
from experience alone (Miner & Haunschild, 1995). There are many
scholars who believe that change management must be implemented
through top-down management approaches. However, I hope this
book demonstrates that leadership through top-down management
will never be solely sufficient to establish the organizational struc-
ture needed to handle technological innovations properly. Many
such efforts to reorganize or reengineer organizations have had dis-
appointing results. Many of these failures, I believe, are attributable
to a dependence on management intervention as opposed to strate-
gic integration and cultural assimilation. Technology only serves to
expose problems that have existed in organizations for decades: the
inability to drive down responsibilities to the operational levels of the
organization.
My case studies provide, I trust, a realistic and pragmatic view
toward the attainment of responsive organizational dynamism,
assuming the appropriate roles and responsibilities are available.
Furthermore, the case studies also reflect that progress toward orga-
nizational learning and maturity is a gradual one. As such, I deter-
mined that organizational transformation must be addressed along
the same basis; that is, transformation is a gradual process as opposed
to a planned specific outcome. I showed that organizations could and
should look at transformation in much shorter “chunks,” as opposed to long-term “big-bang” approaches that rarely work and are difficult
to measure. Measurement was applied to organizational transforma-
tion via the implementation of the balanced scorecard. The scorecard
model I modified is tied to the chunk approach.
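The “chunk” approach to measuring transformation can be pictured with a small sketch. This is a hypothetical illustration, not the author's scorecard model: the objectives, measures, and numbers below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """One short, measurable transformation increment (a 'chunk')."""
    objective: str
    measure: str
    target: float
    actual: float

    def met(self) -> bool:
        # A chunk is complete when its measured outcome reaches the target.
        return self.actual >= self.target

# Hypothetical chunks with scorecard-style measures (invented values).
chunks = [
    Chunk("form an IT/business community of practice",
          "joint sessions held this quarter", target=4, actual=5),
    Chunk("shorten the project life cycle",
          "releases delivered this quarter", target=3, actual=2),
]

done = sum(c.met() for c in chunks)
print(f"{done}/{len(chunks)} chunks complete")  # prints "1/2 chunks complete"
```

The point of the structure is that each increment carries its own measure, so transformation progress can be reported continuously rather than judged once at the end of a big-bang effort.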
Another important concept in this book is the reconciliation
between control and empowerment. As organizations find that their
traditional structures are cumbersome when dealing with emerging
technologies, they realize the need to empower employees to do more
dynamically. With this empowerment, employees may make more
mistakes or seem less genuine at times. When this occurs, there may
be a need for management controls to be instituted and power-central-
ized management styles to be incorporated. Unfortunately, too many
controls end any hope of creating a learning organization that can
foster the dynamic planning and needs of responsive organizational
dynamism. They also block the molding of communities of practice
that require common threads of discourse and language. Indeed, it
is communities of practice and discourse that lay the foundations for
addressing the dilemma of employee control versus empowerment.
We are really beginning to experience the results of emerging tech-
nologies, particularly for products traded internationally. We have
seen an unusual trend occur in which offshore product development
and maintenance is at an all-time high, local employment is down,
and corporate earnings are growing. The advent of this cycle lays a
foundation for the new trends of global worker operations, many of
which are shifting from a labor-intensive process to needs for think-
ing, planning, and management.
Partly because of new technological automation, organizations are
able to displace higher-cost local labor with unskilled or less-skilled
international outsourced operations. This means an increase in
management learning related to supervision and coordination in a
technology-driven world. We must be aware of the concern
expressed by O’Sullivan (2001) that “new technologies have created
unemployed workers with no rights” (p. 159). The way individuals
communicate, or the rules of their engagement, is quickly changing,
particularly in the need to create more research and development
(R&D) infrastructures that can respond more quickly to innovation
opportunities brought about by emerging technologies. We saw this
dilemma occur at Siemens, where business strategy and technology
became a major investment, and the realization that e-business was
more about business than just technology.
To address the lack of understanding of the technology life cycle, I
presented my concept of driver and supporter functions and mapped
them onto evolutionary transformation. This life cycle is one that
ties business strategy into technology and should be used to convey
ROD to executives. Driver functions explain why strategic integra-
tion is so important and present a case that requires more market-
ing-based philosophies when investing in technologies. This means
that early adaptation of technology requires, as Bradley and Nolan
(1998) call it, “sense-and-respond” approaches, by which IT organi-
zations can experiment with business units on how a technology may
benefit the business. Siemens and ICAP provided good examples of
different ways of creating infrastructures that can support technology
exploration, including Deasy’s 90-day program, by which technol-
ogy investments were reviewed periodically to see what adjustments
were required to maximize the investment. It also provided a way to
cancel those investments that were not paying off as originally fore-
cast. Understanding that changes along the way are needed, or that
there are technologies that do not provide the intended benefits, must
become a formal part of the process, one that CEOs must recognize
and fund.
On the other hand, the supporter role is one that addresses the
operational side of IT, such that executives and managers under-
stand the difference. I treated the concept of supporter as an eventual
reality, the reality that all technologies, once adopted by operations,
must inevitably become a commodity. This realization paves the way
to understanding when and how technologies can be considered for
outsourcing, based on economies of scale. The adoption of this phi-
losophy creates a structured way of understanding the cost side of the
IT dilemma and requires business units to integrate their own plans
with those offered by emerging technologies. The supporter aspect
of technology became the base of cultural assimilation because once
a technology is adopted by operations, there must be a correspond-
ing process that fosters its impact on organizational structures and
cultural behaviors. It also provides the short- and long-term expected
transformations, which ultimately link technology and strategic
performance.
The driver/supporter philosophy also shows the complexity of the
many definitions of technology, and that executives should not attempt
to oversimplify it. Simply put, technology must be discussed in differ-
ent ways, and chief IT executives need to rise to the occasion to take a
leadership role in conveying this to executives, managers, and opera-
tions, through organizational learning techniques. Organizations
that can implement driver/supporter methods will inevitably be better
positioned to understand why they need to invest in certain technolo-
gies and technology projects. My initial case study at Ravell exposed
the potential limit of only operating on the unit levels and not getting
executives involved in the system thinking and learning phases.
These general themes can be formulated as a marriage between
business strategy and technological innovation and can be represented
as follows:
1. Organizations must change the business cycles of technology
investment; technology investment must become part of the
everyday or normative processes, as opposed to specific cycles
based on economic opportunities or shortfalls. Emerging
technologies tend to be implemented on a “stop-and-go”
basis, or based on breakthroughs, followed by discontinuities
(Tushman & Anderson, 1997).
2. The previous experiences that organizations have had
with technology are not a good indicator of its future use.
Technology innovations must evolve through infrastructure,
learning, and process evaluation.
3. Technology is central to competitive strategy. Executives
need to ensure that technology opportunities are integrated
with all discussions on business strategy.
4. Research and development (R&D) is at the center of sys-
tems/organizational-level thinking and learning. Companies
need to create R&D operations, not as separate entities, but
as part of the evaluation processes within the organizational
structure.
5. Managing technology innovations must be accomplished
through linkages. Thus, interfaces across communities of
practice via common threads are essential if learning is to
improve the ability of the organization to operate within
responsive organizational dynamism.
6. Managing intellectual capital is an exercise of linking the var-
ious networks of knowledge in the organization. Managing
this knowledge requires organizational learning, to transfer
tacit knowledge to explicit knowledge. The cultural assimi-
lation component of ROD creates complex tacit knowledge
between IT and non-IT business units.
7. There are multiple and complex levels of management that
need to be involved in responsive organizational dynamism.
Successful management utilizes organizational learning prac-
tices to develop architectures, manage change, and deal with
short- and long-term projects simultaneously. Strong leader-
ship will understand that the communities of practice among
the three primary levels (executive, middle management,
and operations) constitute the infrastructure that best sus-
tains the natural migration toward responsive organizational
dynamism.
This book looked at business strategy from yet another perspective,
beyond its relationship with emerging technologies. Because organi-
zational learning is required to foster responsive organizational dyna-
mism, strategy must also be linked to learning. This linkage is known
as strategic learning, which, if implemented, helps organizations to
continually adapt to the changing business environment, including
changes brought about by technology.
However, given the radical speed, complexity, and uncertainty of
change, traditional ways of doing strategy and learning can no longer ignore
the importance of technology. The old methods of determining busi-
ness strategy were based on standard models that were linear and
“plug-in.” As stated, they were also very much based on projects that
attempted to design one-time efforts with a corresponding result. As
Pietersen (2002) explains, “These processes usually produce operating
plans and budgets, rather than insights and strategic breakthroughs”
(p. 250). Technological dynamism has accelerated the need to replace
these old traditions, and I emphasized that organizations that practice
ROD must
• Evaluate and implement technology in an ongoing process
and embed it as part of normal practices. This requires a
change in integration and culture.
• Comprehend that the process of ROD is not about planning;
it is about adaptation and strategic innovation.
• Have a process that feeds on the creation of new knowledge
through organizational learning toward strategic organiza-
tional transformation.
Many scholars might correlate strategic success with leadership.
While leadership, in itself, is an invaluable variable, it is just that. To
attain ongoing evolution, I believe we need to move away from relying
on individual leadership efforts and move toward an infrastructure
that has fewer leaders and more normative behavior that can support
and sustain responsive organizational dynamism. Certainly, this fos-
ters the important roles and responsibilities of CEOs, managers, and
boards, but to have an ongoing process that changes the thinking and
the operational fundamentals of the way the organization functions
is more important and more valuable than individual leadership. That
is why I raised the issues of discourse and language as well as self-
development. Therefore, it is the ability of an organization to trans-
form its entire community that will bring forth long-term strategic
performance.
What this book really commits to is the importance of lifelong
learning. The simple concept is that adults need to continually chal-
lenge their cultural norms if they are to develop what Mezirow (1990)
calls “new meaning perspectives.” It is these new meaning perspec-
tives that lay the foundation for ROD so that managers and staff can
continually challenge themselves to determine if they are making the
best strategic decisions. Furthermore, it prepares individuals to deal
with uncertainty as well as the ongoing transitions in the way they
do their jobs. It is this very process that ultimately fosters learning in
organizations.
While on-the-job training is valuable, Ravell shows us that move-
ment, or rotation of personnel, often supports individual learning.
Specifically, the relocation of IT personnel to a business unit environ-
ment during Ravell phase I served to get IT staff more acclimated to
business issues. This relocation helped IT staff members to begin to
reflect about their own functions and their relationship to the over-
all mission of the organization. Ravell phase III showed yet another
transition; taking a group of IT staff members and permanently inte-
grating them into a non-IT business-specific department. Ravell also
teaches us that reflection must be practiced; time must be devoted to
its instruction, and it will not occur automatically without interven-
tions from the executive rank. The executive must be a “champion”
who demonstrates to staff that the process is important and valued.
Special sessions also need to be scheduled that make the process of
learning and reflection more formal. If this is done and nurtured
properly, it will allow communities to become serious about best prac-
tices and new knowledge creation.
Although I used technology as the basis for the need for responsive
organizational dynamism, the needs for its existence can be attributed
to any variable that requires dynamic change. As such, I suggest that
readers begin to think about the next “technology” or variable that
can cause the same needs to occur inside organizations. Such accel-
erations are not necessarily limited to technology. For example, we
are experiencing the continuation of organizational downsizing from
acquisitions. These acquisitions present similar challenges in that
organizations must be able to integrate new cultures and “other”
business strategies and attempt to form new holistic directions,
directions that need to be formed quickly to survive.
The market per se also behaves in a similar way to technology. The
ability to adjust to consumer needs and shifting market segments is
certainly not always related to technological change. My point is that
ROD is a concept that should be embraced regardless of whether
technology seems to have slowed or to have no effect on a specific
industry at a particular moment. Thus, I challenge the organizations
of today to develop new strategies that embrace the need to become
dynamic throughout all of their operations and to create communi-
ties of practice that plan for ongoing strategic integration and cultural
assimilation.
This book looked at the advent of technology to uncover a dilemma
that has existed for some time. Perhaps a more general way of defining
what ROD offers is to compare it to another historical concept: “self-
generating organizations.” Self-generating organizations are known
for their promotion of autonomy with an “underlying organic sense
of interdependence” (Johansen et al., 1995). Based on this definition,
a self-generating organization is like an organism that evolves over
time. This notion is consistent with organizational learning because
they both inherently support inner growth stemming from the orga-
nization as opposed to its executives. The self-generating organization
works well with ROD in the following ways:
• Traditional management control systems do not apply.
• Risks are higher, given that these organizational workers
are granted a high degree of autonomy and empowerment
that will lead to processes that break with the norms of the
business.
• Adjustments and new processes should be expected.
• These organizations tend to transform political activity into
strong supporting networks.
• Leadership definitions do not work. You cannot lead what
you cannot control.
Self-generating organizations have frightened traditional managers
in the past because they fear losing control. ROD provides
a hybrid model that allows for self-generating infrastructures while
providing certain levels of control fostered by organizational learn-
ing. Specifically, this means that the control is not traditional con-
trol. Responsive organizational dynamism, for example, embraces the
breaking of rules for good reasons; it allows individuals to fail, yet
reflect on the shortfall so that they do not repeat the same errors. It
also allows employees to take risks that show promise and lead to
increased critical thinking and to strategic action. Indeed, manage-
ment and leadership become more about framing conditions for oper-
ations, observing the results, and making adjustments that maintain
stability. Thus, seeing ROD as a form of self-generation is the basis
for sustaining innovative infrastructures that can respond to dynamic
variables, like technology.
I have emphasized the need for organizational learning as the key
variable to make ROD a reality. While I have modified many of the
organizational learning theories to fit this need, I must acknowledge
that a portion of the “learning” should be considered “organizing.”
Vince (2002) provides an analysis of how organizational learning
could be used to sustain an “organized” reflection. He provides an
interesting matrix of how the two theories can be integrated. After
reviewing many of the ways in which organizational learning affects
responsive organizational dynamism, I have developed a modified
chart of Vince’s original framework, as shown in Table 13.1.
Table 13.1 shows the three kinds of reflective practices that can
operate in an organization: individual, group, and organizational.
I emphasized in Chapter 9 that the extent of organizational learn-
ing maturation is directly related to the sophistication of reflections
among the communities of practice. The more an organization’s
learning relies on individual reflection, the earlier its stage of
organizational learning maturity. Thus, more mature organizations
Table 13.1 Individual, Group, and Organizational Reflective Practices

Columns: Individual (relations between the person, role, and the
organization-in-the-mind); Group (relations across the IT boundaries
of self or other and of subdepartments within IT); Organizational
(the relations between IT and other business units).

Peer consulting groups (nonmanagerial self-governing IT groups of at
least three individuals):
Individual: Making connections for the self. Review and reflection
within the IT community, by friendship, and mutuality of interests
and needs.
Group: Making connections in small groups with “others” across the IT
organization. Develop interpersonal communication and dialogue within
IT communities.
Organizational: Making connections with the entire organization.
Reflection on ways that technology affects other groups in the
organization.

Organizational role analysis (linking individuals with “others” inside
the IT organization):
Individual: Organizational role analysis. Understanding the
connections between the person, the person in IT, and his or her role
in the organization.
Group: Role analysis groups. The ways in which technology roles and
the understanding of those roles interweave within an IT community or
department.
Organizational: Technology role provides the framework within which
the person and organization are integrated.

Communities of practice (groups of individuals united in actions that
contribute to the production of IT ideas in practice):
Individual: Involvement. Providing personal experience as the vehicle
to organize the use of technology.
Group: Engagement. Experience used to apply technology across the IT
organization; understanding of the importance of IT interdepartment
communication.
Organizational: Establishment. Experience of power relations as they
react and respond to technology uses among communities of practice.

Group relations conferences (reveal the complexities of feelings,
interactions, and power relations that are integral to the process of
organizing technology implementations):
Individual: Experiencing and rethinking technology authority and the
meaning and consequences of leadership and followership.
Group: Experiencing defensive mechanisms and avoidance strategies
across IT departments; experience of organizing, belonging, and
representing across IT organizations.
Organizational: Experiencing the ways in which IT and the organization
become integrated using collective emotional experience, politics,
leadership, authority, and organizational transformation.
reflect at the group and organizational level. Becoming more mature
requires a structured process that creates and maintains links between
reflection and democratic thinking. These can be mapped onto the
ROD arc, showing how, from an “organizing” perspective, reflective
practice serves as a process to “outline what is involved in the pro-
cess of reflection for learning and change” (Vince, 2002, p. 74). Vince
does not, however, establish a structure for implementation; ROD
serves that very purpose, as shown in Figure 13.2.
Figure 13.2 graphically shows how organized reflection maps to
the linear stages of the ROD arc (the organizational-level maturity
arc), which in turn maps onto the three best practices arcs, discussed
in Chapter 9. Each of the management arcs represents a level of man-
agement maturity at the organizational level, with Vince’s (2002)
matrix providing the overarching concepts on how to actually orga-
nize the progression from individual-based thinking and reflection to
a more comprehensive and systems-level thinking and learning base.
[Figure 13.2 shows Vince’s “organizing” reflection matrix (individual,
group, and organizational reflection) mapped onto the linear stages of
learning maturation in the organizational-level maturity arc, which in
turn maps onto the chief IT executive, CEO technology, and middle
management technology best practices maturity arcs, together with
strategic integration, cultural assimilation, organizational learning
constructs, and varying levels of management participation.]

Figure 13.2 ROD and Vince’s reflection matrix.
The emphasis, overall, is that individual learning alone will under-
mine collective governance. Therefore, the movement from individual
to organizational self-management remains a critical part of under-
standing how technology and other dynamic variables can foster new
strategies for competitive advantage.
Perhaps the most important conclusion of this third edition is the
impact that digital technologies are having on the acceleration of
change being experienced throughout the world. Indeed, digital tech-
nology has begun to change not only the business world but the very
fabric of our lives. Particular to this change is the continual emer-
gence of social media as a driver of new and competitive products and
services. I also discussed the changing work philosophy and expecta-
tions of our new generation of employees, and how they think differ-
ently and want a more complex experience in the places in which they
work. The Gen Y population is clearly a new breed of employees, and
the Gen Z population behind them will be even more accustomed to
using digital technologies in every facet of the ways they want to
learn, their preferences in communicating with others, and their role in society.
Most important are the ways that technology has changed consumer
behavior. I truly believe that future generations will look back on this
period and indeed say, this was truly a consumer revolution!
Glossary
baby boomers: The generation of individuals who were born between
the years of 1946 and 1964.
business process reengineering: a process that organizations under-
take to determine how best to use technology to improve
business performance
customer relationship management (CRM): the development and
maintenance of integrated relationships with the customer
base of an organization. CRM applications provide organiza-
tions with integrated tools that allow individuals to store and
sustain valuable information about their customers.
data mapping: the process of comparing the data fields in one data-
base to another, or to a new application database.
decision‑support systems (DSS): systems that assist managers to
make better decisions by providing analytical results from
stored data
digital disruption: When new digital technology advancements
impact the value of goods and services.
digital transformation: The repositioning of or a new investment
in technology and business models in an effort to compete in
a rapidly changing digital economy and create a newfound
sense of value for customers.
enterprise resource planning (ERP): a set of multimodule applica-
tions that support an entire manufacturing and business oper-
ation, including product planning, purchasing, maintaining
inventories, interacting with suppliers, providing customer
service, accounting interfaces, and tracking order shipments.
These systems are also known as enterprise-level applications.
garbage can: an abstract concept for allowing individuals a place to
suggest innovations brought about by technology. The inven-
tory of technology opportunities needs regular evaluation.
Gen X: The generation of individuals who were born between the
years of 1965 and 1980.
Gen Y/Millennials: The generation of individuals who were born
between the years of 1981 and 1992. There is disagreement
on the exact end dates of Gen Y individuals.
internet: a cooperative message-forwarding system that links com-
puter networks all over the world.
intranet: a network confined to a single organization or unit.
ISO 9000: a set of quality assurance standards published by the
91-nation International Organization for Standardization
(ISO). ISO 9000 requires firms to define and implement
quality processes in their organization.
legacy: an existing software application or system that is assumed to
operate. By definition, all applications in production become
legacies.
operational excellence: a philosophy of continuous improvement
throughout an organization by enhancing efficiency and qual-
ity across operations
outsourcing: A practice utilized by corporations that involves hav-
ing external suppliers complete internal work in an effort to
reduce costs.
storyboarding: the process of creating prototypes that allow users to
actually see examples of technology, and how it will look and
operate. Storyboarding tells a story and can quickly educate
executives, without being intimidating.
branding: the process of determining how an
organization wants to be viewed by its customers. Branding
includes not only the visual view, but also the emotional,
rational, and cultural image that consumers associate with an
organization, its products, and services.
user interface: the relationship with end users that facilitates the pro-
cess of gathering and defining logical requirements
user level: the tier of computer project experience of the user. There
are three levels: (1) knowledgeable, (2) amateur, and (3) novice
virtual teams: groups of people, geographically dispersed, and linked
together using communication technologies
World Wide Web (web): loosely organized set of computer sites that
publish information that anyone can read via the Internet
using mainly HTTP (Hypertext Transfer Protocol)
Year 2000 (Y2K): a monumental challenge to many organizations
due to a fear that software applications could not handle the
turn of the century. Specifically, calculations that used the
year portion of a date would not calculate properly. As such,
there was a huge investment in reviewing legacy systems to
uncover where these flaws existed.
Organizational Learning Definitions
action science: pioneered by Argyris and Schön (1996), action science
was designed to promote individual self-reflection regarding
behavior patterns and to encourage a productive exchange
among individuals. Action science encompasses a range of
methods to help individuals learn how to be reflective about
their actions. A key component of action science is the use
of reflective practices— including what is commonly known
among researchers and practitioners as reflection in action,
and reflection on action.
balanced scorecard: a means for evaluating transformation, not only
for measuring completion against set targets, but also for
defining how expected transformations map onto the strate-
gic objectives of the organization. In effect, it is the ability of
the organization to execute its strategy.
communities of practice: are based on the assumption that learning
starts with engagement in social practice and that this prac-
tice is the fundamental construct by which individuals learn.
Thus, communities of practice are formed to get things done
using a shared way of pursuing interest.
cultural assimilation: a process that focuses on the organizational
aspects of how technology is internally organized, including
the role of the IT department, and how it is assimilated within
the organization as a whole. It is an outcome of responsive
organizational dynamism.
cultural lock‑in: the inability of an organization to change its corpo-
rate culture, even when there are clear market threats (Foster
& Kaplan, 2001)
double‑loop learning: requires individuals to reflect on a prior action
or habit that needs a change in behavior and a change to oper-
ational procedures. For example, people who engage in dou-
ble-loop learning may need to adjust how they perform their
job as opposed to just the way they communicate with others.
drivers: those units that engage in frontline or direct revenue-
generating activities
experiential learning: a type of learning that comes from the experi-
ences that adults have accrued over the course of their individ-
ual lives. These experiences provide rich and valuable forms of
“ literacy” that must be recognized as important components
to overall learning development.
explicit knowledge: documented knowledge found in manuals, doc-
umentation, files, and other accessible places and sources
flame: a lengthy, often personally insulting, debate in an elec-
tronic community that provides both positive and negative
consequences
frame‑talk: focuses on interpretation to evaluate the meanings of talk
knowledge management: the ability to transfer individual tacit
knowledge into explicit knowledge
left‑hand column: a technique by which individuals use the right-hand
column of a piece of paper to transcribe dialogues that they feel
have not resulted in effective communication. In the left-hand
column of the same page, participants write what they were
really thinking at the time of the dialogue but did not say.
management self‑development: increases the ability and willingness
of managers to take responsibility for themselves, particularly
for their own learning (Pedler et al., 1988)
mythopoetic‑talk: communicates ideogenic ideas and images that
can be used to communicate the nature of how to apply tool-
talk and frame-talk, within the particular culture or society.
This type of talk allows for concepts of intuition and ideas for
concrete application.
organizational knowledge: is defined as “the capability of a company
as a whole to create new knowledge, disseminate it through-
out the organization, and embody it in products, services, and
systems” (Nonaka & Takeuchi, 1995, p. 3)
organizational transformation: changes in goals, boundaries, and
activities. According to Aldrich (2001), organizational trans-
formations “must involve a qualitative break with routines
and a shift to new kinds of competencies that challenge exist-
ing organizational knowledge” (p. 163).
reflection with action: term used as a rubric for the various methods
involving reflection in relation to activity
responsive organizational dynamism: the set of integrative
responses, by an organization, to the challenges raised by
technology dynamism. It has two component outcomes: stra-
tegic integration and cultural assimilation.
single‑loop learning: requires individuals to reflect on a prior action
or habit that needs to be changed in the future but that does
not require individuals to change their operational procedures
with regard to values and norms
strategic integration: a process that addresses the business-strategic
impact of technology on organizational processes. That is,
the business-strategic impact of technology requires imme-
diate organizational responses and, in some instances, zero
latency. It is an outcome of responsive organizational dyna-
mism, and it requires organizations to deal with a variable
that forces acceleration of decisions in an unpredictable
fashion.
supporters: units that do not generate obvious direct revenues but
rather are designed to support frontline activities
tacit knowledge: an experience-based type of knowledge and skill,
with the individual capacity to give intuitive forms to new
things; that is, to anticipate and preconceptualize the future
(Kulkki & Kosonen, 2001).
technological dynamism: characterizes the unpredictable and
accelerated ways in which technology, specifically, can change
strategic planning and organizational behavior/culture. This
change is based on the acceleration of events and interactions
within organizations, which in turn creates the need to better
empower individuals and departments.
tool‑talk: includes instrumental communities required to discuss,
conclude, act, and evaluate outcomes.
References
Aldrich, H. (2001). Organizations Evolving. London: Sage.
Allen, F., & Percival, J. (2000). Financial strategies and venture capital. In
G. S. Day & P. J. Schoemaker (Eds.), Wharton on Managing Emerging
Technologies (pp. 289– 306). New York: Wiley.
Allen, T. J., & Morton, M. S. (1994). Information Technology and the
Corporation. New York: Oxford University Press.
Applegate, L. M., Austin, R. D., & McFarlan, F. W. (2003). Corporate
Information Strategy and Management (2nd edn.). New York:
McGraw-Hill.
Argyris, C. (1993). Knowledge for Action: A Guide to Overcoming Barriers to
Organizational Change. San Francisco, CA: Jossey-Bass.
Argyris, C., & Schön, D. A. (1996). Organizational Learning II. Reading,
MA: Addison-Wesley.
Arnett, R. C. (1992). Dialogue Education: Conversation about Ideas and between
Persons. Carbondale, IL: Southern Illinois University Press.
Bakanauskienė, I., Bendaravičienė, R., Krikštolaitis, R., & Lydeka, Z.
(2011). Discovering an employer branding: Identifying dimensions of
employers' attractiveness in university. Management of Organizations:
Systematic Research, 59, 7–22.
Bazarova, N. N., & Walther, J. B. (2009). Virtual groups: (Mis)attribution
of blame in distributed work. In P. Lutgen-Sandvik & B. Davenport
Sypher (Eds.), Destructive Organizational Communication: Processes,
Consequences, and Constructive Ways of Organizing (pp. 252– 266). New
York: Routledge.
Bellovin, S. M. (2015). Thinking Security: Stopping Next Year's Hackers. Boston,
MA: Addison-Wesley.
Bensaou, M., & Earl, M. J. (1998). The right mind-set for managing informa-
tion technology. In J. E. Garten (Ed.), World View: Global Strategies for
the New Economy (pp. 109– 125). Cambridge, MA: Harvard University
Press.
Bertels, T., & Savage, C. M. (1998). Tough questions on knowledge manage-
ment. In G. V. Krogh, J. Roos, & D. Kleine (Eds.), Knowing in Firms:
Understanding, Managing and Measuring Knowledge (pp. 7– 25). London:
Sage.
Boland, R. J., Tenkasi, R. V., & Te'eni, D. (1994). Designing information
technology to support distributed cognition. Organization Science, 5,
456–475.
Bolman, L. G., & Deal, T. E. (1997). Reframing Organizations: Artistry,
Choice, and Leadership (2nd edn.). San Francisco, CA: Jossey-Bass.
Bradley, S. P., & Nolan, R. L. (1998). Sense and Respond: Capturing Value in
the Network Era. Boston, MA: Harvard Business School Press.
Brown, J. S., & Duguid, P. (1991). Organizational learning and communities
of practice. Organization Science , 2, 40– 57.
Burke, W. W. (2002). Organizational Change: Theory and Practice. London:
Sage.
Cadle, J., Paul, D., & Turner, P. (2014). Business Analysis Techniques: 99
Essential Tools for Success (2nd edn.). Swindon, UK: Chartered Institute
for IT.
Capgemini Consulting. (2013). Being Digital: Engaging the Organization to
Accelerate Digital Transformation [White Paper]. Retrieved from http://
www.capgemini-consulting.com/resource-file-access/resource/pdf/
being_digital_engaging_the_organization_to_accelerate_digital_
transformation
Cash, J. I., & Pearlson, K. E. (2004, October 18). The future CIO. Information
Week. Available at http://www.informationweek.com/story/showArticle.jhtml?articleID=49901186
Cassidy, A. (1998). A Practical Guide to Information Strategic Planning. Boca
Raton, FL: St. Lucie Press.
Cisco (2012). Creating an Office from an Easy Chair [White Paper]. Retrieved
from http://www.cisco.com/c/en/us/solutions/collateral/enterprise/
cisco-on-cisco/Trends_in_IT_Gen_Y_Flexible_Collaborative_
Workspace
Collis, D. J. (1994). Research note: How valuable are organizational capabili-
ties? Strategic Management Journal , 15, 143– 152.
Cross, R., & Thomas, R. J. (2009). Driving Results through Social Networks:
How Top Organizations Leverage Networks for Performance and Growth.
San Francisco, CA: Jossey-Bass.
Cyert, R. M., & March, J. G. (1963). The Behavioral Theory of the Firm.
Englewood Cliffs, NJ: Prentice-Hall.
De Jong, G. (2014). Financial inclusion of youth in the Southern provinces of
Santander: Setting up a participatory research in Colombia. In P. Wabike
& J. van der Linden (Eds.), Education for Social Inclusion (pp. 87–106).
Groningen, the Netherlands: University of Groningen.
Dewey, J. (1933). How We Think. Boston, MA: Heath.
Dodgson, M. (1993). Organizational learning: A review of some literatures.
Organizational Studies , 14, 375– 394.
Dolado, J. (Ed.) (2015). No Country for Young People? Youth Labour Market
Problems in Europe , VoxEU.org eBook, London: CEPR Press.
Dragoon, A. (2002). This changes everything. Retrieved December 15, 2003,
from http://www.darwinmag.com
Earl, M. J. (1996a). Business processing engineering: A phenomenon of orga-
nizational dimension. In M. J. Earl (Ed.), Information Management: The
Organizational Dimension (pp. 53– 76). New York: Oxford University
Press.
Earl, M. J. (1996b). Information Management: The Organizational Dimension.
New York: Oxford University Press.
Earl, M. J., Sampler, J. L., & Short, J. E. (1995). Strategies for business pro-
cess reengineering: Evidence from field studies. Journal of Management
Information Systems , 12, 31– 56.
Easterby-Smith, M., Araujo, L., & Burgoyne, J. (1999). Organizational
Learning and the Learning Organization: Developments in Theory and
Practice. London: Sage.
Eisenhardt, K. M., & Bourgeois, L. J. (1988). Politics of strategic decision
making in high-velocity environments: Toward a midrange theory.
Academy of Management Journal , 31, 737– 770.
Elkjaer, B. (1999). In search of a social learning theory. In M. Easterby-
Smith, J. Burgoyne, & L. Araujo (Eds.), Organizational Learning and
the Learning Organization (pp. 75– 91). London: Sage.
Ernst & Young (2012). The digitization of everything: How organisations
must adapt to changing consumer behaviour [White Paper]. Retrieved
from http://www.ey.com/Publication/ vwLUAssets/The_digitisation_
of_everything_-_How_organisations_must_adapt_to_changing_con-
sumer_behaviour/$FILE/ EY_Digitisation_of_everything
Fineman, S. (1996). Emotion and subtexts in corporate greening. Organization
Studies , 17, 479– 500.
Foster, R. N., & Kaplan, S. (2001). Creative Destruction: Why Companies
That Are Built to Last Underperform the Market: And How to Successfully
Transform Them . New York: Currency.
Franco, V., Hu, H., Lewenstein, B. V., Piirto, R., Underwood, R., & Vidal,
N. K. (2000). Anatomy of a flame: Conflict and community build-
ing on the Internet. In E. L. Lesser, M. A. Fontaine, & J. A. Slusher
(Eds.), Knowledge and Communities (pp. 209– 224). Woburn, MA:
Butterworth-Heinemann.
Friedman, T. L. (2007). The World Is Flat. New York: Picador/Farrar, Straus
and Giroux.
Friedman, V. J., Razer, M., Tsafrir, H., & Zorda, O. (2014). An action science
approach to creating inclusive teacher-parent relationships. In P. Wabike
& J. van der Linden (Eds.), Education for Social Inclusion (pp. 25–51).
Groningen, the Netherlands: University of Groningen.
Garvin, D. A. (1993). Building a learning organization. Harvard Business
Review , 71 (4), 78– 84.
Garvin, D. A. (2000). Learning in Action: A Guide to Putting the Learning
Organization to Work. Boston, MA: Harvard Business School Press.
Gephardt, M. A., & Marsick, V. J. (2003). Introduction to special issue on
action research: Building the capacity for learning and change. Human
Resource Planning , 26(2), 14–18.
Glasmeier, A. (1997). The Japanese small business sector (Final report to the
Tissot Economic Foundation, Le Locle, Switzerland, Working Paper
16). Austin: Graduate Program of Community and Regional Planning,
University of Texas at Austin.
Grant, D., Keenoy, T., & Oswick, C. (Eds.). (1998). Discourse and Organization.
London: Sage.
Grant, R. M. (1996). Prospering in a dynamically-competitive environ-
ment: Organizational capability as knowledge integration. Organization
Science , 7, 375– 387.
Gregoire, J. (2002, March 1). The state of the CIO 2002: The CIO title,
what's it really mean? CIO. Available at http://www.cio.com/article/30904/The_State_of_the_CIO_2002_The_CIO_Title_What_s_It_Really_Mean_
Habermas, J. (1998). The Inclusion of the Other: Studies in Political Theory.
Cambridge, MA: MIT Press.
Halifax, J. (1999). Learning as initiation: Not-knowing, bearing witness,
and healing. In S. Glazier (Ed.), The Heart of Learning: Spirituality in
Education (pp. 173– 181). New York: Penguin Putnam.
Hardy, C., Lawrence, T. B., & Phillips, N. (1998). Talk and action:
Conversations and narrative in interorganizational collaboration. In
D. Grant, T. Keenoy, & C. Oswick (Eds.), Discourse and Organization
(pp. 65– 83). London: Sage.
Heath, D. H. (1968). Growing Up in College: Liberal Education and Maturity.
San Francisco, CA: Jossey-Bass.
Hoffman, A. (2008, May 19). The social media gender gap. Business Week .
Available at http://www.businessweek.com/technology/content/
may2008/tc20080516_580743.htm
Huber, G. P. (1991). Organizational learning: The contributing processes and
the literature. Organization Science , 2, 99– 115.
Hullfish, H. G., & Smith, P. G. (1978). Reflective Thinking: The Method of
Education. Westport, CT: Greenwood Press.
Huysman, M. (1999). Balancing biases: A critical review of the literature
on organizational learning. In M. Easterby-Smith, J. Burgoyne, & L.
Araujo (Eds.), Organizational Learning and the Learning Organization
(pp. 59– 74). London: Sage.
Johansen, R., Saveri, A., & Schmid, G. (1995). Forces for organizational
change: 21st century organizations: Reconciling control and empower-
ment. Institute for the Future , 6 (1), 1– 9.
Johnson Controls (2010) Generation Y and the workplace: Annual report
2010 [White Paper]. Retrieved from http://www.johnsoncontrols.com/
content/dam/WWW/jci/be/global_workplace_innovation/oxygenz/
Oxygenz_Report_-_2010
Jonas, L., & Kortenius, R. (2014). Beyond a Paycheck: Employment as an
act of consumption for Gen Y talents (Master's thesis). Retrieved from
http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=44
56566&fileOId=4456569
Jones, M. (1995). Organizational learning: Collective mind and cognitivist
metaphor? Accounting, Management and Information Technologies, 5(1),
61–77.
Kanevsky, V., & Housel, T. (1998). The learning-knowledge-value cycle. In G.
V. Krogh, J. Roos, & D. Kleine (Eds.), Knowing in Firms: Understanding,
Managing and Measuring Knowledge (pp. 240– 252). London: Sage.
Kaplan, R. S., & Norton, D. P. (2001). The Strategy-Focused Organization.
Cambridge, MA: Harvard University Press.
Kegan, R. (1994). In Over Our Heads: The Mental Demands of Modern Life.
Cambridge, MA: Harvard University Press.
Kegan, R. (1998, October). Adult Development and Transformative Learning.
Lecture presented at the Workplace Learning Institute, Teachers
College, New York.
Knefelkamp, L. L. (1999). Introduction. In W. G. Perry (Ed.), Forms of Ethical
and Intellectual Development in the College Years: A Scheme (pp. xi–xxxvii).
San Francisco, CA: Jossey-Bass.
Koch, C. (1999, February 15). Staying alive. CIO Magazine . 38– 45.
Kolb, D. (1984a) Experiential Learning: Experience as the Source of Learning
and Development. Englewood Cliffs, NJ: Prentice-Hall.
Kolb, D. (1984b). Experiential Learning as the Science of Learning and
Development. Englewood Cliffs, NJ: Prentice Hall.
Kolb, D. (1999). The Kolb Learning Style Inventory. Boston, MA: Hay
Resources Direct.
Kulkki, S., & Kosonen, M. (2001). How tacit knowledge explains organiza-
tional renewal and growth: The case at Nokia. In I. Nonaka & D. Teece
(Eds.), Managing Industrial Knowledge: Creation, Transfer and Utilization
(pp. 244– 269). London: Sage.
Langer, A. M. (1997). The Art of Analysis. New York: Springer-Verlag.
Langer, A. M. (2001a). Analysis and Design of Information Systems. New York:
Springer-Verlag.
Langer, A. M. (2001b). Fixing bad habits: Integrating technology person-
nel in the workplace using reflective practice. Reflective Practice , 2(1),
100– 111.
Langer, A. M. (2002). Reflecting on practice: using learning journals in higher
and continuing education. Teaching in Higher Education , 7, 337– 351.
Langer, A. M. (2003). Forms of workplace literacy using reflection-with
action methods: A scheme for inner-city adults. Reflective Practice , 4,
317– 336.
Langer, A. M. (2007). Analysis and Design of Information Systems (3rd edn.).
New York: Springer-Verlag.
Langer A. M. (2009). Measuring self-esteem through reflective writing:
Essential factors in workforce development. Journal of Reflective Practice ,
9(10): 45– 48.
Langer, A. (2011). Information Technology and Organizational Learning:
Managing Behavioral Change through Technology and Education (2nd
edn.). Boca Raton, FL: CRC Press.
Langer, A. M. (2013). Employing young talent from underserved popula-
tions: Designing a flexible organizational process for assimilation and
productivity. Journal of Organization Design , 2(1): 11– 26.
Langer, A. M. (2016) Guide to Software Development: Designing and Managing
the Life Cycle (2nd edn.). New York: Springer.
Langer, A. M., & Yorks, L. (2013). Strategic IT: Best Practices for Managers
and Executives. Hoboken, NJ: Wiley.
Lesser, E. L., Fontaine, M. A., & Slusher, J. A. (Eds.). (2000). Knowledge and
Communities. Woburn, MA: Butterworth-Heinemann.
Levine, R., Locke, C., Searls, D., & Weinberger, D. (2000). The Cluetrain
Manifesto. Cambridge, MA: Perseus Books.
Lientz, B. P., & Rea, K. P. (2004). Breakthrough IT Change Management:
How to Get Enduring Change Results. Burlington, MA: Elsevier
Butterworth-Heinemann.
Lipman-Blumen, J. (1996). The Connective Edge: Leading in an Interdependent
World. San Francisco, CA: Jossey-Bass.
Lipnack, J., & Stamps, J. (2000). Virtual Teams (2nd edn.). New York: Wiley.
Lounamaa, P. H., & March, J. G. (1987). Adaptive coordination of a learning
team. Management Science , 33, 107– 123.
Lucas, H. C. (1999). Information Technology and the Productivity Paradox. New
York: Oxford University Press.
Mackenzie, K. D. (1994). The science of an organization. Part I: A new model
of organizational learning. Human Systems Management , 13, 249– 258.
March, J. G. (1991). Exploration and exploitation in organizational learning.
Organization Science , 2, 71– 87.
Marshak, R. J. (1998). A discourse on discourse: Redeeming the meaning
of talk. In D. Grant, T. Keenoy, & C. Oswick (Eds.), Discourse and
Organization (pp. 65–83). London: Sage.
Marsick, V. J. (1998, October). Individual strategies for organizational learn-
ing. Lecture presented at the Workplace Learning Institute, Teachers
College, New York.
McCarthy, B. (1999). Learning Type Measure. Wauconda, IL: Excel.
McDermott, R. (2000). Why information technology inspired but cannot
deliver knowledge management. In E. L. Lesser, M. A. Fontaine, &
J. A. Slusher (Eds.), Knowledge and Communities (pp. 21– 36). Woburn,
MA: Butterworth-Heinemann.
McGraw, K. (2009). Improving project success rates with better leadership.
Project Smart. Available at www.projectsmart.co.uk/improving-project-success-rates-with-better-leadership.html
Mezirow, J. (1990). Fostering Critical Reflection in Adulthood: A Guide to
Transformative and Emancipatory Learning. San Francisco, CA:
Jossey-Bass.
Milliken, C. (2002). A CRM success story. Computerworld . Available at
www.computerworld.com/s/article/75730?A_CRM_success-story
Miner, A. S., & Haunschild, P. R. (1995). Population and learning. In B.
Staw & L. L. Cummings (Eds.), Research in Organizational Behavior
(pp. 115– 166). Greenwich, CT: JAI Press.
Mintzberg, H. (1987). Crafting strategy. Harvard Business Review , 65(4), 72.
Mintzberg, H., & Waters, J. A. (1985). Of strategies, deliberate and emer-
gent. Strategic Management Journal , 6, 257– 272.
Moon, J. A. (1999). Reflection in Learning and Professional Development: Theory
and Practice. London: Kogan Page.
Moon, J. A. (2000). A Handbook for Academics, Students and Professional
Development. London: Kogan Page.
Mossman, A., & Stewart, R. (1988). Self-managed learning in organizations.
In M. Pedler, J. Burgoyne, & T. Boydell (Eds.), Applying Self-Development
in Organizations (pp. 38–57). Englewood Cliffs, NJ: Prentice-Hall.
Mumford, A. (1988). Learning to learn and management self-development. In
M. Pedler, J. Burgoyne, & T. Boydell (Eds.), Applying Self-Development
in Organizations (pp. 23–37). Englewood Cliffs, NJ: Prentice-Hall.
Murphy, T. (2002). Achieving Business Practice from Technology: A Practical
Guide for Today's Executive. New York: Wiley.
Narayan, S. (2015). Agile IT Organization Design for Digital Transformation
and Continuous Delivery. New York: Addison-Wesley.
Newman, K. S. (1999). No Shame in My Game: The Working Poor in the Inner
City. New York: Vintage Books.
Nonaka, I. (1994). A dynamic theory of knowledge creation. Organization
Science , 5(1), 14– 37.
Nonaka, I., & Takeuchi, H. (1995). The Knowledge-Creating Company: How
Japanese Companies Create the Dynamics of Innovation. New York: Oxford
University Press.
Olson, G. M., & Olson, J. S. (2000). Distance matters. Human– Computer
Interactions , 15(1), 139– 178.
Olve, N., Petri, C., Roy, J., & Roy, S. (2003). Making Scorecards Actionable:
Balancing Strategy and Control. New York: Wiley.
O'Sullivan, E. (2001). Transformative Learning: Educational Vision for the 21st
Century. Toronto: Zed Books.
Peddibhotla, N. B., & Subramani, M. R. (2008). Managing knowledge in
virtual communities within organizations. In I. Becerra-Fernandez &
D. Leidner (Eds.), Knowledge Management: An Evolutionary View
(pp. 229–247). Armonk, NY: Sharpe.
Pedler, M., Burgoyne, J., & Boydell, T. (Eds.). (1988). Applying Self-Development
in Organizations. Englewood Cliffs, NJ: Prentice-Hall.
Peters, T. J., & Waterman, R. H. (1982). In Search of Excellence: Lessons from
America's Best-Run Companies. New York: Warner Books.
Pietersen, W. (2002). Reinventing Strategy: Using Strategic Learning to Create
and Sustain Breakthrough Performance. New York: Wiley.
Porter, M. E., & Kramer, M. R. (2011). Creating shared value. Harvard Business
Review, 89(1/2), 62–77.
Prange, C. (1999). Organizational learning: Desperately seeking
theory. In M. Easterby-Smith, J. Burgoyne, & L. Araujo (Eds.),
Organizational Learning and the Learning Organization (pp. 23– 43).
London: Sage.
Probst, G., & Büchel, B. (1996). Organizational Learning: The Competitive
Advantage of the Future. London: Prentice-Hall.
Probst, G., Büchel, B., & Raub, S. (1998). Knowledge as a strategic resource.
In G. V. Krogh, J. Roos, & D. Kleine (Eds.), Knowing in Firms:
Understanding, Managing and Measuring Knowledge (pp. 240– 252).
London: Sage.
Rassool, N. (1999). Literacy for Sustainable Development in the Age of Information
(Vol. 14). Clevedon: Multilingual Matters.
Reynolds, G. (2007). Ethics in Information Technology (2nd edn.). New York:
Thomson.
Robertson, S., & Robertson, J. (2012). Mastering the Requirements Process:
Getting Requirements Right (3rd edn.). Upper Saddle River, NJ:
Addison-Wesley.
Sabherwal, R., & Becerra-Fernandez, I. (2005). Integrating specific knowl-
edge: Insights from the Kennedy Space Center. IEEE Transactions on
Engineering Management , 52, 301– 315.
Sampler, J. L. (1996). Exploring the relationship between information tech-
nology and organizational structure. In M. J. Earl (Ed.), Information
Management: The Organizational Dimension (pp. 5– 22). New York:
Oxford University Press.
Saxena, P., & Jain, R. (2012). Managing career aspirations of generation Y at
work place. International Journal of Advanced Research in Computer Science
and Software Engineering, 2(7), 114–118.
Schein, E. H. (1992). Organizational Culture and Leadership (2nd edn.). San
Francisco, CA: Jossey-Bass.
Schein, E. H. (1994). The role of the CEO in the management of change: The
case of information technology. In T. J. Allen & M. S. Morton (Eds.),
Information Technology and the Corporation (pp. 325– 345). New York:
Oxford University Press.
Schlossberg, N. R. (1989). Marginality and mattering: Key issues in building
community. New Directions for Student Services , 48, 5– 15.
Schoenfield, B. S. E. (2015). Securing Systems: Applied Security Architecture and
Threat Models . Boca Raton, FL: CRC Press.
Schön, D. (1983). The Reflective Practitioner: How Professionals Think in Action.
New York: Basic Books.
Senge, P. M. (1990). The Fifth Discipline: The Art and Practice of the Learning
Organization. New York: Currency Doubleday.
Siebel, T. M. (1999). Cyber Rules: Strategies for Excelling at E-Business. New
York: Doubleday.
Stolterman, E., & Fors, A. C. (2004). Information technology and the good
life. Information Systems Research , 143, 687– 692.
Swadzba, U. (2010). Work or consumption: Indicators of one's place in the
society. In Beyond Globalisation: Exploring the Limits of Globalisation in
the Regional Context (Conference Proceedings) (pp. 123–129). Ostrava:
University of Ostrava, Czech Republic.
Swieringa, J., & Wierdsma, A. (1992). Becoming a Learning Organization:
Beyond the Learning Curve. New York: Addison-Wesley.
Szulanski, G., & Amin, K. (2000). Disciplined imagination: Strategy mak-
ing in uncertain environments. In G. S. Day & P.J. Schoemaker (Eds.),
Wharton on Managing Emerging Technologies (pp. 187– 205). New York:
Wiley.
Teece, D. J. (2001). Strategies for managing knowledge assets: The role of
firm structure and industrial context. In I. Nonaka & D. Teece (Eds.),
Managing Industrial Knowledge: Creation, Transfer and Utilization
(pp. 125– 144). London: Sage.
Teigland, R. (2000). Communities of practice at an Internet firm: Netovation
vs. in-time performance. In E. L. Lesser, M. A. Fontaine, & J. A.
Slusher (Eds.), Knowledge and Communities (pp. 151– 178). Woburn,
MA: Butterworth-Heinemann.
Tushman, M. L., & Anderson, P. (1986). Technological discontinuities
and organizational environments. Administrative Science Quarterly , 31,
439– 465.
Tushman, M. L., & Anderson, P. (1997). Managing Strategic Innovation and
Change. New York: Oxford University Press.
Vince, R. (2002). Organizing reflection. Management Learning , 33(1), 63– 78.
Wabike, P. (2014). University-Community Engagement: Universities at a
crossroad? In P. Wabike & J. van der Linden (Eds.), Education for Social
Inclusion (pp. 131–149). Groningen, the Netherlands: University of
Groningen.
Wallemacq, A., & Sims, D. (1998). The struggle with sense. In D. Grant, T.
Keenoy, & C. Oswick (Eds.), Discourse and Organization (pp. 65– 83).
London: Sage.
Walsh, J. P. (1995). Managerial and organizational cognition: Notes from a
trip down memory lane. Organization Science, 6, 280–321.
Watkins, K. E., & Marsick, V. J. (1993). Sculpting the Learning Organization:
Lessons in the Art and Science of Systemic Change. San Francisco, CA:
Jossey-Bass.
Watson, T. J. (1995). Rhetoric, discourse and argument in organizational
sense making: A reflexive tale. Organization Studies , 16, 805– 821.
Wellman, B., Salaff, J., Dimitrova, D., Garton, L., Gulia, M., &
Haythornthwaite, C. (2000). Computer networks and social networks:
Collaborative work, telework, and virtual community. In E. L. Lesser,
M. A. Fontaine, & J. A. Slusher (Eds.), Knowledge and Communities
(pp. 179– 208). Woburn, MA: Butterworth-Heinemann.
Wenger, E. (1998). Communities of Practice: Learning, Meaning and Identity.
Cambridge, MA: Cambridge University Press.
Wenger, E. (2000). Communities of practice: The key to knowledge strategy.
In E. L. Lesser, M. A. Fontaine, & J. A. Slusher (Eds.), Knowledge and
Communities (pp. 3– 20). Woburn, MA: Butterworth-Heinemann.
West, G. W. (1996). Group learning in the workplace. In S. Imel (Ed.),
Learning in Groups: Exploring Fundamental Principles, New Uses,
and Emerging Opportunities: New Directions for Adult and Continuing
Education (pp. 51– 60). San Francisco, CA: Jossey-Bass.
Westerman, G., Bonnet, D., & McAfee, A. (2014). Leading Digital: Turning
Technology into Business Transformation . Boston, MA: Harvard Business
School Press.
Yorks, L., & Marsick, V. J. (2000). Organizational learning and transforma-
tion. In J. Mezirow (Ed.), Learning as Transformation: Critical Perspectives
on a Theory in Progress (pp. 253– 281). San Francisco, CA: Jossey-Bass.
Yourdon, E. (1998). Rise and Resurrection of the American Programmer
(pp. 253– 284). Upper Saddle River, NJ: Prentice Hall.
Index
A
Abstract conceptualization, as
learning preference, 84
Abstraction, leaps of, 10
Acceleration, xxi
Access control, social networking
issues, 134
Access tracking, 137
Accountability, of IT staff, 5
Accounting, IT relevance to, 24
Action, emotional components
of, 92
Action science, 3, 4, 74, 142,
152, 359
Active experimentation, as learning
preference, 84
Activity, relationship to content, 90
Activity systems, organizational
transformation of, 139–141
Adaptive organizations, 83
Administrative departments,
alignment with, 17–19
Adoptive approach, 63
Adversarial relationships, 2
Advertising business,
transformation to media
market, 231
Alignment
with administrative
departments, 17–19
to business strategy, 148
with human resources, 18
with social networks, 138
of technology with business
strategy, 149
Answerthink Corporation, 304
Applied individual learning for
technology model, 87
Architecture, 46
Assimilation, 6
cultural, 7
Assumptions, unearthing
unspoken, 9–10
Attribution theory, 166
Autonomy, 351
B
Back office issues
at ICAP, 204
as strategic business problems, 195
Balanced scorecard, 359
availability variable, 156
business stakeholder support
for, 155
as checklist and tracking
system, 154
competence variable, 156
discourse and, 156–158
enthusiasm requirements, 156
executive management support
for, 156
implementing using application
software, 155
information provider
responsibilities, 155
learning pilots
responsibilities, 155
as living document, 147
as measurement of knowledge
creation, 158
modifications for ROD, 149
organizational transformation
and, 139–161
for Ravell phase I, 153
roles and responsibilities, 156
rules of success, 160
schematic diagram, 149
scorecard designer, 155
responsibilities, 155
Behavioral shifts, 12, 140
in CIOs, 195
Benefits realization, five pillars of,
45–46
Best practices, 37
CEO best practices technology
arc, 313–314
for chief executive officers,
299–313
chief IT executive best practices
arc, 297–299
for chief IT executives, 288–297
ethics and maturity issues,
333–337
middle management, 316–325
quest for, 287–288
unproven nature of, 56–57
Billable time records
financial returns from technology
solution, 229
limitations at HTC, 225–226
value-added services solutions, 232
Board meetings, IT issues discussed
at, 34
Boise Cascade Office Products, 308,
309, 311
Bottom-up learning, xxix, 132,
214, 235
at Siemens AG, 192, 193
Boundaries, organizational
transformation of, 139, 140
Brain hemispheres, and learning
preferences, 85
Budgetary cutbacks, 47, 293
Budgeting, IT strategies, 35
Business analysts, as
intermediaries, 22
Business ethics, 296–298, 302, 326
Business knowledge, by chief IT
executives, 291
Business plan, judging technologies
based on, 212–213
Business process, implementation
of, 314
Business process impact, 45
Business process outsourcing
leaders, 296
Business process reengineering
(BPR), 25, 47, 357
Business rules, 126
Business strategy, 31, 34–35
aligning technology with, xii, 149
articulating through balanced
scorecard, 151
balanced scorecards and, 159–160
CIO assimilation to, 195
as continual process, 148
and importance of balanced
scorecards, 161
IT role in, 21, 23–24, 26
translating to operational
terms, 148
Business technology cognition,
326, 329
Business user involvement, risks, 126
C
Case studies, xxx, 187, 233–238;
see also Ravell
Corporation case study
HTC, 225–232
ICAP, 203–224
learning-knowledge-value
cycle, 238
organizational learning
approaches summary, 235
Siemens AG, 187–203
Catch-up, playing, 120
Centralization issues, 290, 305–306
CEO best practices technology arc,
313–314
CEO roles, in HTC case study,
227–228
CEO technology best practices arc,
313–314, 317, 334
detail, 318–321
maturity stages, 314–315
performance dimensions,
315–316
CFO quarterly meetings, 193
at Siemens AG, 191, 192
Change, xxiv
evolutionary vs. revolutionary, 120
frequency of, 141
mobilizing through executive
leadership, 148
need for accelerated, 68
overwhelming pace of, 293
pace of, 18
planned vs. unplanned, 120
rejection of technology-driven, 72
unpredictability of, 77
vs. transformation, 141, 152
Change agents, 38
CEOs as, 311
IT executives as, 39
technology, 68, 195
Change leaders, 296
Change management, xxiv, 45,
120–123, 345
by chief IT executives, 291
goals for IT, 123
at ICAP, 209
for IT organizations, 123–133
Chief executive officers (CEOs),
xxix; see also Executive
perspective
advice for participation
in organizational
transformation, 222
best practices, 299–305, 312–313
business-first perspectives, 309
business uses of technology
and, 314
as change agents, 311
CIO direct reporting to,
305–306
commitment to technology
projects, 308–309
conceptual knowledge of
technology, 314
dependence on CIOs for business
advice, 305
engaging in transformation, 208
executive decision making, 311
five stages of maturity, 313
ICAP case study, 206
intervention in supporter
departments, 230
and IT centralization/
decentralization issues,
306–307
lack of cognizance about
technology uses, 202
linear development model, 313
multiple business perspectives, 314
need for linking business and
technology leaders, 310
need for standards, 307
outsourcing responsibilities, 306
perceptions of IT, xxix, 339
recognition of never-ending
technology projects,
311–312
reluctance to implement new
technology, 25
risk management and, 307–313
role in examining own biases, 203
role in HTC transformation, 230
strategic technology uses by, 315
willingness to learn from
staff, 208
Chief financial officer (CFO) CIOs
reporting to, 188, 226, 288
lack of creativity, 226
Chief information officers (CIOs),
288–289
business-level vs.
corporate-level, 291
inability to establish corporate
strategy, 211
integrating job functions into
business strategy, 190
integration challenges, 344
interactions with board-level
executives, 190
lack of involvement in business
strategy, 190
need for advanced degrees, 307
need to educate, 188
reporting to CFOs, 188
as senior lower level, 193
training in business strategy, 202
transformation to proactive
technologists, 195
Chief IT executives
best practices, 288
best practices arc, 299, 334
business environment
influences, 293
business ethics, 298
business knowledge
competence, 291
as business process outsourcing
leaders, 296
change creation and management
roles, 291
as change leaders, 296
common barriers to success, 293
communications roles, 291
compensation methods, 290
comprehension of technology
processes, 297
detail, best practices arc, 300–303
emerging roles and
responsibilities, 295–296
as executive account managers, 295
executive presence, 299
factors influencing strategic
options, 294
hiring and retention roles, 292
implementation of business/
technology processes by, 315
industry expertise, 291
as information architects, 295
as innovation leaders, 295
lack of uniform titles, 288
leadership roles, 292
management skills, 292
management values, 298
multiplicity of technology
perspectives, 297
organizational culture
competency, 298
as process leaders, 295
relationship building role, 291
roles and responsibilities,
290–292
as shared services leaders, 295
stable technology integration, 298
strategic thinking role, 290
as supply chain executives, 295
technology cognition, 298
technology competence and
recognition, 297
technology driver influences, 294
technology leadership, 298
technology proficiency, 291
Chief knowledge officer (CKO), 289
Chief technology officer (CTO), 289
intermediary role at HTC, 232
CIO advisory board, 191, 193
need for peer relationships, 192
at Siemens, 190
Citibank, use of technology as
driver, 60
Code of ethics, 336
Cognitive schemata, 82
Collaborative inquiry, 142
Collective identity, 91
Collective knowledge, storage of, 81
Collective learning, 78, 345
Combination, and virtual teams,
171–172
Combination dynamism, 173
Commitment, raising IT levels of, 6
Commoditization, 59, 61, 62, 207,
224, 259, 260, 306, 347
Common threads of communication,
156, 158, 159, 221, 344
at Siemens, ICAP/ETC,
HTC, 233
Communications
by chief IT executives, 291
failures in virtual team efforts, 164
importance to chief IT executive
role, 291
between IT organization and
others, 49
tacit knowledge buried in, 172
in virtual teams, 168
Communities of practice, xii, xxi,
45, 75–83, 87–90, 94, 103,
130, 189, 344, 346, 351,
359–360
application to virtual teams,
168–172
blockage by organizational
controls, 346
CIO-based, 191
and consensus building, 88
disagreements within, 196
discourse as basis of successful, 172
electronic, 78
extended seven steps, 79
formation of multiple, 233
at ICAP, 218–219
incorporating technology projects
into, 149
knowledge creation through, 234
knowledge management in, 197
Milliken’s formation of, 310
and organizational learning, 144
overlapping, 215
participation at Siemens, ICAP/
ETC, HTC, 235
preparing CIOs for, 196
reliance on innovation, 78
technology as change agent
for, 178
use in ROD, 75
virtual dynamism, 179
Communities of practice common
threads, 159
Communities of practice
threads, 157
Competitive advantage, 117, 233,
236, 355
centrality of technology to, 348
dependence on knowledge
management, 116
importance of strategic
integration to, 46
improvement at Siemens AG, 190
issues at ICAP, 205, 207
IT potential to provide, 28
loss without transformation, 139
technology as source of, 41, 72
Completion time, estimating, 56
Compliance monitoring, in virtual
teams, 177
Concrete experience, as learning
preference, 84
Confidentiality, 336
end of, 134
Consequential interoperability, 45
Consultants, use of outside, 32
Containers, 96
Content, vs. technology, 207
Content-activity relationship, 90, 91
Continual learning, in virtual
teams, 179
Continuous innovation, 117
Control
organizational fear of losing, 352
vs. empowerment, 345–346
Conversion effectiveness, 21
Coopers & Lybrand, 58
Corporate culture, six myths of, 160
Corporate services standards, at
Siemens AG, 200
Creative professionals, cultural
assimilation challenges, 225
Cross-functional synergetic teams, 9
Cultural assimilation, xvii, xxiv, 42,
48–49, 140, 187, 245, 293,
306, 341, 343, 360
between brokers and
technologists at ICAP, 212
changes caused by new
technologies, 245
and communities of practice, 77
and creation of new cultures,
50, 51
as foundation for organizational
transformation, 143
in global organizations, 200
at implementation stage, 55–57
issues at HTC, 227
and IT organization
communications with
others, 49
and movement of traditional IT
staff, 49–51
at product maturity phase, 61
in ROD, 43
at Siemens AG, 199, 202
in transition, 146
uniqueness to each
organization, 50
in virtual teams, 179
Cultural awareness, 17
Cultural change, inevitability
of, 122
Cultural differences, 181, 199
ICAP experiences, 215
regarding operational norms, 200
in virtual teams, 174, 177
Cultural history, and virtual
teams, 175
Cultural lock-in, 120, 360
Cultural transformation, 8, 16–17
limitations of power approach, 17
Customer perspective
in balanced scorecard, 150
in Ravell Phase I balanced
scorecard, 153
ROD adjustments, 150–151
Customer relationship
management (CRM), 290,
308–311, 357
Customer-vendor relationships,
technology as change agent
for, 46
D
Data mapping, 126, 357
Deasy, Dana, 67, 187–190, 192, 195–
202, 207, 309, 344, 347
Decentralization, 24, 32, 306–307
with mature uses of
technology, 307
Decision support systems (DSS),
159, 357
Democratic leadership, importance
to high-velocity
environments, 123
Department/unit view as other, in
ROD arc, 102, 103
Design/planning phase, 88
Development schedules
failures in, xxiv
shortening dynamic, 56
Direct return, 47
through technology investments,
236, 238
Disagreements, in communities of
practice, 196–197
Disciplines, 89
Discourse, xxi, 350; see also Social
discourse
balanced scorecards and,
156–158
as basis of successful COP
implementation, 172
links to organizational learning
and technology, 93
Distance workers, 81, 164; see also
Virtual teams
Documentation, 125
and training materials, 126
Dot.com phenomenon, 23, 24
fallacies of, 212
and growth of e-business, 188
loss of commitment to learning
organizations in, 112, 113
and negative perceptions of
technology as business
driver, 201
Double-loop learning, 4, 344, 360
Driver functions, 58, 113, 360
conversion to supporter
functions, 62
and high-risk operations, 59
IT-related, 59
vacuum of IT presence in, 60
Driver to supporter life cycle, 141,
306, 346–347
organizational transformation
in, 145
Dynamic, xxi
E
E-business, 140
business vs. technology
components, 346
expanding use at Siemens
AG, 187
IT role at Siemens AG, 193
perception as IT
responsibility, 201
top-down strategy
introduction, 188
transferring emotion in, 181
E-business realignment, 46, 57
Economies of scale, 34, 58, 61, 62,
130, 152, 347
in driver to supporter life cycle,
141, 145
Education
through reflection, 14
and transformation, 14
Electronic communities, 78, 79
Electronic trading, 207, 223
growth at ICAP, 206
at ICAP, 205
and need for organizational
transformation, 206
as proportion of total trading
dollars, 224
replacement of mediocre brokers
by, 210
role in business strategy, 203
as supplement to voice broker, 209
Emerging technologies, xiii
challenges to business strategy, 28
as change agents, xxiv
communities of practice and, 344
impact on business strategy, 124
and mission modification, 207
Emotion
requirements for virtual team
members, 174
and social discourse, 92–96
in virtual teams, 181–185
Employee evaluations, 17
Employee replacement, see Staff
replacement
Empowerment, 91, 345–346
vs. control, 345–346
Enron Corporation, 333, 337
Enterprise resource planning
(ERP), 358
Ethics
and best practices, 333–337
executive, 315, 320
Event-driven education
limits of, 109
at operations tier, 114
Evolution phase, 52
in technology business cycle, 57
Evolutionary change, 120–121
at Siemens, 196
Evolutionary learning, 77, 99
initiation by senior
management, 189
in ROD, 112
Executive account managers, 295
Executive decision making, 196, 310
by Milliken, 310
Executive-driven programs, limits
of, 109
Executive ethics, 315, 320
Executive interface, 150
earlier needs in virtual COPs,
185
Executive leadership, 148, 313–316,
320
Executive learning, systems
perspective, 114
Executive participation, importance
of direct, 232
Executive perspective, 9
consequences of exclusion, 214
importance of including in
learning, 98
on IT, 29–31
lack of detail knowledge, 110
level of involvement with IT, 34
poor understanding of
competitive dynamics,
25–27
on role of IT, 32–33
and stimulus for cultural
assimilation, 114
support for virtual teams, 174
Executive presence, 61, 296–299,
302, 315
Executive sponsors, 54, 55, 150, 232
Executive tier, 114
Executive values, 313, 314
Expectations, and virtual
teams, 176
Experiential learning, xxviii, 83–88,
360
Explicit knowledge, 118, 171, 360
challenges for virtual COPs, 172
transformation to, 223
Externalization, and virtual teams,
169–171
Externalization dynamism, 172
Eye-opening events, 15
F
Facebook, 135
Facilitator role, 127
Factors of multiplicity, 44
False generalizations, 299
Feasibility phase, 52, 53, 87
Feedback, 7
negative IT responses to, 7
Financial measurements
in balanced scorecard, 150
inability to capture IT value, 147
in Ravell Phase I balanced
scorecard, 153
ROD adjustments, 150
First-line managers, 111
Flame communities, 81, 360
Frame-talk, 93, 95, 181, 361
G
Garbage can model, 27, 53, 358
of IT value, 54
Gender participation, in social
networks, 138
Generalizations, entrenched, 10
Globalization, 81, 256, 283
for scalable technologies, 199
Goals
organizational transformation of,
139, 140
working towards, 15
Governance by control, 333
Governance issues, 245
Group discussions, 84, 85, 88, 282
Group relations conferences, 353
H
Hackett Benchmarking, 304
Hardware upgrades, 124
High-risk operations, 59
High-velocity environments, 122
ICAP, 209
need for democratic leadership
in, 123
HR/IT governance model, 18
HTC case study, xxx, 225–226, 308
billable time records problems, 225
cascading effect of increased
profits, 229
CEO interactions, 227–228
CTO intermediary role, 232
follow-up developments, 231–232
IT contribution to learning-
knowledge-value chain, 237
IT history, 226–227
middle-ground solution, 228
organizational transformation,
229–231
process, 228–229
similarities to Ravell, 231
Human resources (HR)
dedicated staff allocation to
IT, 19
failure to align with, 18
social networking issues, 137–138
as undisclosed enemy, 18
Hype, about IT importance, 33
I
ICAP case study, xxx, 203–224,
308, 347
A brokers, 209–210
B brokers, 210
broker classes, 209–210
C brokers, 209–210
communities of practice, 218–222
COP common threads, 221
expanding markets through
technology, 213
five-year follow-up, 224
hybrid brokers, 210
IT contribution to
learning-knowledge-value
chain, 237
language discourse at, 215, 218
limitations of off-the-shelf
solutions, 205
middle-management COP, 220
recognition of IT as business
driver, 203
role of electronic trading in
business strategy, 203
steps to transformation, 217
Y2K event and executive
involvement, 214–215
Identity definition, 133
for IT, 10–12
Identity development, 91
in virtual teams, 179–180
Ideogenic issues, 181
in virtual teams, 182
Implementation phase, 52, 88
of technology business cycle,
55–57
Indirect returns, 27, 29, 47, 213
need for management to
recognize, 236
Individual learning, xxviii, 70, 83,
171, 344
event driven, 76
at ICAP, 205
by middle managers, 115, 116
moving to system-level learning
from, 143
personnel rotation and, 350
progression to systems thinking,
354–355
in ROD arc, 183–184
with ROD arc, 104
shift to social, 97
simultaneously with
organizational learning, 220
styles of, 69
vs. system-level, 344
Industry expertise, by chief IT
executives, 291
Inferential learning, 64
Information architects, 295
Information overload, 119
Information Technology (IT),
xxxii, 1
as agent for business
transformation, 39
best organizational structure
for, 28
and centralization/
decentralization, 24
centralization/decentralization
issues, 306–307
CEO perceptions of, 339
combining with organizational
learning, 339
contribution to
learning-knowledge-value
chain, 237
defining role and mission, 30
difficulty proving value of, 293
executive knowledge and
management of, 28–29
executive views of, 29–31
extent of non-IT executive
knowledge in, 28
hype about, 33
identifying driver component at
Siemens AG, 195
identity definition at Ravell,
10–12
impacts on marketing and
productivity, 35
importance to business strategy
and organizational
structure, 21, 30
improving performance through
organizational learning, 1
integrating into organizational
processes, xix
as key to success, xx
lack of representation at strategic/
executive levels, 339
management and strategic issues,
31, 34
means of evaluation, 28
need to provide greater
leadership, 340
operation reductions, 38
perception and role, 30
potential to provide competitive
advantage, 28
potential to reinvent business, 23
project perspective, 340
relationship to organizational
structure, 28
relevance to operations,
accounting, marketing, 24
role and mission, 30
role in behavioral
transformation, 228
role in business strategy, 26–27
seamless relationship with
organizational learning, xix
social networks and, 134–138
as strategic business tool, 26
strategic importance, 32
synergistic union with
organizational learning, 187
view as cost center, 38
ways to evaluate, 27–28
Informational flows, importance
of, 33
Innovation leaders, 295
Intangible assets, 147
Integrated disposition phase, in
ROD arc, 103
Integration, see IT integration
Intellectual capital
management
Interactive culture, 48
Internal business processes
in balanced scorecard, 150
ROD adjustments, 150
Internal capacity, lack of, 188
Internal development, 32
Internalization, and virtual teams, 171
Internalization dynamism, 173
Internet, 358
as driving force for e-business
realignment, 46
impact on business strategy, 32
increasing pressure to open, 134
Internet delivery, 23
impact on business strategy, 30
Intranets, 358
and accelerated learning pace, 142
Invisible organizations, 135, 137
ISO 9000, 77, 358
use for virtual team
processes, 177
Isolation, xxvii, 10
of IT departments, 1–2, 21
physical, 2
IT departments
assimilation challenges, 21
change management for, 123–133
complex working hours, 18
concerns about elimination of
integrity, 49–50
dichotomous relationship with
communities of practice, 78
incorporating into true
organizational learning, 68
integrating into organizational
culture, 3
isolation of, 1
lack of executive presence in
management teams, 61
marginalization, xv
as nucleus of all products and
careers, 211
overhead-related functions, 59
perception as back office
operations, 188, 211
perception as support function, 188
reporting to CFOs, 226
restructuring of, 50
shifting salary structures, 18
silo operations, 18
subjection to budgetary
cutbacks, 47
support functions, 59
view by other departments, 30
IT dilemma, xxiv, 21–39
day-to-day issues, 137
defining, 36–38
and developments in operational
excellence, 38–39
executive knowledge and
management, 28–29
executive perspective, 32–33
general results, 36
IT role in business strategy,
25–27
management and strategic
issues, 34
organizational context, 24
and organizational structure,
24–25
recent background, 23–24
IT evaluation, 27
IT integration, 3–5, 14, 23, 109
blueprint for, 5–6
enlisting support for, 6–7
executive confusion about, 36
failures in strategic
planning, 36
implementing, 12–14
progress assessment, 7
vs. outsourcing, 22
IT investment
historical phases, 44
risk identification for, 46
IT jobs, decline in U.S., 163
IT performance, measuring, 31
IT professionals
as individualists, 212
integrating with non-IT
personnel, 49
movement into other
departments, 49–51
outsider image, 22
poor interpersonal skills, 33
project management roles, 340
reluctance to take on responsibility
for others, 212
as techies, 21
IT projects, failure of completion, xxiv
IT roles and responsibilities, 60–61
IT spending needs, failure of ROI to
accommodate, 27
IT value, garbage can model, 53–54
IT virtual teams, 19
K
Knowledge creation, 53, 117,
118, 234
balanced scorecards and, 158–161
as basis of transformation, 144
and communities of practice, 169
by electronic communities, 80
structured approach
difficulties, 77
by technology, 222
Knowledge development, 64,
83, 234
Knowledge management, xii, xxi,
xxix, 45, 116–120, 360
in COPs, 197
four modes of, 172
instilling through
technology, 234
participation at Siemens, ICAP/
ETC, HTC, 236
in virtual context, 174
KPMG, 308
L
Language use, xxviii, 89–91, 350
at ICAP, 215
Latency problems, xxv
Leadership, xii
executive, 315, 320
movement away from individual,
349–350
need for greater IT, 340
vs. governance, 335
Leaps of abstraction, 10
Learner centeredness, 127
Learning
converting to strategic
benefit, 65
defining at organizational level,
14–15
double-loop, 4
single-loop, 4
situated, 75
Learning and growth perspective
in balanced scorecard, 150
in Ravell Phase I balanced
scorecard, 152, 153
ROD adjustments, 150–151
Learning and working, 75
Learning approaches, linear
development in, 96–107
Learning contracts, 130
Learning-knowledge-value
chain, 238
IT contributions to, 237
Learning maturation, 94, 96,
104, 343
phases of, 98
Ravell case study example, 100
Learning organizations, xii, xviii, 9,
45, 72–75
developing through reflective
practice, 15, 20
and fallacy of staff
replacement, 113
transitioning to through
technology acceleration, 13
Learning preferences, xxviii, 83–88
and curriculum development, 85
Learning Style Inventory (LSI),
84, 85
combined applied learning
wheel, 87
McCarthy rendition, 86
Learning Type Measure
instrument, 85
Learning wheel, 84, 86, 87, 95
Left-hand column, 9, 360
Legacy systems, xi, 124, 358
Lessons learned, Ravell Corporation
case study, 14–17
Life-cycle maturation, shortening
through strategic
integration, 44
Lifelong learning, 350
Line management, 111
BPR buy-in by, 25
defined, 111
executive and operations
perspectives, 9
as executives in training, 111
feedback from, 7
importance of support from, 6, 219
learning maturation among, 96–97
meeting attendance, 11
proximity to day-to-day activities,
110
in Ravell case study, 109, 132
role in managing organizational
learning, 109–111
role in product maturity phase, 62
strategic importance to IT
integration, 8–9
technical knowledge, 9
as technology users, 54, 55
vs. first-line managers, 111
vs. supervisors, 112
Linear development, 342
in learning approaches, 96–107
Linkages, 348
LinkedIn, 135
Lost jobs cycle, 163
M
Management presence, 324–328, 331
Management pushback, 17
Management self-development,
127, 360
Management skills, by chief IT
executives, 292
Management support, importance
of, 6
Management tiers, 113–114
Management values, 296–299, 302,
324–327, 331
Management vectors, 112–116, 341
and three-tier organizational
structure, 114
Mandatory services, at Siemens
AG, 200
Marginality, 89
Marginalization, xxvii, 2, 22, 49
of nonadaptive virtual team
members, 168
reducing, 18
Marketing
IT relevance to, 24
social networking issues, 138
Mastery learning, 128
Mature self, 182
role in virtual team success, 173
and virtual team
complexities, 185
Maturity
ethics and, 327, 334
as gradual process, 345
Maturity arcs, xxviii, 173, 332,
335, 342
Maturity stages
CEO best practices technology
arc, 314–315
in chief IT executive best
practices arc, 297–299
middle manager best practices
arc, 325–326
McDermott, Stephen, 113, 203–
216, 222–224
learning philosophy, 204
Measurable outcomes, 45, 92, 110,
142, 342
Measurement phase, 52, 88
in technology business cycle, 53
Mentorship, 127–130
in self-development process,
129, 131
Middle manager best practices arc,
323, 324
maturity stages, 325–326
performance dimensions,
326–327
Middle managers, 110–112
avoidance of worker compliance
demands, 227
balanced scorecard discourse,
157–158
best practices, 316–325
challenges of defining best
practices for, 316
community of practice at
ICAP, 221
executive-based best
practices, 324
first-line managers, 111
five stages of maturity, 323
implementation-based best
practices, 322
importance to organizational
learning, 109
issues at ICAP, 219
as key business drivers, 110
line managers, 111
multiple tiers of, 111, 113–114
organizational vs. individual
learning by, 115, 116
representing required changes
through, 220
stagnation and resistance to
change among, 110–111
supervisors, 112
three tiers of, 342
Middle-up-down approach, 110,
112, 150, 214, 215, 235,
313, 342
Milliken, Christopher, 308–312
Missed opportunities, consequences
of, 341
Mission, 36
executive perspectives, 22
identifying IT, 10
modifying based on emerging
technologies, 207
Moral responsibility, 336
Multiple locations, 166–169
for virtual teams, 178
Mythopoetic-talk, 93, 94, 181,
182, 361
N
Networked organizations, 24
New meaning perspectives, 350
Non-IT executives, extent of
technology knowledge,
28, 31
Non-sense, 93
Normative behavior, vs.
leadership, 350
Not knowing, 16
O
Off-the-shelf software, 124
limitations at ICAP, 205
Older workers, fallacy of
replacing, 113
Online banking, 60
Operational efficiencies, 150, 213
Operational excellence, 358
developments in, 39
Operational knowledge, in ROD
arc, 103, 106
Operational norms, cultural
differences regarding, 200
Operations management, at
ICAP, 219
Operations perspective, 9
IT relevance to, 24
Operations tier, 114
Operations users, 54
Opportunities matrix, 27, 29
Organizational change, 66
business case for, 122
cultural lock-in as barrier to, 120
external environment and, 121
and inevitability of cultural
change, 122
internal organization and, 121
and organizational readiness, 121
requirements for, 343
sustaining, 122
Organizational culture, 298,
300–301, 303
altering, 65–66
balanced scorecards and,
158–161
integrating IT into, 3
technology support for evolution
of, 42
transformation of, 16–17
Organizational discourse, 90
Organizational dynamism, 42–48
Organizational evolution, 94,
341, 344
Organizational interactions,
325–327, 329
Organizational knowledge creation,
64, 116, 361
steps to, 117
with technology extension, 117
Organizational leadership phase, in
ROD arc, 102–104
Organizational learning, xi, xii, 1,
20, 22, 38, 293, 295, 339,
343, 348
balanced scorecard applicability
to, 154
balancing with individual
learning, 70
CEO need to understand, 312
and communities of practice,
75–83
and dealing with change
agents, 69
defining, 14–15
evolution of, 144
at executive levels, 215
experiential learning and, 83–88
and explicit knowledge
creation, 236
fostering through trust, 210
gradual process of, 345
HR/IT integration through, 18
instilling through
technology, 234
IT department isolation from, 2
and IT role at Siemens, 193
and language use, 89–96
learning preferences and, 83–88
limitations of top-down and
bottom-up approaches,
109–110
linear development in learning
approaches, 96–107
links to transformation and
performance, 64
by middle managers, 115
need for executive involvement
in, 215
and organizational knowledge
creation, 116
ratio to individual learning by
manager type, 116
reflective organizational
dynamism arc model, 101
relationship to self-generating
organizations, 351–352
and ROD, 71, 215
ROD and, 293
in ROD arc, 183
self-development and, 133
at Siemens AG, 192
simultaneously with individual
learning, 220
and social discourse, 89–96
synergistic union with IT, 187
transformational results, 139
in virtual teams, 185
Organizational learning
management, 109
change management, 120–133
knowledge management,
116–120
management vectors and,
112–116
mapping tacit knowledge to
ROD, 119
role of line management, 109–111
social networks and IT issues,
134–138
Organizational learning theory, 73
North American vs. global
cultural norms in, 67
and technology, 64–72
Organizational memory, building, 1
Organizational readiness, 121
Organizational role analysis, 353
Organizational structure, 313–314
CEO performance dimensions, 315
IT and, 24–25
IT importance to, 21
Organizational theory, 63
Organizational transformation,
xxviii, 63, 361
balanced scorecard and, 139–144,
146–147
defined, 146
in driver to supporter life cycle,
146
as event milestones, 146
evolutionary aspect of, 143
HTC case study, 229–230
at ICAP, 217
and knowledge creation, 158–161
knowledge links to, 160
ongoing evaluation methods,
146–156
as ongoing process, 146
stages, with ROD, 145
strategic integration and cultural
assimilation as foundation
for, 143
technology as driver of, 206
three dimensions of, 139–141
validation of, 147
vs. change, 152
Organizations
as goal-oriented activity
systems, 63
as interdependent members, 63–64
Organizing principles, and virtual
teams, 176
Other centeredness, 296
in chief IT executives, 296
Outsourcing, xxx, 163–165
all-time highs, 346
by CEOs, 306
cost savings benefits, 163
executive opinions, 29
failure to achieve cost-cutting
results with, 304
of IT, 178
of mature technologies, 62
perception by midsize firms, 32
talent supply benefits, 163
time zone challenges, 164
in virtual teams, 178
vs. internal development, 32
vs. IT integration, 22
P
Payback, 46
Peer consulting groups, 353
Performance
linking organizational learning
to, 64, 65
reflective reviews, 6
Performance dimensions
CEO best practices technology
arc, 315
middle manager best practices
arc, 326, 327
Performance improvement, 3
Performance measurement, 31,
35–37
Permanence, disappearance in
organizations, 185
Personal transformation, through
self-development, 128
Personnel rotation, and individual
learning, 350
Persuasion, as skill to transform talk
into action, 180–181
Pillars model, 45–48
Planning phase, 52
in technology business cycle,
54–55
Politics, xxvi
blood cholesterol analogy, 67
negative impact of power
centralization, 123
and organizational learning,
66–67
Power approach, limitations for
transformation, 13
Power centralization, 122
President’s Council, 193
CIO exposure to, 195
at Siemens AG, 191, 192
Privacy issues, 336
Problem-solving modes, in virtual
teams, 175
Process comprehension, by chief IT
executives, 297
Process leaders, 295
Process measurement, 126
in Ravell Phase I balanced
scorecard, 153
Product maturity, 61
Productivity, lagging returns in, 44
Project completion, 125
barriers to, 69
Project ethics, 324, 326–327, 331
Project managers
broad IT responsibilities, 57
as complex managers, 56
IT staff as, 340
Q
Quality, commitment to, 15–16
R
Ravell Corporation case study, xxvi,
1–2, 51, 60, 120, 342, 347
alignment with administrative
department, 17–19
balanced scorecard, 152–153
blueprint for integration, 5–6
BPR buy-in, 25
commitment to quality, 15–16
cultural evolution, 97
culture transformation, 16–17
decision-support systems, 159
defining reflection and learning,
14–15
employee resistance, 8
enlisting support, 6–7
goal orientation, 15
implementing integration,
12–14
IT identity definition, 10–12
IT self-reflection, 9–10
key lessons, 14–17
learning maturation analysis, 96,
99, 343
line management importance,
8–9, 109
new approach to IT integration,
3–7
not knowing mindset, 16
performance evaluation, 133
phase I balanced scorecard, 153
reflective practices and
measurable outcomes, 65
and self-development results, 132
technological acceleration at, 151
virtual team aspect, 164
Real date of delivery, 211
Reflection, 10
with action, 74, 76, 361
defining on organizational level,
14–15
education through, 15
organization movement toward,
12–14
Reflection matrix, 354
Reflective observation, as learning
preference, 84
Reflective organizational dynamism
arc model, 101
Reflective practices, xxviii, 4, 19, 38,
96, 142, 143, 152, 219, 313,
343
by chief IT executives, 296
fostering organizationally, 5
at ICAP, 205
importance to ROD, 74
individual, group, and
organizational, 352, 353
by middle managers, 115
need to develop in virtual teams,
165
in Ravell case study, 65
Reflective skills development, 9–10
Relationship building, by chief IT
executives, 291
Replacement phase, in technology
business cycle, 61–62
Reporting structures, 34, 37, 312
changes in, 38
for CIOs, 288
of CIOs, 195
CIOs to CEOs, 305
CIOs to CFOs, 188
inconsistent, 339
increased reporting to
CEOs, 304
and organizational change, 225
technological dynamism effects
on, 209
Requirements definition, risks, 126
Research and development, 348
Resistance to change, 209
to integration, 8
Responsive organizational
dynamism (ROD), 42–48,
53, 70, 89, 165, 187, 340
and adaptation, 349
arc model, 102
attainment of, 345
balanced scorecard
modifications, 149
and best practices arcs, 334
best practices to implement, 287
cultural assimilation in, 48–52
dependence on organizational
learning, 72
developmental stages, 100
drivers and supporters concept,
58–59
at HTC, 230–231, 233
at ICAP, 211, 217
importance of reflective practices
to, 74
and individual learning, 123
integrating transformation theory
with, 139
and IT roles and responsibilities,
60–61
mapping tacit knowledge to, 117
in multinational companies, 203
organizational inability to
manage, 97
and organizational learning,
71, 294
relationship to self-generating
organizations, 352
replacement/outsourcing decision,
61–62
requirements, 349
role in economic survival, 232
role of social networks in, 136
at Siemens AG, 189, 199
situational and evolutionary
learning in, 112
stages of organizational
transformation with, 145
and strategic innovation, 349
and strategic integration, 43–48
and systems thinking, 123
technology as, 41
technology business cycle and,
52–57
three-dimensional, 167
and Vince’s reflection matrix, 354
as way of life, 186
Retention
evaluation methods, 145
of IT talent by chief IT
executives, 292
and organizational
transformation, 144
Return-on-investment (ROI), 114
example of direct IT
contributions to, 230
failure of traditional strategies,
27, 29
for ICAP, 213
IT expenditure issues, 189
line management responsibilities,
111
and measurement phase, 53
monetary and nonmonetary, 53
payback as basis for, 46
technology failures, 69
Revalidation, at Siemens AG, 198
Revolutionary change, 120
Rippling effect, 154
Risk assessment, 46
in feasibility phase, 53
misunderstandings about, 48
Risk management, as CEO role,
307–313
Risk minimization, with social
networks, 134
Risk orientation, in virtual
teams, 175
ROD arc, 99, 102, 144, 334
with applied individual learning
wheel, 104
department/unit view as other
stage, 100
example, 104
integrated disposition phase, 100,
102, 103
operational knowledge stage, 100
organizational leadership
phase, 103
stable operations phase, 102, 103
stages of individual and
organizational learning,
183–184
Roles and responsibilities
varying IT, 57
of virtual team members, 179
S
Santander Bank, 60
Sarbanes-Oxley (SOX) Act, 333
Scope changes, 56
risks, 126
Selection, and organizational
transformation, 144
Self-development
evaluation phase, 131, 133
formal learning program phase,
129–132
fostering of bottom-up
management through, 132
implementation phase, 131
learning-to-learn phase, 129, 132
as management issue, 128
in organizational learning, 133
in Ravell case study, 132
setbacks to, 132
through discourse, 132
as trial-and-error method, 128
Self-generating organizations, 351,
352
Self-governance, 6
Self-managed learning, 130
Self-management, 124
developing among IT staff, 127
by non-IT staff members, 127
and reflective practice theory, 128
Self-motivation, 123–124
Self-reflection, 9, 12, 16
promotion of, 4
Senior lower level, at Siemens AG,
194
Senior management
initiation of evolutionary learning
by, 189
sharing in learning, 196
Sense and respond, 58, 59, 199, 346
and IT, 59
Sense making, 63, 94
Shared leadership, 18
Shared services leaders, 295
Siemens AG, xxviii, 68, 344, 347
case study, 187–203
CEO quarterly meetings, 191
CFO quarterly meetings, 192
CIO advisory board, 190, 191
CIOs as senior lower level, 194
competitive advantage
improvement, 190
corporate services standards, 200
five-year follow-up, 202–203
interrelationships among
CIO communities of
practice, 191
IT contribution to
learning-knowledge-value
chain, 237
local-to-global links, 194
mandatory services standards, 200
multiple levels of CIOs, 242
optional technologies
standards, 200
President’s Council, 191, 192
quarterly CFO meetings, 190
revalidation concept at, 198
storyboarding process, 198
technology standards at, 200
Silo operations, 18, 50
Single-loop learning, 4, 361
Situated learning, 75
in ROD, 112
Skills, 92
in virtual teams, 180–181
Social discourse, 89–96, 234
application to virtual teams, 178
and content-activity
relationships, 90
and emotion, 92–96, 180–182
and identity development, 91,
179–180
Marshak’s model mapped to
technology learning
wheel, 93
as “passive” activity, 90
skills and, 92, 180
technology as driver of, 209
type of talk containers, 96
Social networks, xxix
ambiguous effects, 138
attempts to lock out
capabilities, 134
gender participation issues, 138
invisible participants, 137
and IT, 134–138
management issues, 136
Socialization, in virtual teams, 172
Socialization dynamism, 173–177
Soft skills analysis, 180–181
Software packages, 124
Software upgrades, 124
Spiral conversion process, 110
Spy networks, 137
Stable operations phase, in ROD
arc, 102, 103
Stable technology integration, 297,
298, 326
Staff replacement
fallacy of, 113
vs. staff transformation, 4, 20
Standards, 307
CEO needs for, 307
lack of, 56–57
and reduced development
costs, 305
at Siemens AG, 200
Storyboarding process, 358
at Siemens, 198
Strategic advocacy, 67
Strategic alignment, 45
Strategic innovation, 349
Strategic integration, xxviii, 42–48,
82, 187, 197, 201, 293, 341,
343, 361
and communities of practice, 78
consequential interoperability
and, 45
as foundation for organizational
transformation, 143
and need for increased cultural
assimilation, 230
as outcome of ROD, 48
shortening life-cycle maturation
through, 44
Strategic learning, 65, 110, 349
Strategic performance, 342, 343
improving through organizational
learning, 63
Strategic thinking
by chief IT executives, 290
importance to chief IT executive
role, 292
shortage of time for, 293
Strategy, see Business strategy
Strategy map, 148, 151–158
Supervisors, 112
Supply chain executives, 295
Support, enlisting, 6–7
Supporter functions, 58, 113,
312, 346
and organizational
transformation, 144
and perception of IT, 188
at product maturity, 61
SWAT teams, 13
System upgrades, 124, 125
Systems thinking, 1, 83, 89, 94,
123, 347
by executives, 114
progression to, 354–355
T
Tacit knowledge, 361
challenges for COP virtual
organizations, 172
in e-mail communications, 171
IT manager practice at
transforming, 197
mapping to ROD, 119, 174–176
protection by technology, 222
role in knowledge
management, 118
transformation to explicit product
knowledge, 223, 236
translating to explicit forms, 169
and virtual teams, 175–176
Talk, and discourse, 90
Talk-action cycles, 96
application to virtual teams, 178
role of persuasion in, 180
Teachers, as facilitators, 127
Techies, stereotyping as, 21
Technical knowledge, need for
up-to-date, 125
Technical resources, strategic
integration into core
business units, 19
Technological acceleration, xxiv, 41,
43, 66, 72, 208, 296, 307
enabling organizations to cope
with, 45
in Ravell case study, 152
Technological dynamism, xxiv,
41–42, 66, 209, 210, 340,
343, 362
as 21st-century norm, 178
self-management
requirements, 128
at Siemens AG, 193
Technological proficiency, as staff
job requirement, 211
Technology, xxxii
aligning with business
strategy, 149
basing on quality of business
plan, 212
benefits at ICAP, 222–223
centrality to competitive
strategy, 348
CEO conceptual knowledge
of, 103
challenges of strategic use, xxii
as change agent, 70, 195
commoditization of, 224
as commodity, 207
and communities-individuals
relationships, 76
and competitive advantage, 72
as component of discourse, xxvi
contribution to
learning-knowledge-value
chain, 236
as driver of business strategy, xxiii
as driver of organizational
transformation, 206
as driver of virtual team
growth, 163
effects on discourse, 94
effects on legacy systems, 47
evolution of, 223
executive need to recognize
instability of, 199
expansion benefits, 223
failure to result in improved
ROI, 69
feasibility stage, 87
flexibility benefits, 223
impact on ICAP business, 204
impatience with evolution of, 202
implementation risks, 126
impossibility of predicting impact
of, 83
increasing organizational
learning through, 68
as independent variable, 43
integration of business
implementation, 314,
325–326
as internal driver, 42
knowledge creation by, 223
learning maturation with, 100
linking to learning and
performance, 71
multiplicity of business
implementations, 325
multiplicity of business
perspectives, 314
and need for interaction, 142
need to integrate into
organizational
learning, xviii
ongoing
implementation of, 349
perception as support function,
206
phased-in implementation,
149–150
role in knowledge
management, 117
strategic uses, 315–316
in TQM, 140
unpredictability of, 41
untested, 56
variability of, 236
as variable, xxvii, 41, 112, 235,
311, 340
Technology acceleration, 13, 20, 47
Technology-based organizations, xviii
Technology business cycle, 52, 63
evolution phase, 52, 57
feasibility phase, 52, 53
garbage can model of IT value, 54
implementation phase, 52, 55–57
measurement phase, 52–54
planning phase, 52, 54–55
stages of, 52
Technology business
implementations,
multiplicity of, 325
Technology cognition, 298–300
Technology competence, 300
by chief IT executives, 297
Technology concepts, 318
for CEOs, 316
Technology definitions branding,
358–359
Technology evaluation, 149
Technology for technology’s sake, 26
Technology implementation
competence for middle
managers, 325
stability of, 326
Technology implementation
competence/recognition, by
middle managers, 325
Technology investment, as
normative process, 348
Technology leadership, 298
by chief IT executives, 298
Technology life cycle, xxiv, xxvii, 346
Technology perspectives,
multiplicity of, 297
Technology proficiency, by chief IT
executives, 291
Technology project leadership, 326
Technology projects
reasons for failure, 310
risks to success, 125
Technology road map, 342, 343
Technology strategy map, 151
Technology users, three types of,
54–55
Telephone brokers, 224
electronic trading as supplement
to, 209
at ICAP, 204
Teleworkers, 81
Theft, 336
Three-tier organizational
structure, 113
Throughput improvements, at
ICAP, 213
Time zones, virtual team
challenges, 177
Timeliness
barriers to, 69
issues with IT projects, 74
Timing, and learning
maturation, 104
Tool-talk, 93, 95, 181, 361
Top-down strategy introduction,
xxix, 214, 235, 345
at Siemens, 188, 192
Total quality management (TQM),
technology in, 140
Training
for CIOs, 202
and documentation, 126
inadequacy of, 14
limitations of IT-based, 125
for Siemens CIOs, 196
for virtual team COPs, 173, 177
Transformation, vs. change, 141
Transformative learning, 142
Trust
based on corporate honesty, 210
in communities of practice, 197
organizational movement toward,
12–14
Twitter, 135
Type of talk containers, 95, 181
virtual team applicability, 182
U
Uncertainty, in technology, 77
Unemployed workers, without
rights, 346
Unpredictability, xxi
User interface, 359
User level, 359
V
Value-added services, 232
Variation, and organizational
transformation, 143
Velocity fluxes, 122
Version control, 124
Videoconferencing, 81
Vince’s reflection matrix, 354
Virtual teams, xxx, 163, 359
combination and, 171–172
combination dynamism in, 173
communication failures in, 164
communities of practice with,
170–171
contract administration, 164
cultural and language
barriers, 163
distorted perceptions in, 166
executive support for, 174
externalization and, 169, 171
externalization dynamism in, 172
internalization and, 171
internalization dynamism in, 173
lack of individual development in,
165–166
management of, 19, 24, 164, 166
marginalization of nonadaptive
members, 169
multi-company challenges, 177
multiple identities in, 179
multiple location challenges, 164,
166–178
need for documented
processes, 177
need for individual
development, 344
need for mature individuals
in, 185
operating differences from
traditional teams, 168
outsourcing in, 177–178
quality checkpoints for, 164
readiness for participation, 173
socialization and, 172
socialization dynamism in,
173–174, 177
soft skills assessment challenges,
180–181
status, 165–166
as subset of organizations, 167
tacit knowledge and, 175–176
and three-dimensional
ROD, 167
transient nature of members, 168
W
Web, 359
Women-centric
communication, 138
Worldviews, and virtual teams, 176
Y
Y2K event, 214, 218, 359
Z
Zero latency, 43