Read the two discussion questions below and provide an answer to each; you also need to provide two references per answer.
derator or interviewer of the group be familiar with group processes and with the range of
possible roles as moderator (Barbour, 2008; Hennink, 2014; Krueger & Casey, 2015; Stewart
& Shamdasani, 2015).
Finally, “focus groups work best for topics people could talk about to each other in their
everyday lives—but don’t” (Macnaghten & Myers, 2004, p. 65). Obviously, a focus group is
a poor choice for topics that are sensitive, highly personal, and culturally inappropriate to
talk about in the presence of strangers. Of course, it’s not always obvious ahead of time
how appropriate a topic might be. Crowe (2003) reports successful use of focus groups to
create culturally appropriate HIV prevention material for the deaf community. Jowett and
O’Toole (2006) report an interesting analysis of two focus groups—one of mature students
and their attitude toward participation in higher education, and one of young women and
their views of feminism. They found that the mature students’ focus group was a failure
but the young women’s group was a success. The authors had not anticipated “how
ingrained the sense of inadequacy is for some people who have felt excluded from
education” (p. 462), nor how the power imbalance among members of the mature
students’ group and between the researcher and the group inhibited participation. Finally,
Stewart and Williams (2005) explore the practical and ethical issues of conducting
synchronous and asynchronous online focus groups.
Thus, as with any other data collection method, focus groups are appropriate when they are the best way to obtain the data that address your research question. And
as with any other method, the advantages need to be weighed against the disadvantages;
one also needs to develop the skills necessary for using this technique.
Online Interviews
There is no question that the Internet has changed the world. It has also increased the
possibilities for the myriad ways that one can collect data through online venues in
conducting qualitative research through various information communication technologies
(ICTs) and computer mediated communication (CMC) tools (Salmons, 2015). Qualitative
data are collected from or through email, blogs, online discussion groups, Skype, tweets,
texts, and various forms of social media. Here we discuss issues in conducting online
interviews.
One can conduct online interviews synchronously (in real time) through various CMC tools
such as Skype or Adobe Connect. These are typically verbal interviews with a video
component that are more like face-to-face interviews; one can also conduct voice-to-voice
real-time interviews over the telephone. One can also conduct interviews asynchronously
(where there is a lag time) over email or on an online discussion group; asynchronous
interviews tend to be text-based or written interviews. There are strengths and weaknesses
to both synchronous and asynchronous venues. As will be discussed in more detail later,
in general it is helpful to build rapport with participants when conducting qualitative
interviews. Rapport building can be slightly more challenging in text-only asynchronous
venues (such as email), when visual cues are missing (James & Busher, 2012). Further,
participants may not respond to email queries or not respond to certain questions over
email that they would likely answer in synchronous video or voice-to-voice formats. On the
other hand, text-based interviews over email provide the researcher with a ready-made
transcript, making it easy to document what was said, though the nonverbal cues and
pauses in conversation are missing. Such an email “transcript” can save the researcher
time and money in transcription costs.
Given the availability of various information and communications technology (ICT) tools
for conducting online interviews in either synchronous or asynchronous formats, Salmons
(2015), in her book on online interviews, presents a framework for considering what she
refers to as “e-interview research” (p. 4). She invites the researcher to explore key questions
in eight interrelated categories: (1) aligning the purpose of the research with the design;
considering issues related to (2) choosing data collection methods and (3) one’s position
as a researcher; determining (4) the e-interview style, (5) the type of ICT tools to use, (6)
sampling issues, (7) ethical issues, and (8) actually collecting the data. While qualitative
researchers always need to consider similar issues in all qualitative studies, Salmons
proposes questions and issues specifically related to the online environment.
There is a growing discussion of the availability of various ICT tools for conducting online
interviews, many of which are reviewed by Salmons (2015) and others, talking mainly
about individual interviews. Tuttas (2015) focuses more specifically on lessons learned
from using real-time audiovisual web conferencing technology to carry out focus group
interviews with nurses from geographically dispersed locations in the United States. While
she ultimately chose to use Adobe Connect, she considers the strengths and weaknesses
of various web conferencing technology venues (Skype, ooVoo, GoToMeeting, and Adobe
Connect) for her focus group interviews, which can provide some guidance about the available options.
Like any data collection method, conducting online interviews has its strengths and
weaknesses. One of the obvious strengths is that the researcher is no longer constrained
by geography in considering participants. A researcher could interview participants across
the world, and could perhaps even conduct focus group interviews where all parties can
see each other. Another strength is that many CMC venues allow video recordings to be
made, which can be helpful if one wants to explore or review nonverbal cues later. Some
obvious weaknesses are that not everyone has access to various CMC tools or the
knowledge of how to use them. Further, technology is always subject to breakdowns.
There can be problems with audio recording equipment, as voices sometimes break up on
cell phones or over Skype or other computer mediated venues, which can cause frustration
for both the interviewer and interviewee. Finally, there is always the chance of
confidentiality being compromised when one uses CMC tools over the Internet. While this
may be unlikely, it is always a consideration for researchers and institutional review
boards in dealing with ethical issues in doing research. In sum, all of the strengths and
weaknesses of CMC tools in relationship to qualitative interviewing need to be considered
when undertaking a qualitative research study.
Asking Good Questions
The key to getting good data from interviewing is to ask good questions; asking good
questions takes practice. Pilot interviews are crucial for trying out your questions. Not only
do you get some practice in interviewing, but you also quickly learn which questions are
confusing and need rewording, which questions yield useless data, and which questions,
suggested by your respondents, you should have thought to include in the first place.
Different types of questions will yield different information. The questions you ask depend
upon the focus of your study. Using the example of mentoring in the career development
of master teachers, if you wanted to know the role mentoring played in career
development, you would ask questions about teachers’ personal experience with
mentoring and probably get a descriptive history. Follow-up questions about how they felt
about a certain mentoring experience would elicit information that is more affective in
nature. You might also want to know their opinion as to how much influence mentoring
generally has in a teacher’s career.
The way in which questions are worded is a crucial consideration in extracting the type of
information desired. An obvious place to begin is by making certain that what is being
asked is clear to the person being interviewed. Questions need to be couched in familiar
language. “Using words that make sense to the interviewee, words that reflect the
respondent’s world view, will improve the quality of data obtained during the interview.
Without sensitivity to the impact of particular words on the person being interviewed, the
answer may make no sense at all—or there may be no answer” (Patton, 2015, p. 454).
Avoiding technical jargon and terms and concepts from your particular disciplinary
orientation is a good place to begin. In a study of HIV-positive young adults, for example,
participants were asked how they made sense of or came to terms with their diagnosis, not
how they constructed meaning in the process of perspective transformation (the
theoretical framework of the study) (Courtenay, Merriam, & Reeves, 1998).
Types of Questions, Good Questions, and Questions to Avoid
An interviewer can ask several types of questions to stimulate responses from an
interviewee. Patton (2015) suggests six types of questions:
1. Experience and behavior questions —This type of question gets at the things a person
does or did, his or her behaviors, actions, and activities. For example, in a study of
leadership exhibited by administrators, one could ask, “Tell me about a typical day at
work; what are you likely to do first thing in the morning?”
2. Opinion and values questions —Here the researcher is interested in a person’s beliefs or
opinions, what he or she thinks about something. Following the preceding example of
a study of administrators and leadership, one could ask, “What is your opinion as to
whether administrators should also be leaders?”
3. Feeling questions —These questions “tap the affective dimension of human life. In
asking feeling questions—‘how do you feel about that?’—the interviewer is looking for
adjective responses: anxious, happy, afraid, intimidated, confident, and so on” (p. 444).
4. Knowledge questions —These questions elicit a participant’s actual factual knowledge
about a situation.
5. Sensory questions —These are similar to experience and behavior questions but try to
elicit more specific data about what is or was seen, heard, touched, and so forth.
6. Background/demographic questions —All interviews contain questions that refer to the
particular demographics (age, income, education, number of years on the job, and so
on) of the person being interviewed as relevant to the research study. For example, the
age of the respondent may or may not be relevant.
Interestingly, Patton (2015) recommends against asking “why” questions because they
tend to lead to speculation about causal relationships and they can lead to dead-end
responses. Patton recounts an amusing interview with a child in a study of open
classrooms. When a first grader responded that her “favorite time in school” was recess,
Patton asked her why she liked recess. Her answer was because she could go outside and
play on the swings. When he asked why she went outside, the child responded, “Because
that’s where the swings are!” (p. 456). Although “why” questions can put an end to a line
of questioning, it has been our experience that an occasional “why” question can uncover
insights that might be speculative but that might also suggest a new line of questioning.
Another typology of different types of questions that we have found particularly useful in
eliciting information, especially from reticent interviewees, is Strauss, Schatzman, Bucher,
and Sabshin’s (1981) four major categories of questions: hypothetical, devil’s advocate,
ideal position, and interpretive questions. Each is defined in Table 5.2 and illustrated with
examples from a case study of displaced workers participating in a Job Training Partnership Act (JTPA) program.
TABLE 5.2
Four Types of Questions with Examples from a JTPA Training Program Case Study.
1. Hypothetical questions—Ask what the respondent might do, or what it might be like in a particular situation; these usually begin with “what if” or “suppose.” Example: “Suppose it were my first day in this training program. What would it be like?”
2. Devil’s advocate questions—The respondent is challenged to consider an opposing view or explanation to a situation. Example: “Some people would say that employees who lost their job did something to bring about being fired. What would you tell them?”
3. Ideal position questions—Ask the respondent to describe an ideal situation. Example: “Would you describe what you think the ideal training program would be like?”
4. Interpretive questions—The researcher advances tentative explanations or interpretations of what the respondent has been saying and asks for a reaction. Example: “Are you finding returning to school as an adult a different experience from what you expected?”
Hypothetical questions ask respondents to speculate as to what something might be like
or what someone might do in a particular situation. Hypothetical questions begin with
“What if” or “Suppose.” Responses are usually descriptions of the person’s actual
experience. In the JTPA study, for example, the hypothetical question, “Suppose it were my
first day in this training program—what would it be like?” elicited descriptions of what it
was actually like for the participants.
Devil’s advocate questions are particularly good to use when the topic is controversial and
you want respondents’ opinions and feelings. This type of question also avoids
embarrassing or antagonizing respondents if they happen to be sensitive about the issue.
The wording begins, “Some people would say,” which in effect depersonalizes the issue.
The response, however, is almost always the respondent’s personal opinion or feeling
about the matter. In the JTPA example, the question, “Some people would say that
employees who lost their job did something to bring it about. What would you say to
them?” usually revealed how the respondent came to be unemployed and thus involved in
the training program.
Ideal position questions elicit both information and opinion; these can be used with
virtually any phenomenon under study. They are good to use in evaluation studies
because they reveal both the positives and the negatives or shortcomings of a program.
Asking what the ideal training program would be like in the JTPA example revealed things
participants liked and would not want changed, as well as things that could have made it
a better program.
Interpretive questions provide a check on what you think you are understanding, as well as
offer an opportunity for yet more information, opinions, and feelings to be revealed. In the
JTPA example, the interpretive question, “Would you say that returning to school as an
adult is different from what you expected?” allowed the investigator to confirm the
tentative interpretation of what had been said in the interview.
Overall, good interview questions are those that are open-ended and yield descriptive data,
even stories about the phenomenon. The more detailed and descriptive the data, the better.
The following questions work well to yield this type of data:
Tell me about a time when…
Give me an example of…
Tell me more about that…
What was it like for you when…
Some types of questions should be avoided in an interview. Table 5.3 outlines three types
of questions to avoid and illustrates each from the JTPA study. First, avoid multiple
questions—either one question that is actually a multiple question or a series of single
questions that does not allow the respondent to answer one by one. An example of a
multiple question is, “How do you feel about the instructors, the assignments, and the
schedule of classes in the JTPA training program?” A series of questions might be,
“What’s it like going back to school as an adult? How do instructors respond to you? What
kind of assignments do you have?” In both cases the respondent is likely to ask you to
repeat the question(s), ask for clarification, or give a response covering only one part of
the question—and that response may be uninterpretable. If, for example, an interviewee
responded to the question, “How do you feel about the instructors, the assignments, and
the schedule of classes?” with “They’re OK—some I like, some I don’t,” you would not know
whether instructors or assignments or the schedule was being referred to.
TABLE 5.3
Questions to Avoid.
Multiple questions: “How do you feel about the instructors, the assignments, and the schedule of classes?”
Leading questions: “What emotional problems have you had since losing your job?”
Yes-or-no questions: “Do you like the program? Has returning to school been difficult?”
Leading questions should also be avoided. Leading questions reveal a bias or an
assumption that the researcher is making, which may not be held by the participant. These
set the respondent up to accept the researcher’s point of view. The question, “What
emotional problems have you had since losing your job?” reflects an assumption that
anyone losing a job will have emotional problems.
Finally, all researchers warn against asking yes-or-no questions. Any question that can be
answered with a simple yes or no may in fact be answered just that way. Yes-or-no
responses give you almost no information. For the reluctant, shy, or less verbal
respondent, they offer an easy way out; they can also shut down or at least slow the flow
of information from the interviewee. In the JTPA example, questions phrased in a yes-or-
no manner, although at their core they are seeking good information, can yield nothing.
Thus asking, “Do you like the program?” may be answered yes or no; rephrasing it to,
“What do you like about the program?” necessitates more of a response. The same is true
of the question “Has returning to school been difficult?” Asking, “How have you found the
experience of returning to school?” mandates a fuller response.
A ruthless review of your questions to weed out poor ones before you actually conduct an
interview is highly recommended. Ask the questions of yourself, challenging yourself to
answer as minimally as possible. Also note whether you would feel uncomfortable
honestly answering any of the questions. This review, followed by a pilot interview, will go
a long way to ensure that you are asking good questions.
Probes
Probes are also questions or comments that follow up on something already asked. It is
virtually impossible to specify these ahead of time because they are dependent on how
the participant answers the lead question. This is where being the primary instrument of
data collection has its advantages, especially if you are a highly sensitive instrument. You
make adjustments in your interviewing as you go along. You sense that the respondent is
onto something significant or that there is more to be learned. Probing can come in the
form of asking for more details, for clarification, for examples. Glesne and Peshkin (1992)
point out that “probes may take numerous forms; they range from silence, to sounds, to a
single word, to complete sentences” (p. 85). Silence, “used judiciously…is a useful and
easy probe—as is the bunched utterance, ‘uh huh, uh huh,’ sometimes combined with a
nodding head. ‘Yes, yes’ is a good alternative; variety is useful” (p. 86, emphasis in
original). As with all questions, not just probes, the interviewer should avoid pressing too
hard and too fast. After all, the participant is being interviewed, not interrogated.
Probes or follow-up questions—or as Seidman (2013) prefers to call it, “exploration”—can
be as simple as seeking more information or clarity about what the person has just said.
These are typically who, what, when, and where questions, such as Who else was there?
What did you do then? When did this happen? or Where were you when this happened?
Other probes seek more details or elaboration, such as What do you mean? Tell me more
about that. Give me an example of that. “Walk” me through the experience. Would you
explain that? and so on.
The following is a short excerpt (Weeks, n.d.) from an interview with a man in midlife who
had been retained (held back a grade) in grammar school. The investigator was interested
in how being retained had affected the person’s life. Note the follow-up questions or
probes used to garner a better understanding of his initial reaction to being retained.
Interviewer: How did you feel about yourself the second time you were in first grade?
Respondent: I really don’t remember, but I think I didn’t like it. It was probably
embarrassing to me. I think I may have even had a hard time explaining it to
my friends. I probably got teased. I was probably defensive about it. I may
even have rebelled in some childlike way. I do know I got more aggressive at
this point in my life. But I don’t know if being retained had anything to do
with it.
Interviewer: How did you feel about your new first grade teacher?
Respondent: She was nice. I was very quiet for a while, until I got to know her.
Interviewer: How did you feel about yourself during this second year?
Respondent: I have to look at it as a follow-up to a period when I was not successful.
Strictly speaking, I was not very successful in the first grade—the first time.
Interviewer: Your voice sometimes changes when you talk about that.
Respondent: Well, I guess I’m still a little angry.
Interviewer: Do you feel the retention was justified?
Respondent: (long pause) I don’t know how to answer that.
Interviewer: Do you want to think about it for a while?
Respondent: Well, I did not learn anything in the first grade the first time, but the lady was
nice. She was my Mom’s best friend. So she didn’t teach me anything, and
she made me repeat. I had to be retained, they said, because I did not learn
the material, but (shaking his finger), I could have. I could have learned it
well. I was smart.
The best way to increase your skill at probing is to practice. The more you interview,
especially on the same topic, the more relaxed you become and the better you can pursue
potentially fruitful lines of inquiry. Another good strategy is to scrutinize a verbatim
transcript of one of your interviews. Look for places where you could have followed up but
did not, and compare them with places where you got a lot of good data. The difference
will most likely be from having maximized an opportunity to gain more information
through gentle probing.
The Interview Guide
The interview guide, or schedule as it is sometimes called, is nothing more than a list of
questions you intend to ask in an interview. Depending on how structured the interview will
be, the guide may contain dozens of very specific questions listed in a particular order
(highly structured) or a few topical areas jotted down in no particular order (unstructured)
or something in between. As we noted earlier, most interviews in qualitative research are
semistructured; thus the interview guide will probably contain several specific questions
that you want to ask everyone, some more open-ended questions that could be followed
up with probes, and perhaps a list of some areas, topics, and issues that you want to know
more about but do not have enough information about at the outset of your study to form
specific questions.
An investigator new to collecting data through interviews will feel more confident with a
structured interview format in which most, if not all, questions are written out ahead of
time in the interview guide. Working from an interview schedule allows the new researcher
to gain the experience and confidence needed to conduct more open-ended questioning.
Most researchers find that they are highly dependent upon the interview guide for the first
few interviews but soon can unhook themselves from constant reference to the questions
and go with the natural flow of the interview. At that point, an occasional check to see
whether all areas or topics are being covered may be all that is needed.
New researchers are often concerned about the order of questions in an interview. No rules
determine what should go first and what should come later. Much depends upon the
study’s objectives, the time allotted for the interview, the person being interviewed, and
how sensitive some of the questions are. Factual, sociodemographic-type questions can
be asked to get the interview started, but if there are a lot of these, or if some of them are
sensitive (for example, if they ask about income, age, or sexual orientation), it might be
better to ask them at the end of the interview. By then the respondent has become
invested in the interview and is more likely to see it through by answering these questions.
Generally it is a good idea to ask for relatively neutral, descriptive information at the
beginning of an interview. Respondents can be asked to provide basic descriptive
information about the phenomenon of interest, be it a program, activity, or experience, or to
chronicle their history with the phenomenon of interest. This information lays the
foundation for questions that access the interviewee’s perceptions, opinions, values,
emotions, and so on.
Of course, it is not always possible to separate factual information from more subjective,
value-laden responses. And again, the best way to tell whether the order of your questions
works or not is to try it out in a pilot interview.
In summary, then, questions are at the heart of interviewing, and to collect meaningful
data a researcher must ask good questions. In our years of experience doing and supervising qualitative research, we have found that the fewer and more open-ended your questions are, the better. Having fewer, broader questions unhooks you from the interview guide and enables you to
really listen to what your participant has to share, which in turn enables you to better
follow avenues of inquiry that will yield potentially rich contributions. Exhibit 5.1 is an
interview guide for a study of how older adults become self-directed in their health care
(Valente, 2005). These open-ended questions, followed up by the skillful use of probes,
yielded substantive information about the topic.
Exhibit 5.1. Interview Guide.
1. I understand that you are concerned about your health. Tell me about your health.
2. What motivated you to learn about your health?
3. Tell me, in detail, about the kinds of things you have done to learn more about
your health. (What did you do first?)
4. Where do you find information about your health?
5. Tell me about a time when something you learned had a positive impact on your
health care.
6. What kinds of things have you changed in your life because of your learning?
7. Whom do you talk to about your health?
8. Tell me about your current interactions with your health care provider.
9. Tell me about what you do to keep track of your health.
10. What other things do you do to manage your health?
11. What kinds of challenges (barriers) do you experience when managing your health
care?
12. What else would you like to share about your health-related learning?
Source: Valente (2005). Reprinted with permission.
Beginning the Interview
Collecting data through interviews involves, first of all, determining whom to interview.
That depends on what the investigator wants to know and from whose perspective the
information is desired. Selecting respondents on the basis of what they can contribute to
the researcher’s understanding of the phenomenon under study means engaging in
purposive or theoretical sampling (discussed in Chapter Four). In a qualitative case study
of a community school program, for example, a holistic picture of the program would
involve the experiences and perceptions of people having different associations with the
program—administrators, teachers, students, community residents. Unlike survey research,
in which the number and representativeness of the sample are major considerations, in
this type of research the crucial factor is not the number of respondents but the potential
of each person to contribute to the development of insight and understanding of the
phenomenon.
How can such people be identified? One way is through initial on-site observation of the
program, activity, or phenomenon under study. On-site observations often involve informal
discussions with participants to discover those who should be interviewed in depth. A
second means of locating contacts is to begin with a key person who is considered
knowledgeable by others and then ask that person for referrals. Initial informants can be
found through the investigator’s own personal contacts, community and private
organizations, advertisements on bulletin boards, or on the Internet. In some studies a
preliminary interview is necessary to determine whether the person meets the criteria for
participating in the study. For example, in Moon’s (2011) study of the transformational
potential of grieving in older adulthood, he first had to determine if prospective
participants could identify some change in their sense of self or view of the world as a
result of grieving the loss of a loved one.
Taylor and Bogdan (1984) list five issues that should be addressed at the outset of every
interview:
1. The investigator’s motives and intentions and the inquiry’s purpose
2. The protection of respondents through the use of pseudonyms
3. Deciding who has final say over the study’s content
4. Payment (if any)
5. Logistics with regard to time, place, and number of interviews to be scheduled (pp.
87–88)
Besides being careful to word questions in language clear to the respondent, the
interviewer must be aware of his or her stance toward the interviewee. Since the
respondent has been purposefully selected by the investigator, it can be assumed that the
participant has something to contribute, has had an experience worth talking about, and
has an opinion of interest to the researcher. This stance will go a long way in making the
respondent comfortable and forthcoming with what he or she has to offer.
An interviewer should also assume neutrality with regard to the respondent’s knowledge;
that is, regardless of how antithetical to the interviewer’s beliefs or values the respondent’s
position might be, it is crucial for the success of the interview to avoid arguing, debating,
or otherwise letting personal views be known. Patton (2015) distinguishes between
neutrality and rapport. “At the same time that I am neutral with regard to the content of
what is being said to me, I care very much that that person is willing to share with me what
they are saying. Rapport is a stance vis-à-vis the person being interviewed. Neutrality is a
stance vis-à-vis the content of what that person says” (p. 457, emphasis in original).
There are several ways of maximizing the time spent getting an informant to share
information. A slow-starting interview, for example, can be moved along by asking
respondents for basic descriptive information about themselves, the event, or the
phenomenon under study. Interviews aimed at constructing life histories can be
augmented by written narratives, personal documents, and daily activity logs that
informants are asked to submit ahead of time. The value of an interview, of course,
depends on the interviewer’s knowing enough about the topic to ask meaningful questions
in language easily understood by the informant.
Interviewer and Respondent Interaction
The interaction between interviewer and respondent can be looked at from the perspective
of either party or from the interaction itself. Skilled interviewers can do much to bring
about positive interaction. Being respectful, nonjudgmental, and nonthreatening is a
beginning. Obviously, becoming skilled takes practice; practice combined with feedback
on performance is the best way to develop the needed skills. Role playing, peer critiquing,
videotaping, and observing experienced interviewers at work are all ways novice
researchers can improve their performance in this regard.
What makes a good respondent? Anthropologists and sociologists speak of a good
respondent as an “informant”—one who understands the culture but is also able to reflect
on it and articulate for the researcher what is going on. Key informants are able, to some
extent, to adopt the stance of the investigator, thus becoming a valuable guide in
unfamiliar territory. But not all good respondents can be considered key informants in the
sense that anthropologists use the term. Good respondents are those who can express
thoughts, feelings, opinions—that is, offer a perspective—on the topic being studied.
Participants usually enjoy sharing their expertise with an interested and sympathetic
listener. For some, it is also an opportunity to clarify their own thoughts and experiences.
Dexter (1970) says there are three variables in every interview situation that determine the
nature of the interaction: “(1) the personality and skill of the interviewer, (2) the attitudes
and orientation of the interviewee, and (3) the definition of both (and often by significant
others) of the situation” (p. 24). These factors also determine the type of information
obtained from an interview. Let us suppose, for example, that two researchers are studying
an innovative curriculum for first-year college students. One interviewer is predisposed to
innovative practices in general, while the other favors traditional educational practices.
One student informant is assigned to the program, while another student requests the
curriculum and is eager to be interviewed. The particular combination of interviewer and
student that evolves will determine, to some extent, the type of data obtained.
There has been much attention in recent literature to the subjectivity and complexity
inherent in the interview encounter. Critical theory, feminist theory, critical race theory,
queer theory, and postmodernism have been brought to bear on analyzing the intricacies
of the interview encounter. Although each of these perspectives challenges us to think
about what we are doing when interviewing, what they have in common is a concern for
the participants and their voices, the power dynamics inherent in the interview, the
construction of the “story,” and forms of representation to other audiences.
Some of this discussion is framed in terms of insider-outsider status, especially with
regard to visible social identities, most notably gender, race, age, and socioeconomic
class. Seidman (2013, p. 101) discusses how “our experience with issues of class, race,
and gender…interact with the sense of power in our lives.” And, in turn, “the interviewing
relationship is fraught with issues of power—who controls the direction of the interview,
who controls the results, who benefits.” Foster (1994), for example, explores the
ambiguities and complexities of the interviewer-respondent relationship in her study of
attitudes toward law and order among two generations. She analyzes her stance with
regard to interactions with women versus men, the younger generation versus the older,
middle class versus the working class.
Does a researcher need to be a member of the group being investigated to do a credible
study? Is it preferable for women to interview women or for Hispanics to interview
Hispanics? What about the intersection of race, gender, and class? Are people more likely
to reveal information to insiders or outsiders? There are of course no single right answers
to any of these questions, only the pluses and minuses involved in any combination of
interviewer and respondent. Seidman (2013) suggests that along with being highly
sensitive to these issues and taking them into account throughout the study, “interviewing
requires interviewers to have enough distance to enable them to ask real questions and to
explore, not to share, assumptions” (p. 102).
Thus the interviewer-respondent interaction is a complex phenomenon. Both parties bring
biases, predispositions, attitudes, and physical characteristics that affect the interaction
and the data elicited. A skilled interviewer accounts for these factors in order to evaluate
the data being obtained. Taking a stance that is nonjudgmental, sensitive, and respectful
of the respondent is but a beginning point in the process.
Recording and Transcribing Interview Data
Of the three basic ways to record interview data, the most common by far is to audio
record the interview. This practice ensures that everything said is preserved for analysis.
The interviewer can also listen for ways to improve his or her questioning technique. The
potential drawbacks are malfunctioning equipment and a respondent’s uneasiness with
being recorded. Most researchers find, however, that after some initial wariness
respondents tend to forget they are being recorded, especially if one uses an unobtrusive
digital recorder. Occasionally interviews are videotaped. This practice allows for recording
of nonverbal behavior, but it is also more cumbersome and intrusive than audio recording
the interview.
A second way to record interview data is to take notes during the interview. Since not
everything said can be written down, and since at the outset of a study a researcher is not
certain what is important enough to write down, this method is recommended only when
mechanical recording is not feasible or when a participant does not want to be recorded.
Some investigators like to take written notes in addition to recording the session. The
interviewer may want to record his or her reactions to something the informant says, to
signal the informant of the importance of what is being said, or to pace the interview.
The third—and least desirable—way to record interview data is to write down as much as
can be remembered as soon after the interview as possible. The problems with this
method are obvious, but at times, writing or recording during an interview might be
intrusive (when interviewing terminally ill patients, for example). In any case, researchers
must write their reflections immediately following the interview. These reflections might
contain insights suggested by the interview; descriptive notes on the behavior, verbal and
nonverbal, of the informant; parenthetical thoughts of the researcher; and so on.
Postinterview notes allow the investigator to monitor the process of data collection as well
as begin to analyze the information itself.
Ideally, verbatim transcription of recorded interviews provides the best database for
analysis. Be forewarned, however, that this is a time-consuming prospect; you can either
transcribe the interview yourself or have someone do it for you. Hiring a transcriber can be
expensive, and there are trade-offs in doing so. You do not get the intimate familiarity with
your data that doing your own transcribing affords. Also, a transcriber is likely to be
unfamiliar with terminology and, not having conducted the interview, will not be able to fill
in places where the recording is of poor quality. If someone else has transcribed your
interview, it is a good idea to read through the transcript while listening to the recording in order to
correct errors and fill in blanks. However, hiring someone to transcribe allows you to spend
time analyzing your data instead of transcribing. We recommend that new and
inexperienced researchers transcribe at least the first few interviews of any study, if at all
possible.
There are great benefits to transcribing the interview yourself, not the least of which is
increasing your familiarity with your data. If you do it yourself, you can write yourself
analytic memos along the way. But even with good keyboarding skills, transcribing
interviews is a tedious process, though many of our students have found some voice
recognition software extremely helpful to this task, cutting down the time it takes. The one
that is mentioned most often and appears to be the most economical in terms of time and
money is Dragon NaturallySpeaking. However, it generally recognizes only the voice of the
trained speaker. One of our students described her procedure for using it for transcription in much the same way as the company does: using what they call the “parroting” function, which means respeaking the interview. She found this extremely helpful and relatively quick; further, using it provided her the opportunity to become very familiar with her data in the process. The procedure she used is described on the Nuance website (www.nuance.com/dragon/transcription-solutions/index.htm):
Listen to the recording through the headphones of your Dragon headset and activate
your Dragon microphone and repeat the recorded text as you hear it.
Speaking the text aloud in your own voice enables Dragon to accurately transcribe
the audio using the Dragon profile tuned to your voice.
Dragon turns your voice into text as quickly as you can speak the words—so there’s no
need to constantly rewind the audio while you try to type out the corresponding text.
This of course is not the only transcription software available, but it is one that she found
very helpful and strongly recommends to others. These technol
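Beyond commercial dictation tools, a researcher with basic scripting skills could also generate an automated first-pass transcript. The following is purely an illustrative sketch, not a procedure described in this chapter: it assumes the open-source openai-whisper Python package and a hypothetical audio file name, and the draft it produces would still need to be reviewed against the recording, just as a hired transcriber's work would.

    # Illustrative sketch only: automated first-pass transcription of a recorded interview.
    # Assumes the open-source openai-whisper package is installed (pip install openai-whisper,
    # plus ffmpeg); "interview_01.m4a" is a hypothetical file name.
    import whisper

    model = whisper.load_model("base")               # small, general-purpose model
    result = model.transcribe("interview_01.m4a")    # returns full text plus timestamped segments

    # Save a draft transcript for the researcher to check against the recording.
    with open("interview_01_draft.txt", "w", encoding="utf-8") as f:
        f.write(result["text"])

Like any transcript produced by someone (or something) other than the interviewer, such an automated draft will miss pauses, overlapping talk, and nonverbal cues, so listening to the recording while correcting the text remains essential.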
Topic 4 DQ 1
Imagine you are serving on the board of a for-profit educational services company. Staff communicate to the board their concerns about the transition from foster care to independence for young adults who have reached the age of 18. These individuals are no longer eligible to be in the foster care system. Of particular concern is their self-esteem through this transition. There is extensive quantitative research in the scholarly literature regarding the function of self-esteem in such a transition, but a dearth of qualitative research on the topic. You want to assist staff in providing adequate support for this client population by commissioning an internal qualitative study to better understand the phenomenon and improve their transitions. Develop a problem statement for this query using a case study design. What would be the purpose of the study? What research questions would you ask? Justify each response in reference to the nature of case study design.
Topic 4 DQ 2
Having identified and developed a case study problem statement, purpose statement, and research questions regarding the transition to independence of emancipated foster care youth and their self-esteem, what do you perceive are the epistemological and methodological strengths of this design for exploring the phenomenon of the study and answering the research questions? What are the epistemological and methodological limitations of the design for the same purpose?