Prior to beginning work on this discussion, review Standard 9: Assessment in the APA's Ethical Principles of Psychologists and Code of Conduct and the DSM-5. It is recommended that you read Chapters 1 and 2, "The Assessment Process," in the APA Handbook of Testing and Assessment in Psychology, Vol. 2: Testing and Assessment in Clinical and Counseling Psychology (2013) e-book, as well as the Kielbasa, Pomerantz, Krohn, and Sullivan (2004) article "How Does Clients' Method of Payment Influence Psychologists' Diagnostic Decisions?" and the Pomerantz and Segrist (2006) article "The Influence of Payment Method on Psychologists' Diagnostic Decisions Regarding Minimally Impaired Clients" for further information about how payment method influences the assessment and diagnosis process.
For this discussion, you will assume the role of a clinical or counseling psychologist and diagnose a hypothetical client. Begin by reviewing the PSY650 Week Two Case Studies document and select one of the clients to diagnose.
In your initial post, compare the assessments typically used by clinical and counseling psychologists, and explain which assessment techniques (e.g., tests, surveys, interviews, client records, observational data) you might use to aid in your diagnosis of your selected client. Describe any additional information you would need to help formulate your diagnosis, and propose specific questions you might ask the client in order to obtain this information from him or her. Identify which theoretical orientation you would use with this client and explain how this orientation might influence the assessment and/or diagnostic process. Using the DSM-5 manual, propose a diagnosis for the client in the chosen case study.
Analyze the case and your agency’s required timeline for diagnosing from an ethical perspective. Considering the amount of information you currently have for your client, explain whether or not it is ethical to render a diagnosis within the required timeframe. Evaluate the case and describe whether or not it is justifiable in this situation to render a diagnosis in order to obtain a third party payment.
PSY650 Week Two Case Studies
You are a psychologist working for an agency whose policy states that an assessment and
diagnosis must be rendered within 48 hours of an initial session with a client. Please review and
choose one of the following cases to diagnose.
The Case of Amanda
Amanda is a 16-year-old Hispanic female who was referred to treatment due to body image issues.
Her parents believe that she has an eating disorder because she restricts her food intake and
exercises excessively. Amanda denies any compensatory behaviors, but reports the following
symptoms: anxiety, trouble sleeping through the night, and not feeling like a worthwhile person.
She has reservations about seeking treatment because confidentiality is not guaranteed. She has
agreed to attend the first session and opted to use insurance to pay for it. Her insurance company
will allot her 8 sessions upon receipt of her diagnosis. What diagnosis would you give Amanda?
The Case of Charles
Charles is a 33-year-old African American male seeking treatment due to suicidal ideation. He is
currently going through divorce proceedings and reports feeling agitated, angry, sad, and stressed
most days. He is concerned that his relationship issues have begun to impact his responsibilities at
work and fears losing his job. Charles is open to seeking treatment, but his insurance provider is
out-of-network. His insurance company is willing to reimburse him for up to 8 sessions if an
acceptable diagnosis is submitted. What diagnosis would you give Charles?
DOI: 10.1037/14048-001
APA Handbook of Testing and Assessment in Psychology: Vol. 2. Testing and Assessment in Clinical and Counseling Psychology,
K. F. Geisinger (Editor-in-Chief)
Copyright © 2013 by the American Psychological Association. All rights reserved.
CHAPTER 1

CLINICAL AND COUNSELING TESTING
Janet F. Carlson
Many clinical and counseling psychologists depend
on tests to help them understand as fully as possible
the clients with whom they work (Camara, Nathan,
& Puente, 2000; Hood & Johnson, 2007; Masling,
1992; Naugle, 2009). A broad and comprehensive
understanding of an individual supports decisions to
be made by or regarding a client. Tests provide a
means of sampling behavior, with results used to
promote better decision making. Decisions may
include such matters as (a) what diagnosis or diag-
noses may be applicable, (b) what treatments are
most likely to produce behavioral or emotional
changes in desired directions, (c) what colleges
should be considered, (d) what career options might
be most satisfying, (e) whether an individual quali-
fies for a gifted educational program, (f) the extent
to which an individual is at risk for given outcomes,
(g) the extent to which an individual poses a risk of
harm to others or to himself or herself, (h) the
extent to which an individual has experienced dete-
rioration in his or her ability to manage important
aspects of living, and (i) whether an individual is
suitable for particular types of roles or occupations
such as those that involve high risk or extreme
stress or where human error could have catastrophic
effects. The foregoing list is certainly not exhaustive.
The term assessment as used in clinical and
counseling settings is a broader term than testing
because it refers to the more encompassing integra-
tion of information collected from numerous
sources. Tests comprise sources of information that
often contribute to assessment efforts. Discussion
within this chapter focuses on procedures used in
clinical and counseling assessment, all of which
provide samples of behavior and, thus, qualify as
tests. The narrative begins with a consideration of
how clinical assessment may be framed and then
addresses briefly ethics and other guidelines perti-
nent to assessment practices. Next, specific assess-
ment techniques used in clinical and counseling
contexts are reviewed, followed by a discussion of
concerns related to interpretation and integration
of assessment results. The chapter concludes with a
section devoted to the importance of providing
assessment feedback.
TRADITIONAL AND THERAPEUTIC
ASSESSMENT
A diverse collection of procedures may be viewed as
falling within the purview of clinical and counseling
assessment (Naugle, 2009). The disparate array of
procedures makes it somewhat difficult to appreciate
commonalities among them, particularly for individu-
als who are relatively new to the field of assessment.
Although clinical and counseling assessment proce-
dures take many forms, nearly all are applied in a
manner that facilitates an intense focus on concerns
of a single individual or small unit of individuals,
such as a couple or family (Anastasi & Urbina, 1997).
The clinician who works one-on-one with a client
during a formal assessment effectively serves as data
collector, information processor, and clinical judge
(Graham, 2006). Procedures that may be adminis-
tered to groups of people often serve as screening
measures that identify respondents who may be at
Copyright American Psychological Association. Not for further distribution.
risk and, therefore, need closer clinical attention (i.e.,
further testing conducted individually).
The immediate goals of clinical and counseling
assessment frequently address mental illness and
mental health concerns. Testing can help practitio-
ners to better address an individual’s mental illness
or mental health needs by identifying those needs,
improving treatment effectiveness, and tracking the
process or progress of interventions (Carlson &
Geisinger, 2009; Kubiszyn et al., 2000). Tests that
assist clinicians’ diagnostic efforts also may be
important in predicting therapeutic outcome (i.e.,
prognosis) and establishing expectations for
improvement. On a practical level, testing can be
used to satisfy insurance or managed care require-
ments for evidence that supports diagnostic determi-
nations or progress monitoring.
Within this basic framework, practitioners view
the assessment process and their role within it dif-
ferently. Indeed, some clinicians regard their role as
similar to that of a technician or skilled tradesper-
son. From this traditional vantage point, skillful
assessment begins to develop during graduate train-
ing, as trainees become familiar with the tools of the
trade—tests, primarily. They learn about a variety
of tests and how to use them. As trainees become
practitioners, they accumulate experience with spe-
cific tests and find certain tests more helpful to their
work with clients than other tests. It is not surpris-
ing that clinicians rely on tests that have proven
most useful to them in their clinical work (Masling,
1992), despite test selection guidelines and stan-
dards that emphasize the importance of matching
tests to the needs of the specific client or client’s
agent (American Educational Research Association
[AERA], American Psychological Association
[APA], & National Council on Measurement in
Education [NCME], 1999; Eyde, Robertson, &
Krug, 2010). As Cates (1999) observed, “the temp-
tation to remain with the familiar [test battery] is an
easy one to rationalize, but may serve the client
poorly” (p. 637). It is important to note that the
clinical milieu is fraught with immediate practical
demands to provide client-specific information that
is accurate, is useful, and addresses matters such as
current conflicts, coping strategies, strengths and
weaknesses, degree of distress, risk for self-harm,
and so forth. The dearth of well-developed tests to
assess certain clinical features does not alleviate or
delay the need for this information in clinical prac-
tice. Thus, practitioners may find it necessary to do
the best they can with the tools at hand.
Therapeutic assessment represents an alternative
to traditional conceptualizations of the assessment
process (Finn & Martin, 1997; Finn & Tonsager,
1997; Kubiszyn et al., 2000). In this contemporary
framework, test givers and test takers collaborate
throughout the assessment process and work as
partners in the discovery process. Test takers have a
vested interest in the initiation and implementation
of assessment as well as in evaluating and interpret-
ing results of the procedures used. Advocates of
therapeutic assessment value and seek input from
test takers throughout the assessment process and
regard their perspectives as valid and informed.
Rather than dismissing client input as fraught with
self-serving motives and inaccuracies, practitioners
who embrace the therapeutic assessment model
engage clients as equal partners. This stance,
together with the participatory role of the test giver,
led Finn and Tonsager (1997) to characterize the
process as an empathic collaboration in which tests
offer opportunities for dialogue as well as interper-
sonal and subjective exchanges. A more thorough
discussion of therapeutic assessment and its applica-
tion is given in Chapter 26, this volume.
TEST USAGE
A survey of clinical psychology and neuropsychology
practitioners (Camara et al., 2000) indicated that clin-
ical psychologists most frequently used tests for per-
sonality or diagnostic assessment. The findings were
consistent with those from an earlier study (O’Roark
& Exner, 1989, as cited by Camara et al., 2000), in
which 53% of psychologists also reported that they
used testing to help determine the most effective ther-
apeutic approach. Testing constitutes an integral
component of many practitioners’ assessment efforts
as practitioners report using formal measures with
regularity. Ball, Archer, and Imhof (1994) reported
results from a national survey of a sample of 151 clini-
cal psychologists who indicated they provided psy-
chological testing services. The seven most used tests
reported by respondents were used by more than half
of the practitioners who responded to the survey. In
order, these tests included the Wechsler IQ scales,
Rorschach, Thematic Apperception Test (TAT), Min-
nesota Multiphasic Personality Inventory (MMPI),
Wide-Range Achievement Test, Bender Visual Motor
Gestalt Test, and Sentence Completion. Camara et
al.’s (2000) sample comprising 179 clinical psycholo-
gists reported remarkably similar frequencies of use,
with the Wechsler IQ scales, MMPI, Rorschach,
Bender Visual Motor Gestalt Test, TAT, and Wide-
Range Achievement Test heading up the list. The pre-
ceding reports notwithstanding, considerable
evidence suggests that test usage is in decline (Ben-
Porath, 1997; Camara et al., 2000; Garb, 2003; Eis-
man et al., 2000; Meyer et al., 2001), whereas other
researchers have noted a corresponding decline in
graduate instruction and training in testing and
assessment (Aiken, West, Sechrest, & Reno, 1990;
Fong, 1995; Hayes, Nelson, & Jarrett, 1987).
The now ubiquitous presence of managed care in
all aspects of health care, including mental health
care, clearly influences practitioners’ use of tests
(Carlson & Geisinger, 2009; Yates & Taub, 2003).
As is true for health care providers generally, mental
health care providers can expect reimbursement for
services they provide only if those services can be
shown to be cost effective and essential for effective
treatment. In a managed care environment, practi-
tioners no longer have the luxury of making unilat-
eral decisions about patient care, including test
administration. Clinical assessments that pinpoint a
diagnosis and provide direction for effective treat-
ment are reimbursable, within limits, and typically
are considered by third-party payers as therapeutic
interventions (Griffith, 1997; Kubiszyn et al., 2000;
Yates & Taub, 2003). Moreover, a number of studies
have demonstrated that clinical tests have therapeu-
tic value in and of themselves (Ben-Porath, 1997;
Finn & Tonsager, 1997) and encourage their use as
interventions.
STANDARDS, ETHICS, AND RESPONSIBLE
TEST USE
Counseling and clinical psychologists who conduct
assessments must maintain high standards and abide
by recommendations for best practice. In short, their
assessment practices must be beyond reproach.
Considering the important and varied uses to which
assessment results may be applied, it is not surpris-
ing that an array of rules, guidelines, and recom-
mendations govern testing and assessment practices.
For many years, the Standards for Educational and
Psychological Testing (AERA, APA, & NCME, 1999)
have served several professions well as far as delin-
eating the standards for test users as well as for test
developers, and clinical and counseling psycholo-
gists must adhere to ethical principles and codes of
conduct that influence testing practices.
The APA’s Ethical Principles of Psychologists and
Code of Conduct (APA Ethical Principles; APA, 2010)
addresses assessment specifically in Standard 9,
although passages relevant to assessment occur in
several other standards, too. The 11 subsections of
Standard 9 address issues such as use of tests, test
construction, release of test data, informed consent,
test security, test interpretation, use of automated
services for scoring and interpretation, and commu-
nication of assessment results. In essence, the stan-
dards demand rigorous attention to the relationship
between the clinician (as test giver) and the client
(as test taker) from inception to completion of the
assessment process. Ultimately, practitioners must
select and use tests that are psychometrically sound,
appropriate for use with the identified client, and
responsive to the referral question(s). Furthermore,
clinicians retain responsibility for all aspects of the
assessment including scoring, interpretation and
explanation of results, and test security, regardless
of whether they choose to use other agents or ser-
vices to carry out some of these tasks.
The Standards for Educational and Psychological
Testing (AERA, APA, & NCME, 1999) and Standard
9 of the APA Ethical Principles (APA, 2010) provide
sound guidance for counseling and clinical psycholo-
gists who provide assessment-related services. A
number of other organizations concerned with good
testing practices have official policy statements that
offer additional assistance to practitioners seeking
further explication of testing-related guiding princi-
ples or whose services may extend to areas beyond
traditional parameters. The policy statements
most likely to interest counseling and clinical
psychologists include the ACA Code of Ethics (Ameri-
can Counseling Association, 2005), Specialty Guide-
lines for Forensic Psychology (Committee on the
Revision of Specialty Guidelines for Forensic Psychol-
ogy, 2011), Principles for Professional Ethics (National
Association of School Psychologists, 2010), and the
International Guidelines for Test Use (International
Test Commission, 2001). In addition to the forego-
ing, many books about ethics in the professional
practice of psychology include substantial coverage of
ethical considerations in assessment (e.g., Cottone &
Tarvydas, 2007; Ford, 2006). A particularly accessible
volume by Eyde et al. (2010) provides expert analysis
of case studies concerning test use in various settings,
including mental health settings, and illustrating real-
life testing challenges and conundrums.
ASSESSMENT METHODS
As in all assessment endeavors, tasks associated with
assessment in clinical and counseling psychology
involve information gathering. Clinical and counsel-
ing assessments typically comprise evaluations of
individuals with the goal of assisting an individual
client in some manner. To determine the best way to
help an individual, clinicians rely on comprehensive
assessments that evaluate several aspects of an indi-
vidual’s functioning. Thus, most such assessments
involve collecting information using a variety of
assessment techniques (e.g., interviews, behavioral
observations). Moreover, the use of multiple proce-
dures (e.g., tests) facilitates the overarching goal of
clinical and counseling assessment and also reso-
nates with the important principle of good testing
practice. Specifically, Standard 11.20 of the Stan-
dards for Educational and Psychological Testing
(AERA, APA, & NCME, 1999) states that, in clinical
and counseling settings, “a test taker’s score should
not be interpreted in isolation; collateral informa-
tion that may lead to alternative explanations for the
examinee’s test performance should be considered”
(p. 117). It follows that inferences drawn from a sin-
gle measure must be validated against evidence
derived from other sources, including other tests
and procedures used in the assessment.
Counseling and clinical assessment methods vary
widely in their forms. The means of identifying what
information is needed and gathering relevant evi-
dence may include direct communications with
examinees, observations of examinees’ behavior,
input from other interested parties (e.g., family
members, peers, coworkers, teachers), reviews of
records (e.g., psychiatric, educational, legal), and
use of formal measures (i.e., tests). Interviews,
behavioral observations, and formal testing proce-
dures represent the primary ways of obtaining clini-
cally relevant information.
Interviewing
Intake or clinical interviews often represent a first
point of contact between a client and a clinician in
which information that contributes to clinical
assessment surfaces. Many important concerns must
be handled effectively within what is probably no
more than a 50-minute session. Beyond practical
(e.g., scheduling, billing, emergency contact infor-
mation) and ethical (e.g., informed consent, confi-
dentiality and its limits) matters, the practitioner
must accurately grasp and convey his or her under-
standing of the issues to the client. If this under-
standing captures the client’s concerns, then it likely
helps the client to believe that his or her problems
can be understood and treated by the clinician. If
the practitioner’s understanding of the client’s issues
is not accurate, then the client has the opportunity
to provide additional information that represents his
or her concerns more accurately. At the same time
and somewhat in the background, the clinician
exudes competence and concern in a manner that
inspires hope and commitment, while, in the fore-
ground, he or she establishes a fairly rapid yet accu-
rate appraisal of the client’s issues and concerns.
Effective treatment depends on the establishment of
rapport sufficient to suggest that a productive work-
ing relationship is possible along with an appraisal
that accurately reflects the severity of the concerns
expressed and disruptions in the client’s ability to
function on a day-to-day basis as well as attendant
risks. For a more complete discussion, readers can
consult Chapter 7, this volume, concerning clinical
interviewing.
Many intake procedures involve clinical inter-
viewing that is somewhat formalized by the use of a
structured format or questionnaire. The quality of
intake forms varies widely, partly as a function of
how they were developed. For example, clinicians
may complete an intake form developed or adopted
by the facility in which they work. Such forms
generally include questions about the client’s cur-
rent concerns (e.g., “presenting problem” or “chief
complaint”) as well as historical information that
may bear on the client’s status (e.g., history of previ-
ous treatment, family history, developmental his-
tory). Depending on the quality of the intake form,
practitioners may find it necessary to supplement
the information collected routinely through comple-
tion of the form. In the appendices of her book, The
Beginning Psychotherapist’s Companion, Willer
(2009) offers several lists of intake questions that
may be used to probe specific areas of concern that
may surface during the collection of intake informa-
tion (e.g., depression and suicide, mania, substance
use). Advisable in all clinical settings and essential
in clinical settings that provide acute and crisis ser-
vices, intake procedures must address the extent to
which the client poses a danger to others or to him-
self or herself.
Intake interviews may be considered semistruc-
tured if they address specific content uniformly from
one client to the next but are not tightly “scripted”
as are structured interviews. According to Garb’s
(2005) review, semistructured interviews are more
reliable than unstructured clinical interviews, most
likely because of the similarity of content (if not
actual test items) across interviewers. An example of
a semistructured technique is the mental status
examination (MSE), which refers to a standardized
method of conducting a fairly comprehensive inter-
view. The areas of mental status comprising an MSE
are summarized in Table 1.1. Many MSE elements
may be evaluated through unobtrusive observations
made during the meeting or through verbal
exchanges that occur naturally in ordinary
conversation.
The semistructured nature of the MSE ensures
coverage of certain vital elements of mental status
but is flexible enough to allow clinicians to ask follow-up questions when they believe it is necessary or helpful to do so. The MSE is used by a wide
variety of mental health providers (counseling and
clinical psychologists as well as social workers,
psychiatrists, and others) and typically is completed
at intake or during the course of treatment to assess
progress. There are several versions of the MSE,
including standardized and nonstandardized forms
(Willer, 2009). An example of a structured diagnos-
tic interview is the Structured Clinical Interview for
the DSM–IV–TR (SCID; First, Spitzer, Gibbon, &
Williams, 2002), where DSM–IV–TR refers to the
Diagnostic and Statistical Manual of Mental Disorders
(4th ed., text revision; American Psychiatric Associ-
ation, 2000). Completion of the SCID allows practi-
tioners to arrive at an appropriate psychiatric
diagnosis.
Regardless of whether an initial clinical contact
calls for formal assessment, a crucial area to evaluate
during one’s initial interactions with clients is the
presence of symptoms that indicate risk of harm to
self or others. “Assessing risk of suicide is one of the
most important yet terrifying tasks that a beginning
clinician can do” (Willer, 2009, p. 245) and consti-
tutes the ultimate high-stakes assessment. It is also
frequently encountered in clinical practice (Stolberg
& Bongar, 2002). Multiple factors contribute to
overall risk status either by elevating or diminishing
risk. Bauman (2008) describes four areas to examine
when evaluating risk of suicide: (a) short-term risk
factors, including stressors arising from environ-
mental sources and mental health conditions;
(b) long-term precipitating risk factors, including
genetic traits or predispositions and personality
traits; (c) precipitating events, such as legal matters,
significant personal or financial losses, unwanted
pregnancy, and so forth; and (d) protective factors
or buffers, such as hope, social support, and access
to mental health services. An individual’s overall
risk of suicide represents a combination of risks
emanating from the first three elements, which ele-
vate overall risk, adjusted by the buffering effect of
the last element, which reduces overall risk.
In practice, assessment of suicide risk relies heav-
ily on clinical interviewing (Stolberg & Bongar,
2002). Specific tests designed to assess suicide risk,
such as the Beck Hopelessness Scale (Beck, 1988)
and the Suicide Intent Scale (Beck, Schuyler, & Her-
man, 1974), appear to be used infrequently by prac-
titioners (Jobes, Eyman, & Yufit, 1990; Stolberg &
Bongar, 2002). Assessment of risk must consider
several features of risk beyond its mere presence
including immediacy, lethality, and intent. Immedi-
acy represents a temporal consideration with higher
levels of immediacy associated with imminent risk—
a state of acute concern for the individual’s life.
Assessment of imminent risk involves consideration
of several empirically derived risk factors including
(a) history of prior attempts (with recent attempts
given greater weight than attempts that occurred
longer ago); (b) family history of suicide or attempt;
and (c) presence of mental or behavior disorders
such as substance abuse, depression, and conduct
disorder. Imminent risk is accelerated by an inability
to curb impulses and a need to “blow off steam,”
which constitute poor prognostic signs. Lethality
refers to the possibility of death occurring as a result
of a particular act. In assessing risk of suicide, the
act in question is one that is planned or contem-
plated by the client. Use of firearms connotes higher
lethality than overdosing on nonprescription drugs
(e.g., aspirin). Lethality differs from intent, which
refers to what the person seeks to accomplish with a
particular act of self-harm. Serious suicidal intent is
not necessarily associated with acts of high lethality.
TABLE 1.1
Major Areas Assessed During a Mental Status Examination

Appearance: The examiner observes and notes the person's age, race, gender, and overall appearance.
Movement: The examiner observes and notes the person's gait (manner of walking), posture, psychomotor excess or retardation, coordination, agitation, eye contact, facial expressions, and similar behaviors.
Attitude: The examiner notes the client's overall demeanor, especially concerning cooperativeness, evasiveness, hostility, and state of consciousness (e.g., lethargic, alert).
Affect: The examiner observes and describes affect (outwardly observable emotional reactions), as well as appropriateness and range of affect.
Mood: The examiner observes and describes mood (underlying emotional climate or overall tone of the client's responses).
Speech: The examiner evaluates the volume and rate of speech production, including length of answers to questions, the appropriateness and clarity of the answers, spontaneity, evidence of pressured speech, and similar characteristics.
Thought content: The examiner assesses what the client says, listening for indications of misperceptions, hallucinations, delusions, obsessions, phobias, rituals, symptoms of dissociation (feelings of unreality, depersonalization), or thoughts of suicide.
Thought process: The examiner assesses thought processes (logical connections between thoughts and how thoughts connect to the main thread or gist of conversation), noting especially irrelevant detail, verbal perseveration, circumstantial thinking, flight of ideas, interrupted thinking, and loose or illogical connections between thoughts that may indicate a thought disorder.
Cognition: The examiner assesses the person's orientation (ability to locate himself or herself) with regard to person, place, and time; long- and short-term memory; ability to perform simple arithmetic (e.g., serial sevens); general intellectual level or fund of knowledge (e.g., identifying the last several U.S. presidents, or similar questions); ability to think abstractly (explaining a proverb); ability to name specific objects and read or write complete sentences; ability to understand and perform a task with multiple steps (e.g., showing the examiner how to brush one's teeth, throw a ball, or follow simple directions); ability to draw a simple map or copy a design or geometrical figure; ability to distinguish between right and left.
Judgment: The examiner asks the person what he or she would do about a commonsense problem, such as running out of shampoo.
Insight: The examiner evaluates the degree of insight (ability to recognize a problem and understand its nature and severity) demonstrated by the client.
Intellectual: The examiner assesses fund of knowledge, calculation skills (e.g., through simple math problems), and abstract thinking (e.g., through proverbs or verbal similarities).

Behavioral Observations

One of the earliest means by which assessment information begins to accumulate is the test taker's behaviors. Surprisingly little information about
behavioral observations appears in the empirical or
practice-based literature, despite its traditional
inclusion as a section in assessment reports (Leicht-
man, 2002; Tallent, 1988). Although difficult to
standardize and quantify, many psychologists con-
sider the observations and interpretations of an
examinee’s behavior during testing vital to under-
standing the client (Oakland, Glutting, & Watkins,
2005). Only a few standardized assessments of test
behavior have been developed, sometimes associated
with a specific test. For example, Glutting and Oak-
land (1993) developed the Guide to the Assessment
of Test Session Behavior and normed it on the stan-
dardization samples of the Wechsler Intelligence
Scale for Children (3rd ed.; Wechsler, 1993) and the
Wechsler Individual Achievement Test (Psychologi-
cal Corporation, 1992). To date, standardized mea-
sures of test session behavior have not been widely
adopted.
Counseling and clinical psychologists typically
have sufficient and specialized training to allow
them to observe and record an examinee’s verbal
and nonverbal behaviors. Notations usually are
made for several behavioral dimensions including
physical appearance, attitude toward testing, con-
tent of speech, quality and amount of motor activity,
eye contact, spontaneity, voice quality, effort (gener-
ally and in the face of challenge), fatigue, coopera-
tion, attention to tasks, willingness to offer guesses
(if applicable), and attitude toward success and fail-
ure (if applicable). Leichtman (2002) cautioned
against either (a) including observations of every-
thing a test taker thinks, feels, says, and does; or
(b) reducing behavioral descriptions to such an
extent that the resulting narrative fails to provide
any real sense of what the test taker is like.
Behavior during clinical and counseling testing is
unavoidably influenced by interactions between the
test taker and the test giver. As Masling (1992)
observed, “the psychologist is simultaneously a par-
ticipant in the assessment process and an observer
of it” (p. 54). A common expectation and responsi-
bility of psychologists who administer such tests is
to establish rapport with the test taker before imple-
menting test procedures. Rapport is vital to ensure a
test taker’s cooperation and best effort, attitudes that
contribute to test results that provide an accurate
portrayal of the test taker’s characteristics. However,
rapport differs from one dyad to another, as stylistic
and personality factors vary across both examiners
and examinees and affect the quality of their interac-
tions. Although adherence to standardized adminis-
tration procedures during testing is vital to preserve
the integrity of the assessment process and test score
interpretability (e.g., AERA, APA, & NCME, 1999;
Geisinger & Carlson, 2009), practitioners are not
automatons who simply set specific tasks before
examinees while reciting specific instructions.
Actions taken by examiners during individual test
administration must be responsive to test-taker
behaviors and the examiner’s interpretation of those
behaviors. Some of these actions are scripted in the
test administration procedures, whereas others are
subtle, nonverbal—possibly unconscious—ones that
serve to allay anxiety or encourage elaboration of a
response. Other actions follow logically from an
examinee’s behavior, such as when the examiner
offers a short break after noting the examinee’s
failed attempt to stifle several yawns. In this vein,
Leichtman (2002) suggested that test administration
procedures and instructions are “like a play. Exam-
iners are bound by the script, but there is wide lati-
tude for how they and their clients interpret their
roles” (p. 209). The traditions of testing encourage
the notion that an examiner, “like the physical sci-
entist or engineer, is ‘measuring an object’ with a
technical tool. But the ‘object’ before him [sic] is a
person, and the testing involves a complex psycho-
logical relationship” (Cronbach, 1960, p. 602).
Formal Testing
Tests are used by counseling and clinical psycholo-
gists at various points in therapeutic contexts. Some
tests may be administered during an intake session,
before the establishment of a therapeutic relation-
ship, to check for a broad range of possible issues
that may need clinical attention. These screening
measures represent a “first pass” over the variety of
issues that may concern a person who seeks mental
health assistance. They are meant to provide a gross
indication of level of symptom severity in select
areas and, often, to indicate where to focus subse-
quent assessment efforts (Kessler et al., 2003).
Janet F. Carlson
Screening measures typically are quite brief and are
seldom, if ever, validated for use as diagnostic
instruments. Rather, these measures provide a
glimpse into the nature and intensity of a client’s
concerns. As such, they may reveal problems that
need immediate attention as well as areas needing
further assessment. An example of a screening mea-
sure designed for use in college counseling centers is
the Inventory of Common Problems (ICP; Hoffman
& Weiss, 1986), a 24-item inventory of specific
problems college students may encounter. Respon-
dents use a 5-point Likert-type scale to indicate the
extent to which they have been bothered or worried
by the stated problem over the past few weeks. Areas
assessed include depression, anxiety, academic
problems, interpersonal problems, physical health
problems, and substance use problems. High scores
suggest topics that may be explored further in
counseling.
The Symptom Check List-90-R (SCL-90-R; Dero-
gatis, 1994) is a clinical screening inventory with
broader applicability than the ICP. The inventory
consists of 90 items, each of which presents a symp-
tom of some sort to which respondents indicate the
extent to which they were distressed by that symp-
tom over the past week, using a 5-point scale. The
SCL-90-R yields scores on nine scales (Somatization,
Obsessive-Compulsive, Interpersonal Sensitivity,
Depression, Anxiety, Hostility, Phobic Anxiety, Para-
noid Ideation, and Psychoticism) and total scores on
three scales (Global Severity Index, Positive Symp-
tom Total, and Positive Symptom Distress Index).
Norms are differentiated by age (adolescent and
adult) for nonpatients and by psychiatric patient sta-
tus (nonpatient, inpatient, and outpatient) for adults,
with each norm keyed by gender. Some brief clinical
measures may be used to screen for problems in a
single area of potential concern. For example, the
Beck Depression Inventory—II (Beck, Steer, &
Brown, 1996) and the State–Trait Anxiety Inventory
(Spielberger, Gorsuch, Lushene, Vagg, & Jacobs,
1983) screen for elevated levels of symptom severity
in depression and anxiety, respectively. Overall,
these and other screening measures are most useful
for detecting cases in need of further examination.
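Screening inventories like these are scored by straightforward arithmetic over item responses. The sketch below is illustrative only, following commonly described conventions for the SCL-90-R global indices (Global Severity Index as the mean of all item responses, Positive Symptom Total as the count of endorsed items, Positive Symptom Distress Index as the mean distress level of endorsed items); the published manual remains the authoritative source for actual scoring rules.

```python
# Illustrative sketch of SCL-90-R-style global indices; the actual
# scoring rules are defined in the published manual (Derogatis, 1994).

def global_indices(responses):
    """responses: 90 integers, each 0-4 (0 = not at all, 4 = extremely)."""
    assert len(responses) == 90 and all(0 <= r <= 4 for r in responses)
    gsi = sum(responses) / len(responses)          # Global Severity Index
    endorsed = [r for r in responses if r > 0]
    pst = len(endorsed)                            # Positive Symptom Total
    psdi = sum(endorsed) / pst if pst else 0.0     # Positive Symptom Distress Index
    return gsi, pst, psdi

# A respondent endorsing 10 symptoms at level 2 and the remaining 80 at 0:
gsi, pst, psdi = global_indices([2] * 10 + [0] * 80)
# gsi ≈ 0.22, pst = 10, psdi = 2.0
```

Consistent with the screening role described above, elevated global indices would simply mark a case for further examination rather than support any diagnosis.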
The assessment procedures described thus far are
used routinely at or near the outset of a therapeutic
relationship to help specify or clarify the clinical sit-
uation that prompted the client to seek treatment.
More extensive, formal testing may prove beneficial
at an early stage of intervention or anytime during
therapy to specify, clarify, or differentiate diagnoses;
to monitor treatment progress; or to predict psycho-
therapy or mental health outcomes (Kubiszyn et al.,
2000; see also Chapter 13, this volume, concerning
psychological assessment in treatment). Counseling
and clinical testing can be used to illuminate a vari-
ety of dimensions that may help clinicians to deliver
effective treatment for a particular client, including
measures of cognitive ability, values, interests, aca-
demic achievement, psychopathology, personality,
and attitudes. The sheer number of tests available in
each of these areas makes it impractical to review
(or even mention) every test that may have clinical
salience, particularly in light of the coverage
afforded these measures in other chapters of this
handbook. Thus, in the section that follows, tests
are described according to several different ways of
grouping them, with implications for clinical and
counseling tests highlighted.
DIMENSIONS OF CLINICAL AND
COUNSELING TESTING
Various characteristics of tests may be used to dis-
tinguish among them. Such distinctions go beyond
merely grouping or categorizing tests. For example,
tests differ in administration format, nature of the
respondent’s tasks, and whether the stakes associ-
ated with the use of test scores are high or low.
These dimensions influence the testing process in
counseling and clinical contexts, by affecting expec-
tations and behaviors of test givers and test takers as
well as how the tests may be used and the confi-
dence testing professionals may have in the results.
Test administration format is one way to distin-
guish among tests. Some tests require one-to-one or
individual administration, whereas other tests are
designed for group administration. Generally speak-
ing, it is possible to administer group tests using an
individual format, although the examiner’s role in
these situations is often reduced as he or she serves
primarily as a monitor of the session. As suggested
near the beginning of this chapter, clinical measures
focus intensely on individual concerns. It follows
that many—although by no means all—clinical
measures were developed for individual administra-
tion. Individually administered tests are highly
dependent on the clinical skills of the examiner. As
Meyer et al. (2001) observed, “a psychological test is
a dumb tool, and the worth of the tool cannot be
separated from the sophistication of the clinician
who draws inferences from it and then communi-
cates with patients and other professionals” (p.
153). Among other things, the responsibility to
establish and maintain rapport rests with the clinician, and there is no magic formula for achieving
rapport and no established criteria for judging that a reasonable level has been reached. That
determination depends on clinical judgment.
At the outset of a testing session, examiners need
to ensure that a sufficient level of comfort and com-
munication exists with the test taker to foster his or
her best and sustained effort. Examiners need to
exude a businesslike manner yet remain responsive
to queries from the test taker and aware of fluctua-
tion in the test taker’s energy, focus, and attitude.
They need to help test takers understand that testing
is important but must avoid overstating this point,
lest the test taker become overly anxious about per-
forming well on the test tasks. Test takers differ in
terms of their readiness to engage in the assessment
process and to give it their best effort: Some are
eager to begin, some are anxious, some are irritated,
some are suspicious or confused, and so forth. The
clinician must keep a finger on the pulse of the test-
ing session and take action as needed to restore rap-
port and keep motivation high and performance
optimal.
Standardized individual administration of tests is vital for the vast majority of tests to ensure that
testing conditions are the same for all test takers, so that results from different test takers may be
meaningfully compared (Geisinger & Carlson,
2009). However, given the interpersonal context
within which clinical and counseling measures are
administered, this procedural sameness is difficult to
ensure for all aspects of testing. For example, most
projective (performance-based) measures are
untimed. How long examiners wait before moving
on to the next stimulus is a matter of judgment and,
likely, varies a great deal from one examiner to the
next. Some standardized measures include “scripts”
for the examiner, in an effort to make administra-
tion more uniform across examiners. Despite appearances, the scripts nevertheless leave room for
interpretation (Leichtman, 2002). How scrupulously examiners follow standardized procedures
for administration is an open question (Geisinger &
Carlson, 2009; Masling, 1992), as studies of even
highly scripted individually administered tests
reveal many departures (e.g., Moon, Blakey, Gor-
such, & Fantuzzo, 1991; Slate, Jones, & Murray,
1991; Thompson & Bulow, 1994).
On the other hand, group-administered tests
are not monitored as closely as individually admin-
istered tests and do not depend on rapport to
ensure optimal performance. Directions for group-
administered tests must be clear to all test takers
before the beginning of the test (or inventory or
questionnaire) because missteps by examinees
cannot be corrected easily. The same instructions
and practice procedures are used for everyone. An
individual who perhaps would benefit from one
more practice item will not get it, and there will be
no follow-up opportunities to test limits.
The nature of the tasks that constitute individual
tests is another way to distinguish tests. In Chapter
10 of this volume, which addresses performance-
based measures (often referred to as projective tech-
niques), Irving B. Weiner describes a major
distinction between test types—that is, between
performance-based measures and self-report mea-
sures. The former test type requires test takers to act
upon stimuli presented to them (e.g., Rorschach ink-
blots, TAT cards), to create or construct responses,
or to formulate responses to specific questions (e.g.,
Wechsler scales of intelligence) presented to them,
whereas self-report measures ask respondents to
answer questions about themselves by selecting
responses from a preset array of options. As sug-
gested by Weiner, neither test type is inherently
superior, as the test types seek and provide different
kinds of information. A test’s clinical value is unre-
lated to the nature of the tasks that constitute it.
Performance-based measures typically use scor-
ing systems or rubrics that ultimately depend on
some degree of subjectivity in scoring. The tasks
that constitute performance-based measures are
open-ended and offer wide latitude to test takers as
far as how they choose to respond. Some tests or
tasks require constructed responses (e.g., TAT, fig-
ure drawings), whereas others require retrieval or
application of specific information (e.g., Vocabulary
and Arithmetic subtests on the Wechsler tests).
Self-report measures require examinees to select
or endorse a response presented in a predefined set
of possibilities. In part because responses are
selected rather than constructed by the examinee,
systematic distortion of responses is a concern in
many self-report inventories (Graham, 2006).
Detecting such response sets is important because,
when they occur, they may undermine the validity
of the test scores. Validity scales were big news
when they were first introduced in the original
MMPI (Hathaway & McKinley, 1943); now they are
commonplace in many personality and other types
of inventories. Scoring of self-report measures is
considered to be objective and typically involves the
use of either computer software or scoring tem-
plates. Other than human errors (e.g., misaligning a
scoring template), objective scoring produces test
scores that do not require clinical judgment.
Detailed discussion of self-report measures is pro-
vided later in Chapter 11, this volume.
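The logic of a response-set check can be illustrated with a toy consistency index. The item pairs, scale, and threshold below are invented for illustration; actual inventories such as the MMPI-2 rely on empirically derived validity scales rather than a simple rule like this.

```python
# Toy response-set check: flag records where items with opposite wording
# are answered inconsistently. Item pairs and threshold are hypothetical.

def inconsistency_score(responses, reversed_pairs, scale_max=4):
    """Count pairs whose reverse-worded item fails to roughly mirror its partner."""
    flags = 0
    for i, j in reversed_pairs:
        # A consistent respondent answers item j (reverse-worded) near
        # the mirror image of item i on the 0..scale_max scale.
        if abs(responses[i] - (scale_max - responses[j])) > 1:
            flags += 1
    return flags

responses = [4, 0, 3, 1, 4, 4]    # answers on a 0-4 scale
pairs = [(0, 1), (2, 3), (4, 5)]  # (item, reverse-worded partner) indices
score = inconsistency_score(responses, pairs)
# pairs (0, 1) and (2, 3) mirror one another; (4, 5) does not, so score == 1
```

A high score would not itself invalidate a protocol; like the validity scales discussed above, it would prompt the examiner to weigh whether the responses can be considered valid indicators of functioning.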
The level of impact that the use of test scores may
have varies and forms another way to distinguish
groups of tests. High-stakes testing refers to the situa-
tion where test scores are used to make important
decisions about an individual. The impact level of
such decisions is substantial, sometimes rising to the
level of life altering. Tests whose results are used to
render such decisions must be psychometrically
sound. Evidence supporting the reliability and valid-
ity of test scores must surpass the level typically seen
in measures used for lesser purposes, such as research
or screening. Custody evaluations used to determine
parental fitness (for further information, see Chapter
34, this volume) and forensic evaluations used to
establish competency to stand trial (for further infor-
mation, see Chapters 6 and 16, this volume) are but
two examples of high-stakes testing situations.
In clinical decision making, the specific test used
does not automatically determine the stakes. Rather,
the use to which the test scores are put dictates
whether the testing should be considered high
stakes. For example, practitioners may use the
results of an assessment simply to confirm a diagno-
sis and formulate interventions. This use of tests is a
rather routine practice aimed at improving the men-
tal health of a particular client. In this situation, the
stakes likely are low, because the individual is
already engaged in treatment and the differential
diagnosis that is sought will enhance the clinician’s
understanding and treatment of his or her psycho-
logical difficulties. If the same test results were used
as the basis for denying disability benefits, then the
testing context would be regarded as high stakes.
Low-stakes measures often include those related
to documenting values and interests. The human
interest value of these measures notwithstanding,
low-stakes situations simply do not have the same
level of impact as high-stakes decisions. Test takers
frequently are curious to review the assessment
results, but many are not surprised by them. How-
ever, low-stakes measures may contribute to impor-
tant decisions that an individual may make
concerning career or relationship pursuits or other
quality-of-life choices.
INTERPRETING AND INTEGRATING
ASSESSMENT RESULTS
Interpreting and integrating test results requires a
tenacious, disciplined, and thorough approach. It
follows the collection of data from various sources,
none of which should be ignored or dismissed. Like
test administration, test interpretation represents
an interpersonal activity [that] may be
considered part of the influence process
of counseling. The counselor commu-
nicates his or her own understanding of
the client’s test data to the client, antici-
pating that the client will adopt and
apply some part of that understanding
as self-understanding. (Claiborn &
Hanson, 1999, p. 151)
An important objective in interpreting assess-
ment results is to account for as much test data as
possible. Formulating many tenable hypotheses at
the outset of test interpretation facilitates this goal.
With regard to enhancing clinical judgment, Garb
(1989) encouraged clinicians to become more will-
ing to consider alternative hypotheses and to revise
their initial views of a client’s behavior. Although
Garb’s point referred broadly to clinical judgment
and not specifically to clinical assessment, it applies
equally well to test interpretation. For example, an
overarching ennui reported by an adult client at
intake could stem from numerous causes, including
psychological and physical ones. Subsequent results
from a comprehensive assessment consisting of a
multitude of tests and sources of data may suggest
(a) depression or a related derivative, (b) bereave-
ment, (c) malingering, (d) anemia, (e) reaction to
situational (e.g., job related) stress, (f) passive–
aggressive coping strategy, (g) insomnia, (h) a side
effect of a new medication, (i) a combination of two
or more of the foregoing, or (j) something else
entirely. An intake interview and routine screening
measures may rule out several of the possible expla-
nations. Interpretations stemming from more com-
prehensive measures may be compared against the
remaining competing hypotheses to ascertain which
hypothesis best accounts for the evidence. In the
end, the best explanation is the one that explains
most (or all) of the evidence accumulated and con-
sidered in the assessment process.
An important first step in evaluating test data
often takes place while assessment procedures are
under way, in the presence of the test taker or before
he or she leaves the premises where testing
occurred. This step involves reviewing the examin-
ee’s responses to any “critical items” that are
included on any of the measures. These items are so
called because their content has been judged to be
indicative of serious maladjustment, signifying grave
concerns such as the propensity for self-harm.
Although empirical scrutiny has not tended to offer
much support for the utility of critical items for
this purpose (Koss, 1980; Koss, Butcher, & Hoff-
man, 1976), many practitioners consider the items
worthy of follow-up efforts, perhaps because failing
to act on such a blatant appeal for assistance would
be unconscionable and the possible outcome irre-
versible. Moreover, base-rate problems cloud the
issue, as low-base-rate events such as suicide are
notoriously difficult to predict (Sandoval, 1997),
especially when one tries to predict such an event
on the basis of responses to a small handful of items.
Also at issue is the absence of an adequate criterion
against which to judge test validity (Hunsley &
Meyer, 2003). A client who does not commit suicide after his or her responses to critical items
suggested a high risk was not necessarily misjudged. Individuals at high risk for a given
outcome do not unerringly suffer that outcome;
such is the nature of risk.
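The base-rate problem can be made concrete with Bayes' theorem: even an accurate screen for a rare outcome yields mostly false positives. The sensitivity, specificity, and base rate below are hypothetical and not drawn from any actual instrument.

```python
# Bayes' theorem applied to screening for a low-base-rate outcome.
# All figures are hypothetical, chosen only to illustrate the arithmetic.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(outcome | positive screen)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# A screen that is 90% sensitive and 90% specific, applied to an outcome
# occurring in 1 of 10,000 people:
ppv = positive_predictive_value(0.90, 0.90, 0.0001)
# ppv ≈ 0.0009: fewer than 1 in 1,000 positive screens reflects a true case
```

This arithmetic is one reason critical items are better read as denoting risk and prompting follow-up than as predicting behavior.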
Base-rate and criterion problems persist in the
area of suicide risk assessment and are unlikely to be
resolved. Measures developed to assess suicide risk
are intended to be used to avert acts of self-harm and
cannot be easily validated in the usual manner
because lives are at stake. Critical items denote risk;
they do not predict behavior. Recommended practice
is to avoid treating critical items as a scale or brief
assessment of functioning, but rather consider the
items as offering possible clues to content themes
that may be important to the client (Butcher, 1989).
After considering a client’s responses to critical
items, integration of findings obtained from the vari-
ous methods used in an assessment moves to a
review of evidence collected during the assessment,
including test and nontest data, from each individ-
ual source. Scoring and interpreting or evaluating
individual procedures that were implemented con-
stitutes an important first step because it is at this
stage that clinicians begin to weigh the credibility of
the evidence. Specifically, it is essential to note for
each procedure whether the test taker’s approach to
that procedure allows further consideration of the
results. Tests that include validity scales can make
this task more objective and fairly straightforward.
However, many assessment procedures do not have
built-in components to help examiners evaluate
whether responses should be considered valid indi-
cators of the test taker’s functioning. In these cases,
examiners must render a judgment, often based on
the test taker’s demeanor, attitude toward the proce-
dures, and behaviors demonstrated during the
assessment. Obviously and unfortunately, this judg-
ment process is not standardized and is quite open
to subjective interpretations. Even so, it is probably
safe to conclude that most practitioners would at
least question the validity of assessment results from
a client who arrived to the session 20 minutes late,
looked at his watch no fewer than 25 times,
neglected to respond to half of the items on two test
forms, and sighed audibly throughout the assess-
ment while mumbling about how “ridiculous this
is.” In any case,
psychologists must consider whether
there is a discernible reason for test tak-
ers to be less than forthright in their
responses, and whether that reason
might constitute a motive for faking. If
so, the test giver must . . . interpret test
findings with these possibilities in mind.
(Carlson & Geisinger, 2009, p. 83)
In the early stages of interpretation, possible
explanations for the results should be treated as ten-
tative, because various hypotheses may be offered to
explain individual test outcomes. All reasonable
explanations for the observed results should be con-
sidered while examining evidence from other
sources. In the face of additional data, some hypoth-
eses will be discarded and some will be retained. Evi-
dence from other sources—test and nontest—that
confirms or disconfirms active hypotheses is particu-
larly important, as this type of evidence helps to bol-
ster (i.e., rule in) or weaken (i.e., rule out) putative
explanations, respectively. Typically, a small number
of hypotheses survive this iterative process, and
these viable explanations of the observed results
form the prominent themes of a written report.
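The iterative narrowing described above can be sketched in miniature. The hypotheses and findings below are hypothetical, echoing the ennui example earlier in the chapter.

```python
# Sketch of the iterative rule-in/rule-out process; hypotheses and
# findings are hypothetical examples, not a clinical decision rule.

hypotheses = {"depression", "bereavement", "anemia", "medication side effect"}
support = {h: 0 for h in hypotheses}

# Each piece of evidence either contradicts hypotheses (rules them out)
# or bolsters them (rules them in).
findings = [
    {"rules_out": {"anemia"}},                    # e.g., unremarkable blood work
    {"rules_in": {"depression", "bereavement"}},  # e.g., elevated depression scale
    {"rules_out": {"medication side effect"}},    # e.g., no recent medication change
]

for finding in findings:
    hypotheses -= finding.get("rules_out", set())
    for h in finding.get("rules_in", set()) & hypotheses:
        support[h] += 1

# The hypotheses that survive, ordered by accumulated support, would form
# the prominent themes of the written report.
surviving = sorted(hypotheses, key=lambda h: -support[h])
```

The point of the sketch is the shape of the process, not the mechanics: disconfirming evidence prunes the set, confirming evidence strengthens what remains, and a small number of viable explanations survive.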
PROVIDING ASSESSMENT FEEDBACK
Providing test feedback to test takers is an ethical
responsibility (e.g., APA, 2010) that appears to be
taken lightly by some practitioners according to
some published reports (Pope, 1992; Smith, Wig-
gins, & Gorske, 2007). As Smith et al. (2007)
observed, there is surprisingly little written about
assessment feedback and “little published research
on the assessment feedback practices of psycholo-
gists” (p. 310). These researchers surveyed some 719
clinicians (neuropsychologists and members of the
Society for Personality Assessment) about their psy-
chological assessment feedback practices to find that
some 71% reported that they frequently provided in-
person feedback, either to clients or clients’ family
members. The researchers also queried respondents
about the time they spent providing feedback, how
useful they found the practice, and what kind of
feedback they provided (e.g., written, oral).
Although most practitioners reported that they do
provide feedback, nearly 41% reported that they pro-
vided no direct feedback to clients or their families.
Nearly one third of respondents reported that they
mailed a report to clients, a practice that Harvey
(1997) denounced, because recipients often lack the
background and technical knowledge to understand
and interpret the results. Even so, Smith et al.
viewed the survey results positively overall and sug-
gested that the status of psychological assessment
feedback practices may not be as dire as suggested
several years ago (Pope, 1992). Interested readers
may refer to Chapter 3, this volume, for further
guidance on communicating assessment results.
Test feedback may serve several important pur-
poses, not the least of which is to help bring about
behavioral changes (Butcher, 2010; Finn & Tonsager,
1997). In discussing the importance of providing test
feedback, Pope (1992) suggested that the feedback
process offers opportunities on several fronts that
bear directly on the therapeutic process and that, in
essence, extend the assessment to include the feed-
back component. Empirical evidence accumulated thereafter demonstrated treatment effects of
assessment feedback (Kubiszyn et al., 2000). Specifically, several studies compared therapeutic gains
made by clients in treatment who received feedback
about their test results on the MMPI–2 (Butcher,
Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) to
those of similar clients who did not receive such feed-
back (e.g., Finn & Tonsager, 1992, 1997; Fischer,
2000; Newman & Greenway, 1997). Clients who
received assessment feedback demonstrated thera-
peutic improvements, as noted by their higher levels
of hope and decrease in reported symptoms.
CONCLUDING THOUGHTS
Assessment methods used in counseling and clinical
contexts focus tightly on an individual client’s
condition and seek to identify ways in which his or
her concerns may be addressed or resolved. Broadly
speaking, the methods used include interview tech-
niques, behavioral observations, and formal tests
that place different demands on the examinee as
well as the examiner. Information gathered from
multiple sources then must be interpreted and inte-
grated into a cohesive explanation of the test data
and, by extension, the client’s functioning and fea-
tures. The end goal of assessment in counseling and
clinical contexts is to produce an accurate portrayal
of the client’s functioning that is useful for planning
and implementing interventions. Providing feedback
to the client about assessment results is vital to pro-
moting the client’s interests and effecting treatment.
Cates (1999) observed that clinical assessment is best regarded as providing a “snapshot not a film” of
an individual’s functioning, one that “describes a moment frozen in time, described from the viewpoint
of the psychologist” (p. 637). When an
observer says something like, “that’s a good picture
of her,” the speaker means that the image represents
the subject as she truly is. Good pictures depend on
using good tools and good techniques. Clinical
assessment, too, uses tools and techniques to reflect
the characteristics of the client as he or she exists
and functions every day.
References
Aiken, L. S., West, S. G., Sechrest, L., & Reno, P. R.
(1990). Graduate training in statistics, methodology
and measurement in psychology: A survey of Ph.D.
programs in North America. American Psychologist,
45, 721–734. doi:10.1037/0003-066X.45.6.721
American Counseling Association. (2005). ACA code of
ethics. Washington, DC: Author. Retrieved from
http://72.167.35.179/Laws_and_Codes/ACA_Code_
of_Ethics
American Educational Research Association, American
Psychological Association, & National Council
on Measurement in Education. (1999). Standards
for educational and psychological testing (3rd ed.).
Washington, DC: American Educational Research
Association.
American Psychiatric Association. (2000). Diagnostic and
statistical manual of mental disorders (4th ed., text
revision). Washington, DC: Author.
American Psychological Association. (2010). Ethical
principles of psychologists and code of conduct (2002,
Amended June 1, 2010). Retrieved from http://www.
apa.org/ethics/code/index.aspx
Anastasi, A., & Urbina, S. (1997). Psychological testing
(7th ed.). Upper Saddle River, NJ: Prentice Hall.
Ball, J. D., Archer, R. P., & Imhof, E. A. (1994). Time
requirements of psychological testing: A survey of
practitioners. Journal of Personality Assessment, 63,
239–249. doi:10.1207/s15327752jpa6302_4
Bauman, S. (2008). Essential topics for the helping profes-
sional. Boston, MA: Pearson.
Beck, A. T. (1988). Beck Hopelessness Scale. San Antonio,
TX: Psychological Corporation.
Beck, A. T., Schuyler, D., & Herman, I. (1974).
Development of suicidal intent scales. In A. T. Beck,
H. L. P. Resnik, & D. J. Lettieri (Eds.), The prediction
of suicide (pp. 45–56). Bowie, MD: Charles Press.
Beck, A. T., Steer, R. A., & Brown, G. K. (1996). Beck
Depression Inventory manual (2nd ed.). San Antonio,
TX: Psychological Corporation.
Ben-Porath, Y. S. (1997). Use of personality instru-
ments in empirically guided treatment planning.
Psychological Assessment, 9, 361–367. doi:10.1037/
1040-3590.9.4.361
Butcher, J. N. (1989). The Minnesota report: Adult Clinical
System MMPI–2. Minneapolis, MN: University of
Minnesota Press.
Butcher, J. N. (2010). Personality assessment from the
nineteenth to the early twenty-first century: Past
achievements and contemporary challenges. Annual
Review of Clinical Psychology, 6, 1–20. doi:10.1146/
annurev.clinpsy.121208.131420
Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen,
A., & Kaemmer, B. (1989). Manual for administra-
tion and scoring: Minnesota Multiphasic Personality
Inventory—2 (MMPI–2). Minneapolis, MN:
University of Minnesota Press.
Camara, W. J., Nathan, J. S., & Puente, A. E. (2000).
Psychological test usage: Implications in professional
psychology. Professional Psychology: Research and
Practice, 31, 141–154. doi:10.1037/0735-7028.
31.2.141
Carlson, J. F., & Geisinger, K. F. (2009).
Psychodiagnostic testing. In R. Phelps (Ed.),
Correcting fallacies about educational and psychologi-
cal testing (pp. 67–88). Washington, DC: American
Psychological Association. doi:10.1037/11861-002
Cates, J. A. (1999). The art of assessment in psychology:
Ethics, expertise, and validity. Journal of Clinical
Psychology, 55, 631–641. doi:10.1002/(SICI)1097-
4679(199905)55:5<631::AID-JCLP10>3.0.CO;2-1
Claiborn, C. D., & Hanson, W. E. (1999). Test inter-
pretation: A social-influence perspective. In J. W.
Lichtenberg & R. K. Goodyear (Eds.), Scientist-
practitioner perspectives on test interpretation (pp.
151–166). Needham Heights, MA: Allyn & Bacon.
Copyright American Psychological Association. Not for further distribution.
Janet F. Carlson
Committee on the Revision of the Specialty Guidelines for
Forensic Psychology. (2011). Specialty guidelines for
forensic psychology (6th draft). Retrieved from http://
www.ap-ls.org/aboutpsychlaw/3182011sgfpdraft
Cottone, R. R., & Tarvydas, V. M. (2007). Counseling
ethics and decision-making (3rd ed.). Upper Saddle
River, NJ: Pearson Education.
Cronbach, L. J. (1960). Essentials of psychological testing
(2nd ed.). New York, NY: Harper.
Derogatis, L. R. (1994). Administration, scoring, and pro-
cedures manual for the SCL-90-R. Minneapolis, MN:
National Computer Systems.
Eisman, E. J., Dies, R., Finn, S. E., Eyde, L. D., Kay, G.
G., Kubiszyn, T. W., . . . Moreland, K. L. (2000).
Problems and limitations in the use of psychologi-
cal assessment in contemporary health care delivery.
Professional Psychology: Research and Practice, 31,
131–140. doi:10.1037/0735-7028.31.2.131
Eyde, L. D., Robertson, G. J., & Krug, S. E. (2010).
Responsible test use: Case studies for assessing human
behavior (2nd ed.). Washington, DC: American
Psychological Association.
Finn, S. E., & Martin, H. (1997). Therapeutic assessment
with the MMPI–2 in managed health care. In J. N.
Butcher (Ed.), Objective personality assessment in man-
aged health care: A practitioner’s guide (pp. 131–152).
New York, NY: Oxford University Press.
Finn, S. E., & Tonsager, M. E. (1992). Therapeutic effects
of providing MMPI–2 test feedback to college stu-
dents awaiting therapy. Psychological Assessment, 4,
278–287. doi:10.1037/1040-3590.4.3.278
Finn, S. E., & Tonsager, M. E. (1997). Information-
gathering and therapeutic models of assessment:
Complementary paradigms. Psychological Assessment,
9, 374–385. doi:10.1037/1040-3590.9.4.374
First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B.
W. (2002). Structured Clinical Interview for DSM–IV–
TR Axis I disorders, research version, patient edition
(SCID-I/P). New York, NY: Biometrics Research,
New York State Psychiatric Institute.
Fischer, C. T. (2000). Collaborative, individualized
assessment. Journal of Personality Assessment, 74,
2–14. doi:10.1207/S15327752JPA740102
Fong, M. L. (1995). Assessment and DSM–IV diagnosis
of personality disorders: A primer for counselors.
Journal of Counseling and Development, 73, 635–639.
doi:10.1002/j.1556-6676.1995.tb01808.x
Ford, G. G. (2006). Ethical reasoning for mental health
professionals. Thousand Oaks, CA: Sage.
Garb, H. N. (1989). Clinical judgment, clinical training,
and professional experience. Psychological Bulletin,
105, 387–396. doi:10.1037/0033-2909.105.3.387
Garb, H. N. (2003). Incremental validity and the assess-
ment of psychopathology in adults. Psychological
Assessment, 15, 508–520. doi:10.1037/1040-3590.
15.4.508
Garb, H. N. (2005). Clinical judgment and decision mak-
ing. Annual Review of Clinical Psychology, 1, 67–89.
doi:10.1146/annurev.clinpsy.1.102803.143810
Geisinger, K. F., & Carlson, J. F. (2009). Standards and
standardization. In J. N. Butcher (Ed.), Oxford hand-
book of personality assessment (pp. 99–111). New
York, NY: Oxford University Press.
Glutting, J., & Oakland, T. (1993). Guide to the assess-
ment of test session behavior: Manual. San Antonio,
TX: Psychological Corporation.
Graham, J. R. (2006). MMPI–2: Assessing personality and
psychopathology (4th ed.). New York, NY: Oxford
University Press.
Griffith, L. (1997). Surviving no-frills mental health care:
The future of psychological assessment. Journal of
Practical Psychiatry and Behavioral Health, 3, 255–258.
Harvey, V. S. (1997). Improving readability of psychological
reports. Professional Psychology: Research and Practice,
28, 271–274. doi:10.1037/0735-7028.28.3.271
Hathaway, S. R., & McKinley, J. C. (1943). The Minnesota
Multiphasic Personality Inventory. Minneapolis, MN:
University of Minnesota Press.
Hayes, S. C., Nelson, R. O., & Jarrett, R. B. (1987). The
treatment utility of assessment: A functional approach
to evaluating assessment quality. American Psychologist,
42, 963–974. doi:10.1037/0003-066X.42.11.963
Hoffman, J. A., & Weiss, B. (1986). A new system for
conceptualizing college students’ problems: Types
of crises and the Inventory of Common Problems.
Journal of American College Health, 34, 259–266.
doi:10.1080/07448481.1986.9938947
Hood, A. B., & Johnson, R. W. (2007). Assessment
in counseling: A guide to the use of psychological
assessment procedures. Alexandria, VA: American
Counseling Association.
Hunsley, J., & Meyer, G. J. (2003). The incremen-
tal validity of psychological testing and assess-
ment: Conceptual, methodological, and statistical
issues. Psychological Assessment, 15, 446–455.
doi:10.1037/1040-3590.15.4.446
International Test Commission. (2001). International
guidelines for test use. International Journal of Testing,
1, 93–114. doi:10.1207/S15327574IJT0102_1
Jobes, D. A., Eyman, J. R., & Yufit, R. I. (1990, April).
Suicide risk assessment survey. Paper presented at
the annual meeting of the American Association of
Suicidology, New Orleans, LA.
Kessler, R. C., Barker, P. R., Cople, L. J., Epstein, J.
F., Gfroerer, J. C., Hiripi, E., . . . Zaslavsky, A. M.
(2003). Screening for serious mental illness in the
general population. Archives of General Psychiatry,
60, 184–189. doi:10.1001/archpsyc.60.2.184
Clinical and Counseling Testing
Koss, M. P. (1980). Assessment of psychological emergen-
cies with the MMPI. Nutley, NJ: Roche.
Koss, M. P., Butcher, J. N., & Hoffman, N. (1976). The
MMPI critical items: How well do they work? Journal
of Consulting and Clinical Psychology, 44, 921–928.
doi:10.1037/0022-006X.44.6.921
Kubiszyn, T. W., Meyer, G. J., Finn, S. E., Eyde, L.
D., Kay, G. G., Moreland, K. L., . . . Eisman, E. J.
(2000). Empirical support for psychological assess-
ment in clinical health care settings. Professional
Psychology: Research and Practice, 31, 119–130.
doi:10.1037/0735-7028.31.2.119
Leichtman, M. (2002). Behavioral observations. In J.
N. Butcher (Ed.), Clinical personality assessment:
Practical approaches (pp. 303–318). New York, NY:
Oxford University Press.
Masling, J. M. (1992). Assessment and the therapeu-
tic narrative. Journal of Training and Practice in
Professional Psychology, 6, 53–58.
Meyer, G. J., Finn, S. E., Eyde, L., Kay, G. G., Moreland, K.
L., Dies, R. R., . . . Reed, G. M. (2001). Psychological
testing and psychological assessment: A review of evi-
dence and issues. American Psychologist, 56, 128–165.
doi:10.1037/0003-066X.56.2.128
Moon, G. W., Blakey, W. A., Gorsuch, R. L., & Fantuzzo,
J. W. (1991). Frequent WAIS–R administration
errors: An ignored source of inaccurate measure-
ment. Professional Psychology: Research and Practice,
22, 256–258. doi:10.1037/0735-7028.22.3.256
National Association of School Psychologists. (2010).
Principles for professional ethics. Retrieved from http://
www.nasponline.org/standards/2010standards/1_%20
Ethical%20Principles
Naugle, K. A. (2009). Counseling and testing: What
counselors need to know about state laws on assess-
ment and testing. Measurement and Evaluation in
Counseling and Development, 42, 31–45. doi:10.1177/
0748175609333561
Newman, M. L., & Greenway, P. (1997). Therapeutic
effects of providing MMPI–2 test feedback to clients
at a university counseling service: A collaborative
approach. Psychological Assessment, 9, 122–131.
doi:10.1037/1040-3590.9.2.122
Oakland, T., Glutting, J., & Watkins, M. W. (2005).
Assessment of test behaviors with the WISC–IV. In
A. Prifitera, D. H. Saklofske, & L. G. Weiss (Eds.),
WISC–IV clinical use and interpretations: Scientist-
practitioner perspectives (pp. 435–467). San Diego,
CA: Elsevier Academic Press.
Pope, K. S. (1992). Responsibilities in providing psy-
chological test feedback to clients. Psychological
Assessment, 4, 268–271. doi:10.1037/1040-
3590.4.3.268
Psychological Corporation. (1992). Wechsler Individual
Achievement Test. San Antonio, TX: Author.
Sandoval, J. (1997). Critical thinking in test interpreta-
tion. In J. Sandoval, C. L. Frisby, K. F. Geisinger,
J. D. Scheuneman, & J. R. Grenier (Eds.), Test inter-
pretation and diversity: Achieving equity in assess-
ment (pp. 31–49). Washington, DC: American
Psychological Association.
Slate, J. R., Jones, C. H., & Murray, R. A. (1991).
Teaching administration and scoring of the Wechsler
Adult Intelligence Scale—Revised: An empirical
evaluation of practice administrations. Professional
Psychology: Research and Practice, 22, 375–379.
doi:10.1037/0735-7028.22.5.375
Smith, S. R., Wiggins, C. M., & Gorske, T. T. (2007).
A survey of psychological assessment feedback
practices. Assessment, 14, 310–319. doi:10.1177/
1073191107302842
Spielberger, C. D., Gorsuch, R. L., Lushene, R., Vagg,
P. R., & Jacobs, G. A. (1983). Manual for State–
Trait Anxiety Inventory. Palo Alto, CA: Consulting
Psychologists Press.
Stolberg, R., & Bongar, B. (2002). Assessment of sui-
cide risk. In J. N. Butcher (Ed.), Clinical personality
assessment: Practical approaches (pp. 376–406). New
York, NY: Oxford University Press.
Tallent, N. (1988). Psychological report writing (3rd ed.).
Englewood Cliffs, NJ: Prentice-Hall.
Thompson, A. P., & Bulow, C. A. (1994). Administration
error in presenting the WAIS–R blocks:
Approximating the impact of scrambled presenta-
tions. Professional Psychology: Research and Practice,
25, 89–91. doi:10.1037/0735-7028.25.1.89
Wechsler, D. (1993). Wechsler Intelligence Scale for
Children (3rd ed.). San Antonio, TX: Psychological
Corporation.
Willer, J. (2009). The beginning psychotherapist’s compan-
ion. Lanham, MD: Rowman & Littlefield.
Yates, B. T., & Taub, J. (2003). Assessing the costs,
benefits, cost-effectiveness, and cost–benefit of
psychological assessment: We should, we can, and
here’s how. Psychological Assessment, 15, 478–495.
doi:10.1037/1040-3590.15.4.478
DOI: 10.1037/14048-002
APA Handbook of Testing and Assessment in Psychology: Vol. 2. Testing and Assessment in Clinical and Counseling Psychology, K. F. Geisinger (Editor-in-Chief)
Copyright © 2013 by the American Psychological Association. All rights reserved.

Chapter 2

The Assessment Process

Sara Maltzman
This chapter reviews the historical purposes of psy-
chological assessment, the components and process
of psychological assessment, current issues, and
emerging trends. In keeping with the emphases of
this handbook, the discussion focuses on the use of
assessments and the assessment process within clini-
cal, counseling, and forensic psychology.
THE HISTORY OF PSYCHOLOGICAL
ASSESSMENTS
McGuire (1990) traced the development of formal psychological testing to James McKeen Cattell in the 1890s and early 20th century. McGuire noted that
Cattell and the first few experimental psychologists
who came to define themselves as clinical psycholo-
gists advocated for education, training, and the
establishment of professional standards for the
assessment of intellectual and personality function-
ing. Thus, the assessment and diagnosis of intellec-
tual functioning and personality were the
fundamental functions of clinical psychologists.
Witmer, who made significant contributions to the
development of clinical, developmental, and educa-
tional psychology, established the first psychological
clinic in 1896 (Baker, 1988). The clinic assessed and
treated children who presented with possible mental
retardation, learning disabilities, or emotional concerns
that prevented attainment of their academic potential.
Witmer utilized a multidimensional, functional
approach that included a comprehensive psychosocial
history taking as well as behavioral observations in
multiple environments (e.g., home, school) over time.
A physician completed the physical examination,
and often the behavioral observations were made by
a social worker. These data were summarized into
an integrative assessment of the child’s deficiencies,
along with treatment recommendations (Baker,
1988). Thus, a primary focus within clinical psy-
chology at the beginning of the 20th century was the
multimodal assessment, diagnosis, and treatment of
children and youths.
The treatment recommendations made for these
youths often included vocational direction (Baker,
2002). With the stock market crash and high unem-
ployment of the 1930s, the vocational needs of
adults began to predominate and the vocational
assessment of youths transitioned to adult voca-
tional counseling and later into the field of counsel-
ing psychology for adults (Baker, 2002; Super,
1955). The assessment of aptitudes as well as of abil-
ities emerged out of the necessity to assist the unem-
ployed. At the same time, Rogerian theory and its
associated nondirective, client-centered therapeutic
approach began to emerge. The Rogerian approach
was applied to vocational counseling in recognition
that such an orientation was theoretically compati-
ble with counseling focused on the achievement of
vocational aspirations (Super, 1955). These three
foci—the assessment of aptitudes, the assessment of
abilities, and a Rogerian conceptualization of the
person and the therapeutic relationship—converged
into a cohesive approach for addressing the psycho-
social concerns of the unemployed. Over time, this
approach was modified to address the needs of
returning World War II (WWII) veterans and to
assist them in maximizing their psychosocial
strengths. Addressing the vocational, educational,
and adjustment needs of returning WWII veterans
led to the establishment of counseling psychology as
a distinct position within the U.S. Veterans Admin-
istration (VA) system (Meara & Myers, 1999). To
meet the needs of returning veterans, the VA
encouraged the American Psychological Association
(APA) to accredit counseling as well as clinical psy-
chology programs to ensure the training of compe-
tent psychologists for the VA system. The VA also
was instrumental in encouraging the development of
university-based counseling centers to assist veter-
ans with educational and work-related adjustment
issues (Meara & Myers, 1999). For these reasons,
counseling psychology has historical roots and
expertise in career and vocational counseling.
Assessments in these areas consider individual dif-
ferences in career development needs, interests, and
barriers to career or employment (Armstrong &
Rounds, 2008; Whiston & Rahardja, 2008). Coun-
seling psychologists are in a unique position to
address the mental health, educational, and career-
planning needs of military veterans and their fami-
lies because of this historical role and the number of
counseling psychologists in college and university
settings (Danish & Antonides, 2009).
Currently, one of the primary distinctions
between clinical and counseling psychology is the
historical focus in clinical psychology on research
and practice in the assessment, diagnosis, and treat-
ment of clients with significant psychopathology
and emotional disorders. Forensic psychology devel-
oped as a subdiscipline within clinical psychology.
Although the provision of legal testimony by psy-
chologists dates back to the 1900s, it was not until
2001 that the APA formally recognized forensic psy-
chology as a distinct psychological specialty (Ogloff
& Douglas, 2003). In comparison, counseling psy-
chology historically has focused on leveraging and
maximizing psychosocial functioning and strengths
in individuals who are not experiencing significant
psychopathology but are experiencing transitional
life stressors (Meara & Myers, 1999).
Thus, the development of clinical and counseling
psychology initially was based on the needs of dis-
tinct populations. Over time, each discipline has
expanded in scope, and each has contributed to
assessment process research and practice on the basis
of the respective specialty’s history and strengths.
THE PURPOSE OF THE PSYCHOLOGICAL
ASSESSMENT
The purpose of a psychological assessment is to
answer particular questions related to an individu-
al’s intellectual, psychological, emotional–behavioral,
or psychosocial functioning, or some combination
of these domains. These questions are determined
by the assessment context and referral source. As
Fernández-Ballesteros (1997) described, a psychological assessment typically is driven by a particular
problem or referral question. A psychological assess-
ment includes more than psychological testing. His-
torically, the purpose of a psychological assessment
has been to gather information directly from the cli-
ent, obtain collateral information, administer psy-
chological test instruments, interpret the test
results, and provide a conceptualization of the client
that integrates the test data with the collateral and
interview data. This conceptualization is summa-
rized, a diagnosis or diagnostic rule-out is offered
(as applicable), and recommendations are made for
consideration related to decision-making (e.g., in
career- or education-related choices, personnel
decision-making, or parental capacity assessments)
and, where appropriate, for treatment. In contrast,
psychological testing is one component of a psycho-
logical assessment. It is measurement oriented. The
purpose of testing is to provide a standardized
administration of an instrument that has research
evidence substantiating the reliability of its scores
and the validity of these scores in identifying, quan-
tifying, and describing particular characteristics or
abilities when used with a specified population
within a specified context. These test scores are
interpreted within the context of the client’s history
and the additional data gathered as part of the
assessment process.
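The score reliability referenced above is typically documented in a test manual with an internal-consistency coefficient such as Cronbach's alpha. As a brief illustration (in Python, with entirely hypothetical item responses), alpha can be computed from the item variances and the variance of the total scores:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for an (n_respondents, n_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # sample variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings from six respondents on a four-item scale
responses = np.array([
    [4, 4, 3, 4],
    [2, 2, 2, 1],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
])
print(round(float(cronbach_alpha(responses)), 2))  # prints 0.96
```

A coefficient this high would support interpreting the total score as measuring a single characteristic consistently, in this invented sample only.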
THE ASSESSMENT PROCESS
Weiner (2003) described the assessment process
as consisting of three phases: information input,
information evaluation, and information output.
Each is described here.
Information Input
Information input is the collection of information. It
is influenced by the assessment context, referral
questions, and referral source. These factors inform
why the assessment is requested and what questions
are expected to be answered. Such a contextual
assessment considers the client’s culture and lan-
guage proficiency when selecting instruments and
interpreting instrument scales (Butcher, Cabiya,
Lucio, & Garrido, 2007). The referral source and
assessment context also influence which instru-
ments are appropriate for use. For example, some
instruments appropriate for personality assessment
in an outpatient counseling or clinical setting have
been found to be inappropriate in a forensic setting
because of compromised validity (Carr, Moretti, &
Cue, 2005). Selecting appropriate instruments, on
the basis of the client’s cultural context and the
referral context, is the first step in ensuring that the
assessment provides valid results for answering the
particular referral questions for that particular indi-
vidual (e.g., Perlin & McClain, 2009).
The Assessment Context and
Referral Questions
The referral questions addressed by the assessment
are determined by the assessment context. The
assessment context also determines the potential
sources of collateral information. In turn, the con-
text and referral source determine what requisite
education, training, and supervised experience are
necessary to conduct the assessment as well as
which additional professional standards and guide-
lines for specialized practice might be applicable.
The assessment context and referral source rep-
resent key factors in determining which formal
instruments are appropriate, on the basis of the nor-
mative sample and ability to identify response pat-
terns. For example, the Millon Clinical Multiaxial
Inventory (MCMI; Millon, 1977) was normed and
standardized on clients engaged in mental health
services. It was not normed on a general population
standardization sample (Butcher, 2009). The test
developers subsequently reported that the third
edition of the MCMI (MCMI-III; Millon, 1994) later
was normed on a large sample of newly incarcerated
prison inmates for the purpose of predicting adjust-
ment to prison and treatment needs while incarcer-
ated. However, the use of the MCMI-III with
populations outside of these standardization sam-
ples and for other purposes would be questionable
(Butcher, 2009). For further discussion of self-
report inventories (and the MCMI-III in particular),
readers are referred to Chapter 11, this volume.
Conducting assessments consistent with profes-
sional standards and guidelines necessitates staying
current with the relevant research. For example,
Carr et al. (2005) reported that the Personality
Assessment Inventory (PAI; Morey, 1996, as
reported in Carr et al., 2005) failed to detect positive
self-presentation bias adequately in a sample of 164
parents completing capacity evaluations. This find-
ing suggests that caution should be used in consid-
ering the PAI for this type of assessment. However,
Boccaccini, Murrie, and Duncan (2006) reported
that the PAI Negative Impression Management scale
performed as well as the comparison scale (Minne-
sota Multiphasic Personality Inventory—2 [MMPI–2]
F scale) in screening for malingering in a sample of
defendants undergoing pretrial evaluations in fed-
eral criminal court. Although cross-validation of the
results of both studies is important for verifying
these conclusions, they underscore the point that an
instrument may be appropriate for addressing the
referral question in one population yet not perform
adequately when the referral question changes and
the population differs. Thus, psychologists must pay
particular attention to the specific population char-
acteristics, context, and referral questions when
selecting test instruments.
Standards and guidelines specific to the type of
assessment required and population assessed pro-
vide guidance for the selection of appropriate instru-
ments. For example, the APA’s Guidelines for
Psychological Practice with Older Adults (2003) rec-
ommend an interdisciplinary approach to the assess-
ment of psychological functioning in older adults.
Such an approach facilitates consideration of medi-
cation effects and medical conditions on cognitive
and emotional functioning. Additional assessment
considerations pertinent to this population include
behavioral analyses to identify potential inappropri-
ate or harmful behaviors and interventions to
address these behaviors, and a repeated-measures
approach to distinguish between stable cognitive
and emotional characteristics versus characteristics
that are temporally or situation dependent.
The APA (2009) also has issued guidelines for
child custody evaluations. A custody evaluation is
requested most often when the dissolution of the
partner relationship is contentious. What is signifi-
cant about these evaluations is that the parental
assessment is from the perspective of the best psy-
chological interests of the child. The psychologist’s
role is to provide an impartial opinion that addresses
the ability of the parent to provide caretaking con-
sistent with the child’s best interests. This task
requires that professional opinions or recommenda-
tions are based on sufficient objective data to sup-
port the psychologist’s conclusions (Martindale &
Gould, 2007). The assessment assists the court in
decision-making concerning the parent’s role
regarding the physical care, access to, and legal
decision-making for the child (APA, 2009).
Parental capacity assessments often are requested
in juvenile dependency cases to determine whether
a parent’s mental health concerns are so severe and
incapacitating that the parent cannot safely parent
the child or the parent is unable to benefit from ser-
vices to mitigate the risk of future abuse or neglect
of the child. Such assessments require not only req-
uisite education, training, and experience in assess-
ing serious mental illness, including character
pathology, but an understanding of judicial and
administrative regulations and timelines. Relevant
guidelines include the Guidelines for Psychological
Evaluations in Child Protection Matters (APA, 2011)
and the Specialty Guidelines for Forensic Psychology
(APA, in press). Additional information concerning
legal issues in clinical and counseling testing and
assessment is provided in Chapter 6, this volume.
Information Evaluation
Information evaluation refers to the interpretation
of the assessment data (Weiner, 2003). Accurate
interpretation of testing data requires that the
psychologist interpret instrument responses and
scores according to the test developer’s instructions.
The general standards and guidelines applicable to
conducting psychological assessments across set-
tings and the interpretation of test data include the
Standards for Educational and Psychological Testing
(American Educational Research Association, APA,
& National Council on Measurement in Education,
1999, currently under revision) and the Ethical Prin-
ciples of Psychologists and Code of Conduct (APA, 2010).
The psychologist should consult additional relevant
professional standards and guidelines on the basis of
the referral source, assessment context, and client
characteristics.
An evaluation of the assessment data involves
more than scoring and interpreting the instruments
administered during the data collection phase of the
assessment. The evaluation of assessment data
requires a critical evaluation and synthesis of the
testing data with the collateral data within the con-
text of the specific referral: the reason for the assess-
ment, the referral source, and referral questions
(APA, 2010). Ideally, the psychological assessment
utilizes a multidimensional, multisource approach
(Allen, 2002; Lachar, 2003) consistent with the
multitrait–multimethod matrix developed for con-
struct validation by Campbell and Fiske (1959). A
multidimensional, multisource approach entails
obtaining formal collateral data by persons close to
the client (e.g., family, teacher, probation officer, pro-
tective services worker) by means of interview,
records, or standardized instruments. Mental health
records, school report cards, court reports, and crimi-
nal history logs are examples of collateral records.
The clinical interview of the client and behavioral
observations during the assessment process are addi-
tional important sources of data. All of these data pro-
vide both convergent and divergent data that can be
integrated, synthesized, and summarized to address
the referral question. Disconfirming data are particu-
larly useful for guarding against the influence of bias
and in assisting in the development of an objective
conceptualization of the client (Meyer et al., 2001).
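The convergent and divergent logic of the multitrait–multimethod approach can be illustrated with a brief sketch: scores on the same trait obtained by different methods should correlate highly (convergent validity), whereas scores on different traits should not (discriminant validity). The trait names and score vectors below are invented solely for illustration:

```python
import numpy as np

# Hypothetical scores for eight clients: anxiety measured by two methods
# (self-report inventory, clinician rating) plus a distinct trait (sociability)
self_report_anx = np.array([12, 20, 8, 15, 22, 10, 18, 14])
clinician_anx   = np.array([11, 19, 9, 14, 21, 10, 17, 13])
self_report_soc = np.array([20, 22, 21, 16, 19, 23, 18, 21])

def r(a, b):
    """Pearson correlation between two score vectors."""
    return float(np.corrcoef(a, b)[0, 1])

convergent = r(self_report_anx, clinician_anx)      # same trait, different methods
discriminant = r(self_report_anx, self_report_soc)  # different traits

print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```

In a full multitrait–multimethod matrix every trait–method combination is correlated with every other; the pattern, not any single coefficient, supports or undermines construct validity.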
The clinical interview. The clinical interview is a central component of the psychological assessment.
An unstructured clinical interview allows the psy-
chologist to obtain psychosocial history, psychiatric
symptomatology, and the perceived rationale for
the assessment from the client’s perspective. These
data reflect the client’s particular perspective and
can be compared with test data and collateral infor-
mation to assess consistency or divergence across
data sources. However, if collateral data are scant or
missing, an unstructured interview loses the value
of reflecting the client’s perspective as clinically
relevant information. The unstructured interview
may not query symptomatology in a systematic
manner. Structured and semistructured interview
formats typically include critical diagnostic criteria
to facilitate differential diagnosis. Client symptoms
are assessed and scores are compared against norma-
tive data. However, semistructured and structured
interviews still rely on client self-report without the
ability to assess response style and test-taking atti-
tude. Thus, all three interview formats are subject to
distortion and response bias (Bagby, Wild, & Turner,
2003). Because of this shortcoming, the inclusion of formal testing in psychological assessments is recommended.
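The comparison against normative data described above is commonly carried out by converting raw scores to standard scores, such as the T-score metric (normative mean of 50, standard deviation of 10) used by many clinical instruments. A minimal sketch, with hypothetical normative values:

```python
def t_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a raw score to a T-score (normative mean 50, SD 10)."""
    z = (raw - norm_mean) / norm_sd  # standard score relative to the norm group
    return 50 + 10 * z

# Hypothetical norms for a symptom scale: mean 18, standard deviation 6
print(t_score(30, 18, 6))  # two SDs above the normative mean: prints 70.0
```

The resulting score is interpretable only against the normative sample from which the mean and standard deviation were derived, which is why the match between the client and the standardization sample matters.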
Behavioral observations. Another potentially important source of information is the psychologist's
careful description of client behavior, test-taking
attitude, interactive style, and any special needs that
necessitate accommodation or modification of the
assessment process or standardized testing proce-
dure. As Leichtman (2009) noted, these behavioral
observations can be a rich source of data. In spite of
this possibility, Leichtman noted that the behavioral
observations section of most assessment reports typ-
ically consists of just a few sentences, and training
in behavioral observation and reporting tends to be
given only superficial treatment in graduate training
and supervision. Additionally, despite its descrip-
tive name, the reporting of behavioral observations
is prone to subjectivity and bias, another reason
why this assessment component warrants care-
ful attention in training as well as self-monitoring
by the psychologist during the assessment process
(Leichtman, 2009). The psychologist’s interpretation
and documentation of client behaviors as well as
interactive style can be influenced in several ways,
such as lack of knowledge or misapplication of
base rates for that population and the level of train-
ing and competence in assessing clients from that
particular population. These topics are discussed
in more detail in the section General Assessment
Considerations.
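The consequence of misapplying base rates can be made concrete with Bayes' rule: even an accurate behavioral sign yields mostly false positives when the condition it signals is rare in the population being assessed. The sensitivity, specificity, and base-rate figures below are illustrative only:

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              base_rate: float) -> float:
    """Probability a positive indicator truly reflects the condition (Bayes' rule)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# An indicator with 90% sensitivity and 90% specificity looks impressive,
# but with a 2% base rate most of its positives are false positives
print(round(positive_predictive_value(0.90, 0.90, 0.02), 2))  # prints 0.16
```

In this illustration, fewer than one in six clients flagged by the sign would actually have the condition, which is why knowledge of population base rates is essential when interpreting observed behaviors.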
Information Output
Information output refers to the utilization of the
assessment data to derive conclusions and recom-
mendations that address the referral questions
(Weiner, 2003). Accurately synthesizing these data is
a complex process that requires critical thinking
skills; knowledge of psychological principles, guide-
lines, and standards related to testing and working
with diverse populations; and competence in devel-
oping an effective working alliance. These critical
thinking skills include an awareness of the relative
weight to give to clinical judgment versus actuarial or
statistical prediction rules in formulating one’s con-
clusions and guarding against various types of bias in
the interpretation and reporting of assessment data.
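A statistical prediction rule of the kind contrasted with clinical judgment above can be as simple as a unit-weighted count of predetermined indicators scored against a fixed, empirically derived cutoff. The indicator names and cutoff below are invented for illustration and do not represent any established instrument:

```python
def actuarial_risk(indicators: dict[str, bool], cutoff: int = 3) -> str:
    """Unit-weighted actuarial rule: count present indicators, apply a fixed cutoff."""
    score = sum(indicators.values())  # True counts as 1, False as 0
    return "elevated" if score >= cutoff else "not elevated"

# Hypothetical indicator set for one client (names are illustrative only)
client = {
    "prior_episode": True,
    "recent_loss": True,
    "substance_use": False,
    "social_isolation": True,
    "impulsivity": False,
}
print(actuarial_risk(client))  # 3 of 5 indicators present: prints elevated
```

The rule's value lies precisely in its rigidity: the indicators and cutoff are fixed in advance by validation research, removing the case-by-case weighting in which clinical judgment is most prone to bias.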
GENERAL ASSESSMENT CONSIDERATIONS
There are general considerations that apply to all
three phases of the assessment process (information
input, information evaluation, and information out-
put). For this reason, awareness of these issues
guides an appropriate, objective assessment of the
client and mitigates the potential for inaccuracy in
assessment, synthesis, reporting, and recommenda-
tions. These issues include the potential for the
introduction of bias and moderator and mediator
variables that may influence the working alliance or
assessment validity. These two issues may affect any
or all of the three phases of the assessment process.
Bias
Test popularity may be considered a type of bias
because common usage perpetuates the mistaken
belief that an instrument is valid and reliable. For
example, the Thematic Apperception Test and other
projective techniques are used frequently in clinical
and forensic settings, although their use has been
seriously questioned (Hunsley & Mash, 2007). An
exception may be the Rorschach inkblot method,
which has received research support regarding
test protocol validity when compared with MMPI
protocols (Hiller, Rosenthal, Bornstein, Berry, &
Brunell-Neulieb, 1999). [The Assessment Process, by Sara Maltzman. Copyright American Psychological Association. Not for further distribution.] The use of the Rorschach in
clinical and forensic settings also has been endorsed
by the Society for Personality Assessment (SPA). A
thoughtful review of the relevant literature and dis-
cussion of the appropriate uses of the Rorschach can
be found in the 2005 SPA position statement.
Psychologists also should be aware of the poten-
tial for confirmatory bias, in which one selectively
attends to behaviors that are consistent with the psy-
chologist’s expectations or theoretical orientation.
These assumptions may be based on the client’s cul-
tural or clinical group membership (Sandoval,
1998). A closely related phenomenon is the avail-
ability bias, in which recent behavior or extreme,
vivid behavior is weighted more heavily and is more
influential than is warranted by its frequency or
clinical significance. These biases result in overinter-
pretation of assessment data and the potential for
overpathologizing the client’s behavior or presenta-
tion. Seeking out and evaluating sources of potential
divergent, as well as convergent (confirmatory), data
during the assessment process assists in guarding
against confirmatory and availability biases.
Theoretical orientation. The practitioner’s theo-
retical orientation influences the assessment process
in terms of instrument selection, questions asked
during the clinical interview, and interpretation of
client responses and assessment data (Craig, 2009).
For these reasons, the psychologist is encouraged
to consider the potential for bias. This potential is
particularly salient if the psychologist has a back-
ground in counseling or clinical mental health
and decides to develop competence in complet-
ing parental capacity or forensic risk assessments.
Theoretical orientation may guide the selection of
particular instruments (Lambert & Lambert, 1999).
Theoretical orientation or adherence to a particular
clinical model also may influence the psychologist’s
interpretation of test results, resulting in interpre-
tive error regarding diagnosis, etiology, or treatment
recommendations. Such errors were first described
by Rosenthal (1966) and constitute a phenomenon
distinct from experimenter expectancy because
they do not influence the client’s behavior. This
phenomenon also is distinct from test bias because
score differences may be statistically and clinically
significant (Reynolds & Ramsay, 2003). However,
this phenomenon may be associated with (a) the
failure to consider relevant base rates (Weiner,
2003); (b) environmental impressions, a bias that is
based on the particular assessment environment
within which the psychologist works (Weiner,
2003); or (c) failure to consider the client’s social
context, environment, and person–environment
interaction (Wright, Lindgren, & Zakriski, 2001).
Base rates. Base rate refers to the actuarial proba-
bility that a particular clinical phenomenon, such as
a particular diagnosis, will be present in a particular
population or assessment context. For example, psy-
chotic disorders are more prevalent in acute inpa-
tient psychiatric settings than in student counseling
centers. Bias is introduced when the psychologist
inadvertently, or consciously, erroneously applies a
base rate probability and fails to consider compet-
ing hypotheses or fails to conduct an appropriate
differential diagnosis when evaluating assessment
data (Weiner, 2003). Understanding the base rates
within a particular population also provides a con-
text for evaluating the sensitivity and specificity—
and, hence, clinical utility and predictive power—of
a particular instrument (Faust, Grimm, Ahern, &
Sokolik, 2010).
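The dependence of predictive power on base rate can be made concrete with Bayes's rule. The sketch below uses invented sensitivity, specificity, and prevalence figures, not values from any study cited in this chapter:

```python
# Positive predictive value (PPV): the probability that a client who
# screens positive actually has the disorder. The same instrument
# (identical sensitivity and specificity) yields very different PPVs
# in a high-base-rate inpatient unit versus a low-base-rate student
# counseling center. All figures below are illustrative only.

def ppv(base_rate, sensitivity, specificity):
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Hypothetical screen with sensitivity .85 and specificity .90:
inpatient = ppv(0.30, 0.85, 0.90)    # base rate 30% -> PPV ~ 0.78
counseling = ppv(0.02, 0.85, 0.90)   # base rate 2%  -> PPV ~ 0.15
```

Under these assumed figures, most positive screens in the low-base-rate setting are false positives, which is why applying the wrong base rate risks overpathologizing the client.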
Assessment of diverse populations. The validity
of assessment results generally and test scores in
particular may be attenuated when instruments are
used inappropriately cross-culturally. In addition to culture, ethnicity, and race, variables known to influence test results, and thereby warranting consideration when selecting instruments, include the client's primary language, socioeconomic status, and level of education (Gray-Little & Kaplan, 1998).
A starting point in developing cross-cultural
competence may be a self-assessment of one’s own
cultural membership(s). Hays (2008) articulated a
clear and structured process for this self-evaluation,
which can serve to identify potential biases as a first
step in the development of cross-cultural competen-
cies. Migration or immigration history, level of
acculturation, and acculturative stress are just three
areas of knowledge with which the psychologist
should be familiar (Acevedo-Polakovich et al.,
2007). When working with culturally diverse
clients, it is important for psychologists to be aware
of an instrument’s conceptual equivalence—that is,
the test’s ability to measure the same construct
across cultures in order to determine its validity for
use with a particular client population (Geisinger,
2003). This ability can be determined by comparing
evidence of construct validity collected in the
“host” language and culture with evidence of con-
struct validity collected in additional linguistic and
cultural populations (Geisinger, 2003). Because
psychological assessments go beyond test adminis-
tration and interpretation, Acevedo-Polakovich et al.
(2007) suggested “proactive steps” related to initial
training that were first offered by Hansen (2002, as
reported in Acevedo-Polakovich et al., 2007). These
suggestions were specific to the Latina/o population
but reflect general principles that could be applied
to working with other populations. They include
the need to (a) develop an understanding of Latina/o-
specific cultural variables, constructs, and syn-
dromes to promote accurate assessment and mitigate
the potential for misinterpreting culture-specific
beliefs or behaviors; (b) be familiar with instruments
of known, and acceptable, validity and reliability
with U.S. Latina/os; (c) interpret tests and complete
assessments that are consistent with, and relevant
to, Latina/o culture; and (d) provide test feedback in
a language and style that meet the needs of the
client.
The client’s personal history and context also
influence decision-making regarding the direction of
the clinical interview, types of collateral information
to collect, and appropriate testing (Comas-Diaz &
Grenier, 1998). For example, assessing newcomers
(refugees and asylum seekers) includes a careful but
nonthreatening querying of where the client came
from, when the client left his or her country of ori-
gin, and what was going on in that country at that
time. The responses to these questions provide a
context within which to evaluate the probability that
the client experienced torture and consequent men-
tal health symptomatology (Maltzman, 2004).
Sandoval (1998) made the following recommen-
dations to facilitate critical thinking and to guard
against bias, particularly when assessing clients from
diverse populations: (a) Identify one’s own precon-
ceptions in advance to better guard against their
influence, (b) ensure that conclusions are drawn
after careful consideration, (c) seek appropriate cul-
tural consultation to prevent the misinterpretation
of normal behaviors, and (d) ensure that careful
notes are taken to prevent reliance on memory.
Moderator and Mediator Variables
Moderator and mediator variables may influence
the assessment process in a manner similar to the
effects seen in counseling and psychotherapy. Mod-
erator variables include client and psychologist
expectations and attitudes about the assessment
process. Mediator variables include the behaviors
(covert and overt) and client–psychologist interac-
tion that occur during the assessment (Hill & Wil-
liams, 2000). Both moderator (input) and mediator
(process) variables influence the development of
rapport and thus can influence the assessment pro-
cess and the validity of the collected data and data
interpretation.
Developing and maintaining rapport and an
effective working alliance is critical to facilitating
the assessment process. Despite this necessity, the
psychologist has limited time within which to estab-
lish a working relationship with the client that pro-
motes cooperation, motivation, and forthrightness
in the assessment process.
Client factors. The client’s affective state can
influence testing and self-report. Anxiety or fear
about the testing process may negatively affect
attention and concentration and may contribute
to mistakes and accidental random responding. In
their description of obstacles to establishing rap-
port from the client’s perspective, Lerner and Lerner
(1998) described Schafer’s (1954, as cited in Lerner
& Lerner, 1998) observation that the assessment
process requires the client to cede control over what
information to hold private and allows intrusiveness
by the psychologist without the establishment of a
requisite level of trust. The assessment context also
can influence the client’s approach to participating
in the assessment. For example, clients may attempt
to minimize symptoms to facilitate discharge from
the hospital (Bagby et al., 1997) or present with a
defensive style in forensic settings, such as parental
custody evaluations (e.g., Bagby, Nicholson, Buis,
Radovanovic, & Fidler, 1999). Traumatized clients
may experience the assessment as inherently stressful. They may minimize or deny symptoms in an attempt to avoid remembering and discussing the traumatic events, resulting in underreporting during the clinical interview and suppressed test scores (Briere, 2004).
Psychologist factors. The psychologist is chal-
lenged to engage the client quickly and effectively
to promote a collaborative, nondefensive style. In
forensic settings, this goal may be difficult to achieve
because of the investigative nature of forensic assess-
ments that necessitates a probing, neutral stance in
comparison with a more supportive, collaborative
role appropriate for a clinical setting (Craig, 2009).
In clinical contexts, development of a collaborative
working alliance may be impeded if the psychologist
is perceived as too distant or inappropriately sym-
pathetic (Briere, 2004). Creed and Kendall (2005)
identified therapist variables associated with a posi-
tive alliance in therapy with children. These factors
included a collaborative stance (in which the thera-
pist encouraged child involvement), not pushing the
child to talk when the child was not ready to do so,
and emphasizing common ground. Although these
variables predicted child ratings of the strength of
the therapeutic relationship early in therapy, they
did not predict therapist ratings (Creed & Kendall,
2005). This finding suggests that therapists may not
be sufficiently sensitive to client responses and reac-
tions in therapy that may mediate the working rela-
tionship. These same variables and processes also
may be present in and affect the assessment process
with children and youths.
As noted earlier, allowing insufficient time to
develop a collaborative working relationship and
pushing prematurely or inappropriately for informa-
tion are two psychologist-related variables that may
negatively affect the assessment process. Perhaps
these behaviors are due, at least in part, to the pres-
sure that psychologists feel to obtain the necessary
and sufficient data to answer the referral questions
(Lerner & Lerner, 1998). This pressure may feel
more acute when the assessment is initiated by a third-party referral source that is also the payer and the assessment is time sensitive.
Clinical Judgment, Actuarial Prediction,
and Utilization of Empirical Guidelines
Meehl’s 1954 monograph was the first description
of the equivalence or superiority of actuarial predic-
tion in comparison with clinical judgment. Garb
(2003) described actuarial prediction as decision
rules that are based on empirical data. Actuarial
prediction is equivalent to statistical prediction
when the latter refers to mathematical equations
that are based on empirical data (Garb, 2003). The
superiority of actuarial prediction has been con-
firmed consistently in research, particularly in
forensic settings (Ægisdóttir et al., 2006; Garb,
2003). Applied to the assessment process, actuarial
prediction is consistent with the utilization of
empirical guidelines for deriving assessment con-
clusions. Weiner (2003) described empirical guide-
lines as the utilization of decision rules “derive[d]
from the replicated results of methodologically
sound research” (p. 12). Applying these rules facili-
tates objective decision-making and mitigates the
potential for biases. Empirical guidelines, including
the application of appropriate cutoff scores applied
within the particular referral context, also mitigate
the potential for false-positive or false-negative con-
clusions (Weiner, 2003). The adoption of an empir-
ical approach also assists in guarding against the
influence of confirmatory and personal biases in
clinical and counseling settings (Garb, 2003; Heilb-
run, DeMatteo, Marczyk, & Goldstein, 2008;
Strohmer & Arm, 2006). Despite these findings,
psychologists have tended to resist adoption of an
empirical approach to assessment and diagnosis
(Graham & Naglieri, 2003).
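One way to picture an empirical decision rule of the kind Weiner (2003) describes is a cutoff score applied to a validation sample. The scores, case statuses, and cutoffs below are fabricated purely for illustration:

```python
# Applying a cutoff score as an explicit decision rule makes the
# trade-off between false positives and false negatives visible:
# lowering the cutoff catches more true cases but flags more
# non-cases. Data are invented (score, truly_disordered) pairs.

sample = [(55, False), (60, False), (62, False), (66, True),
          (68, False), (70, True), (74, True), (80, True)]

def errors(sample, cutoff):
    false_pos = sum(1 for score, disordered in sample
                    if score >= cutoff and not disordered)
    false_neg = sum(1 for score, disordered in sample
                    if score < cutoff and disordered)
    return false_pos, false_neg

lenient = errors(sample, 65)   # (1, 0): one false positive, no misses
strict = errors(sample, 72)    # (0, 2): no false positives, two misses
```

Which trade-off is acceptable depends on the referral context, which is why the chapter stresses applying cutoff scores within the particular referral context rather than mechanically.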
This perceived resistance has been attributed to
two primary considerations that reflect an apparent
scientist–practitioner split: (a) the need to ensure
the construct validity of clinical diagnoses in clinical
research versus the time and resource limitations
encountered by the clinician in practice and (b) the
suboptimal utility of the Diagnostic and Statistical
Manual of Mental Disorders (4th ed.; DSM–IV) to
facilitate treatment planning versus the paramount
need for clinical utility of an assessment in treatment
settings (Mullins-Sweatt & Widiger, 2009). What
do not appear to have consistent support in the
literature are the hypotheses that practitioners are
reluctant to adopt empirically derived assessment
practices because of philosophical differences or
that practitioners believe that empirically derived
diagnoses are simplistic or invalid (Widiger &
Samuel, 2009).
Conversely, researchers have acknowledged that
the psychological assessment of an adult or child in
a clinical mental health setting must address diag-
nostic clarification for the purpose of treatment
planning, prediction of response to treatment, and
prognosis for future level of functioning (Bagby et al.,
2003; Lachar, 2003). In other words, the clinical
utility of the assessment is paramount (Mullins-
Sweatt & Widiger, 2009). Despite the research and
general consensus supporting the superiority of
empirically based assessment, formal psychological
testing and structured or semistructured interviews
are not always utilized in clinical practice (Widiger
& Samuel, 2009). Failure to use standardized
assessment procedures potentially compromises the
validity and reliability of the resulting clinical diag-
noses. This possibility is magnified if, as reported,
clinicians do not consistently and routinely adhere
to DSM–IV diagnostic criteria when utilizing an
unstructured interview format (Mullins-Sweatt &
Widiger, 2009; Widiger & Samuel, 2009). Such
lapses may occur because the client’s self-report
may not be candid or because the clinician may not
adequately query the client. For this reason, there is
an increased risk that the assessment will be com-
promised, resulting in a diagnosis (or diagnoses)
that does not fully describe the client’s presentation
and functioning. The resulting diagnoses may, in
turn, result in inappropriate or inadequate treat-
ment. In particular, failure to assess for the presence
of personality disorder or maladaptive personality
traits may compromise not only appropriate treat-
ment but also the accuracy of the predicted
response to treatment and posttreatment prognosis
(Widiger & Samuel, 2009). Widiger and Samuel
suggested a tiered approach to the assessment of
personality disorder to bridge this schism. The ini-
tial tier would be administration of a self-report
inventory, such as the MMPI–2–RF (Restructured Form; Ben-Porath & Tellegen, 2008) or the
MCMI-III (Millon, 1994), which would be followed
by a semistructured interview targeting personality
traits identified as maladaptive through the self-report
inventory. The goal of this tiered approach is to
shorten the semistructured interview to target more
carefully the personality traits that appear most
salient, thus saving the practitioner time. Whether
this approach is disseminated and adopted within
the practice community remains to be seen. How-
ever, a potential obstacle to this approach may be
the reluctance of third-party payers to reimburse for
any testing or low reimbursement rates when test-
ing is authorized.
Therefore, rather than a philosophical reluc-
tance, it may be that reimbursement and resource
issues are primary factors contributing to practitio-
ners’ reluctance to implement empirical assessment
approaches.
EMERGING TRENDS
Multiple factors, including the mental health con-
sumer movement, government oversight, and reim-
bursement policies of third-party payers, have
contributed to the call for psychology to demon-
strate that its services are cost effective, are measur-
able, and benefit clients in tangible ways. Three
emerging trends in assessment are particularly
salient within this context: assessing psychosocial
functioning, assessing outcomes, and utilizing the
assessment as treatment.
Assessment of Psychosocial Functioning
Over the past 20 years, there has been increasing
emphasis within clinical settings to assess the cli-
ent’s psychosocial functioning in addition to psychi-
atric symptomatology. Psychosocial functioning
includes assessment of the client’s hobbies, leisure
activities, and pursuit of values that are hypothe-
sized to contribute to psychological and subjective
well-being (Robbins & Kliewer, 2000). Thus, psy-
chosocial functioning as a construct is expanded to
include the assessment of self-enhancing activities in
addition to traditional areas of basic functioning
such as activities of daily living, interpersonal rela-
tionships, and participation in work or school. This
conceptualization of psychosocial functioning more
clearly articulates the assessment of client strengths
in addition to deficits. This strengths-based
approach is the result of several converging areas of
research and public policy, including the following:
■■ the mental health consumer movement (e.g.,
Campbell & Leaver, 2003; Pulice & Miccio,
2006),
■■ the rise of the biopsychosocial model in psychol-
ogy (e.g., Maltzman, 2012), and
■■ developmental research in the physiological and
psychosocial bases of resilience (e.g., Greenberg,
2006; Werner, 2005).
Ro and Clark (2009) described their initial
efforts to clarify the construct of psychosocial func-
tioning. The goal of the factor analysis was to initi-
ate the development of a psychometrically sound
instrument that could be used to assess the psycho-
social deficits associated with DSM Axis I and Axis II
psychopathology. A community sample (N = 429)
that included almost equivalent numbers of students
and nonstudent residents completed measures
assessing quality of life, daily functioning, and per-
sonality functioning. Two principal-axis factor anal-
yses with promax rotation were conducted that
included measures of functioning across a variety of
domains and with varying levels of specificity and
breadth. The first factor analysis excluded the two
measures of personality functioning, the Measure of
Disordered Personality and Functioning (MDPF;
Parker et al., 2004, as cited in Ro & Clark, 2009)
and the Severity Indices of Personality Problems
(SIPP; Verheul et al., 2008, as cited in Ro & Clark,
2009). By excluding and then including these mea-
sures, these investigators were able to explore
whether personality functioning, as defined by these
instruments, improved the factor solution. A four-
factor solution, which included these personality
functioning measures, yielded the most psychologi-
cally interpretable solution (Ro & Clark, 2009). The
resulting four dimensions reflected Basic Function-
ing (activities of daily living and microlevel func-
tioning), Well-Being (subjective sense of well-being,
satisfaction, and high social functioning), and two
factors on which the MDPF and SIPP loaded: Self-
Mastery (impulsivity, inability to learn from experi-
ence, and lack of self-control) and Interpersonal
and Social Relationships (lack of empathy or caring
for others, difficulty fitting in socially). These two
personality functioning measures were interpreted
by these investigators as reflecting social and envi-
ronmental functioning associated with personality
traits. Ro and Clark noted that they could only
include general measures of psychosocial function-
ing that were applicable across a range of client
populations.
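As a rough sketch of the extraction step in the kind of principal-axis factor analysis Ro and Clark conducted (omitting the promax rotation, and run on made-up data rather than their measures), the communalities on the diagonal of the correlation matrix are re-estimated iteratively:

```python
import numpy as np

# Toy principal-axis factoring: extract k factors from a correlation
# matrix R by iteratively re-estimating the communalities placed on
# the diagonal. A simplified sketch, not the analysis pipeline Ro and
# Clark (2009) actually used.

def principal_axis(R, k, n_iter=50):
    R = R.copy()
    # initial communality estimates: largest off-diagonal |r| per row
    comm = np.max(np.abs(R - np.eye(len(R))), axis=1)
    for _ in range(n_iter):
        Rh = R.copy()
        np.fill_diagonal(Rh, comm)              # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rh)
        idx = np.argsort(vals)[::-1][:k]        # top-k eigenpairs
        L = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
        comm = np.sum(L**2, axis=1)             # updated communalities
    return L                                     # variables x factors loadings
```

With a clean two-factor toy structure, the loadings returned by this loop reproduce the off-diagonal correlations closely; real data, like Ro and Clark's, additionally require a rotation (e.g., promax) to make the factors interpretable.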
The growing emphasis on psychosocial function-
ing reflects the growing imperative to demonstrate
the clinical utility of the assessment, defined as the
ability to demonstrate that the assessment “makes a
difference with respect to the accuracy, outcome, or
efficiency of clinical activities” (Hunsley & Mash,
2007, p. 45). This imperative has been an impetus
for developing assessment instruments with ade-
quate external validity to ensure that assessment
results reflect the client’s capacity to function in
“real-world” settings (Kubiszyn et al., 2000). Neuro-
psychologists have acknowledged this need as their
field has shifted from an emphasis on descriptive
diagnosis toward clarifying functional capacity and
recommending specific rehabilitative interventions
(Rabin, Burton, & Barr, 2007). In particular, there is
increased emphasis on ensuring instrument ecological validity, defined as the generalizability of test results assessed in a controlled setting to the actual skill sets required in daily living (Rabin et al., 2007).
A potential advantage of developing and utilizing
ecologically oriented instruments (EOIs) is that they
could minimize the potential for the misinterpreta-
tion of test scores on the basis of client variables
known to influence neuropsychological test results.
The confluence of three factors—(a) the growing
emphasis on psychosocial functioning, (b) the emer-
gence of EOIs in neuropsychology, and (c) the
acknowledgment of the superiority of actuarial and
evidence-based assessment measures—may provide
the impetus to look beyond self-report instruments
in clinical psychology toward the development of
more ecologically valid assessments of psychological
functioning.
Assessment as Treatment
As noted earlier in this chapter, the assessment con-
text as well as psychologist-related and client-related
variables can influence the establishment of rapport
and the working alliance. In clinical settings, the
psychological assessment is often the precursor to
treatment. One consistent finding in psychotherapy
process and outcomes research is that a strong posi-
tive working alliance established early in therapy
correlates with a decreased probability of early ter-
mination and predicts achievement of treatment
goals and positive therapy outcomes (Hilsenroth &
Cromer, 2007). Extrapolating from these findings,
Finn and colleagues (e.g., Finn & Tonsager, 1997)
developed the Therapeutic Model of Assessment
(TMA), the goal of which is to use the assessment
process as a treatment intervention. For a detailed
treatment of therapeutic assessment, readers should
consult Chapter 26, this volume.
The TMA integrates the multimethod approach
to information gathering with an empathic, collab-
orative approach in which the test feedback session
becomes an intervention: “The major goal is for
clients to leave their assessments having had new
experiences or gained new information about them-
selves that subsequently helps them make changes
in their lives” (Finn & Tonsager, 1997, p. 378). This
client-empowering, collaborative, strengths-based
approach to clinical assessment is consistent with
counseling psychology’s historical approach to voca-
tional, career, and personal counseling (Delworth,
1977; Fretz, 1985; Super, 1955). In the TMA, the
assessment and, particularly, the test feedback ses-
sion become the first phase of treatment. Because
the TMA facilitates treatment by means of the
assessment process, it may be viewed favorably by
third-party payers who otherwise might be reluctant
to preauthorize and pay for a formal psychological
assessment. The TMA assumes that the same psy-
chologist conducts the assessment and provides the
therapy. In some clinical settings, this may be appro-
priate from an ethical perspective (APA, 2010). In
other settings and contexts, particularly forensic set-
tings, the provision of assessment and treatment by
the same psychologist could be considered a viola-
tion of professional standards (APA, 2010, in press).
Assessing Outcomes
With the advent of managed health care and time-
limited treatments, there is increased interest on the
part of third-party payers for psychologists to dem-
onstrate the clinical utility of the psychological
assessment to justify its cost (Hunsley & Mash,
2005). The public sector (i.e., government agencies)
and the private sector (behavioral health care insur-
ance companies) have increased the pressure on
mental health professionals to demonstrate the
effectiveness of their treatments and interventions
(e.g., APA Practice Directorate, 2007; Cavaliere,
1995). This pressure is not likely to abate as finan-
cial resources dwindle and public scrutiny regarding
the expenditure of government money increases.
Although these external bodies are cited as the
sources of this pressure, psychology as a profession
also historically has demanded that services demon-
strate effectiveness to justify reimbursement and
inclusion in national health care initiatives. These
pressures, from outside and within psychology, were
a significant impetus for the development of treat-
ment outcomes research (Maltzman, 2012).
Hill and Corbett (1993) defined outcomes as the
changes that result, either directly or indirectly,
from the treatment utilized in counseling or psycho-
therapy. Assessment instruments that can monitor
progress in treatment as well as address the referral
question have fundamental advantages over instru-
ments that can be used as part of the assessment but
whose cost, time, length, or other factors preclude
their use over the course of treatment. Instruments that can be used as repeated measures not only track individual client progress over time but also can facilitate continuous quality improvement efforts at the organizational or system level by aggregating and analyzing data across clients. The assessment of outcomes necessitates a multimodal approach to ensure that the clinically salient variables targeted in treatment are adequately assessed and that measures are sufficiently sensitive to detect change over time (Lambert & Lambert, 1999).
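A minimal sketch of this dual use, with invented session-by-session symptom ratings (lower scores = better) for hypothetical clients:

```python
# Repeated administrations of a brief outcome measure support both
# uses noted in the text: tracking each client's progress and, when
# aggregated, program-level continuous quality improvement. All
# client names and scores below are fabricated.

sessions = {
    "client_a": [28, 24, 20, 15],
    "client_b": [31, 30, 27, 26],
    "client_c": [22, 18, 16, 11],
}

def improvement(scores):
    # positive value = symptom reduction from intake to last session
    return scores[0] - scores[-1]

per_client = {name: improvement(s) for name, s in sessions.items()}
program_mean = sum(per_client.values()) / len(per_client)
```

The per-client values serve the individual monitoring function; the aggregate serves the organizational one.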
Historically, the assessment of outcomes has
focused on Axis I clinical disorders, which exclude
personality disorders and mental retardation (Ameri-
can Psychiatric Association, 2000), and areas of func-
tioning compromised by these disorders. However,
development of instruments for assessing personality functioning, such as capacity for empathy and tendency toward impulsivity, and its change over time would be enormously helpful in mitigating risk
in forensic settings as well as for potentiating treat-
ment of Axis I disorders in clinical and counseling
settings (Widiger & Samuel, 2009). For additional
discussion of risk assessment in forensic settings,
readers are directed to Chapter 16, this volume.
SUMMARY AND DISCUSSION
The assessment process historically has consisted of
a multimethod approach integrating interview, col-
lateral, and formal test data. Both clinical and coun-
seling psychology have brought strengths to the
process that are based on the historical populations
served by each discipline and referral questions
addressed. Clinical psychology introduced psycho-
logical testing and the multimethod approach for
the assessment of emotional disturbance in children.
Counseling psychology emerged to address the
vocational needs of these youths. Both disciplines
transitioned into the assessment of adults with clini-
cal psychology focusing on the assessment and treat-
ment of major psychopathology. Counseling
psychology historically has focused on the assess-
ment and treatment of life-associated stressors in
individuals functioning along the continuum of nor-
mal psychological functioning. Both specialties have
strong bases in empiricism and formal psychological
testing. Their historical convergence may be in the
assessment process itself. One emerging trend is the
increasing focus on psychologist–client collabora-
tion during the assessment, which essentially
becomes the initiation of treatment (e.g., Tharinger
et al., 2009). Counseling psychologists historically
have collaborated with clients, together reviewing
test data and their application to vocational and
career choices (Swanson & Gore, 2000). This
approach has naturally segued into personal coun-
seling for adjustment issues. Clinical psychology
appears to be adapting this approach to the process
of the clinical assessment for psychotherapy. The
assessment context and referral questions will deter-
mine the extent to which this collaborative approach
is appropriate. In most forensic settings, it may be
very limited or inconsistent with applicable profes-
sional standards and guidelines.
Balancing the use of subjective sources of data
(e.g., the clinical interview and most self-report
instruments) and objective sources of data (e.g.,
behavioral analyses, test instruments with validity
scales) is a topic of continuing discussion and varied
practice. Ensuring that multiple methods are used
for data collection helps guard against the introduc-
tion of biases that can occur if subjective data
sources predominate. Adherence to professional
standards and guidelines, education and training in
assessing diverse populations, and awareness of vari-
ous sources of bias also facilitate an assessment pro-
cess that results in a data synthesis and report that
can objectively address the referral questions.
References
Acevedo-Polakovich, I. D., Reynaga-Abika, G., Garriott,
P. O., Derefinko, K. J., Wimsatt, M. K., Gudonis,
L. C., & Brown, T. L. (2007). Beyond instrument
selection: Cultural considerations in the psycho-
logical assessment of U.S. Latinas/os. Professional
Psychology: Research and Practice, 38, 375–384.
doi:10.1037/0735-7028.38.4.375
Ægisdóttir, S., White, M. J., Spengler, P. M., Maugherman,
A. S., Anderson, L. A., Cook, R. S., . . . Rush, J. D.
(2006). The meta-analysis of clinical judgment
project: Fifty-six years of accumulated research on
clinical versus statistical prediction. The Counseling
Psychologist, 34, 341–382. doi:10.1177/0011000005285875
Allen, J. B. (2002). Treating patients with neuropsychologi-
cal disorders: A clinician’s guide to assessment and
referral. Washington, DC: American Psychological
Association.
American Educational Research Association, American
Psychological Association, & National Council
on Measurement in Education. (1999). Standards
for educational and psychological testing (3rd ed.).
Washington, DC: American Educational Research
Association.
American Psychiatric Association. (2000). Diagnostic and
statistical manual of mental disorders (4th ed., text
revision). Washington, DC: Author.
American Psychological Association. (2003). Guidelines
for psychological practice with older adults.
Washington, DC: Author. Retrieved from http://
www.apa.org/practice/guidelines/older-adults
American Psychological Association. (2009). Guidelines
for child custody evaluations in family law proceed-
ings. Washington, DC: Author. Retrieved from http://
www.apa.org/practice/guidelines/child-custody
American Psychological Association. (2010). Ethical
principles of psychologists and code of conduct (2002,
amended June 1, 2010). Washington, DC: Author.
Retrieved from http://www.apa.org/ethics/code/index.aspx
Copyright American Psychological Association. Not for further distribution.
The Assessment Process
American Psychological Association. (2011). Guidelines
for psychological evaluations in child protection mat-
ters. Washington, DC: Author. Retrieved from http://
www.apa.org/practice/guidelines/child-protection
American Psychological Association. (in press). Specialty
guidelines for forensic psychology. Washington, DC:
Author.
American Psychological Association Practice Directorate.
(2007). APA group to propose pay-for-performance
policy. Monitor on Psychology, 38(4), 33. Retrieved
from http://psycnet.apa.org/psycextra/599732
007-024
Armstrong, P. I., & Rounds, J. B. (2008). Vocational psy-
chology and individual differences. In S. D. Brown &
R. W. Lent (Eds.), Handbook of counseling psychology
(4th ed., pp. 375–391). Hoboken, NJ: Wiley.
Bagby, R. M., Nicholson, R. A., Buis, T., Radovanovic,
H., & Fidler, B. J. (1999). Defensive respond-
ing on the MMPI-2 in family custody and access
evaluation. Psychological Assessment, 11, 24–28.
doi:10.1037/1040-3590.11.1.24
Bagby, R. M., Rogers, R., Nicholson, R. A., Buis, T.,
Seeman, M. V., & Rector, N. A. (1997). Effectiveness
of MMPI–2 validity indicators in the detection
of defensive responding in clinical and nonclini-
cal samples. Psychological Assessment, 9, 406–413.
doi:10.1037/1040-3590.9.4.406
Bagby, R. M., Wild, N., & Turner, A. (2003).
Psychological assessment in adult mental health
settings. In J. R. Graham & J. A. Naglieri (Eds.),
Handbook of psychology: Vol. 10. Assessment psychol-
ogy (pp. 213–234). Hoboken, NJ: Wiley.
Baker, D. B. (1988). The psychology of Lightner
Witmer. Professional School Psychology, 3, 109–121.
doi:10.1037/h0090552
Baker, D. B. (2002). Child saving and the emergence
of vocational psychology [Abstract]. Journal of
Vocational Behavior, 60, 374–381. doi:10.1006/jvbe.
2001.1837
Ben-Porath, Y. S., & Tellegen, A. (2008). MMPI–2–RF:
Manual for administration, scoring, and interpretation.
Minneapolis, MN: University of Minnesota Press.
Boccaccini, M. T., Murrie, D. C., & Duncan, S. A.
(2006). Screening for malingering in a criminal-
forensic sample with the Personality Assessment
Inventory. Psychological Assessment, 18, 415–423.
doi:10.1037/1040-3590.18.4.415
Briere, J. (2004). Psychological assessment of adult post-
traumatic states: Phenomenology, diagnosis, and
measurement (2nd ed.). Washington, DC: American
Psychological Association. doi:10.1037/10809-000
Butcher, J. N. (2009). Overview and future directions. In
J. N. Butcher (Ed.), Oxford handbook of personality
assessment (pp. 707–718). New York, NY: Oxford
University Press.
Butcher, J. N., Cabiya, J., Lucio, E., & Garrido, M. (Eds.).
(2007). The challenge of assessing clients with
different cultural and language backgrounds. In
Assessing Hispanic clients using the MMPI–2 and
MMPI–A (pp. 3–23). Washington, DC: American
Psychological Association. doi:10.1037/11585-001
Campbell, D. T., & Fiske, D. W. (1959). Convergent
and discriminant validation by the multitrait-
multimethod matrix. Psychological Bulletin, 56, 81–
105. doi:10.1037/h0046016
Campbell, J., & Leaver, J. (2003). Emerging new practices
in organized peer support. Retrieved from http://www.
nasmhpd.org/nasmhpd_collections/collection5/
publications/ntac_pubs/reports/peer%20support%20
practices%20final
Carr, G. D., Moretti, M. M., & Cue, B. J. H. (2005).
Evaluating parenting capacity: Validity problems
with the MMPI–2, PAI, CAPI, and ratings of child
adjustment. Professional Psychology: Research and
Practice, 36, 188–196. doi:10.1037/0735-7028.
36.2.188
Cavaliere, F. (1995). Measuring outcomes: Payers
demand increased provider documentation. APA
Monitor, 26(10), 41.
Comas-Díaz, L., & Grenier, J. R. (1998). Migration and
acculturation. In J. Sandoval, C. L. Frisby, K. F.
Geisinger, J. D. Scheuneman, & J. R. Grenier (Eds.),
Test interpretation and diversity: Achieving equity
in assessment (pp. 213–239). Washington, DC:
American Psychological Association. doi:10.1037/
10279-008
Craig, R. J. (2009). The clinical interview. In J. N. Butcher
(Ed.), Oxford handbook of personality assessment (pp.
201–225). New York, NY: Oxford University Press.
Creed, T. A., & Kendall, P. C. (2005). Therapist alliance-
building behavior within a cognitive behavioral treat-
ment for anxiety in youth. Journal of Consulting and
Clinical Psychology, 73, 498–505. doi:10.1037/0022-
006X.73.3.498
Danish, S. J., & Antonides, B. J. (2009). What counseling
psychologists can do to help returning veterans. The
Counseling Psychologist, 37, 1076–1089. doi:10.1177/
0011000009338303
Delworth, U. (1977). Counseling psychology. The
Counseling Psychologist, 7, 43–45. doi:10.1177/
001100007700700219
Faust, D., Grimm, P. W., Ahern, D. C., & Sokolik, M. (2010). The admissibility of behavioral science evidence in the courtroom: The translation of legal to scientific concepts and back. Annual Review of Clinical Psychology, 6, 49–77.
Sara Maltzman
Fernández-Ballesteros, R. (1997). Guidelines for the
assessment process. European Psychologist, 2, 352–
355. doi:10.1027/1016-9040.2.4.352
Finn, S. E., & Tonsager, M. E. (1997). Information-
gathering and therapeutic models of assessment:
Complementary paradigms. Psychological Assessment,
9, 374–385. doi:10.1037/1040-3590.9.4.374
Fretz, B. R. (1985). Counseling psychology. In E. M.
Altmaier & M. E. Meyer (Eds.), Applied specialties
in psychology (pp. 45–73). New York, NY: Random
House.
Garb, H. N. (2003). Clinical judgment and mechanical
prediction. In J. R. Graham & J. A. Naglieri (Eds.),
Handbook of psychology: Vol. 10. Assessment psychol-
ogy (pp. 27–42). Hoboken, NJ: Wiley.
Geisinger, K. F. (2003). Testing and assessment in cross-
cultural psychology. In J. R. Graham & J. A. Naglieri
(Eds.), Handbook of psychology: Vol. 10. Assessment
psychology (pp. 95–117). Hoboken, NJ: Wiley.
Graham, J. R., & Naglieri, J. A. (2003). Current status
and future directions of assessment psychology. In
J. R. Graham & J. A. Naglieri (Eds.), Handbook of
psychology: Vol. 10. Assessment psychology (pp. 579–
592). Hoboken, NJ: Wiley.
Gray-Little, B., & Kaplan, D. A. (1998). Interpretation
of psychological tests in clinical and forensic evalu-
ations. In J. Sandoval, C. L. Frisby, K. F. Geisinger,
J. D. Scheuneman, & J. R. Grenier (Eds.), Test
interpretation and diversity: Achieving equity in assess-
ment (pp. 141–178). Washington, DC: American
Psychological Association. doi:10.1037/10279-006
Greenberg, M. T. (2006). Promoting resilience in children
and youth: Preventive interventions and their inter-
face with neuroscience. In B. M. Lester, A. Masten, &
B. McEwen (Eds.), Annals of the New York Academy
of Sciences: Vol. 1094. Resilience in children (pp. 139–
150). New York: New York Academy of Sciences.
Hays, P. A. (2008). Looking into the clinician’s mir-
ror: Cultural self-assessment. In P. A. Hays
(Ed.), Addressing cultural complexities in practice:
Assessment, diagnosis, and therapy (2nd ed., pp.
41–62). Washington, DC: American Psychological
Association. doi:10.1037/11650-003
Heilbrun, K., DeMatteo, D., Marczyk, G., & Goldstein, A.
M. (2008). Standards of practice and care in forensic
mental health assessment. Psychology, Public Policy,
and Law, 14, 1–26. doi:10.1037/1076-8971.14.1.1
Hill, C. E., & Corbett, M. M. (1993). A perspective on the
history of process and outcome research in counsel-
ing psychology. Journal of Counseling Psychology, 40,
3–24. doi:10.1037/0022-0167.40.1.3
Hill, C. E., & Williams, E. N. (2000). The process of indi-
vidual therapy. In S. D. Brown & R. W. Lent (Eds.),
Handbook of counseling psychology (3rd ed., pp. 670–
710). New York, NY: Wiley.
Hiller, J. B., Rosenthal, R., Bornstein, R. F., Berry,
D. T. R., & Brunell-Neulieb, S. (1999). A com-
parative analysis of Rorschach and MMPI validity.
Psychological Assessment, 11, 278–296. doi:10.1037/
1040-3590.11.3.278
Hilsenroth, M. J., & Cromer, T. D. (2007). Clinician
interventions related to alliance during the ini-
tial interview and psychological assessment.
Psychotherapy: Theory, Research, Practice, Training,
44, 205–218. doi:10.1037/0033-3204.44.2.205
Hunsley, J., & Mash, E. J. (2005). Introduction to the
special section on developing guidelines for the
evidence-based assessment (EBA) of adult disorders.
Psychological Assessment, 17, 251–255. doi:10.1037/
1040-3590.17.3.251
Hunsley, J., & Mash, E. J. (2007). Evidence-based assess-
ment. Annual Review of Clinical Psychology, 3, 29–51.
Kubiszyn, T. W., Meyer, G. J., Finn, S. E., Eyde, L. D.,
Kay, G. G., Moreland, K. L., . . . Eisman, E. J.
(2000). Empirical support for psychological assess-
ment in clinical health care settings. Professional
Psychology: Research and Practice, 31, 119–130.
doi:10.1037/0735-7028.31.2.119
Lachar, D. (2003). Psychological assessment in child
mental health settings. In J. R. Graham & J. A.
Naglieri (Eds.), Handbook of psychology: Vol. 10.
Assessment psychology (pp. 235–260). Hoboken, NJ:
Wiley.
Lambert, M. J., & Lambert, J. M. (1999). Use of psycho-
logical tests for assessing treatment outcome. In
M. E. Maruish (Ed.), The use of psychological testing
for treatment planning and outcomes assessment (2nd
ed., pp. 115–151). Mahwah, NJ: Erlbaum.
Leichtman, M. (2009). Behavioral observations. In J. N.
Butcher (Ed.), Oxford handbook of personality
assessment (pp. 187–200). New York, NY: Oxford
University Press.
Lerner, P. M., & Lerner, H. D. (1998). An experien-
tial psychoanalytic approach to the assessment
process. In J. W. Barron (Ed.), Making diagnosis
meaningful: Enhancing evaluation and treatment of
psychological disorders (pp. 247–266). Washington,
DC: American Psychological Association.
doi:10.1037/10307-009
Maltzman, S. (2004, July). Newcomer women: Co-morbid
mental health and physical health concerns. In K. L.
Norsworthy (Chair), Feminist perspectives in interna-
tional psychology: Building partnerships and creative
collaboration. Roundtable conducted at the 112th
Annual Convention of the American Psychological
Association, Honolulu, HI.
Maltzman, S. (2012). Process and outcomes in counseling
and psychotherapy. In E. M. Altmaier & J. C. Hansen
(Eds.), The Oxford handbook of counseling psychol-
ogy (pp. 95–127). New York, NY: Oxford University
Press.
Martindale, D. A., & Gould, J. W. (2007). Custody evalu-
ation reports: The case for empirically derived infor-
mation. Journal of Forensic Psychology Practice, 7,
87–99. doi:10.1300/J158v07n03_06
McGuire, F. L. (1990). Psychology aweigh! A history of
clinical psychology in the United States Navy, 1900–
1988. Washington, DC: American Psychological
Association. doi:10.1037/10069-001
Meara, N. M., & Myers, R. A. (1999). A history of
Division 17 (Counseling Psychology): Establishing
stability amid change. In D. A. Dewsbury (Ed.),
Unification through division: Histories of the divisions
of the American Psychological Association
(Vol. 3, pp. 9–41). Washington, DC: American
Psychological Association. doi:10.1037/10281-001
Meehl, P. E. (1954). Clinical vs. statistical prediction:
A theoretical analysis and a review of the evidence.
Minneapolis: University of Minnesota Press.
doi:10.1037/11281-000
Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G.,
Moreland, K. L., Dies, R. R., . . . Reed, G. M. (2001).
Psychological testing and psychological assess-
ment: A review of evidence and issues. American
Psychologist, 56, 128–165. doi:10.1037/0003-066X.
56.2.128
Millon, T. (1977). Millon Clinical Multiaxial Inventory.
Minneapolis, MN: National Computer Systems.
Millon, T. (1994). Millon Clinical Multiaxial Inventory—
III: Manual. Minneapolis, MN: Pearson Assessments.
Mullins-Sweatt, S. N., & Widiger, T. A. (2009). Clinical
utility and DSM–V. Psychological Assessment, 21,
302–312. doi:10.1037/a0016607
Ogloff, J. R., & Douglas, K. S. (2003). Psychological
assessment in forensic psychology. In J. R. Graham
& J. A. Naglieri (Eds.), Handbook of psychology:
Vol. 10. Assessment psychology (pp. 345–363).
Hoboken, NJ: Wiley.
Perlin, M. L., & McClain, V. (2009). “Where souls are
forgotten”: Cultural competencies, forensic evalua-
tions, and international human rights. Psychology,
Public Policy, and Law, 15, 257–277. doi:10.1037/
a0017233
Pulice, R. T., & Miccio, S. (2006). Patient, client, con-
sumer, survivor: The mental health consumer
movement in the United States. In J. Rosenberg
& S. Rosenberg (Eds.), Community mental health:
Challenges for the 21st century (pp. 7–14). New York,
NY: Routledge.
Rabin, L. A., Burton, L. A., & Barr, W. B. (2007).
Utilization rates of ecologically oriented instruments
among clinical neuropsychologists. The Clinical
Neuropsychologist, 21, 727–743. doi:10.1080/
13854040600888776
Reynolds, C. R., & Ramsay, M. C. (2003). Bias in psycho-
logical assessment: An empirical review and recom-
mendations. In J. R. Graham & J. A. Naglieri (Eds.),
Handbook of psychology: Vol. 10. Assessment psychol-
ogy (pp. 67–93). Hoboken, NJ: Wiley.
Ro, E., & Clark, L. A. (2009). Psychosocial functioning
in the context of diagnosis: Assessment and theoreti-
cal issues. Psychological Assessment, 21, 313–324.
doi:10.1037/a0016607
Robbins, S. B., & Kliewer, W. L. P. (2000). Advances in
theory and research on subjective well being. In S. D.
Brown & R. W. Lent (Eds.), Handbook of counseling
psychology (3rd ed., pp. 310–345). New York, NY:
Wiley.
Rosenthal, R. (1966). Interpretation of data. In R.
Rosenthal (Ed.), Experimenter effects in behavioral
research (pp. 16–26). New York, NY: Meredith.
Sandoval, J. (1998). Critical thinking in test interpreta-
tion. In J. Sandoval, C. L. Frisby, K. F. Geisinger, J.
D. Scheuneman, & J. R. Grenier (Eds.), Test inter-
pretation and diversity: Achieving equity in assess-
ment (pp. 31–49). Washington, DC: American
Psychological Association. doi:10.1037/10279-002
Society for Personality Assessment. (2005). The status
of the Rorschach in clinical and forensic settings:
An official statement by the Board of Trustees of
the Society for Personality Assessment. Journal of
Personality Assessment, 85, 219–237. doi:10.1207/
s15327752jpa8502_16
Strohmer, D. C., & Arm, J. R. (2006). The more things
change, the more they stay the same: Reaction to
Ægisdóttir et al. The Counseling Psychologist, 34,
383–390. doi:10.1177/0011000005285879
Super, D. E. (1955). Transition: From vocational guid-
ance to counseling psychology. Journal of Counseling
Psychology, 2, 3–9. doi:10.1037/h0041630
Swanson, J., & Gore, P., Jr. (2000). Advances in voca-
tional psychology theory and research. In S. D.
Brown & R. W. Lent (Eds.), Handbook of counseling
psychology (3rd ed., pp. 233–269). New York, NY:
Wiley.
Tharinger, D. J., Finn, S. E., Gentry, L., Hamilton, A.,
Fowler, J., Matson, M., . . . Walkowiak, J. (2009).
Therapeutic assessment with children: A pilot
study of treatment acceptability and outcome.
Journal of Personality Assessment, 91, 238–244.
doi:10.1080/00223890902794275
Weiner, I. B. (2003). The assessment process. In J. R.
Graham & J. A. Naglieri (Eds.), Handbook of psy-
chology: Vol. 10. Assessment psychology (pp. 3–25).
Hoboken, NJ: Wiley.
Werner, E. E. (2005). Resilience research: Past, present,
and future. In R. DeV. Peters, B. Leadbeater, & R.
J. McMahon (Eds.), Resilience in children, families,
and communities: Linking context to practice and
policy (pp. 3–11). New York, NY: Kluwer Academic/
Plenum.
Whiston, S. C., & Rahardja, D. (2008). Vocational
counseling process and outcome. In S. D. Brown &
R. W. Lent (Eds.), Handbook of counseling psychology
(4th ed., pp. 444–461). Hoboken, NJ: Wiley.
Widiger, T. A., & Samuel, D. B. (2009). Evidence-based
assessment of personality disorders. Personality
Disorders: Theory, Research, and Treatment, S(1),
3–17. doi:10.1037/1949-2715.S.1.3
Wright, J. C., Lindgren, K. P., & Zakriski, A. L. (2001).
Syndromal versus contextualized personality assess-
ment: Differentiating environmental and disposi-
tional determinants of boys’ aggression. Journal of
Personality and Social Psychology, 81, 1176–1189.
doi:10.1037/0022-3514.81.6.1176
1st Peer
The case I will focus on involves a 33-year-old male who must receive a proper diagnosis within 48 hours so that insurance will cover his care. The patient is seeking help because of suicidal intentions. There are other factors to acknowledge in order to help him: he is currently going through a divorce and fears losing his means of income. These major stressors have had a significant emotional impact, producing anger, sadness, stress, and agitation.
Before any diagnosis is made, an assessment is usually completed first. According to Carlson (2013), in clinical and counseling testing the interview is the first contact with the client and typically takes place within an hour. During this time, the professional seeks the patient's consent to provide further care. During the meeting, the professional also follows, and informs the patient of, the ethical principles and code of conduct (American Psychological Association, 2010).
After obtaining proper consent, the assessment techniques for this case would be an interview and a survey to help narrow down the diagnosis. For instance, survey items might ask the client to rate, on a scale from "never" to "often," how frequently he experiences particular situations and emotions. Given the client's openness and proper consent, a professional focusing on questions about emotions and reactions can determine what type of mood disorder he has. A cognitive-behavioral theoretical orientation would aid the diagnosis: the patient has suicidal intentions as well as other negative emotions that have affected him physically, which shows me that his concerns are not only cognitive but behavioral as well.
Within the timeframe given, a diagnosis can be achieved if all proper ethical principles and codes of conduct are followed. This assumes the patient consents to care and provides more information during the assessment. If the patient consents but provides only the information already given, rendering a proper diagnosis might be difficult; even so, key characteristics are present that meet criteria for a diagnosis such as depressive disorder. I believe it is ethical to render this diagnosis because the patient has reported suicidal intentions, which is a criterion for depression, along with the other emotions presented. This matters because when a person is contemplating suicide, it is the professional's responsibility to provide care. It is also justifiable to obtain third-party payment, because this patient meets the criteria for depression and, given his suicidal intentions, needs further care.
References
American Psychological Association. (2010). Standard 9: Assessment. Retrieved from http://www.apa.org/ethics/code/index.aspx?item=12
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.) [E-book]. Washington, DC: American Psychiatric Publishing.
Carlson, J. F. (2013). Clinical and counseling testing. In K. F. Geisinger, B. A. Bracken, J. F. Carlson, J.-I. C. Hansen, N. R. Kuncel, S. P. Reise, & M. C. Rodriguez (Eds.), APA handbook of testing and assessment in psychology, Vol. 2: Testing and assessment in clinical and counseling psychology. (pp. 3–17). American Psychological Association. https://doi.org/10.1037/14048-001
Maltzman, S. (2013). The assessment process. In APA handbook of testing and assessment in psychology, Vol. 2: Testing and assessment in clinical and counseling psychology. (pp. 19–34). American Psychological Association. https://doi.org/10.1037/14048-002
In your responses, evaluate whether your peer took into account the ethical guidelines outlined in the APA’s Ethical Principles of Psychologist and Code of Conduct when he or she assessed and diagnosed the client. Suggest additional questions your peer might ask the client. Propose an alternative diagnosis that might arise from the additional questions you have suggested.
2nd Peer
The Case of Charles
My client, Charles, is an African American male who is going through a divorce. He is concerned about the possibility of losing his job. My role as a psychologist is to provide Charles with a diagnosis. It is necessary to understand that Charles is experiencing a devastating blow from his divorce and feels his employment is in jeopardy. However, Charles is at my agency seeking help due to suicidal ideation.
Assessment
The purpose behind a psychological assessment is to find answers to questions. Those answers can concern any combination of Charles's intellectual, psychological, emotional, behavioral, or psychosocial functioning. From these answers, psychologists interpret test results and provide Charles with this information (Maltzman, 2013). Section 9.01 of the Ethical Principles of Psychologists and Code of Conduct states that psychologists base the opinions they disclose about a client's psychological characteristics on an examination adequate to support their statements or conclusions (APA, 2017).
For Charles, this assessment could be used to better evaluate him and determine his needs. By gaining more information about Charles's personality characteristics, symptoms, and problems, the psychologist can reach a better-founded conclusion and make more practical decisions regarding his behavior ("Psychological assessment," 2004). In my opinion, an interview and observational data could be used in an effort to assist Charles.
Review your case with the DSM-5 and make a tentative diagnosis.
As a psychologist, based on the information provided, I feel Charles could be diagnosed with depression or adjustment disorder. Depression is one of the most commonly diagnosed mental health disorders; in fact, almost one in ten individuals will be diagnosed with depression during their lifetime (Fowells, 2016). The depression Charles is experiencing can range anywhere from mild to moderate to severe (Tallent, 2012). For Charles, his divorce is a substantial risk factor for poor mental health (Zineldin, 2019), and it may also be fueling his fear of losing his employment. Counseling from a qualified psychologist will help Charles better cope with the stress of divorce and rebuild his life.
Charles could instead be diagnosed with adjustment disorder. Adjustment disorder is commonly diagnosed when an individual has experienced a stressful or traumatic event and thereafter shows an exaggerated response. Those with adjustment disorder may suffer impairment in day-to-day functioning, and in order to make the diagnosis a trigger must be identified (Casey et al., 2016). Adjustment disorder can be defined as a response to an identifiable stressor or stressors that results in functional impairment. It is linked to a higher risk of suicide, and those with the disorder also show higher rates of substance misuse; the disorder itself is extremely common (Wright, 2009). For Charles, the stressor or trigger can probably be linked to his divorce.
Review the timeline you have regarding insurance reimbursement
The scenario states that Charles has eight sessions to address his mental health. These eight sessions could provide Charles with the direction he needs moving forward. Charles is probably experiencing feelings of abandonment, hurt, and betrayal, and his divorce is a risk factor for, or trigger of, his depression.
Whether or not it is ethical to render a diagnosis within the required timeframe.
As a psychologist working for an agency, I am bound by a policy stating that an assessment and diagnosis must be rendered within 48 hours of a client's initial session. It would not be ethical to render a diagnosis for Charles within this timeframe; doing so would be a disservice to both my agency and Charles. In order to give Charles the correct diagnosis, more time for interviewing, observing, and assessing would be necessary.
Describe any additional information you would need to help formulate your diagnosis.
The following additional information could be used to formulate a diagnosis:
· Interviews with family, friends, and, if possible, the ex-spouse
· Charles’ drug/medication/dietary supplement history
· Charles’ clinical history
· Charles’ family clinical history
Specific questions to ask Charles
1. Tell me about your biggest fear regarding your relationship and the end of your marriage.
2. What is it that is making you seek professional help?
3. What do you expect to receive from our sessions?
4. What do you consider to be your biggest problem? What do you think was the biggest issue or problem in your relationship?
5. What makes you feel the most stressed/depressed?
6. What are some things you wish your spouse would have started doing? What are some things you wish you could have started doing?
7. What are some things you wish your spouse would have stopped doing? What are some things you wish you could have stopped doing in your relationship?
8. What are some things you would have done differently in your relationship?
9. Can you tell me how you feel about your own life?
10. What are some of the aspects of your life that may make you feel/think that your life is not worth living?
11. What are some of the aspects of your life that make it worth living?
Conclusion
One noticeable problem with psychological diagnostic systems is the difficulty of determining what constitutes a mental illness and what does not (Garfield, 2004). In my opinion, Charles is reacting much as anyone would to an unwanted divorce, and I feel that diagnosing him with a mental illness could do more harm than good. From reading the scenario, I suspect the divorce was unwanted by Charles and that his spouse was the one who initiated it.
References
American Psychological Association. (2017). Ethical principles of psychologists and code of conduct (2002, amended effective June 1, 2010, and January 1, 2017). http://www.apa.org/ethics/code/index.html
Casey, P., & Strain, J. (2016). When somebody has an adjustment disorder. Psychiatric News. https://doi.org/10.1176/appi.pn.2016.1a18
Fowells, A. (2016). Diagnosing depression. Chemist & Druggist, 285(6974), 18–20. https://web.p.ebscohost.com/ehost/detail/detail?vid=12&sid=a6e928f9-4f8b-4f7d-8e62-1e1ee6436aa5%40redis&bdata=JkF1dGhUeXBlPXNoaWImc2l0ZT1laG9zdC1saXZl#db=bsh&AN=116884755
Garfield, S. L. (2004). Methodological issues in clinical diagnosis. In P. B. Sutker & H. E. Adams (Eds.), Comprehensive handbook of psychopathology (3rd ed.). Springer Science+Business Media. Credo Reference: https://go.openathens.net/redirector/ashford.edu?url=https%3A%2F%2Fsearch.credoreference.com%2Fcontent%2Fentry%2Fsprhp%2Fmethodological_issues_in_clinical_diagnosis%2F0%3FinstitutionId%3D3165
Maltzman, S. (2013). The assessment process. In APA handbook of testing and assessment in psychology, Vol. 2: Testing and assessment in clinical and counseling psychology (pp. 19–34). American Psychological Association. https://doi.org/10.1037/14048-002
Maye, M. (2021). Psychological assessment. In F. R. Volkmar (Ed.), Springer reference: Encyclopedia of autism spectrum disorders (2nd ed.). Springer Science+Business Media. Credo Reference: https://go.openathens.net/redirector/ashford.edu?url=https%3A%2F%2Fsearch.credoreference.com%2Fcontent%2Fentry%2Fsprautismdis%2Fpsychological_assessment%2F0%3FinstitutionId%3D3165
Psychological assessment. (2004). In W. E. Craighead & C. B. Nemeroff (Eds.), The concise Corsini encyclopedia of psychology and behavioral science (3rd ed.). Wiley. Credo Reference: https://go.openathens.net/redirector/ashford.edu?url=https%3A%2F%2Fsearch.credoreference.com%2Fcontent%2Fentry%2Fwileypsych%2Fpsychological_assessment%2F0%3FinstitutionId%3D3165
Tallent, R. J. (2012). Report it right: Depression and traumatic brain injuries. Quill, 100(3), 30. https://web.p.ebscohost.com/ehost/detail/detail?vid=23&sid=a6e928f9-4f8b-4f7d-8e62-1e1ee6436aa5%40redis&bdata=JkF1dGhUeXBlPXNoaWImc2l0ZT1laG9zdC1saXZl#db=lkh&AN=77709703
Wright, E. J. (2009). Adjustment disorder. In E. R. Ingram (Ed.), The international encyclopedia of depression. Springer Publishing Company. Credo Reference: https://go.openathens.net/redirector/ashford.edu?url=https%3A%2F%2Fsearch.credoreference.com%2Fcontent%2Fentry%2Fspiedep%2Fadjustment_disorder%2F0%3FinstitutionId%3D3165
Zineldin, M. (2019). TCS is to blame: The impact of divorce on physical and mental health. International Journal of Preventive Medicine, 1–4. https://doi.org/10.4103/ijpvm.IJPVM_472_18
In your responses, evaluate whether your peer took into account the ethical guidelines outlined in the APA’s Ethical Principles of Psychologist and Code of Conduct when he or she assessed and diagnosed the client. Suggest additional questions your peer might ask the client. Propose an alternative diagnosis that might arise from the additional questions you have suggested.