Readings
Read or review the following:
- Navarro, D. J., Foxcroft, D. R., & Faulkenberry, T. J. (2019). Learning statistics with JASP: A tutorial for psychology students and other beginners. https://learnstatswithjasp.com/
Chapter 4, “Descriptive Statistics [PDF].”
- Grinnell, R. M., Jr., Gabor, P. A., & Unrau, Y. A. (2016). Program evaluation for social workers: Foundations of evidence-based programs (7th ed.). Oxford University Press.
Chapter 3, “The Process.”
Chapter 7, “The Program.”
Chapter 8, “Theory of Change and Program Logic Models.”
Chapter 9, “Preparing for an Evaluation.”
Drawing from and citing Chapters 7 and 8 in your Program Evaluation for Social Workers e-book, describe the program theory and create a logic model of a social work program. It could be one from your own work or internship experience, a program at which you are interested in working or interning in the future, or one described in an academic journal.
Note that your post should be substantive and 500–750 words. It should be well organized and proofread.
Chapter 7
THE PROGRAM
With the background of the previous six chapters in mind, you're now in an excellent position to see how social work programs are actually designed. Remember, your evaluation will be done within a program, so you have no alternative but to understand how your evaluation will be influenced by its design.
We begin this chapter with the immediate environment of your program—the larger organization that it's housed within, commonly referred to as a social service agency.
THE AGENCY
A social service agency is an organization that exists
to fill a legitimate social purpose such as:
• To protect children from physical, sexual, and
emotional harm
• To enhance quality of life for developmentally
delayed adolescents
• To improve nutritional health for
housebound senior citizens
Agencies can be public, funded entirely by the state and/or federal government, or private, deriving some monies from governmental sources and some from client fees, charitable bodies, private donations, fund-raising activities, and so forth. It's common for agencies to be funded by many different types of funding sources. When several sources of funding are provided to an agency, the agency's funds (in their totality) are called "blended funds." Regardless of the funding source(s), agencies obtain their unique identities from their:
• Mission statements
• Goals
Mission Statements
All agencies have mission statements that provide the
unique written philosophical perspective of what they
are all about and make explicit the reasons for their
existence. Mission statements sometimes are called
philosophical statements or simply an agency’s philos-
ophy. Whatever it’s called, a mission statement articu-
lates a common vision for the agency in that it provides
a point of reference for all major planning decisions.
You cannot do a meaningful evaluation of a
social work program without first knowing
how the program has been designed around
its mission statement.
A mission statement is like a lighthouse in that it exists to provide a general direction. It not only provides clarity of purpose to persons within the agency but also helps them gain understanding and support from the stakeholders outside the agency, who are unquestionably influential to the agency's overall success (see Chapter 1).
Mission statements are usually given formal approval and sanction by legislators for public agencies or by executive boards for private ones. They can range from one sentence to 10 pages or more and are as varied as the agencies they represent, such as:
A nation that continues year after year to spend more money on military defense than on programs of social uplift is approaching spiritual doom.
~ Martin Luther King, Jr.
Grinnell, R. M., Gabor, P. A., & Unrau, Y. A. (2015). Program evaluation for social workers: Foundations of evidence-based programs. Oxford University Press.
Copyright © 2015. Oxford University Press, Incorporated. All rights reserved.
• This agency strives to provide a variety of
support services to families and children in
need, while in the process maintaining their
rights, their safety, and their human dignity.
• The mission of this agency is to promote and
protect the mental health of the elderly people
residing in this state by offering quality
and timely programs that will deliver these
services.
• The mission of this agency is to treat clients
as partners in their therapy, and all services
should be short-term, intensive, and focus on
problems in day-to-day life and work.
• The mission of this agency is to protect and
promote the physical and social well-being
of this city by ensuring the development and
delivery of culturally competent services that
encourage and support individual, family,
and community independence, self-reliance,
and civic responsibility to the greatest degree
possible.
In short, an agency’s mission statement lays the
overall conceptual foundation for all of the programs
housed within it because each program (soon to be
discussed) must be logically connected to the over-
arching intent of the agency as declared by its mission
statement. Note that mission statements capture the
general type of clients to be served as well as com-
municate the essence of the services they offer their
clients. Creating mission statements is a process of
bringing interested stakeholders together to agree on
the overall direction and tone of the agency.
A mission statement articulates a common
vision for the agency in that it provides a
point of reference for all major planning
decisions.
The process of creating mission statements is
affected by available words in a language as well as
the meaning given to those words by individual
stakeholders. Because mission statements express the
broad intention of an agency, they set the stage for all
program planning within the agency and are essential
to the development of the agency’s goal.
Goals
As should be evident by now, social service agencies
are established in an effort to reduce gaps between the
current and the desired state of a social problem for a
specific client population. Mission statements can be
lofty and include several philosophical declarations,
but the agency goal is more concise; there is only one
goal per agency. An agency’s goal is always defined at
a conceptual level, and it’s never measured directly.
Its main ambition is to guide us toward effective and
accountable service delivery.
Requirements for Goals
It’s essential that an agency’s goal reflects the
agency’s mandate and is guided by its mission state-
ment. This is achieved by forming a goal with the fol-
lowing four components:
1. The nature of the current social problem to be
tackled
2. The client population to be served
3. The general direction of anticipated client
change (desired state)
4. The means by which the change is supposed
to be brought about
Agency goals can be broad or narrow. Let’s look
at two generic examples:
• Agency Goal—National: The goal of this
agency is to enhance the quality of life of
this nation’s families (client population to
be served) who depend on public funds
for day-to-day living (social problem to be
tackled). The agency supports reducing
long-term dependence on public funds
(general direction of anticipated client change)
by offering innovative programs that increase
the self-sufficiency and employability of
welfare-dependent citizens (means by which
the change is supposed to be brought about).
• Agency Goal—Local: The goal of this agency
is to help youth from low socioeconomic
households in this city (client population to
be served) who are dropping out of school
(current social problem to be tackled) to stay
in school (general direction of anticipated
client change) by providing mentorship and
tutoring programs in local neighborhoods
(means by which the change is supposed to be
brought about).
As discussed in Chapter 1, national agencies, for
example, are clearly broader in boundary and size
than local ones. Additionally, more complex agencies
such as those serving multiple client populations or
addressing multiple social problems will capture a
more expansive population or problem area in their
goal statements.
An agency’s goal statement must be broad
enough to encompass all of its programs; that
is, each program within an agency must have
a direct and logical connection to the agency
that governs it.
However small or large, an agency functions as a
single entity and the agency’s goal statement serves to
unify all of its programs.
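The four required components can be treated as slots in a template. As a purely illustrative sketch (the class and function names below are our own, not the book's), the local agency goal above can be assembled from its components:

```python
# Illustrative sketch (our own modeling, not from the book): the four
# required components of an agency goal, assembled into one statement.
from dataclasses import dataclass

@dataclass
class GoalComponents:
    population: str      # client population to be served
    problem: str         # current social problem to be tackled
    desired_change: str  # general direction of anticipated client change
    means: str           # means by which the change is to be brought about

def compose_goal(c: GoalComponents) -> str:
    """Render the four components as a single goal statement."""
    return (f"The goal of this agency is to help {c.population} "
            f"who {c.problem} to {c.desired_change} "
            f"by {c.means}.")

# The local agency example from the text, broken into its components.
local = GoalComponents(
    population="youth from low socioeconomic households in this city",
    problem="are dropping out of school",
    desired_change="stay in school",
    means="providing mentorship and tutoring programs in local neighborhoods",
)
print(compose_goal(local))
```

A statement missing any one slot fails the four-component test, which is what makes the template useful as a quick completeness check.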
THE PROGRAM
Whatever the current social problem, the desired
future state of the problem, or the population that the
agency wishes to serve, an agency sets up programs
to help work toward its intended result—the agency’s
goal. There are as many ways to organize social service
programs as there are people willing to be involved in
the task. And just about everyone has an opinion on
how agencies should structure the programs housed
within them.
Mapping out the relationship among programs is a process that is often obscured by the fact that the term program can be used to refer to different levels of service delivery within an agency (e.g., Figures 7.1, 7.2, and 7.3). In other words, some programs can be seen as subcomponents of larger ones; for example, in Figure 7.3, "Public Awareness Services" falls under the "Nonresidential Program" for the Women's Emergency Shelter.
Figure 7.1 presents a simple structure of a fam-
ily service agency serving families and children. Each
program included in the Family Service Agency is
expected to have some connection to serving fami-
lies. The Family Support Program and the Family
Counseling Program have an obvious connection,
given their titles. The Group Home Program, how-
ever, has no obvious connection; its title reveals noth-
ing about who resides in the group home or for what
purpose.
Because the Group Home Program operates
under the auspices of “family services,” it’s likely that
it temporarily houses children and youth who even-
tually will return to their families. Most important,
the agency does not offer programs that are geared
toward other target groups such as the elderly, veter-
ans, refugees, or the homeless.
By glancing at Figure 7.1, it can be easily seen
that this particular family service agency has five pro-
grams within it that deal with families and children,
Figure 7.1: Simple organizational chart of a family service agency. [Family Service Agency, with five programs: Group Home Program, Family Counseling Program, Adoption Program, Treatment Foster Care Program, and Family Support Program.]
the agency’s target population: a group home
program
for children, a family counseling program, a child
adoption program, a treatment foster care program,
and a family support program.
Figure 7.2 provides another example of an agency
that also deals with families and children.
This
agency (Richmond Family Services) has only two pro-
grams, a Behavioral Adaptation Treatment Program
and a Receiving and Assessment Family Home
Program. The latter is further broken down into two
components—a Family Support Component and a
Receiving and Assessment Component. In addition,
the Receiving and Assessment Component is further
broken down into Crisis Support Services, Child Care
Services, and Family Home Provider Services.
How many programs are there in Figure 7.2?
The answer is two—however, we need to note that
this agency conceptualized its service delivery much
more thoroughly than did the agency outlined in
Figure 7.1. Richmond Family Services has conceptu-
alized the Receiving and Assessment Component of
its Receiving and Assessment Family Home Program
into three separate subcomponents: Crisis Support
Services, Child Care Services, and Family Home
Provider Services. In short, Figure 7.2 is more detailed
in how it delivers its services than is the agency rep-
resented in Figure 7.1. Programs that are more clearly
defined are generally easier to implement, operate,
and evaluate.
Another example of how programs can be orga-
nized under an agency is presented in Figure 7.3.
This agency, the Women’s Emergency Shelter, has a
Residential Program and a Nonresidential Program.
Its Residential Program has Crisis Counseling
Services and Children’s Support Services, and the
Nonresidential Program has Crisis Counseling
Services and Public Awareness Services. This
agency distinguishes the services it provides
between the women who stay within the shelter (its
Residential Program) and those who come and go
(its Nonresidential Program). The agency could have
conceptualized the services it offers in a number of
different ways.
A final example of how an agency can map out
its services is presented in Figure 7.4. As can be seen,
the agency’s Child Welfare Program is broken down
Figure 7.2: Organizational chart of a family service agency (highlighting the Receiving and Assessment Family Home Program). [Richmond Family Services, with two programs: the Behavioral Adaptation Treatment Program and the Receiving and Assessment Family Home Program; the latter comprises a Family Support Component and a Receiving and Assessment Component, which is further divided into Crisis Support Services, Child Care Services, and Family Home Provider Services.]
into three services, and the Native Child Protection
Services is further subdivided into four compo-
nents: an Investigation Component, a Family Service
Child in Parental Care Component, a Family Services
Child in Temporary Alternate Care Component, and
a Permanent Guardianship Component.
The general rule of ensuring that programs
within an agency are logically linked together may
seem simple enough that you might be wondering
why we are emphasizing this point. The reality is that
way too many programs are added to agencies on a
haphazard, chaotic, and disorganized basis. This
is because new programs spring out of last-minute funding opportunities that become available for new, but totally dissimilar, programs (dissimilar to the agency's goal, that is). While a social service administrator must
constantly seek new resources to provide better and/
or additional services within the agency’s programs,
it’s important that new and additional programs do
not compromise existing ones.
By simply glancing at Figures 7.1–7.4, it can be seen that how an agency labels its programs and subprograms is arbitrary. For example, the agency represented in Figure 7.2 labels its subprograms as components and its sub-subprograms as services. The agency represented in Figure 7.3 simply labels its subprograms as services. The main point is that an agency must design its programs, components, and services in a logical way that makes the most sense in view of the agency's overall goal, which is guided by its mission statement and mandate.
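The nesting shown in Figures 7.1 through 7.4 is, structurally, a tree: an agency at the root, with programs, components, and services at successive levels. A minimal sketch (the Node class is our own modeling choice, not the authors') of Figure 7.2's hierarchy:

```python
# Illustrative sketch: Figure 7.2's service hierarchy as a simple tree.
# The Node class and its fields are our own modeling choice.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)

    def outline(self, level=0):
        # Indent two spaces per level: agency, program, component, service.
        lines = [("  " * level) + self.name]
        for child in self.children:
            lines.extend(child.outline(level + 1))
        return lines

agency = Node("Richmond Family Services", [
    Node("Behavioral Adaptation Treatment Program"),
    Node("Receiving and Assessment Family Home Program", [
        Node("Family Support Component"),
        Node("Receiving and Assessment Component", [
            Node("Crisis Support Services"),
            Node("Child Care Services"),
            Node("Family Home Provider Services"),
        ]),
    ]),
])

print("\n".join(agency.outline()))
```

Whether a given level is called a "program," "component," or "service" is, as the text notes, arbitrary; the tree structure is what matters.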
Naming Programs
There is no standard approach to naming programs in the social services, but there are themes that may assist with organizing an agency's programs. We present four themes and suggest, as a general rule, that an agency pick only one (or one combination) to systematically name all of its programs:
• Function, such as Adoption Program or
Family Support Program
• Setting, such as Group Home Program or
Residential Program
• Target population, such as Services for the
Handicapped Program
• Social problem, such as Child Sexual Abuse
Program or Behavioral Adaptation Treatment
Program
Program names can include acronyms such as
P.E.T. (Parent Effectiveness Training), IY (Incredible
Years: A Parent Training Program), or catchy titles
such as Incredible Edibles (a nutritional program for
children). The appeal of such program names is that
Figure 7.3: Organizational chart of a women's emergency shelter. [Women's Emergency Shelter, with two programs: a Residential Program (Crisis Counseling Services, Children's Support Services) and a Nonresidential Program (Crisis Counseling Services, Public Awareness Services).]
they are endearing to the program's staff and clients, who are already familiar with the program's services; others will not have a clue what the program is about. However, unless the chic acronym (the program's name) is accompanied by a substantial marketing strategy, the program will go unnoticed by the general public, other social service providers, potential funders, and potential clients alike.
Therefore, the primary purpose of a program
should be reflected in the program’s name. Including
the target social problem (or the main client need)
in the program’s name simplifies communication of
a program’s purpose. In this way, a program’s name
is linked to its goal, and there is less confusion about
what services it offers.
Nondescript program names can lead to con-
fusion in understanding a program’s purpose. The
Group Home Program in Figure 7.1, for example, sug-
gests that this program aims to provide a residence for
clients. In fact, all clients residing in the group home
are there to fulfill a specific purpose. Depending on
the goal of the program, the primary purpose could
be to offer shelter and safety for teenage runaways. Or
the program’s aim might be the enhanced function-
ing of adolescents with developmental disabilities, for
example.
An Agency Versus a Program
What’s the difference between an agency and a pro-
gram? Like an agency, a program is an organization
that also exists to fulfill a social purpose. There is
one main difference, however: a program has a nar-
rower, better defined purpose and is always nested
within an agency. Nevertheless, sometimes an
agency may itself have a narrow, well-defined pur-
pose. The sole purpose of a counseling agency, for
example, may be to serve couples who struggle with
a sexual dysfunction.
In this case, the agency comprises only one pro-
gram, and the terms agency and program refer to the
same thing. If the clientele happens to include a high
Figure 7.4: Organizational chart of a state's social service delivery system (highlighting the Native Child Protection Services). [Social Services (Region A), with three programs: Income Security Program, Child Welfare Program, and Services for the Handicapped Program. The Child Welfare Program comprises Child Protection Services, Native Child Protection Services, and Placement & Counseling Services; Native Child Protection Services is subdivided into an Investigation Component, a Family Service Child in Parental Care Component, a Family Service Child in Temporary Alternate Care Component, and a Permanent Guardianship Component.]
proportion of couples who are infertile, for example, it
may later be decided that some staff members should
specialize in infertility counseling (with a physician
as a co-counselor) while other workers continue to
deal with all other aspects of sexual dysfunction.
In this case, there would then be two distinct sets
of social work staff (or the same staff who provide two
distinct independent interventions), each focusing on
a different goal, and two separate types of clients; that
is, there would be two programs (one geared toward
infertility counseling and the other toward sexual
dysfunction). Creating programs that target specific
problems and populations facilitates the develop-
ment of evidence-based knowledge because workers
can hone the focus of their professional development
on specialized knowledge and skills. However, the
agency, with its board, its senior administrator (exec-
utive director), and its administrative policies and
procedures, would remain as a single entity.
DESIGNING PROGRAMS
Building or creating a social work program involves
general and specific thinking about a program. The
process begins by articulating a program’s general
intentions for solving identified social problems—the
conceptualization or idea of the program’s purpose.
It also involves setting specific plans for how the pro-
gram is to accomplish what it sets out to do.
A program for children who are sexually aggres-
sive, for example, may aim to reduce the deviant sex-
ual behavior of its young clients (i.e., the intention)
by providing individual counseling (i.e., the plan for
achieving the intention). A major purpose of a pro-
gram’s design is to easily communicate a model of
service delivery to interested stakeholders. A pro-
gram’s design, via the use of a logic model, provides
a blueprint for implementing its services, monitoring
its activities, and evaluating both its operations and
achievements.
Program designs present plausible and logical plans
for how programs aim to produce change for their cli-
ents. Therefore, implicit in every program logic model
is the idea of theory—an explanation for how client change is supposed to be brought about (to be discussed
in depth in the following chapter). The program for chil-
dren who are sexually aggressive, for example, suggests
that such children will reduce their sexual perpetration
by gaining understanding or insight through sessions
with an individual counselor. Programs that articulate a
specific theoretical approach, such as psychoanalytic or
behavior counseling, make their program theory more
explicit. And, the more explicit, the better.
Figure 7.5 displays the four major components
that are used to describe how programs deliver their
services.
Box 7.1 displays a concise example of how the
logic of Figure 7.5 is actually carried out within an
evidence-based family support program. Included are:
• Program's goal
• Mission statement
• Three of the program’s objectives (with
literary support)
• Workers’ sample activities to meet program
objectives
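Figure 7.5's chain (goal and mission at the program level, measurable objectives, then case-level activities) can be sketched as nested records; the field names below are our own, with content drawn from Box 7.1:

```python
# Illustrative sketch (field names are our own): the program-level logic
# of Figure 7.5, populated from the Box 7.1 family support example.
from dataclasses import dataclass, field

@dataclass
class Objective:
    statement: str             # what should change, and by when
    activities: list           # workers' activities toward the objective
    measuring_instrument: str  # how attainment is measured

@dataclass
class ProgramDesign:
    goal: str
    mission: str
    objectives: list = field(default_factory=list)

design = ProgramDesign(
    goal=("Help children at risk for out-of-home placement due to physical "
          "abuse by providing intensive home-based services that strengthen "
          "the interpersonal functioning of all family members"),
    mission=("Provide a variety of support services to families and children "
             "in need while maintaining their rights, safety, and dignity"),
    objectives=[
        Objective(
            statement=("Increase positive social support for parents by the "
                       "end of the fourth week of the intervention"),
            activities=["Refer to support groups",
                        "Reconnect clients with friends and family"],
            measuring_instrument="Social Support Scale",
        ),
    ],
)

# Per Figure 7.5, every program objective should carry a measurement.
assert all(o.measuring_instrument for o in design.objectives)
```

The point of the structure is the linkage: each objective must trace upward to the goal and mission, and downward to concrete activities and a way of measuring attainment.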
Evidence-Based Programs
The knowledge we need to evaluate our programs is generally derived from our social work courses. There are many evidence-based interventions, or programs, in use today. All of them have been evaluated, to various degrees. Some have been evaluated in a rigorous manner—some less so. Some are very effective (e.g., Incredible Years) and some are downright dreadful (e.g., Scared Straight). The point is, however, that they all have been evaluated and have provided evidence of their degree of effectiveness. Go to the following websites to get a flavor of what social work programs are about and how they have been evaluated to be labeled "evidence based":
• The Office of Juvenile Justice and Delinquency
Prevention’s Model Programs Guide
http://www.ojjdp.gov/mpg
• National Registry of Evidence-Based
Programs and Practices
http://www.nrepp.samhsa.gov
• Center for the Study and Prevention of
Violence
http://www.colorado.edu/cspv/blueprints
• Center for the Study of Social Policy
http://www.cssp.org
• Promising Practices Network on Children,
Families, and Communities
http://www.promisingpractices.net/programs.asp
• Social Programs That Work
• Social Development Research Group
http://www.sdrg.org/rhcsummary.asp#6
• The Campbell Collaboration: C2-Ripe
Library
http://www.campbellcollaboration.org/selected_presentations/index.php
• The Cochrane Library
http://www.thecochranelibrary.com/view/0/index.html
• National Dropout Prevention Center
• What Works Clearinghouse
http://ies.ed.gov/ncee/wwc
• PerformWell
http://www.performwell.org
• Center for AIDS Prevention Studies
(CAPS)
http://caps.ucsf.edu
• Positive Behavior Supports and
Interventions
• Expectant and Parenting Youth in Foster
Care: A Resource Guide 2014
http://www.cssp.org/reform/child-welfare/pregnant-and-parenting-youth/Expectant-and-Parenting-Youth-in-Foster-Care_A-Resource-Guide
Selecting an Evidence-Based Program
As you can see from the preceding websites, there are hundreds of evidence-based social work programs that you can implement within your agency.
We suggest that all agencies should consider imple-
menting evidence-based programs whenever possible.
The following are 23 criteria that you need to consider
when selecting one to implement within your local
community’s social service delivery system:
Program match
1. How well do the program’s goals and
objectives reflect what your agency hopes
to achieve?
2. How well do the program’s goals match
those of your intended participants?
3. Is the program of sufficient length and
intensity (i.e., “strong enough”) to be
effective with your particular group of
participants?
Figure 7.5: How a program's services are conceptualized from the case level to the program level. [Program level: Program Goal and Mission Statement → Program Objectives (including measurements); Case level: Practice Objectives → Practice Activities.]
4. Are your potential participants willing
and able to make the time commitment
required by the program?
5. Has the program demonstrated effectiveness
with a target population similar to yours?
6. To what extent might you need to adapt
this program to fit the needs of your local
community? How might such adaptations
affect the effectiveness of the program?
7. Does the program allow for adaptation?
8. How well does the program complement
current programming both in your orga-
nization and in your local community?
Program quality
9. Has this program been shown to be
effective? What is the quality of this
evidence?
10. Is the level of evidence sufficient for your
organization?
11. Is the program listed on any respected
evidence‐based program registries? What
rating has it received on those registries?
12. For what audiences has the program been
found to work?
13. Is there information available about
what adaptations are acceptable if you
BOX 7.1 EXAMPLE OF AN EVIDENCE-BASED FAMILY SUPPORT INTERVENTION (FROM FIGURE 7.5)
Program Goal
The goal of the Family Support Program is to help children
who are at risk for out-of-home placement due to physical
abuse (current social problem to be tackled) by providing
intensive home-based services (means by which the change
is supposed to be brought about) that will strengthen
the interpersonal functioning (desired state) of all family members (client population to be served).
Mission Statement
This program strives to provide a variety of support services
to families and children in need while also maintaining their
rights, their safety, and their human dignity.
Program Objectives
1. Increase positive social support for parents by the end of
the fourth week after the start of the intervention.
• Literary Support: A lack of positive social support has
been repeatedly linked to higher risk for child abuse.
Studies show that parents with greater social support
and less stress report more pleasure in their parenting
roles.
• Sample of Activities: Refer to support groups;
evaluate criteria for positive support; introduce to
community services; reconnect clients with friends
and family.
• Measuring Instrument: Social Support
Scale.
2. Increase problem-solving skills for family members
by the end of the eighth week after the start of the
intervention.
• Literary Support: Problem-solving is a tool for
breaking difficult dilemmas into manageable
pieces. Enhancing individuals’ skills in
systematically addressing problems increases the
likelihood that they will successfully tackle new
problems as they arise. Increasing
problem-solving
skills for parents and children equips family
members to handle current problems, anticipate
and prevent future ones, and advance their social
functioning.
• Sample of Activities: Teach steps to
problem-solving; role play problem-solving
scenarios; use supportive counseling.
• Measuring Instrument: The Problem-Solving
Inventory.
3. Increase parents’ use of noncorporal child
management
strategies by the end of the intervention.
• Literary Support: Research studies suggest that
deficiency in parenting skills is associated with higher
recurrence of abuse. Many parents who abuse their
children have a limited repertoire of ways to discipline
their children.
• Sample of Activities: Teach noncorporal discipline
strategies; inform parents about the criminal
implications of child abuse; assess parenting
strengths; and provide reading material about
behavior
management.
• Measuring Instrument: Checklist
of Discipline
Strategies.
do not implement this program
exactly
as designed? Is adaptation assistance
available from the program’s developer?
14. What is the extent and quality of training
offered by the program’s developers?
15. Do the program’s designers offer
technical assistance? Is there a charge for
this assistance?
16. What is the opinion and experience of
others who have used the program?
Organizational resources
17. What are the training, curriculum, and
implementation costs of the program?
18. Can your organization afford to implement
this program now and in the long term?
19. Do you have staff capable of
implementing this program? Do they
have the qualifications recommended (or
required) to facilitate the program?
20. Would your staff be enthusiastic about a
program of this kind, and are they willing
to make the necessary time commitment?
21. Can this program be implemented in the
time available?
22. What’s the likelihood that this program
will be sustained in the future?
23. Are your stakeholders supportive of your
implementation of this program?
WRITING PROGRAM GOALS
A program goal has much in common with an agency
goal, which was discussed previously:
• Like an agency goal, a program goal must
also be compatible with the agency’s mission
statement as well as the agency goal and at
least one agency objective. Program goals
must logically flow from the agency as they
are announcements of expected outcomes
dealing with the social problem that the
program is attempting to prevent, eradicate,
or ameliorate.
• Like an agency goal, a program goal is not
intended to be measurable; it simply provides
a programmatic direction for the program to
follow.
• A program goal must also possess four major
characteristics:
1. It must identify a current social
problem area.
2. It must include a specific target population
within which the problem resides.
3. It must include the desired future state for
this population.
4. It must state how it plans to achieve the
desired state.
• In addition to the aforementioned four major
criteria for writing program goals, there are
seven additional minor criteria:
5. Easily understood—write it so the
rationale for the goal is apparent.
6. Declarative statement—provide a complete
sentence that describes a goal’s intended
outcome.
7. Positive terms—frame the goal’s outcomes
in positive terms.
8. Concise—get the complete idea of your
goal across as simply and briefly as
possible while leaving out unnecessary
detail.
9. Jargon-free—use language that most
“non–social work people” are likely to
understand.
10. Short—use as few words as possible.
11. Avoid the use of double negatives.
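As an illustration only, the four major criteria above can be modeled as required fields of a record. The class name, field names, and example wording in this Python sketch are hypothetical, not from the text:

```python
from dataclasses import dataclass

@dataclass
class ProgramGoal:
    """The four major components every program goal must contain."""
    problem_area: str       # 1. a current social problem area
    target_population: str  # 2. the population within which the problem resides
    desired_state: str      # 3. the desired future state for this population
    means: str              # 4. how the program plans to achieve that state

    def statement(self) -> str:
        # Assemble the components into one concise, declarative,
        # positively framed sentence (minor criteria 5 through 11).
        return (f"To enable {self.target_population} affected by "
                f"{self.problem_area} to {self.desired_state} "
                f"through {self.means}.")

goal = ProgramGoal(
    problem_area="developmental disabilities",
    target_population="adolescents",
    desired_state="lead full and productive lives",
    means="community-based group-home care",
)
```

A goal assembled this way is still not measurable; it only guarantees that the four required components are present.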
In sum, a program goal reflects the intention
of social workers within the program. For example,
workers in a program may expect that they will
“enable adolescents with developmental disabilities
to lead full and productive lives.” The program goal
phrase of “full and productive lives,” however, can
mean different things to different people.
Some may believe that a full and productive life
cannot be lived without integration into the commu-
nity; they may, therefore, want to work toward plac-
ing these youth in the mainstream school system,
enrolling them in community activities, and finally
returning them to their parental homes, with a view
to making them self-sufficient in adult life. Others
may believe that a full and productive life for these
adolescents means the security of institutional teach-
ing and care and the companionship of children with
similar needs. Still others may believe that institu-
tional care combined with community contact is the
best compromise.
Program goal statements are meant to be suffi-
ciently elusive to allow for changes in service deliv-
ery approach or clientele over time. Another reason
that goals have intangible qualities is because we want
enough flexibility in our programs to adjust program
conceptualization and operation as needed. Indeed,
by establishing a program design, we begin the pro-
cess of crafting a theory of client change. By evalu-
ating the program, we test the program’s theory—its
plan for creating client change. Much more will be
said about this in the next chapter.
Preparing for Unintended Consequences
Working toward a program’s goal may result in a
number of unintended results that emerge in the
immediate environment that surrounds the program.
For example, a group home for adolescents with
developmental disabilities may strive to enable resi-
dents to achieve self-sufficiency in a safe and support-
ive environment. This is the intended result, or goal.
Incidentally, however, the very presence of the group
home may produce organized resistance from local
neighbors—a negative unintended result.
The resistance may draw the attention of the
media, which in turn draws a sympathetic response
from the general public about the difficulties associ-
ated with finding a suitable location for homes caring
for youth with special needs—a positive unintended
result.
On occasion, the unintended result can thwart
progress toward the program’s goal; that is, youth
with developmental disabilities would not feel safe or
supported if neighbors act in unkind or unsupportive
ways. This condition would almost certainly hamper
the youths’ ability to achieve self-sufficiency in the
community.
PROGRAM GOALS VERSUS
AGENCY GOALS
Perhaps the group home mentioned earlier is run by
an agency that has a number of other homes for ado-
lescents with developmental disabilities (see Figure
7.6). It’s unlikely that all of the children in these
homes will be capable of self-sufficiency as adults;
some may have reached their full potential when they
have learned to feed or bathe themselves.
The goal of self-sufficiency will, therefore, not
be appropriate for the agency as a whole, although it
might do very well for Group Home X, which serves
children who function at higher levels. The agency’s
goal must be broader to encompass a wider range of
situations—and because it’s broader, it will probably
be more vague.
To begin, the agency may decide that its goal
is “to enable adolescents with developmental dis-
abilities to reach their full potential” as outlined in
Figure 7.6:
• Group Home X, one of the programs within
the agency, can then interpret “full potential”
to mean self-sufficiency and can formulate a
program goal based on this interpretation.
• Group Home Y, another program within the
agency serving children who function at lower
levels, may decide that it can realistically do no
more than provide a caring environment for
the children and emotional support for their
families. It may translate this decision into
another program goal: “To enable adolescents
with developmental disabilities to experience
security and happiness.”
• Group Home Z, a third program within
the agency, may set as its program goal
“To enable adolescents with developmental
disabilities to acquire the social and
vocational skills necessary for satisfying and
productive lives.”
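The relationship shown in Figure 7.6 can be sketched as data: one agency goal, with three program-level interpretations of "full potential." The goal wording below follows the figure; the dictionary and function names are invented for illustration:

```python
# Each program interprets the agency's phrase "full potential" in its own way.
interpretations = {
    "Group Home X": "become self-sufficient adults",
    "Group Home Y": "experience security and happiness",
    "Group Home Z": ("acquire the social and vocational skills "
                     "necessary for satisfying and productive lives"),
}

STEM = "To enable adolescents with developmental disabilities to "

def program_goal(program: str) -> str:
    # Every program goal is derived from the agency's single goal:
    # the shared stem plus a program-specific interpretation.
    return STEM + interpretations[program]
```

The shared stem is what links each program goal back to the agency's overall goal.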
In short, Figure 7.6 illustrates the relationship
of the individual goals of the three homes
to the single goal of the agency. Note how logical
and consistent the goals of the three programs are
with the agency’s single overall goal. This example
illustrates three key points about the character of a
program goal:
• A program goal simplifies the reason for the
program to exist and provides direction for
its
workers.
• Program goals of different but related
programs within the same agency may differ,
but they must all be linked to the agency’s
overall goal. They must all reflect both their
individual purpose and the purpose of the
agency of which they are a part.
• Program goals are not measurable. Consider
the individual goals of the three group homes
in Figure 7.6; none of them are measurable in
their present form.
Concepts such as happiness, security, self-
sufficiency, and full potential mean different things
to different people and cannot be measured until
they have been clearly defined. Many social work
goals are phrased in this way, putting forth more of
an elusive intent than a definite, definable, measur-
able purpose. Nor is this a flaw; it’s simply what a
goal is, a statement of an intended result that must
be clarified before it can be measured. As we will see
next, program goals are clarified by the objectives
they formulate.
PROGRAM OBJECTIVES
A program’s objectives are derived from its goal. As
you will see shortly, program objectives are measur-
able indicators of the program’s goal; they articulate
the specific client outcomes that the program wishes
to achieve; stated clearly and precisely, they make it
possible to tell to what degree the program’s results
have been achieved.
All program objectives must be client-centered;
they must be formulated to help a client in relation to
the social problem articulated by the program’s goal.
Programs often are designed to change client systems
in three nonmutually exclusive areas:
• Knowledge
• Affects
• Behaviors
Knowledge-Based Objectives
Knowledge-based program objectives are commonly
found within educational programs, where the aim
Figure 7.6: Organizational chart of an agency with three highly related programs.
• Agency Goal: To enable adolescents with developmental disabilities to reach their full potential.
• Program X's Goal: To enable adolescents with developmental disabilities to become self-sufficient adults.
• Program Y's Goal: To enable adolescents with developmental disabilities to experience security and happiness.
• Program Z's Goal: To enable adolescents with developmental disabilities to acquire the social and vocational skills necessary for satisfying and productive lives.
is to increase the client’s knowledge in some specific
area. The words “to increase knowledge” are criti-
cal here: They imply that the recipient of the educa-
tion will have learned something, for example, “to
increase teenage mother’s knowledge about the stages
of child development between birth and 2 years.” The
hoped-for increase in knowledge can then be mea-
sured by assessing the mother’s knowledge levels
before and after the program. The program
objective
is achieved when it can be demonstrated (via mea-
surement) that learning has occurred.
Affect-Based Objectives
Affect-based program objectives focus on chang-
ing either feelings about oneself or awareness about
another person or thing. For example, a common
affect-based program objective in social work is
to raise a client’s self-esteem, or interventions are
designed to decrease feelings of isolation, increase
marital satisfaction, and decrease feelings of depres-
sion. In addition, feelings or attitudes toward other
people or things are the focus of many social work
programs.
All program objectives are derived from the
program's single goal.
To give just a few examples, programs may try to
change negative views toward people of color, homo-
sexuality, or gender roles. “Affects” here includes atti-
tudes because attitudes are a way of looking at the
world. It’s important to realize that, although particu-
lar attitudes may be connected to certain behaviors,
they are two separate constructs.
Behaviorally Based Objectives
Very often, a program objective is established to
change the behavior of a person or group: for example,
to reduce drug abuse among adolescents, to increase
the use of community resources by seniors, or to
reduce the number of hate crimes in a community.
Sometimes knowledge or affect objectives are used as
a means to this end. In other words, the expectation
is that a change in attitude or knowledge will lead to a
change in behavior.
The social worker might assume that adolescents
who know more about the effects of drugs will use or
abuse them less, that seniors who know more about
available community resources will use them more
often, or that citizens who have more positive feelings
toward each other will be less tolerant of prejudice and
discrimination. Sometimes these assumptions are valid;
sometimes they are not. In any case, when behaviorally
based objectives are used, the program must verify that
the desired behavior change has actually occurred.
WRITING PROGRAM OBJECTIVES
Whether program objectives are directed at knowl-
edge levels, affects, or behaviors, they have to be
SMART ones too; that is, they have to be Specific,
Measurable, Achievable, Realistic, and Time phased.
No evidence-based social work program can exist
without SMART program objectives.
SMART objectives:
• Specific (S)
• Measurable (M)
• Achievable (A)
• Realistic (R)
• Time-phased (T)
Specific (S)
In addition to being meaningful and logically linked to
the program’s goal (to be discussed shortly), program
objectives must be specific. They must be complete and
clear in their wording. Following are two columns.
The left column contains active verbs that your pro-
gram objective can start out with. The column on the
right contains examples of possible types of program
objectives you could be trying to achieve.
You need to mix and match to form appropriate objec-
tives. For example, you could write the following:
• Increase self-esteem levels
• Decrease feelings of loneliness
Now that we know how to make a program objective specific, we turn to its measurability, the second quality required of a SMART program objective.

Measurable (M)

Simply put, just ask the question, "Is the objective measurable?" If it can't be measured, then it cannot be a program objective. As we know by now, the purpose of measurement is to gather data. A measure is usually thought of as a number: an amount of money in dollars, a numerical rating representing a level of intensity, or scores on simple self-administered standardized measuring instruments.
The purpose of setting a program objective is to bring
focus to the desired change, which, if obtained, will
contribute to the obtainment of the program’s goal.
One of the main purposes of making a measurement
is to define a perceived change, in terms of either
numbers or clear words.
A measurement might show, for example, that
the assertiveness of a woman who has been previ-
ously abused has increased by 5 points on a stan-
dardized measuring instrument (a program
objective), or that a woman’s feelings of safety in her
neighborhood have increased by 45 points (another
program objective).
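A minimal sketch of how such a perceived change is expressed numerically: the difference between pretest and posttest scores on a standardized instrument. The function names and scores below are illustrative, not from the text:

```python
def change_score(pretest: float, posttest: float) -> float:
    """Perceived change expressed as a number: posttest minus pretest."""
    return posttest - pretest

def objective_direction_met(pretest: float, posttest: float,
                            direction: str = "increase") -> bool:
    # An "increase" objective needs a positive change; a "decrease"
    # objective (e.g., feelings of depression) needs a negative one.
    delta = change_score(pretest, posttest)
    return delta > 0 if direction == "increase" else delta < 0

# Hypothetical assertiveness scores: a 5-point gain, as in the example above.
gain = change_score(40, 45)
```

Whether a given number of points constitutes a meaningful change is a clinical and evaluative judgment, not something the arithmetic decides.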
Learn more about how to
measure program objectives in
Tools L and M in the Evaluation
Toolkit.
If the hoped-for change cannot be measured,
then it’s not a SMART program objective—it’s miss-
ing the “M.” Tools L and M present ways of mea-
suring program objectives, but, for the time being,
we turn to the third quality of a SMART program
objective: achievability.
Achievable (A)
Not only must a program objective be specific and
measurable, it must be achievable as well. Objectives
should be achievable within a given time frame and
with available current program resources and con-
straints. There is nothing worse than creating an
unrealistic program objective that cannot realisti-
cally be reached by the client group it was written for.
This unfortunately happens far more often than
we wish to acknowledge. Just ask and answer the
question, "Can the program's objective be reached
given (1) the clients' presenting problems, (2) the pro-
gram's current overall resources, (3) the skill level of
the workers, and (4) the amount of time the interven-
tion is supposed to take?"
Realistic (R)
In addition to being specific, measurable, and achiev-
able, program objectives must also be realistic.
Having realistic program objectives ties in heav-
ily with having achievable ones (mentioned ear-
lier). A program objective is realistic when it bears a
Examples of Active Verbs: Increase; Decrease; Maintain; Obtain; Improve; Access.
Examples of Measurable Program Objectives: Social skills; Feelings of depression; Feelings of loneliness; Attitudes toward authority; Aggressiveness; Self-esteem levels.
sensible relationship to the longer term result to be
achieved—the program goal.
If a program’s goal is to promote self-sufficiency
of teenagers living on the street, for example, improv-
ing their ability to balance a monthly budget may be a
realistic program objective; however, increasing their
ability to recite the dates of the reigns of English mon-
archs is not, because it bears no relation to the pro-
gram’s goal of self-sufficiency.
The point here—and a point that will be stressed
over and over in this text—is that an effective
evidence-based program must demonstrate realistic
and meaningful linkages between its overall goal (its
reason for being) and its program objectives.
Time Phased (T)
Program objectives need to provide a time frame
indicating when the objective will be measured or a
time by which the objective will be met. Box 7.2 presents how the three program objectives in our Family Support Program, illustrated in Box 7.1, were written as SMART objectives. Notice that the three program objectives indirectly measure the program's goal; that is, the goal is achieved through the success of the three program objectives.
INDICATORS
An indicator is a measurable gauge that shows (or
indicates) the progress made toward achieving a
SMART program objective. Some indicators include
participation rates, income levels, poverty rates, atti-
tudes, beliefs, behaviors, community norms, policies,
health status, and incidence and prevalence rates. In
the simplest of terms, indicators ultimately are used to
measure your program objectives.
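Many of the indicators just listed are rates. A tiny sketch of how one might be computed follows; the function name and client counts are invented for illustration:

```python
def indicator_rate(n_meeting_criterion: int, n_clients: int) -> float:
    """Percentage of clients meeting an indicator's criterion."""
    if n_clients == 0:
        return 0.0  # avoid dividing by zero when no clients were served
    return 100.0 * n_meeting_criterion / n_clients

# Hypothetical: 18 of 24 clients agreed to a service plan within 30 days.
rate = indicator_rate(18, 24)
```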
Sometimes these program objectives are called
dependent variables, outcome variables, or criterion
variables. The most important thing to remember is
that your indicators must be based on your program's
logic model (to be discussed shortly). A program
objective can be measured with only one indicator,
such as the following:

Program Objective: Client obtains more stable housing.
Single Indicator: (A) Percentage of clients who move to a transitional shelter, long-term housing, a rehabilitative setting, or the home of a friend or family member.

Program Objective: Increase self-esteem.
Single Indicator: (A) Hudson's Index of Self-Esteem (see Figure L.1 in Tool L).

And at other times, a program objective can be
measured with more than one indicator, such as the
following:

Program Objective: Clients access needed services.
Multiple Indicators: (A) Percentage of clients who agree to a recovery/treatment service plan by the end of their 30th day of shelter at that site. (B) Percentage of clients who, as a result of their service plan, connected with supportive services within 30 days of the start of case management.

PRACTICE OBJECTIVES

Program objectives can be thought of as formal state-
ments of a declaration of desired change for all clients
served by a program. In contrast, practice objectives
refer to the personal objectives of an individual client,
whether that client is a community, couple, group,
individual, or institution. Practice objectives are
also commonly referred to as treatment objectives,
BOX 7.2 GRID FOR SMART PROGRAM OBJECTIVES (FROM BOX 7.1)

What each SMART quality requires:
• Specific: It says exactly what you are going to do. It can't be too broad or vague.
• Measurable: There is a way of measuring the objective. It must be able to produce indicators.
• Achievable: The program objective can actually be achieved with your given resources and constraints.
• Realistic: The program objective is directly related to your program's goal.
• Time phased: The objective must have a date for its achievement.

Program objective 1: To increase positive social support for parents by the end of the fourth week after the start of the intervention.
• Specific: This program objective is very specific. It is not vague.
• Measurable: This objective can produce a number of indicators. We have chosen two: (1) client logs and (2) the Provision of Social Relations Scale.
• Achievable: This program objective can be easily achieved by the end of the first four weeks after the intervention starts given our resources and the skill levels of the social workers.
• Realistic: This program objective is directly related to the program's goal, which is to support family units where children are at risk for out-of-home placement due to problems with physical abuse.
• Time phased: "By the end of the fourth week after the intervention starts" is very specific in reference to time frames.

Program objective 2: To increase problem-solving skills for family members by the end of the eighth week after the start of the intervention.
• Specific: This program objective is very specific. It is not vague.
• Measurable: This objective can produce a number of indicators. We have chosen one: the Problem-Solving Inventory.
• Achievable: This program objective can be easily achieved by the end of the eighth week after the intervention starts given our resources and the skill levels of the social workers. We also believe that the clients have the motivation and capacity for the desired change to occur.
• Realistic: This program objective is directly related to the program's goal, which is to support family units where children are at risk for out-of-home placement due to problems with physical abuse.
• Time phased: "By the end of the eighth week after the intervention starts" is very specific in reference to time frames.
Program objective 3: To increase parents' use of noncorporal child management strategies by the end of the intervention.
• Specific: This program objective is very specific. It is not vague.
• Measurable: This objective can produce a number of indicators. We have chosen two: (1) Goal Attainment Scaling and (2) the Checklist of Discipline Strategies.
• Achievable: This program objective can be easily achieved by the end of the intervention given our resources and the skill levels of the social workers. We also believe that the clients have the motivation and capacity for the desired change to occur.
• Realistic: This program objective is directly related to the program's goal, which is to support family units where children are at risk for out-of-home placement due to problems with physical abuse.
• Time phased: "By the end of the intervention" is very specific in reference to time frames.
individual objectives, therapeutic objectives, client
objectives, client goals, and client target problems.
All practice objectives formulated by the social
worker and the client must be logically related to the
program’s objectives, which are linked to the pro-
gram’s goal. In other words, all practice objectives for
all clients must be delineated in such a way that they
are logically linked to one or more of the program’s
objectives. If not, then it’s unlikely that the clients’
needs will be met by the program.
If a social worker formulates a practice objective
with a client that does not logically link to one or more
of the program’s objectives, the social worker may be
doing some good for the client but without program
sanction or support. In fact, why would a program
hire a social worker to do something the worker was
not employed to do?
At the risk of sounding redundant, a program is
always evaluated on its program objectives. Thus we
must fully understand that it’s these objectives that
we must strive to attain—all of our “practice” efforts
must be directly linked to them.
Example: Bob’s Self-Sufficiency
Let’s put the concept of a practice objective into con-
crete terms. Following is a simple diagram of how
three practice objectives, if met, lead to increased life
skills, which in turn leads to self-sufficiency. Is the
diagram logical to you? If so, why? If not, why not?
These three interrelated practice objectives for
Bob demonstrate a definite link with the program’s
objective, which in turn is linked to the program’s
goal. It should be evident by now that defining a
practice objective is a matter of stating what is to be
changed. This provides an indication of the client’s
current state, or where the client is. Unfortunately,
knowing this is not the same thing as knowing where
one wants to go. Sometimes the destination is appar-
ent, but in other cases it may be much less clear.
PRACTICE ACTIVITIES
So far we have focused on the kinds of goals and objec-
tives that social workers hope to achieve as a result
of their work. The question now arises: What is that
work? What do social workers do in order to help cli-
ents achieve the program’s objectives such as obtain-
ing knowledge (e.g., knowing how to make nutritional
meals), feelings (feeling less anxious), or behaviors
(reduce the number of truancies per school year)?
The question remains: What practice activities do
social workers engage in to meet a program’s objectives?
The answer, of course, is that they do many different
things. They show films, facilitate group discussions,
hold therapy sessions, teach classes, and conduct indi-
vidual interviews. They attend staff meetings, do paper-
work, consult with colleagues, and advocate for clients.
The important point about all such activities
is that they are undertaken to move clients forward
on one or more of the program’s objectives. All of
evidence-based programs have SMART program
objectives where each objective has practice activities
associated with it.
A social worker who teaches a class on nutrition,
for example, hopes that class participants will learn
certain specific facts about nutrition. If this learn-
ing is to take place, the facts to be learned must be
included in the material presented. In other words,
our practice activities must be directly related to our
client’s practice objectives which are directly related
to our program’s objectives. It’s critically important
that social workers engage in practice activities that
have the best chance to create positive client change.
Defining practice activities is an essential ingre-
dient to understanding what interventions work.
The list of practice activities is endless and dynamic
in that workers can add, drop, and modify them to
suit the needs of individual clients. Reviewing a list of
practice activities with stakeholder groups gives them
Three Practice Objectives for Bob:
1. Increase personal self-management skills
2. Increase general social skills
3. Increase drug resistance skills
→ Program Objective: Increase life skills
→ Program Goal: Become self-sufficient adults
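The linkage the diagram shows can be checked mechanically: a practice objective is program-sanctioned only if it maps to a program objective, which in turn maps to the program goal. A hypothetical Python sketch (the mappings and function name are invented):

```python
# Practice objective -> the program objective it serves.
practice_links = {
    "increase personal self-management skills": "increase life skills",
    "increase general social skills": "increase life skills",
    "increase drug resistance skills": "increase life skills",
}
# Program objective -> the program goal it serves.
program_links = {"increase life skills": "become self-sufficient adults"}

def is_sanctioned(practice_objective: str) -> bool:
    # Without a chain back to a program objective (and hence the goal),
    # the worker would be acting without program sanction or support.
    return practice_links.get(practice_objective) in program_links
```

An objective with no entry in the chain, however worthy, falls outside what the program was designed and funded to do.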
an idea of the nature of client service delivery offered
by the program. A diagram presenting the preceding
discussion in graphic form appears later in this chapter.
LOGIC MODELS
Your program must have a logic model if it’s to have
any creditability. As you briefly saw in Chapter 3
and will see in depth in the following chapter, logic
models are tools that help people physically see the
interrelations among the various components of your
program. A logic model is nothing more than a con-
cept map that visually describes the logic of how your
program is supposed to work.
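A concept map of this kind is often organized around inputs, activities, outputs, and outcomes. Here is a minimal, hypothetical sketch: the component names follow common logic-model usage rather than this text, and the example entries loosely echo the Family Support Program:

```python
# A minimal logic-model sketch for a hypothetical family support program.
logic_model = {
    "inputs": ["two social workers", "group meeting space"],
    "activities": ["teach noncorporal discipline strategies",
                   "refer parents to support groups"],
    "outputs": ["eight weekly group sessions delivered"],
    "outcomes": ["increased use of noncorporal child management strategies"],
}

def is_complete(model: dict) -> bool:
    # Each component must be present and nonempty before the concept map
    # can show how the program is supposed to work.
    required = ("inputs", "activities", "outputs", "outcomes")
    return all(model.get(part) for part in required)
```

The value of the model lies in the arrows between components, not the lists themselves; the check above only confirms that nothing is missing.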
Positions Your Program for Success
The W. K. Kellogg Foundation (2004) suggests that use
of the logic model is an effective way to ensure a pro-
gram’s success. This would be a good time to review
Figures 3.2 and 3.3 in Chapter 3. Using a logic model
throughout the design and implementation of your
program helps organize and systematize your program
planning, management, and evaluation functions:
• In Program Design and Planning, a logic
model serves as a planning tool to develop
program strategy and enhance your ability
to clearly explain and illustrate program
concepts and approach for key stakeholders,
including funders. Logic models can help
craft structure and organization for program
design and build in self-evaluation based on
shared understanding of what is to take place.
During the planning phase, developing a
logic model requires stakeholders to examine
best-practice research and practitioner
experience in light of the strategies and
activities selected to achieve results.
• In Program Implementation, a logic model
forms the core for a focused management
plan that helps you identify and collect
the data needed to monitor and improve
programming. Using the logic model during
program implementation and management
requires you to focus energies on achieving
and documenting results. Logic models
help you to consider and prioritize the
program aspects most critical for tracking
and reporting and make adjustments as
necessary.
• For Program Evaluation and Strategic
Reporting, a logic model presents program
information and progress toward goals in
ways that inform, advocate for a particular
program approach, and teach program
stakeholders.
We all know the importance of reporting results
to funders and to community stakeholders alike.
Communication is a key component of a program’s
success and sustainability. Logic models can help
strategic marketing efforts in three primary ways:
1. Describing programs in language clear
and specific enough to be understood and
evaluated.
2. Focusing attention and resources on
priority program operations and key
results for the purposes of learning and
program improvement.
3. Developing targeted communication and
marketing strategies.
Social worker engages in practice activities → in order to meet the client's practice objective(s) → in order to meet the program's objective(s) → in order to meet the program's goal.
Grinnell, R. M., Gabor, P. A., & Unrau, Y. A. (2015). Program evaluation for social workers : Foundations of evidence-based programs. Oxford University Press, Incorporated.
Copyright © 2015. Oxford University Press, Incorporated. All rights reserved.
Part II: Designing Programs
Simple and Straightforward Pictures
A picture is worth a thousand words. The point of
developing a logic model is to come up with a relatively
simple image that reflects how and why your program
will work. Doing this as a group brings the power of
consensus and group examination of values and beliefs
about change processes and program results.
Reflect Group Process and Shared Understanding
A logic model developed by all of a program’s stake-
holders produces a useful tool and refines the pro-
gram’s concepts and plans during the process. We
recommend that a logic model be developed collab-
oratively in an inclusive, collegial process that engages
as many key stakeholders as possible.
Change Over Time
Like programs, logic models change over time. Thus
as a program grows and develops, so does its logic
model. A program logic model is merely a snap-
shot of a program at one point in time. It’s a work
in progress—a working draft—that can be refined as
your program develops.
SUMMARY
This chapter briefly discussed what a social work agency is and how programs fit within it. It touched on the fundamentals of evidence-based programs and presented a few criteria for selecting one out of the many that exist. We discussed how to construct program goals, objectives, indicators, practice objectives, and practice activities. The chapter ended with a brief rationale for why evidence-based programs need to have logic models, which are explored in depth in the following chapter.
Study Questions Chapter 7
The goal of this chapter is to provide you with a beginning knowledge base so that you feel comfortable in answering the questions below. AFTER you have read the chapter, indicate how comfortable you feel you are in answering each of the following questions on a 5-point scale where 1 = Very uncomfortable, 2 = Somewhat uncomfortable, 3 = Neutral, 4 = Somewhat comfortable, and 5 = Very comfortable.
If you rated any question between 1–3, reread the section of the chapter where the information for the question is found. If you still feel that you're uncomfortable in answering the question, then talk with your instructor and/or your classmates for more clarification.
Questions (circle one number from 1 to 5 for each):
1. Discuss how mission statements are used within agencies.
2. Discuss how goals are used within agencies.
3. Discuss the differences between an agency's mission statement and its goal. Provide a social work example throughout your discussion.
4. List and then discuss the four requirements of an agency's goal. Provide an example of one using your field placement (or work) setting.
5. What's an agency? What's a program? Discuss the differences between the two.
6. List and then discuss the four themes that you can use in naming social work programs. Rename the program that you are housed within in reference to your field (or work) setting using the criteria presented in the book.
7. What are evidence-based programs? Select one from the websites presented in the book and discuss what the program is all about and how it was evaluated to become "evidence-based."
8. Discuss each one of the 23 criteria that need to be addressed when you select an evidence-based program to implement within your community.
9. List and then discuss the 11 criteria that need to be considered when writing a program goal.
10. Discuss the differences between an agency's goal and a program's goal.
11. What are program objectives? Provide a social work example throughout your discussion.
12. What are knowledge-based objectives? Provide a social work example throughout your discussion.
13. What are affect-based objectives? Provide a social work example throughout your discussion.
14. What are behaviourally based objectives? Provide a social work example throughout your discussion.
15. What are SMART objectives? Provide a social work example throughout your discussion.
16. What are indicators of a program objective? Provide a social work example throughout your discussion.
17. What are practice objectives? Provide a social work example throughout your discussion.
18. What are practice activities? Provide a social work example throughout your discussion.
19. What are logic models? Why are they useful to social work programs?
Chapter 7 Assessing Your Self-Efficacy
AFTER you have read this chapter AND have completed all of the study questions, indicate how knowledgeable you feel you are for each of the following concepts on a 5-point scale where 1 = Not knowledgeable at all, 2 = Somewhat unknowledgeable, 3 = Neutral, 4 = Somewhat knowledgeable, and 5 = Very knowledgeable.
Concepts (circle one number from 1 to 5 for each):
1. The differences between agencies and programs
2. Agency and program mission statements
3. Agency and program goals
4. Requirements for agency and program goals
5. Constructing program names
6. Designing social work programs
7. Evidence-based programs
8. Criteria for selecting evidence-based programs
9. Writing program goals
10. Writing program objectives
11. Selecting indicators for program objectives
12. Formulating practice objectives
13. Formulating practice activities
14. Logic models
Add up your scores (minimum = 14, maximum = 70). Your total score = ____
A (66–70) = Professional evaluator in the making
A– (63–65) = Senior evaluator
B+ (59–62) = Junior evaluator
B (56–58) = Assistant evaluator
B– (14–55) = Reread the chapter and redo the study questions
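The scoring rubric above is simple arithmetic: sum the fourteen ratings (each 1–5) and read off the band. As a sketch only, the band boundaries below are copied from the table, while the function name and error handling are our own additions, not the book's:

```python
def self_efficacy_band(total: int) -> str:
    """Map a Chapter 7 self-efficacy total (14-70) to the book's grade bands."""
    if not 14 <= total <= 70:
        # 14 concepts, each rated on a 1-5 scale, so totals outside 14-70 are impossible
        raise ValueError("total must be between 14 and 70")
    if total >= 66:
        return "A: Professional evaluator in the making"
    if total >= 63:
        return "A-: Senior evaluator"
    if total >= 59:
        return "B+: Junior evaluator"
    if total >= 56:
        return "B: Assistant evaluator"
    return "B-: Reread the chapter and redo the study questions"
```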
CHAPTER OUTLINE
MODELS AND MODELING
Concept Maps
Two Types of Models: One Logic
Examples
LOGIC MODELS AND EVALUATION DESIGN
Limitations
Models Begin With Results
Logic Models and Effectiveness
BASIC PROGRAM LOGIC MODELS
Assumptions Matter
Key Elements of Program Logic Models
Nonlinear Program Logic Models
Hidden Assumptions and Dose
BUILDING A LOGIC MODEL
From Strategy to Activities
Action Steps for a Program Logic Model
Creating Your Program Logic Model
SUMMARY
THE PROCESS
CENTERS FOR DISEASE CONTROL AND PREVENTION
The previous two chapters presented the rationale for how case- and program-level evaluations (internal and external) help us to become more accountable to society. As you know, our programs are extremely complex and dynamic organizations that must attend to numerous outside pressures and to their own internal struggles, all while providing efficient and effective services to clients.
Not only do program evaluations (i.e., need, pro-
cess, outcome, efficiency) bring us a step closer to
accountability; they also help line-level workers and
evaluators alike learn about our clients’ life experi-
ences, witness client suffering, observe client progress
and regress, and feel the public’s pressure to produce
totally unrealistic “magnificent and instant positive
change” with extremely limited resources.
Integrating evaluation activities into our program’s
service delivery system, therefore, presents an immense
opportunity for us to learn more about social problems,
the people they affect, and how our interventions actu-
ally work. For organizational learning to occur, however,
there must be an opportunity for continuous, meaning-
ful, and useful evaluative feedback. And this feedback
must make sense to all of our stakeholder groups. All
levels of staff within a program have an influence on the
program’s growth and development, so they all must be
involved in the “evaluative processes” as well. Thus we
now turn our attention to the evaluative process.
THE PROCESS
What’s this “evaluative process,” you ask? The answer
is simple. It’s a tried-and-true method that contains
six general steps as presented in Figure 3.1. As with the
previous editions of this book, the steps and all related
text have been adapted and modified from the Centers
for Disease Control and Prevention (CDC; 1999a,
1999b, 1999c, 2005, 2006, 2010, 2011, 2013); Milstein,
Wetterhall, and CDC Evaluation Working Group (2000),
and Yarbrough, Shulha, Hopson, and Caruthers (2011).
It’s our opinion that the CDC’s evaluative frame-
work that we use to describe the program evaluation
process is the “gold standard” of all the evaluation
frameworks that exist today. Hopefully you will agree
after reading this chapter.
The steps are all interdependent
and, more often than not, are executed in a nonlinear
sequence. An order exists, however, for fulfilling each
step—earlier steps provide the foundation for sub-
sequent steps. Now that we know there are six steps
when evaluating social work programs we imme-
diately turn our attention to the first step: engaging
stakeholders in the evaluative process.
STEP 1: ENGAGE STAKEHOLDERS
For all four types of evaluations mentioned in the pre-
vious chapter and presented in depth in Part III of this
book, the evaluation cycle begins by engaging all of
our stakeholder groups. As we know by now, almost
all social work evaluations involve partnerships with
and among its stakeholders; therefore, any evaluation
of a program requires considering the value systems of
the various stakeholder groups. As you know from the
previous two chapters, they must be totally engaged in
"Excellence is a continuous process and not an accident."
~ A. P. J. Abdul Kalam
Part I: Preparing for Evaluations
the evaluation of your program in order to ensure that
their perspectives are understood, appreciated, and,
more important, heard. We cannot overemphasize this point: if you don't include your stakeholders in an evaluation, it will fail. Guaranteed.
Stakeholders are people or organizations invested
in your program, interested in the results of your
evaluation, and/or with a stake in what will be done
with the results of your evaluation. Representing their
needs and interests throughout the process is funda-
mental to doing a good
program evaluation.
When stakeholders are not engaged, your evalu-
ation findings can easily be ignored, criticized, or
resisted because your evaluation doesn’t address your
stakeholders’ individual evaluation questions or val-
ues. After becoming involved, stakeholders can easily
help to execute the other five steps. Identifying and
engaging three stakeholder groups are critical to your
evaluation:
Group 1: Those involved in your program’s
operations, such as sponsors,
collaborators, coalition partners, funding
officials, administrators, executive
directors, supervisors, managers, line-level
social workers, and support staff.
Group 2: Those served or affected by your
program, such as clients, family members,
neighborhood organizations, academic
institutions, elected officials, advocacy
groups, professional associations, skeptics,
opponents, and personnel at related or
competing social service
programs.
Group 3: Primary users of your evaluation’s
results, such as the specific persons in a
position to do and/or decide something
regarding the findings that were derived
from your evaluation.
Clearly, the three categories are not mutually
exclusive; in particular, the primary users of evalu-
ation findings are often members of the other two
groups. For example, your program’s executive direc-
tor (Group 1) could also be involved in an advocacy
organization or coalition (Group 2) in addition to
being the main person who would utilize your evalu-
ation’s recommendations (Group 3).
Why Stakeholders Are Important to an Evaluation
Stakeholders can help (or hinder) an evaluation
before it’s even conducted, while it’s being con-
ducted, and after the results are collected and ready
for use. Because so many of our social service efforts
are complex and because our programs may be sev-
eral layers removed from frontline implementation,
stakeholders take on a particular importance in
ensuring meaningful evaluation questions are identi-
fied and your evaluation results will be used to make
a difference.
Stakeholders are much more likely to support
your evaluation and act on the results and recom-
mendations if they are involved in the evaluation pro-
cess. Conversely, without stakeholder support, your
evaluation may be ignored, criticized, resisted, or even
sabotaged.
You need to identify those stakeholders who mat-
ter the most by giving priority to those stakeholders
who:
Figure 3.1: The program evaluation process. [Six steps arranged in a cycle around the evaluation standards of utility, feasibility, propriety, and accuracy: Step 1, Engage Stakeholders; Step 2, Describe the Program; Step 3, Focus the Evaluation; Step 4, Gather Credible Data; Step 5, Justify Conclusions; Step 6, Ensure Use & Share Lessons Learned.]
Chapter 3: The Process
In a Nutshell 3.1
Description of the Program Evaluation Process (Narrative for Figure 3.1)

Step 1: Engage stakeholders. Evaluation stakeholders are people or organizations that are invested in your program, are interested in the results of the evaluation, and/or have a stake in what will be done with evaluation results. Representing their needs and interests throughout the process is fundamental to a good program evaluation.

Step 2: Describe the program. A comprehensive program description clarifies the need for your program, the activities you are undertaking to address this need, and the program's intended outcomes. This can help you when it is time to focus your evaluation on a limited set of questions of central importance. Note that in this step you are describing the program and not the evaluation. Various tools (e.g., logic and impact models) will be introduced to help you depict your program and the anticipated outcomes. Such models can help stakeholders reach a shared understanding of the program.

Step 3: Focus the evaluation design. Focusing the evaluation involves determining the most important evaluation questions and the most appropriate design for an evaluation, given time and resource constraints. An entire program does not need to be evaluated all at once. Rather, the "right" focus for an evaluation will depend on what questions are being asked, who is asking them, and what will be done with the resulting information.

Step 4: Gather credible data. Once you have described the program and focused the evaluation, the next task is to gather data to answer the evaluation questions. Data gathering should include consideration of the following: indicators, sources and methods of data collection, quality, quantity, and logistics.

Step 5: Justify conclusions. When agencies, communities, and other stakeholders agree that evaluation findings are justified, they will be more inclined to take action on the evaluation results. Conclusions become justified when analyzed and synthesized data are interpreted through the "prism" of values that stakeholders bring, and then judged accordingly. This step encompasses analyzing the data you have collected, making observations and/or recommendations about the program based on the analysis, and justifying the evaluation findings by comparing the data against stakeholder values that have been identified in advance.

Step 6: Ensure use and share lessons learned. The purpose(s) you identified early in the evaluation process should guide the use of evaluation results (e.g., demonstrating effectiveness of the program, modifying program planning, accountability). To help ensure that your evaluation results are used by key stakeholders, it's important to consider the timing, format, and key audiences for sharing information about the evaluation process and your findings.
• Can increase the credibility of your efforts or
the evaluation process itself
• Are responsible for day-to-day
implementation of the activities that are part
of your social work program
• Will advocate for (or authorize changes
to) your program that the evaluation may
recommend
• Will fund or authorize the continuation or
expansion of your program
In addition you must include those who partici-
pate in your program and are affected by your pro-
gram and/or its evaluation, such as:
• Program line-level staff, supervisors,
managers, and administrative support
• Local, state, and regional coalitions interested
in the social problem you are trying to solve
• Local grantees of your funds
• Local and national advocacy partners
• Other funding agencies, such as national and
state governments
• State education agencies, schools, and other
educational groups
• Universities, colleges, community colleges,
and other educational institutions
• Local government, state legislators, and state
governors
• Privately owned businesses and business
associations
• Health-care systems and the medical
community
• Religious organizations
• Community organizations
• Private citizens
• Program critics
• Representatives of populations
disproportionately affected by the social
problem you are trying to solve. This should
include current clients and perhaps clients
who have “graduated” from your program.
• Law enforcement representatives
The Role of Stakeholders in an Evaluation
Stakeholder perspectives should influence every
step of your evaluation. Stakeholder input in Step 2:
Describe the Program, ensures a clear and consen-
sual understanding of your program’s activities and
outcomes. This is an important backdrop for even
more valuable stakeholder input in Step 3: Focus the
Evaluation to ensure that the key questions of most
importance are included.
Stakeholders may also have insights or preferences
on the most effective and appropriate ways to collect data
from target respondents. In Step 5: Justify Conclusions,
the perspectives and values that stakeholders bring to
your project are explicitly acknowledged and honored
in making judgments about the data gathered.
The product of this step is a list of
stakeholders to engage and a rationale for
their involvement.
Finally, the considerable time and effort you spent
in engaging and building consensus among stake-
holders pays off in the last step, Step 6: Ensure Use
and Lessons Learned, because stakeholder engage-
ment has created a market for the evaluation’s results,
or findings.
Stakeholders can be involved in your evalu-
ation at various levels. For example, you may want
to include coalition members on an evaluation team
and engage them in developing relevant evaluation
questions, data collection procedures, and data anal-
yses. Or consider ways to assess your partners’ needs
and interests in the evaluation, and develop means
of keeping them informed of its progress and inte-
grating their ideas into evaluation activities. Again,
stakeholders are more likely to support your evalu-
ation and act on its results and recommendations if
they are involved in the evaluation process from the
get-go.
Involve Critics as Well
Have you ever heard the phrase, "keep your friends close and your enemies closer"? Well, this slogan aptly applies to the evaluation process as well. It's
very important for you to engage your program’s crit-
ics in your evaluation. Critics will help you to identify
issues around your program’s strategies and evalua-
tion data that could be attacked or discredited, thus
helping you strengthen the evaluation process. This
information might also help you and others under-
stand the opposition’s rationale and will help you
engage potential agents of change within the opposi-
tion. However, use caution: it’s important to under-
stand the motives of the opposition before engaging
them in any meaningful way.
The emphasis on engaging stakeholders mirrors
the increasing prominence of participatory models or
“action” research in the research/evaluation commu-
nity. A participatory approach combines systematic
inquiry with the collaboration of diverse stakehold-
ers to meet specific needs and to contend with broad
issues of equity and justice.
STEP 2: DESCRIBE THE PROGRAM
Writing a good description of your program sets
the frame of reference for all subsequent decisions
in the evaluation process. Your description enables
comparisons with similar programs and facilitates
attempts to connect your program’s components to
its intended outcomes. Moreover, your stakeholders
might have differing ideas regarding your program’s
overall goal and objectives. Evaluations done without
agreement on your program’s description will be of
limited use.
Using a Logic Model to Describe Your Program
Your evaluation plan must include a logic model for
your program as a whole. When developing your
evaluation plan it’s important to develop a logic
model that specifically describes what you propose
to evaluate. Simply put, the product of this step is a
logic model of what is being evaluated, which must be
accompanied by a text-based description.
Let’s use a quick example of what we mean by
a text-based description. Figure 3.1 presents the six
steps of doing a program evaluation and the following
illustration, “In a Nutshell 3.1,” provides a text-based
description of each step. One shows (i.e., Figure 3.1)
and the other describes (In a Nutshell 3.1).
We strongly encourage you to develop a
text-based description to accompany your logic
model. This description must explain what you are
proposing to evaluate and how this contributes to
accomplishing your program’s intended outcomes.
This section should also describe important program
features of what is being evaluated, such as the con-
text in which your program operates, the character-
istics of the population your program is intended to
reach, and its stage of development (e.g., a pilot activ-
ity versus an activity that has been in place for a num-
ber of years).
The product of this step is creation of a
logic model accompanied by a text-based
description.
Such descriptions are invaluable, not only for
your own records but also for others who might be
interested in implementing activities similar to those
contained in your program. With a clear description
of the activity and context in which your program
resides, other social service programs will be better
able to determine how likely it is that the evaluation
results you obtained relate to what they would see if
they chose to implement these same activities in their
programs. Chapter 8 describes how to construct logic
models in depth. Without a doubt, constructing logic
models causes social work students a great deal of
anxiety. It’s hard to do, as it makes one think in a logi-
cal and consistent manner.
Logic models are nothing more than simple
tools that help people physically see the interrelations
among the various components of your program.
They are concept maps with narrative depictions of
programs in that they visually describe the logic of
how your program is supposed to work.
Figure 3.2 presents the basic five elements of the
standard run-of-the-mill logic model broken down
into the work you plan to do (i.e., numbers 1 and 2)
and the intended results that you expect to see from
your work (i.e., numbers 3–5). Using Figure 3.2 as a
guide, Figure 3.3 describes how to read a logic model
(W. K. Kellogg Foundation, 2004).
In sum, a logic model is a pictorial diagram that
shows the relationship among your program’s compo-
nents. It provides your program staff, collaborators,
stakeholders, and evaluators with a picture of your
program, how it operates, and how it’s intended to
accomplish your program’s objectives.
By discussing the logic model with different
stakeholder groups, you can share your understand-
ing of the relationships among the resources you
have to operate your program, the activities you plan
to do, and the changes or results you wish to achieve
from your activities. The CDC (2006) provides nine
steps that you can follow when developing your logic
model:
Step 1: Establish a logic model work group.
Your evaluation work group can be
composed of program staff, collaborators,
evaluators, and other stakeholders.
Identify areas where each stakeholder is
needed and contact them to discuss their
potential interest in participating in the
discussion and any questions or concerns
they have about your program.
Step 2: Convene the work group to discuss
the purpose and steps for constructing
your logic model. Review and summarize
relevant literature, planning documents,
reports, and data sources that will
help explain your program’s purposes,
activities, and intended outcomes.
Step 3: Provide an overview of the general
logic modeling process. Review the
definitions of terms, outline the overall
steps to construct or revise a logic model,
choose the type of logic model that best
fits your program needs, review your goals
and objectives (if they already exist), or
reach consensus on program goals and
Figure 3.2: The basic logic model. [Five elements: your planned work comprises (1) resources/inputs and (2) activities; your intended results comprise (3) outputs, (4) outcomes, and (5) impact.]
Figure 3.3: How to read a logic model. [Reading across the five elements: Certain resources are needed to operate your program. If you have access to resources, then you can use them to accomplish your planned activities. If you accomplish your planned activities, then you will hopefully deliver the amount of product and/or service that you intended. If you accomplish your planned activities to the extent you intended, then your participants will benefit in certain ways. If these benefits to participants are achieved, then certain changes in organizations, communities, or systems might be expected to occur.]
In a Nutshell 3.2
Step 1: Engaging Stakeholders
When a variety of stakeholders are involved in evaluation planning from the outset you can (a) plan and conduct
evaluations that more closely fit your collective needs, (b) have greater buy-in for the use of your evaluation’s results, and
(c) avoid later critiques of your evaluation or the program by showing a transparent and open evaluation process.
Purpose: Fostering input, participation, and power-sharing among those persons who have an investment in the conduct of your evaluation and its findings; it's especially important to engage primary users of your evaluation's findings.

Role: Helps increase chances that your evaluation will be useful; can improve your evaluation's credibility, clarify roles and responsibilities, enhance cultural competence, help protect evaluation participants, and avoid real or perceived conflicts of interest.

Activities: Consulting insiders (e.g., leaders, staff, clients, and program funding sources) and outsiders (e.g., skeptics); taking special effort to promote the inclusion of less powerful groups or individuals; coordinating stakeholder input throughout the process of your evaluation's design, operation, and use; avoiding excessive stakeholder identification, which might prevent the progress of your evaluation.
It’s time to engage a group of stakeholders to help you create your evaluation plan. The planning team for your evaluation
should include individuals who are interested in—and perhaps affected by—the specific evaluation to be carried out.
There are three major categories of evaluation stakeholders to consider (Russ-Eft & Preskill, 2009, pp. 141–143):
• Primary stakeholders. Individuals who are involved in program operations and who have the ability to use evaluation
findings to alter the course of a program. Examples of primary stakeholders include program staff and managers, as well
as funders.
• Secondary stakeholders. Individuals who are served by the program and therefore are likely to be affected by any
changes made as a result of the evaluation findings. Examples include program participants (e.g., workshop or training
attendees) or others who are directly reached by your program.
• Tertiary stakeholders. Individuals who are not directly affected by programmatic changes that might result from the
evaluation but who are generally interested in the results. Examples include legislators and other state social service
programs.
A final set of stakeholders—often overlooked but important to engage—are program critics. These are individuals or groups
that may oppose your program based on differing values about how to create change, what changes are necessary, or how
best to utilize limited resources. Engaging opponents of the program in your evaluation can strengthen the credibility of
your results and potentially reduce or mitigate some of the opposition.
Multiple stakeholder perspectives can contribute to rich and comprehensive descriptions of what’s being evaluated
while also facilitating a well-balanced and useful evaluation. Your stakeholders may also be engaged in carrying out your
evaluation or in implementing its recommendations.
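The three stakeholder tiers above can be sketched as a small data structure for assembling a planning team. The names, roles, and selection rule in this sketch are illustrative assumptions, not drawn from the text:

```python
from dataclasses import dataclass

# Illustrative sketch of the three stakeholder tiers; all names and
# roles below are hypothetical examples, not from the chapter.
@dataclass
class Stakeholder:
    name: str
    role: str
    tier: str  # "primary", "secondary", or "tertiary"

stakeholders = [
    Stakeholder("Program manager", "operations", "primary"),
    Stakeholder("Funder", "grant oversight", "primary"),
    Stakeholder("Workshop participant", "service recipient", "secondary"),
    Stakeholder("State legislator", "interested observer", "tertiary"),
]

# Primary stakeholders can act on findings, so engage them first.
planning_team = [s.name for s in stakeholders if s.tier == "primary"]
print(planning_team)  # ['Program manager', 'Funder']
```

A real planning team would of course also draw on secondary and tertiary tiers, and on program critics, as the text emphasizes.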
Grinnell, R. M., Gabor, P. A., & Unrau, Y. A. (2015). Program evaluation for social workers : Foundations of evidence-based programs. Oxford University Press, Incorporated.
Copyright © 2015. Oxford University Press, Incorporated. All rights reserved.
58 Part I: Preparing for Evaluations
subsequently outline the objectives in
support of each goal.
Step 4: Decide whether you will use the
“if-then” method, reverse logic method, or
both to construct the logic model. If you
have a clear picture of what your inputs and
activities will be, you will want to use the
“if-then” approach, in which you construct
your logic model from left to right, starting
with the process components and working
toward the outcomes.
The “reverse logic” approach can be
used to work from the right to the left of
the logic model, starting with the goal and
working backward through the process
components. If outputs are predetermined,
you can start from the middle and branch
out in both directions (an approach that
combines the previous two methods).
Step 5: Brainstorm ideas for each logic model
column. After brainstorming is complete,
arrange these items into groups such as
professional development, collaborations,
and so on. Check that each activity
logically links to one or more outputs and
each output links to one or more outcomes.
Step 6: Determine how to show your
program’s accomplishments and select
indicators to measure your outputs
and short-term outcomes. The question
number for each associated indicator
should be placed under the output or
short-term outcome that it measures.
Step 7: Perform checks to assure links across
logic model columns. You should be able
to read the logic model from both left
to right and right to left, ensuring that a
logical sequence exists between all of the
items in each column. It’s often helpful to
color-code specific sections of your logic
model to illustrate which sections logically
follow one another.
Step 8: Ensure that your logic model
represents your program but does not
provide unnecessary detail. Review the
items placed under the headings and
subheadings of the logic model, and
then decide whether the level of detail is
appropriate. The work group should reach
consensus in fine-tuning the logic model
by asking: What items in the logic model
can be combined, grouped together, or
eliminated?
Step 9: Revise and update your logic model
periodically to reflect program changes.
Changes in your logic model may be needed to reflect new or revised programmatic activities or interventions, or to account for new evaluation findings.
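The linkage checks in Steps 5 and 7 can be sketched in code: reading the model left to right, every activity should connect to at least one output, and every output to at least one outcome. The model contents below are hypothetical:

```python
# Hypothetical logic model for a family support program, kept as two
# columns of forward links (activity -> outputs, output -> outcomes).
logic_model = {
    "activities": {"parent support groups": ["sessions held"],
                   "volunteer training": ["volunteers trained"]},
    "outputs": {"sessions held": ["parenting skills improve"],
                "volunteers trained": ["community capacity grows"]},
}

def unlinked(column):
    """Return items in a column that link to nothing downstream."""
    return [item for item, links in logic_model[column].items() if not links]

# Step 5 check: each activity links forward, and each output links forward.
assert not unlinked("activities") and not unlinked("outputs")

# Step 7 check: every output an activity points to actually exists.
assert all(out in logic_model["outputs"]
           for links in logic_model["activities"].values() for out in links)
print("Every activity links to an output; every output links to an outcome.")
```

The same structure can be read right to left (reverse logic) by inverting the link dictionaries.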
Concept Maps
Logic models are nothing more than concept maps. Concept mapping is a technique for displaying information visually; it can be used to illustrate key elements of either the program's design or aspects of the evaluation plan. Surely you have heard the expression
"a picture is worth a thousand words." Concept mapping makes a complicated thing simple. As Albert Einstein said, "If you can't explain it simply, you don't understand it well enough," and "If I can't see it, I can't understand it." And this is the guy who came up with E = mc².
Concept mapping facilitates communication
through pictures; as such, it reduces the amount
of text reading that would otherwise be needed in
a planning process. Specifically, it’s used to dia-
gram concepts and the relationships between them.
Concept maps can illustrate simple or complex ideas.
For example, Figure 7.6 in Chapter 7 shows a simple
concept map illustrating the relationship of the goal
of an agency to the goals of three programs housed
within the agency.
A more complex concept map is shown in
Figure 3.4, which offers a visual illustration of a
client-centered program design for a family and com-
munity support program. The illustration shows the
relationship between the family and community sup-
port components of the program, which share both
Chapter 3: The Process
office space and program objectives. Figure 3.4 also
features the program’s goal and details various activi-
ties that workers engage in. Indeed, Figure 3.4 highlights many key program design concepts that are discussed thoroughly in Chapter 7.
Another example of a concept map is shown in
Figure 3.5. Rather than diagramming the relationship
between program design concepts (as shown in Figure
3.4), the concept map featured in Figure 3.5 shows the
fit of evaluation as a key phase of program operations
in both components of the program. Furthermore, the
picture reveals that the two program components (fam-
ily support and community support) will have separate
evaluations but the results of both will be considered
together when shared with the community.
Communication tools. Concept maps are communica-
tion tools. Thus they can have the effect of answering
evaluation questions about a group’s thinking or gen-
erating new questions that aim for fuller understand-
ing. It’s important to understand that the concept
maps featured in Figures 3.4 and 3.5 present only two
of many possible representations. In viewing the two
illustrations, perhaps you had ideas about how the
program design or the evaluation plan could be illus-
trated differently.
It may be that your idea is to add concepts not fea-
tured, such as identifying priority evaluation questions
or specific measurement instruments. On the other
hand, it may be your opinion that Figure 3.4 could be
simplified by deleting parts of the illustration such as
the program goal statement. Perhaps you see the rela-
tionships between concepts differently and would pre-
fer to see the concept shapes in another arrangement.
Evaluation planning tools. Concept maps are also
planning tools. To be useful as a planning tool, the
exercise of building concept maps should involve
representatives of key stakeholder groups. Bringing
different stakeholders—especially those with diver-
gent views—together to build one concept map can
generate rich discussion. Because communication
Figure 3.4: Concept map of a client-centered program design. The figure diagrams the Family and Community Support Program: its goal (to enhance quality of life for families living in the Edison neighborhood, where problems such as poverty, substance abuse, mental illness, and domestic violence put children at risk for abuse and neglect), its program objectives (reduce child abuse reports, increase support to parents, increase community efficacy), a friendly office space in a highly visible part of the neighborhood, and the activities of its family support and community support components.
can result in intense and impassioned discussions as
stakeholders promote different points of view, it’s wise
to have a skilled facilitator to accomplish the task.
Once concept maps are created they can be used
as visual reminders throughout the planning and
evaluation processes. The visual illustrations can
function as literal maps that chart future discussion
and planning decisions. As such, they should be easily
accessible or displayed in clear sight of those working
on the program and evaluation plans.
For example, suppose that stakeholders of the
family and community support programs wind up
spending 40 minutes of a 60-minute meeting in a
heated debate about the type of activities that workers
are expected to perform in the family support compo-
nent of the program. It would be possible, and perhaps
strategic, for a workgroup member to mention this
fact, point to Figure 3.5, and add the suggestion that
the group needs to wrap up discussion about family
support to ensure that discussion about the commu-
nity support component of the program does not get
ignored.
STEP 3: FOCUS THE EVALUATION
After completing Steps 1 and 2, you and your stake-
holders should have a clear understanding of your pro-
gram and have reached a consensus on its description.
Now your evaluation team needs to focus the evaluative
efforts. This includes determining the most meaning-
ful evaluation questions to ask and the most appropriate
evaluation design to implement that would produce the
most valid and reliable data that will be used to answer
the questions.
Focusing your evaluation assumes that your
entire program does not need to be evaluated at any
specific point in time. Rather, the precise evaluation
design to use entirely depends on what questions are
being asked, who is asking the questions, and what
will be done with the results.
Since resources for evaluation are always limited, we
provide a series of decision criteria to help you determine
the best evaluation focus at any point in time. These cri-
teria are inspired by two of the CDC's four evaluation standards, which are discussed in the following chapter:
Figure 3.5: Concept map of an evaluation plan. The figure diagrams the Family and Community Support Program's evaluation plan: both the family support component and the community support component move through phases of determining needs, defining solutions or time-limited community improvement projects, implementing them, and evaluating, followed by a shared final phase of educating the community about program plans and accomplishments, overseen by the community board.
• Utility (who will use the results and what
information will be most useful to them)
• Feasibility (how much time and resources are
available for the evaluation)
The logic model developed in the previous step,
Step 2: Describing Your Program, sets the stage for
determining the best evaluation focus. The approach
to focusing an evaluation in the CDC Evaluation
Framework differs slightly from traditional evalua-
tion approaches. Rather than a summative evaluation
conducted when your program has run its course and asking "Did your program work?", the CDC framework views evaluation as an ongoing activity over the life of a program that asks, "Is your program working?"
A description of formative and summative evalu-
ations is presented in Box 2.1. This may be an excellent
time to revisit it before reading further.
In short, your social service program will always
be ready for some kind of an evaluation. Because
your logic model displays your program from inputs
through activities/outputs through to the sequence of
outcomes from short term to most distal, it can guide
a discussion of what you can expect to achieve at a
given point in the life of your program.
Should you focus your evaluative efforts on dis-
tal outcomes or only on short- or midterm ones?
Conversely, does a process evaluation make the most
sense right now?
Types of Evaluations
Many different questions can be part of a program
evaluation, depending on how long your program
has been in existence, who is asking the question,
and why the evaluation information is needed. As
we know from the previous chapter, there are four
types of evaluations: need, process, outcome, and
efficiency. This section ignores needs assessments for
the moment and concentrates on questions that the
remaining three types of evaluations can answer for a
program that is already in existence:
• Process evaluations
• Outcome/efficiency evaluations
Process Evaluations
As we know, process evaluations—sometimes
referred to as implementation evaluations—document
whether a program has been implemented as intended
and the reasons why or why not. In process evalua-
tions you might examine what activities are taking
place, who is conducting the activities, who is reached
through the activities, and whether sufficient inputs
have been allocated or mobilized. How to do process
evaluations is discussed in depth in Chapter 11.
The products of this step include a final set
of evaluation questions and the evaluation
design that will be used to answer the
questions.
Process evaluations are important to help distin-
guish the causes of poor program performance—was
your program a bad idea in the first place, or was it a
good idea that could not reach the standard for imple-
mentation that you previously set? In all cases, process
evaluations measure whether your actual program’s
performance was faithful to your initial plan. Such
measurements might include contrasting actual and
planned performance along all or some of the following:
• The locale where your services or program
are provided (e.g., rural, urban);
• The number of people receiving your services;
• The economic status and racial/ethnic
background of people receiving your services;
• The quality of your services;
• The actual activities that occur while your services are being delivered;
• The amount of money the evaluation is going to cost;
• The direct and in-kind funding for your
services;
• The staffing for your services or programs;
• The number of your activities and meetings;
and
• The number of training sessions conducted.
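A minimal sketch of the planned-versus-actual contrast described above; the measures and figures are invented for illustration:

```python
# Hypothetical planned and actual values for a few process measures.
planned = {"clients served": 120, "training sessions": 10, "staff": 6}
actual = {"clients served": 95, "training sessions": 10, "staff": 5}

# Flag any measure where actual performance fell below the plan.
shortfalls = {m: (planned[m], actual[m])
              for m in planned if actual[m] < planned[m]}
for measure, (plan, got) in shortfalls.items():
    print(f"{measure}: planned {plan}, actual {got}")
```

In practice such a comparison would also cover qualitative measures (e.g., service quality), which do not reduce to a single number this neatly.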
When evaluation resources are limited—as they
usually are—only the most important issues of imple-
mentation can be included. The following are some
IN A NUTSHELL
Step 2: Describing Your Program
Program descriptions set the frame of reference for all subsequent decisions in an evaluation. The description enables
comparisons with similar programs and facilitates attempts to connect program components to their effects. Moreover,
stakeholders might have differing ideas regarding your program’s goal and objectives.
Evaluations done without agreement on the program definition are likely to be of limited use. Sometimes, negotiating
with stakeholders to formulate a clear and logical description will bring benefits before data are available to evaluate your
program’s effectiveness.
Content areas to include in a program description are presented below.
Purpose
Scrutinizing the features of the program being evaluated, including its purpose and place
in the larger social service delivery context. Description includes information regarding the
way your program was intended to function and the way that it actually was implemented.
Also includes features of your program’s context that are likely to influence conclusions
regarding your program.
Role
Improves evaluation’s fairness and accuracy; permits a balanced assessment of strengths
and weaknesses; and helps stakeholders understand how program features fit together
and relate to a larger context.
Activities
Characterizing the need (or set of needs) addressed by your program; listing specific
expectations as goals, objectives, and criteria for success; clarifying why program
activities are believed to lead to expected changes; drawing an explicit logic model to
illustrate relationships between program elements and expected changes; assessing
your program’s maturity or stage of development; analyzing the context within which
your program operates; considering how your program is linked to other ongoing efforts;
avoiding creation of an overly precise description for a program that is under development.
Suggested content areas to address when describing your program
Questions to ask and answer about each content area are listed below. Your main goal is to end up with a logic model that clearly paints an accurate picture of what your program is all about.
Need: What problem or opportunity does your program address? Who experiences it?
Context: What is the operating environment around your program? How might environmental influences such as history, geography, politics, social and economic conditions, secular trends, or efforts of related or competing organizations affect your program and its eventual evaluation?
Stage of development: How mature is your program? Is your program mainly engaged in planning, implementation, or effects? Is your program the only game in town, or are there similar programs in your immediate area?
“usual suspects” that compromise a program’s imple-
mentation and might be considered for inclusion in a
process evaluation:
• Transfers of accountability: When a
program’s activities cannot produce the
intended outcomes unless some other person
or organization takes appropriate action,
there is a transfer of accountability.
• Dosage: The intended outcomes of a
program’s activities (e.g., training, case
management, counseling) may presume a
threshold level of participation or exposure
to the intervention.
• Access: When intended outcomes require not
only an increase in consumer demand but
also an increase in supply of services to meet
it, then the process evaluation might include
measures of access.
• Staff competency: The intended outcomes
may presume well-designed program activities delivered by staff who are not only technically competent but also matched appropriately to the target audience. Measures of the match of staff and target audience might be included in the process evaluation.
Outcome/Effectiveness Evaluations
Outcome evaluations assess progress on the
sequence of outcomes your program is to address.
Programs often describe this sequence using terms
like “short-term,” “intermediate,” and “long-term out-
comes,” or “proximal” (close to the intervention) or “dis-
tal” (distant from the intervention). How to do outcome
evaluations is discussed in depth in Chapter 12.
Depending on the stage of development of your
program and the purpose of the evaluation, outcome
evaluations may include any or all of the outcomes in
the sequence, including:
• Changes in client’s attitudes, behaviors,
feelings, cognitions, and beliefs;
• Changes in risk or protective behaviors;
• Changes in the environment, including
public and private policies, formal and
informal enforcement of regulations, and
inf luence of social norms and other societal
forces; and
• Changes in trends in morbidity and
mortality.
While process and outcome evaluations are the
most common of all four types of evaluations, there
are several other types of evaluation questions that
can be central to a specific program evaluation. These
include the following:
• Efficiency: Are your program’s activities
being produced with minimal use of
resources such as budget and staff time?
What is the volume of outputs produced by
Resources (will go in logic model): What assets are available to conduct your program's activities, such as time, talent, technology, information, and money?
Activities (will go in logic model): What steps, strategies, or actions does your program take to effect change?
Expected effects (will go in logic model): What changes resulting from your program are anticipated? What must your program accomplish to be considered successful?
Logic model: What is the hypothesized sequence of events for bringing about change? How do your program's elements (i.e., resources, activities, expected effects) connect with one another to form a plausible picture of how your program is supposed to work? Logic models are discussed in Chapter 8.
the resources devoted to your program? (This
topic is covered in Chapter 13.)
• Cost-effectiveness: Does the value or benefit
of your program’s outcomes exceed the cost
of producing them? (This topic is covered in
Chapter 13.)
• Cause-effect: Can your program's outcomes be related to your program, as opposed to other things that are going on at the same time? (This topic is covered in Tool E.)
All of these types of evaluation questions relate to
a part, but not all, of your logic model. Figures 3.6a and
3.6b show where in the logic model each type of evalu-
ation focuses. As can be seen in Figure 3.6a, process
evaluations would focus on the inputs, activities, and
outputs and would not be concerned with outcomes/
effectiveness. Effectiveness evaluations would do the
opposite—focusing on some or all outcome boxes but
not necessarily on the activities that produced them.
As can be seen in Figure 3.6b, efficiency evalua-
tions care about the arrows linking inputs to activi-
ties/outputs—how much output is produced for a
given level of inputs/resources. Cause-effect focuses
on the arrows between specific activities/outputs and
specific outcomes—whether progress on the outcome
is related to the specific activity/output.
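The mapping described here (and pictured in Figures 3.6a and 3.6b) can be summarized as a simple lookup from evaluation type to the slice of the logic model it examines. The segment labels follow the text; the dictionary itself is just an illustrative sketch:

```python
# Which slice of the logic model each evaluation type focuses on,
# per Figures 3.6a and 3.6b.
focus = {
    "process/implementation": ["inputs", "activities", "outputs"],
    "outcome/effectiveness": ["short-term outcomes",
                              "intermediate outcomes",
                              "long-term outcomes"],
    # These two focus on the arrows (links), not the boxes themselves.
    "efficiency": ["inputs -> activities/outputs"],
    "causal attribution": ["activities/outputs -> outcomes"],
}
print(focus["process/implementation"])  # ['inputs', 'activities', 'outputs']
```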
Determining the Focus of an Evaluation
Determining the “correct” evaluation focus is
solely determined on a case-by-case basis. Several
guidelines inspired by the utility and feasibility evalu-
ation standards (discussed in the following chapter)
can help you determine the best focus.
Utility Considerations
1. What is the purpose of your evaluation?
“Purpose” refers to the general intent of
your evaluation. A clear purpose serves
as the basis for your evaluation questions,
evaluation design, and data collection
methods. Some common purposes are:
• Gain new knowledge about your program’s
activities;
• Improve or fine-tune an existing program’s
operations (e.g., program processes or
strategies);
• Determine the effects of your program by
providing data concerning your program’s
contributions to its long-term goal; and
• Affect your program’s participants by
acting as a catalyst for self-directed change
(e.g., teaching).
2. Who will use the results from your evalua-
tion? Users are the individuals or organiza-
tions that will utilize your evaluation findings.
The users will likely have been identified
during Step 1 in the process of engaging stake-
holders. In this step you needed to secure their
input in the selection of evaluation questions
Figure 3.6a: Using logic models to determine types of possible evaluations. The figure overlays a logic model (inputs, activities, outputs, and short-term, intermediate, and long-term effects/outcomes) with the scope of process/implementation evaluations (inputs through outputs) and outcome/effectiveness evaluations (the outcome boxes).
Figure 3.6b: Using logic models to determine types of possible evaluations. The figure overlays the same logic model with the scope of efficiency evaluations (the links from inputs to activities/outputs) and causal attribution (the links from activities/outputs to outcomes).
and the evaluation design that would gather
data to answer the questions. As you know
by now, support from the intended users will
increase the likelihood that your evaluation
results will be used for program improvement.
3. How will the users actually use the
evaluation results? Many insights on use
will have been identified in Step 1. Data
collected may have varying uses, which
should be described in detail when design-
ing your evaluation. Some examples of
uses of evaluation findings are as follows:
• To document the level of success in
achieving your program’s objectives;
• To identify areas of your program that
need improvement;
• To decide on how to allocate resources;
• To mobilize community support;
• To redistribute or expand the locations where your program or intervention is being carried out;
• To improve the content of your program’s
materials;
• To focus your program’s resources on a
specific client population; and
• To solicit more funds or additional partners.
4. What do other key stakeholders need from
your evaluation? Of course the most important
stakeholders are those who request or who will
use the results from your evaluation. Never-
theless, in Step 1, you may also have identified
stakeholders who, while not using the findings
of the current evaluation, have key questions
that may need to be addressed in your evalua-
tion to keep them engaged. For example, a par-
ticular stakeholder may always be concerned
about costs, disparities, or cause-effect issues. If
so, you may need to add those questions when
deciding on an evaluation design.
Feasibility Considerations
The first four questions will help you to identify
the most useful focus of your evaluation, but you must
also determine whether it’s a realistic and feasible one.
Questions 5 through 7 provide a reality check on your
desired focus:
5. What is the stage of development of your
program? During Step 2 you identified
your program’s stage of development.
There are roughly three stages in program
development—planning, implementation,
and maintenance—that suggest different
focuses. In the planning stage, a truly for-
mative evaluation—who is your target cli-
entele, how do you reach them, how much
will it cost—may be the most appropriate
focus.
An evaluation that included program
outcomes would make little sense at this
stage. Conversely, an evaluation of a pro-
gram in a maintenance stage would need
to include some measurement of progress
on developing program outcomes, even if
it also included questions about its imple-
mentation.
6. How intensive is your program? As you
know, some social work programs are
wide-ranging and multifaceted. Others may
use only one approach to address a large
problem. Some programs provide extensive
exposure (“dose”) of a program, while oth-
ers involve participants quickly and superfi-
cially. Simple or superficial programs, while
potentially useful, cannot realistically be
expected to make significant contributions
to distal outcomes of a larger program, even
when they are fully operational.
7. What are relevant resource and logistical
considerations? Resources and logistics may
influence decisions about your evaluation’s
focus. Some outcomes are quicker, easier, and
cheaper to measure, while others may not
be measurable at all. These facts may tilt the
decision about the focus of your evaluation
toward some outcomes as opposed to others.
Early identification of inconsistencies
between utility and feasibility is an impor-
tant part of the evaluation focus step. But
we must also ensure a “meeting of the
minds” on what is a realistic focus for a
specific program evaluation at a specific
point in time.
Narrowing Down Evaluation
Questions
As should be evident by now, social work programs
are complex entities. In turn, any evaluation within
them can also be multifaceted and can easily go in
many different directions. For example, a program
evaluation can produce data to answer many different
questions, such as, “Is a program needed in the first
place?” (Chapter 10); “What exactly is my program?”
(Chapter 11); “Is my program effective?” (Chapter 12);
and “Is my program efficient?” (Chapter 13)?
The list of possible evaluation questions is limit-
less, but program resources—human and fiscal—are
not. As such, an essential planning task of any evalua-
tion is to decide on a reasonable number of questions
that will be the main focus of your evaluation. The W.
K. Kellogg Foundation (1998) provides four tips for
developing evaluation questions:
Tip 1: Ask yourself and evaluation team
members why you are asking the questions
you are asking and what you might be
missing.
Tip 2: Different stakeholders will have
different questions. Don’t rely on one or
two people (external evaluator or funder)
to determine questions. Seek input from as
many perspectives as possible to get a full
picture before deciding on what questions
to answer.
Tip 3: There are many important questions
to address. Stay focused on the primary
purpose for your evaluation activities at
a certain point in time and then work to
prioritize which are the critical questions
to address. Because your evaluation
will become an ongoing part of project
management and delivery, you can and
should revisit your evaluation questions
and revise them to meet your current needs.
Tip 4: Examine the values embedded in the
questions you are asking. Whose values
are they? How do other stakeholders,
particularly evaluation participants, think
and feel about this set of values? Are there
different or better questions the evalua-
tion team members and other stakeholders
could build consensus around?
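One way to operationalize the prioritization in Tip 3 is to have the planning team score each candidate question on the utility and feasibility standards and rank by the combined score. The questions and the 1-to-5 scores below are invented for illustration, and a real team would weigh values and stakeholder input (Tips 2 and 4), not just numbers:

```python
# (question, utility score, feasibility score) -- all invented examples.
candidates = [
    ("Is the program reaching its target clientele?", 5, 4),
    ("Do outcomes exceed the cost of producing them?", 4, 2),
    ("Has staffing matched the implementation plan?", 3, 5),
]

# Rank by combined score; limited resources permit only the top questions.
ranked = sorted(candidates, key=lambda q: q[1] + q[2], reverse=True)
top_question = ranked[0][0]
print(top_question)  # Is the program reaching its target clientele?
```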
Sources for Questions
By focusing a program evaluation around clearly
defined questions, evaluation activities can be kept
manageable, economical, and efficient. All too often
stakeholders identify more interests than any single
evaluation can reasonably manage.
A multitude of stakeholder-related sources
can be utilized to generate a list of potential evalu-
ation questions. The W. K. Kellogg Foundation
(1998) lists nine stakeholder-related sources for our
consideration:
Source 1: Program Director: Directors are
usually invaluable sources of information
because they are likely to have the “big
picture” of the project.
Source 2: Program Staff/Volunteers: Staff
members and volunteers may suggest
unique evaluation questions because they
are involved in the day-to-day operations
of the program and have an inside
perspective of the organization.
[Figure: Evaluation questions mapped to chapters — Chapter 10: Is a program needed? Chapter 11: What exactly is my program anyway? Chapter 12: Is my program effective? Chapter 13: Is my program efficient?]
Grinnell, R. M., Gabor, P. A., & Unrau, Y. A. (2015). Program evaluation for social workers : Foundations of evidence-based programs. Oxford University Press, Incorporated.
Copyright © 2015. Oxford University Press, Incorporated. All rights reserved.
Chapter 3: The Process 67
Source 3: Program Clientele: Participants/
consumers offer crucial perspectives
for the evaluation team because they
are directly affected by the program’s
services. They have insights into the
program that no other stakeholder is
likely to have.
Source 4: Board of Directors/Advisory
Boards/Other Project Leadership: These
groups often have a stake in the program
and may identify issues they want
addressed in the evaluation process. They
may request that certain questions be
answered to help them make decisions.
Source 5: Community Leaders: Community
leaders in business, social services, and
government can speak to issues underlying
the conditions of the target population.
Because of their extensive involvement in
the community, they often are invaluable
sources of information.
Source 6: Collaborating Organizations:
Organizations and agencies that are
collaborating with the program should
always be involved in formulating
evaluation questions.
Source 7: Program Proposal and Other
Documents: The program proposal, funder
correspondence, program objectives and
activities, minutes of board and advisory
group meetings, and other documents may
be used to formulate relevant evaluation
questions.
Source 8: Content-Relevant Literature and
Expert Consultants: Relevant literature
and discussion with other professionals
in the field can be potential sources of
information, and of possible questions, for
evaluation teams.
Source 9: Similar Programs/Projects:
Evaluation questions can also be obtained
from executive directors and staff of other
programs, especially when their programs
are similar to yours.
Techniques to Focus Questions
Figure 3.7 shows a simple survey that we used to
aid us in an evaluation planning session within a rural
literacy program. The 24 questions shown in Figure 3.7
are only a sample of those generated by the program’s
stakeholders, which included representation from the
program’s steering committee, administration, and
workers, as well as other professionals and local citi-
zens; a total of 20 stakeholders participated in the plan-
ning process. The complete brainstorm list (not shown)
included more than 80 questions—far too many to focus
the program’s evaluation, which had a modest budget.
The simple survey shown in Figure 3.7 was created to gather stakeholder input that would help identify priority questions of interest. Because the questions listed were created by the program’s stakeholders themselves, the survey also had the added benefit of showing stakeholders that their ideas were both valued and being put to good use in planning the program’s evaluation strategy.
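A tally like this can be automated once the surveys are returned. The following Python sketch is purely illustrative (the question labels and ratings below are hypothetical, not the literacy program’s actual survey data); it shows one way a planning team might rank brainstormed questions by averaging each question’s 1–3 priority ratings, where a lower mean indicates stronger stakeholder consensus to keep the question.

```python
from statistics import mean

# Hypothetical ratings from stakeholders: 1 = Definitely Keep,
# 2 = Deserves Consideration, 3 = Throw Out.
# Each question maps to the list of ratings it received.
ratings = {
    "Q1: Who referred family to the program?": [1, 1, 2, 1, 3],
    "Q16: How satisfied were parents with program?": [1, 1, 1, 2, 1],
    "Q20: Do children's literacy skills improve?": [1, 1, 1, 1, 1],
    "Q10: Does family have (or want) a library card?": [3, 2, 3, 3, 2],
}

# Rank questions by mean rating (lower mean = higher priority to keep).
priorities = sorted(ratings.items(), key=lambda item: mean(item[1]))

for question, scores in priorities:
    print(f"{mean(scores):.2f}  {question}")
```

Questions that sort to the bottom of such a list, like Q10 in this made-up example, would be candidates for dropping when the evaluation budget is modest.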
Evaluations that are not sufficiently focused generally result in large and unwieldy data collection efforts. Unfortunately, when mass quantities of data are collected without a forward-thinking plan—linking the data collected to the evaluation questions to be answered—the data may be compromised by poor reliability and validity. On the other hand, evaluation data derived from carefully focused questions make it much easier to maintain the integrity of the data collection process and produce credible results.
Focusing an evaluation does not imply that only one
part or aspect of a program or service will be of interest.
In fact, there are usually a number of different interests
that can be accommodated within a single evaluation.
Figure 3.7, for example, suggests that, depending on
the stakeholders’ ratings, the literacy program’s evalua-
tion could end up focusing on questions related to cli-
ent characteristics (Questions 1–10), program services
(Questions 11–18), or client outcomes (Questions 19–24),
or a combination of all three.
Focusing evaluation questions means that pro-
gram interests are first identified and the evaluation’s
activities are then organized around those interests.
Thus there can be multiple points of focus within a
68 Part I: Preparing for Evaluations
Figure 3.7: Example of a simple survey determining the priority of the evaluation questions that were selected for the final evaluation.
Evaluation Question Priority Survey

Instructions: (1) Rate each question by circling one number using the scale to the right of each question: 1 = Definitely Keep, 2 = Deserves Consideration, 3 = Throw Out. (2) Feel free to add questions that you consider to be a priority for evaluation.

Client Characteristic Questions:
1. Who referred family to the program?
2. How many children in the family?
3. How old is each family member?
4. How long has the family lived in the community?
5. What is the family structure?
6. Does the family live in town or rural?
7. Does the family access other community services?
8. What languages are spoken in the home?
9. What are the education levels of parents?
10. Does family have (or want) a library card?

Program Service Questions:
11. How many visits were made to the family?
12. How long was each visit?
13. How many scheduled visits were missed? Why?
14. How many times was family not ready for the visit?
15. Did family readiness improve over time?
16. How satisfied were parents with program?
17. How satisfied was family with the worker?
18. What was easiest/most difficult for you in the program?

(continued)
single evaluation, but it’s important that these be
clearly identified and planned from the beginning.
The focal questions selected for a program’s evalu-
ation need not remain static. Questions may be added
or deleted as circumstances and experiences dictate. In
other words, a specific set of questions may guide the
focus of an evaluation for a limited period of time.
STEP 4: GATHERING CREDIBLE DATA
In this step you will work with your stakeholders to
identify the data collection methods and sources you
will use to answer your evaluation questions. You will
need to review your data collection plan in light of the
work you did in your evaluation planning process:
• Are there new data sources you may want to
incorporate?
• Do your methods meet your stakeholders’
needs for information?
• Do you need to adjust your data collection
timeline?
• Are there measures you might standardize
across evaluations?
For new evaluative efforts, you may want to build in a pilot test or other small-scale data collection efforts before conducting a more intensive effort. As
you develop your data collection approach, it’s critical
to keep in mind why you are collecting the data and
how you will use them. Being explicit about the use of
data before they are collected helps you to conserve
resources and reduces respondent burden.
The products of this step include data
collection methods and indicators that will
be used to answer your evaluation questions.
Your stakeholders may also help identify indi-
cators that could be used to judge your program’s
success. Let’s say you have chosen to evaluate a relatively new, educationally oriented intervention designed to educate line-level social workers within your community about how the Affordable Care Act (aka Obamacare) will affect their clientele. You want to know, for example, to what extent your intended target audience is attending the training (item 1 below) and completing it (item 2 below), broken down by the type of practitioner they are (item 3 below). Your stakeholders decide that training attendance logs will be maintained and recommend including the following three specific indicators:
1. Attendance
2. Proportion of attendees who complete the
training
3. Type of social work practitioner (community organizers, group workers, school social workers, medical social workers,
Figure 3.7: (continued)

Client Outcome Questions:
19. Do clients show change after the program?
20. Do children’s literacy skills improve?
21. Do reading behaviors change?
22. Were the parents’ expectations of program met?
23. What is the support worker’s evaluation of services?
24. Has enjoyment for reading increased?
IN A NUTSHELL 3.4 Step 3: Focusing Your Evaluation Design

The direction and process of your evaluation must be focused to assess issues of greatest concern to stakeholders while using time and resources as efficiently as possible. Not all design options are equally well suited to meeting the information needs of your stakeholders.

After data collection begins, changing procedures might be difficult or impossible, even if better methods become obvious. A thorough plan anticipates intended uses and creates an evaluation strategy with the greatest chance of being useful, feasible, ethical, and accurate.

Content areas to include when focusing your evaluation design are presented below.
Purpose
Planning in advance where your evaluation is headed and what steps will be taken; process is
iterative (i.e., it continues until a focused approach is found to answer evaluation questions
with methods that stakeholders agree will be useful, feasible, ethical, and accurate); evaluation
questions and methods might be adjusted to achieve an optimal match that facilitates use by
primary users.
Role
Provides investment in quality; increases the chances that your evaluation will succeed by
identifying procedures that are practical, politically viable, and cost-effective; failure to plan
thoroughly can be self-defeating, leading to an evaluation that might become impractical
or useless; when stakeholders agree on a design focus, it’s used throughout the evaluation
process to keep your project on track.
Activities
Meeting with stakeholders to clarify the real intent or purpose of your evaluation; learning
which persons are in a position to actually use the findings, then orienting the plan to meet their
needs; understanding how your evaluation results are to be used; writing explicit evaluation
questions to be answered; describing practical methods for sampling, data collection,
data analysis, interpretation, and judgment; preparing a written protocol or agreement
that summarizes your evaluation procedures, with clear roles and responsibilities for all
stakeholders; revising parts or all of your evaluation plan when critical circumstances change.
Suggested content areas to address when focusing your evaluation design: Questions to ask and answer about each content area when focusing your evaluation design are listed below. Your main goal is to end up with an evaluation design that is useful, feasible, ethical, and accurate.
Purpose
What is the intent or motive for conducting your evaluation (i.e., to gain insight, change practice,
assess effects, or affect participants)?
Users
Who are the specific persons that will receive your evaluation findings or benefit from being
part of your evaluation? How will each user apply the information or experiences generated
from your evaluation?
Uses How will each user apply the information or experiences generated from your evaluation?
continued
DHS workers, child protection workers,
and so on)
Learn more about how to collect
credible data to answer your
evaluation questions in Tool H in
the Evaluation Toolkit.
You can see from this list of indicators that it will
be important to have a question on the attendance
sheet that asks attendees what type of social work
practitioner they are (item 3). Had you not discussed
the indicators that will be used to determine the “suc-
cess” of your intervention, it’s possible this important
question would have been left off the attendance log.
STEP 5: JUSTIFYING YOUR CONCLUSIONS
Planning for data analyses and interpretation of the
data prior to conducting your evaluation is impor-
tant to ensure that you collect the “right” data to fully
answer your evaluation questions. Think ahead to how
you will analyze the data you collect, what methods
you will use, and who will be involved in interpreting
the results.
Part of this process is to establish standards of
performance against which you can compare the
indicators you identified earlier. You may be familiar
with “performance benchmarks,” which are one type
of standard. In this example, a benchmark for the
indicator “proportion of attendees who complete
training” may be “more than 60% of attendees com-
pleted the training.” Standards often include compar-
isons over time or with an alternative approach (e.g.,
no action or a different intervention). It’s important to
note that the standards established by you and your
stakeholders do not have to be quantitative in nature.
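To make the benchmark idea concrete, here is a minimal Python sketch of such a check. The attendance records, field names, and practitioner types below are invented for illustration (they are not data from the training example in the text); the sketch computes the proportion of attendees who completed the training, tests it against the “more than 60%” standard, and breaks completion down by practitioner type.

```python
# Hypothetical attendance-log records: each attendee's practitioner
# type and whether they completed the training.
log = [
    {"type": "school social worker", "completed": True},
    {"type": "school social worker", "completed": True},
    {"type": "community organizer", "completed": False},
    {"type": "medical social worker", "completed": True},
    {"type": "child protection worker", "completed": False},
    {"type": "medical social worker", "completed": True},
]

BENCHMARK = 0.60  # standard: more than 60% of attendees complete

# Overall completion proportion compared against the benchmark.
completed = sum(1 for rec in log if rec["completed"])
proportion = completed / len(log)
met_benchmark = proportion > BENCHMARK

print(f"Completion: {proportion:.0%} (benchmark met: {met_benchmark})")

# Completion broken down by practitioner type (indicator 3),
# stored as (number completed, number attended) per type.
by_type = {}
for rec in log:
    done, total = by_type.get(rec["type"], (0, 0))
    by_type[rec["type"]] = (done + rec["completed"], total + 1)

for ptype, (done, total) in sorted(by_type.items()):
    print(f"  {ptype}: {done}/{total}")
```

A qualitative standard (e.g., themes from attendee feedback) would be documented in the evaluation plan in the same way, just without the numeric comparison.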
The products of this step include a set of
performance standards and a plan for
synthesizing and interpreting evaluation
findings.
Regardless of whether your “indicators” are quali-
tative or quantitative in nature, it’s important to dis-
cuss with evaluation stakeholders what will be viewed
as a positive finding. The standards you select should be
clearly documented in the individual evaluation plan.
Make sure to allow time for synthesis and inter-
pretation in your individual evaluation plan. At the
completion of your evaluation, you will want to be
able to answer such questions as:
• Overall, how well does what is being
evaluated perform with respect to the
standards established in the evaluation plan?
• Are there changes that may need to be made
as a result of your evaluation’s findings?
Questions
What questions should your evaluation answer? What boundaries will be established to create
a viable focus for your evaluation? What unit of analysis is appropriate (e.g., a system of related
programs, a single program, a project within a program, a subcomponent or process within a
project)?
Methods
What procedures will provide the appropriate information to address stakeholders’ questions
(i.e., what research designs and data collection procedures best match the primary users,
uses, and questions)? Is it possible to mix methods to overcome the limitations of any single
approach?
Agreements
How will your evaluation plan be implemented within available resources? What roles and
responsibilities have the stakeholders accepted? What safeguards are in place to ensure that
standards are met, especially those for protecting human subjects?
IN A NUTSHELL 3.5 Step 4: Gathering Credible Data
Persons involved in an evaluation should strive to collect data that will convey a well-rounded picture of your program and
be seen as credible by your evaluation’s primary users. Data should be perceived by your stakeholders as believable and
relevant for answering their questions. Such decisions depend on the evaluation questions being posed and the motives for
asking them. Having credible data strengthens evaluation judgments and the recommendations that follow from them.
Although all types of data have limitations, an evaluation’s overall credibility can be improved by using multiple procedures
for gathering, analyzing, and interpreting data. When stakeholders are involved in defining and gathering data that they find
credible, they will be more likely to accept your evaluation’s conclusions and to act on its recommendations.
The following aspects of data gathering typically affect perceptions of credibility.
Purpose
Compiling data that stakeholders perceive as trustworthy and relevant for answering their
questions. Such data can be experimental or observational, qualitative or quantitative, or it
can include a mixture of methods. Adequate data might be available and easily accessed,
or it might need to be defined and new data collected. Whether a body of data is credible
to stakeholders might depend on such factors as how the questions were posed, data
sources, conditions of data collection, reliability of the measurement procedures, validity
of interpretations, and quality control procedures.
Role
Enhances the evaluation’s utility and accuracy; guides the scope and selection of data and
gives priority to the most defensible data sources; promotes the collection of valid, reliable,
and systematic data that are the foundation of any effective evaluation.
Activities
Choosing indicators that meaningfully address evaluation questions; describing fully the
attributes of data sources and the rationale for their selection; establishing clear procedures and
training staff to collect high-quality data; monitoring periodically the quality of data obtained
and taking practical steps to improve their quality; estimating in advance the amount of data
required or establishing criteria for deciding when to stop collecting data in situations where an
iterative or evolving process is used; safeguarding the confidentiality of data and data sources.
Suggested content areas to address when collecting credible data: Questions to ask and answer about each content area in relation to collecting credible data are listed below. Your main goal is to collect valid, reliable, and relevant data.
Indicators
How will general concepts regarding the program, its context, and its expected effects
be translated into specific measures that can be interpreted? Will the chosen indicators
provide systematic data that are valid and reliable for the intended uses?
Sources
What sources (i.e., persons, documents, observations) will be accessed to gather data?
What will be done to integrate multiple sources, especially those that provide data in
narrative form and those that are numeric?
Quality Are the data trustworthy (i.e., reliable, valid, and informative for the intended uses)?
Quantity
What amount of data are sufficient? What level of confidence or precision is possible? Is
there adequate power to detect effects? Is the respondent burden reasonable?
Logistics What techniques, timing, and physical infrastructure will be used for gathering and handling data?
STEP 6: ENSURING USAGE AND SHARING LESSONS LEARNED
As we have seen, you can promote the use of your
evaluation findings by the actions you take throughout
your evaluation’s planning process. Building a com-
mitment to using evaluation results both internally
and with your stakeholders is extremely important.
Sharing what you have learned will also add to our
knowledge base about what interventions work with
specific clientele.
The product of this step includes a
communication and reporting plan for your
evaluation.
Thinking about the use of your evaluation findings
does not need to wait until your evaluation is completed
and results are ready to be disseminated. Think early
and often about how and at what points you can (and
need to) make use of your evaluation’s results. Pilot-test
results can be used to improve program processes.
Baseline results can help to better target your interven-
tion. Preliminary findings can help you to refine your
data collection strategies in future rounds. Build time into your schedule to ensure your evaluation’s findings are
actually used. For example, will you have enough time
after your results are finalized to develop an action plan
for program improvement?
Dissemination of results and communication
about lessons learned should not be an afterthought.
To increase the likelihood that intended audiences will
use your evaluation findings for program improve-
ment, it’s important to think through how and with
whom you will communicate as you plan and imple-
ment each evaluation, as well as after the evaluation
has been completed. Your strategy should consider
the purpose, audience, format, frequency, and timing
of each communication (Russ-Eft & Preskill, 2009).
As you develop your dissemination plan, keep in
mind the following:
• Consider what information you want to
communicate. What action do you hope each of
your audiences will take based on the information
you provide? Are you just keeping them informed,
or do you want them to act in some way? Tailor
your communication plan accordingly.
• Your audience will likely vary greatly
across evaluations and also may change as
an evaluation progresses. Think broadly
about who to include in communication.
For instance, at various points in time you
may want to include executive directors,
program managers, supervisors, individuals
participating in planning the evaluation,
legislators or funders, or individuals affected by
your program.
• Formats can be formal or informal and may
include a mix of e-mail correspondence,
newsletters, written reports, working sessions,
briefings, and presentations. Formats may differ
by audience and may also differ over time for
the same audience as information needs change.
• Consider your communication strategies when
estimating the resources that will be required
to carry out your evaluation. If your evaluation
resources are limited, we recommend giving the
greatest consideration to the information needs of
the primary evaluation stakeholders (those who
have the ability to use your evaluation’s findings).
SUMMARY
This chapter presented a discussion on how the CDC’s
six-step evaluation process unfolds and stressed how
our stakeholders need to be involved in every aspect
of our evaluation. The next chapter discusses how we,
as professional social workers, must follow strict pro-
fessional standards when evaluating our programs,
taking into account the contents of the first three
chapters of this book.
IN A NUTSHELL 3.6 Step 5: Justifying Your Conclusions
The conclusions that you draw from your evaluation are justified only when they are directly linked to the data
you gathered. They will be judged against agreed-on values or standards set by your stakeholders. Stakeholders must
agree that your conclusions are justified before they will use the results from your evaluation with any confidence.
Purpose
Making claims regarding your program that are warranted on the basis of data that have
been compared against pertinent and defensible ideas of merit, value, or significance (i.e.,
against standards of values); conclusions are justified when they are linked to the data
gathered and consistent with the agreed-on values or standards of stakeholders.
Role
Reinforces conclusions central to the evaluation’s utility and accuracy; involves values
clarification, qualitative and quantitative data analysis and synthesis, systematic
interpretation, and appropriate comparison against relevant standards for judgment.
Activities
Using appropriate methods of analysis and synthesis to summarize findings; interpreting
the significance of results for deciding what the findings mean; making judgments
according to clearly stated values that classify a result (e.g., as positive or negative
and high or low); considering alternative ways to compare results (e.g., compared with
program objectives, a comparison group, national norms, past performance, or needs);
generating alternative explanations for findings and indicating why these explanations
should be discounted; recommending actions or decisions that are consistent with the
conclusions; and limiting conclusions to situations, time periods, persons, contexts, and
purposes for which the findings are applicable.
Suggested content areas to address when justifying your conclusions: Questions to ask and answer about each content area when it comes to justifying your conclusions are listed below. Your main goal is to end up with conclusions that are based on solid, reliable, and valid data that your stakeholders will appreciate.
Standards
Which stakeholder values provide the basis for forming the judgments? What type or level of performance must be reached for your program to be considered successful? To be considered unsuccessful?
Analysis and synthesis What procedures will you use to examine and summarize your evaluation’s findings?
Interpretation What do your findings mean (i.e., what is their practical significance)?
Judgment
What claims concerning your program’s merit, worth, or significance are justified based on
the available data (evidence) and the selected standards?
Recommendations
What actions should be considered resulting from your evaluation? (Note: Making
recommendations is distinct from forming judgments and presumes a thorough
understanding of the context in which programmatic decisions will be made.)
IN A NUTSHELL 3.7 Step 6: Ensuring Usage and Sharing Lessons Learned
Assuming that the lessons you learned in the course of your evaluation will automatically translate into informed decision-making
and appropriate action would be naive. Deliberate effort is needed on your part to ensure that your evaluation processes and
findings are used and disseminated appropriately. Preparing for use involves strategic thinking and continued vigilance, both of
which begin in the earliest stages of stakeholder engagement and continue throughout the entire evaluation process.
Purpose
Ensuring that stakeholders are aware of the evaluation procedures and findings; the findings
are considered in decisions or actions that affect your program (i.e., findings use); those who
participated in the evaluation process have had a beneficial experience (i.e., process use).
Role
Ensures that evaluation achieves its primary purpose—being useful; however, several
factors might influence the degree of use, including evaluator credibility, report clarity,
report timeliness and dissemination, disclosure of findings, impartial reporting, and
changes in your program or organizational context.
Activities
Designing the evaluation to achieve intended use by intended users; preparing stakeholders
for eventual use by rehearsing throughout the project how different kinds of conclusions
would affect program operations; providing continuous feedback to stakeholders regarding
interim findings, provisional interpretations, and decisions to be made that might affect
likelihood of use; scheduling follow-up meetings with intended users to facilitate the transfer
of evaluation conclusions into appropriate actions or decisions; disseminating both the
procedures used and the lessons learned from the evaluation to stakeholders using tailored
communications strategies that meet their particular needs.
Suggested content areas to address when ensuring usage and sharing lessons learned: Questions to ask and answer about each content area when it comes to ensuring usage of your findings and sharing the lessons you learned are listed below. Your main goal is to be sure your findings are utilized, in addition to sharing with others what lessons you learned from your evaluation.
Design
Is your evaluation organized from the start to achieve the intended uses by your primary
stakeholder groups?
Preparation
Have you taken steps to rehearse the eventual use of your evaluation findings? How have
your stakeholder groups been prepared to translate new knowledge into appropriate action?
Feedback
What communication will occur among parties to the evaluation? Is there an atmosphere of
trust among stakeholders?
Follow-up
How will the technical and emotional needs of users be supported? What will prevent lessons
learned from becoming lost or ignored in the process of making complex or politically
sensitive decisions? What safeguards are in place for preventing misuse of the evaluation?
Dissemination
How will the procedures or the lessons learned from your evaluation be communicated to
your relevant stakeholders in a timely, unbiased, and consistent fashion? How will your
reports be tailored to your various stakeholder groups?
Study Questions Chapter 3
The goal of this chapter is to provide you with a beginning knowledge base for you to feel comfortable in answering the
following questions. AFTER you have read the chapter, indicate how comfortable you feel you are in answering each
question on a 5-point scale where
1 = Very uncomfortable, 2 = Somewhat uncomfortable, 3 = Neutral, 4 = Somewhat comfortable, and 5 = Very comfortable.
If you rated any question between 1–3, reread the section of the chapter where the information for the question is found. If
you still feel that you’re uncomfortable in answering the question then talk with your instructor and/or your classmates for
more clarification.
Questions (circle one number from 1 to 5 to indicate your degree of comfort):

1. Without peeking at Figure 3.1, list the six steps that you would have to go through in doing an evaluation. Then describe each step in relation to your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
2. List the main stakeholder groups that you would need to formulate for your evaluation. Then describe the role that each stakeholder group would have in relation to your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
3. In your own words, describe the purpose of a logic model when describing your program (Step 2). Then describe how it would be used in relation to your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
4. List the five elements of a logic model and describe each element in detail. Then construct a logic model in relation to your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
5. In reference to logic models, what are "if-then" statements? Make an "if-then" statement in relation to your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
6. What are concept maps? How are they used when doing an evaluation? Provide specific social work examples from your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
7. What are the differences between a formative and a summative evaluation? Describe how your field placement (or work setting) could use both of them. (1 2 3 4 5)
8. When focusing an evaluation you must be concerned with two of CDC's standards: utility and feasibility. List the four questions that you will need to ask and answer under the utility standard and the three questions under the feasibility standard. Then describe the two evaluation standards in relation to your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
9. List and describe the four main types of evaluation questions that an evaluation can answer. Then describe each question in relation to your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
10. What four chapters in this book describe how to do needs assessments, process evaluations, outcome evaluations, and efficiency evaluations? (1 2 3 4 5)
11. In reference to formulating evaluation questions, list four tips that you can use to make the task easier. Then describe each tip in relation to your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
12. In reference to formulating evaluation questions, list the nine stakeholder groups (sources) that you can use to make the task easier. Then describe how you can use each source in relation to your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
13. Describe how you will work with stakeholders to describe your program (Step 2). Use your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
14. Describe how you will work with stakeholders to focus your evaluation (Step 3). Use your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
15. Describe how you will work with stakeholders to gather credible data (Step 4). Use your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
16. Describe how you will work with stakeholders to justify your conclusions from an evaluation (Step 5). Use your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
17. Describe how you will work with stakeholders to ensure that your evaluation's findings are used (Step 6). Use your field placement (or work setting) to illustrate your main points. (1 2 3 4 5)
18. Discuss how you would engage "stakeholders" for a program evaluation. Then discuss how you would engage client systems within your field placement setting. Notice any differences between the two? If so, what are they? Provide specific social work examples throughout your discussion. (1 2 3 4 5)
19. Discuss in detail how you would describe a program before it's evaluated. Then discuss in detail how you assess your client systems' psychosocial environments before you intervene. Notice any differences between the two? If so, what are they? Provide specific social work examples throughout your discussion. (1 2 3 4 5)
20. Discuss in detail how you would focus an evaluation. Then discuss how you would narrow down a client's presenting problem area so it can become more specific and manageable. Notice any differences between the two? If so, what are they? Provide specific social work examples throughout your discussion. (1 2 3 4 5)
Chapter 3 Assessing Your Self-Efficacy
AFTER you have read this chapter AND have completed all of the study questions, indicate how knowledgeable you feel you
are for each of the following concepts on a 5-point scale where
1 = Not knowledgeable at all, 2 = Somewhat unknowledgeable, 3 = Neutral, 4 = Somewhat knowledgeable, and 5 = Very knowledgeable.
Concepts (circle one number from 1 to 5 to indicate your knowledge level):

1. Listing the six steps (in the order they are presented in this book) of doing an evaluation (1 2 3 4 5)
2. Describing in detail each one of the six steps of the evaluation process (1 2 3 4 5)
3. Utilizing stakeholders to help you describe your program (Step 2) (1 2 3 4 5)
4. Utilizing stakeholders to help you focus your evaluation (Step 3) (1 2 3 4 5)
5. Utilizing stakeholders to help you gather credible data for your evaluation (Step 4) (1 2 3 4 5)
6. Utilizing stakeholders to help you to justify your conclusions from your evaluation (Step 5) (1 2 3 4 5)
7. Utilizing stakeholders to help you to ensure that the findings from your evaluation will be used (Step 6) (1 2 3 4 5)
8. Constructing logic models (1 2 3 4 5)
9. Constructing "if-then" statements for logic models (1 2 3 4 5)
10. Developing concept maps (1 2 3 4 5)
Add up your scores (minimum = 10, maximum = 50). Your total score =
A (48–50): Professional evaluator in the making
A− (45–47): Senior evaluator
B+ (43–44): Junior evaluator
B (40–42): Assistant evaluator
B− (10–39): Reread the chapter and redo the study questions
CHAPTER OUTLINE
THE FOUR STANDARDS
Utility
Feasibility
Propriety
Accuracy
STANDARDS VERSUS POLITICS
When Standards Are Not Followed
SUMMARY
Chapter 8
THEORY OF CHANGE AND PROGRAM LOGIC MODELS
LISA WYATT KNOWLTON AND CYNTHIA C. PHILLIPS
Logic models were introduced in Chapter 3
when we discussed how they can be used to
describe your social work program—Step 2 of
the six-step process of doing an evaluation. They were
then briefly discussed in the previous chapter in rela-
tion to how they can be used in actually designing a
social service program.
Given what you already know about logic models
from your previous readings, this chapter discusses
them at a much more advanced level. In fact, this
chapter presents two types of models that can be used
in your modeling activities:
• Theory of Change Models. These are
conceptual; that is, they are simply a general
graphical representation of how you believe
change will occur within your program. They
are done before a program logic model is
constructed.
• Program Logic Models. These are
operational; that is, they are based on
your theory of change model. As depicted
in Figures 3.2 and 3.3 in Chapter 3, they
detail the resources, planned activities,
outputs, and outcomes over time that
ref lect your program’s intended goal.
In an ideal world, they are constructed
after a theory of change model is
completed.
MODELS AND MODELING
Regardless of type—theory of change or program
logic—good models are used to:
• explain an idea
• resolve a challenge
• assess progress
• clarify complex relationships among a
program’s elements or parts
• organize information
• display thinking
• develop common language among
stakeholders
• offer highly participatory learning
opportunities
• document and emphasize explicit client and
program outcomes
• clarify knowledge about what works and why
• identify important variables to measure
and enable more effective use of evaluation
resources
• provide a credible reporting framework
• lead to a program’s improved design,
planning, and management
If you don't know where you are going, any road will get you there. ~ Lewis Carroll

Concept Maps

Models are concept maps that we all carry around in our minds about how the world does (or should)
work. They are tools we can use to convey a scheme,
program, or project in a brief, clear visual format.
They describe our planned actions and the expected
results from our actions. A model is a snapshot of an
individual’s or group’s current thinking about how
their social work program will work.
Modeling is also a technique that encourages
the iterative development of a program. More specifically, it creates a safe space for a program's stakeholders to start a debate, generate ideas, and support
deliberations. More important, it allows us to think
more clearly about specific relationships between and
among variables. A model presents a single, coherent logic: a consistent thread that connects your program's overall design, implementation, and eventual evaluation. This thread of logic is critical to your
program’s effectiveness.
Modeling allows careful consideration of the
relationship between what you actually do as a social
worker (your day-to-day activities) and the results you
obtain from your activities (outcomes). When tackled
by a team—or a small group of stakeholders for that
matter—models can be improved by engaging the
knowledge and experience of others. The best models
are socially constructed in a shared experience that
is facilitated. The shared understanding and meaning
they produce among social workers are valuable and
enable success in subsequent steps of an evaluation’s
implementation.
Moreover, models are also used to calibrate
alignment between the program’s “big picture” and
its various component parts. They can easily illustrate
parts of a program or its whole system.
Two Types of Models: One Logic
As previously stated, there are two types of models:
theory of change and program logic. They only differ
by their level of detail and use. Nevertheless, they are
both based on logic:
• A theory of change model is a very basic
general representation of how you believe
your planned change will occur that will lead
to your intended results.
• A program logic model details the resources,
planned activities, outputs, and their
outcomes over time that reflect the program’s
intended results.
The level of detail and features distinguish theory
of change models from program logic models. The
two types of models and their relative features are
highlighted in Table 8.1.
On one hand, the two models are different
from one another in relation to time frame, level of
detail, number of elements, display, and focus. On
the other hand, they are alike because they share the
same research, theory, practice, and/or literature.
Essentially, the two types are simply different views
of the same logic that have a shared origin. The two models also differ in purpose:
• Theory of change models display an idea or
program in its simplest form using limited
information. These models offer a chance to
test plausibility. They are the “elevator speech”
or “cocktail-napkin outline” of an idea or
project.
• Program logic models, on the other hand,
vary in detail but offer additional information
that assists in a program’s design, planning,
strategy development, monitoring, and
evaluation. Program logic models support
a display that can be tested for feasibility.
They are the proposal version of a social work
program because they have fleshed out in
detail—from a theory of change model—the
resources, activities, outputs, outcomes, and
other elements of interest to those creating
and/or using the model.
Examples
The following two examples briefly explain the gen-
eral concepts and terms related to theory of change
models and program logic models. Although we show
one of each type of model, it’s important to keep in
mind that these are only two examples from a much
broader continuum of possibilities. There are many
ways to express or display ideas and level of detail.
Theory of Change Model Example
Theory of change models are the critical founda-
tion for all social work programs. Often these models
exist as part of an internal mental framework that is
“dormant” or undisclosed. They can also imply con-
siderable knowledge, experience, research, and prac-
tice. The evidence base for theory of change models
typically is not made explicit.
Figure 8.1 shows a simple theory of change
model for a community leadership program aptly
titled “Community Leadership Program.” Read from
left to right, it illustrates that the program contains
two strategies: an academy leadership curriculum
(Strategy 1) and an academy leadership experience
opportunity (Strategy 2).
These two strategies, when combined and successfully implemented, will then lead to "more and better" community leaders, which in turn will lead to better community development. In short, the two strategies within the Community Leadership Program, when successfully implemented, lead to positive results.
Program Logic Model Example
Like theory of change models, program logic
models are also visual methods of presenting an idea.
And, like theory of change models, they are simply
concept maps as mentioned in Chapter 3. They offer
a way to describe and share an understanding of rela-
tionships (or connections) among elements necessary
to operate your social work program. Logic models
describe a bounded program: both what is planned
(the doing) and what results are expected (the get-
ting). They provide a clear road map to a specified
end, with the end always being the outcomes and the
ultimate impact of the program.
Common synonyms for logic models include
concept maps, idea maps, frameworks, rich pictures,
action, results or strategy maps, and mental mod-
els. Program logic models delineate—from start to
finish—a specified program effort. For example, a
program logic model for our Community Leadership
Program (based on the theory of change model pre-
sented in Figure 8.1) would include the specified
resources, activities, outputs, outcomes, and impact:
• Resources (or inputs) are what are needed to
ensure the program can operate as planned.
For example, money to pay your tuition is
needed before you can enroll in your social
work program, along with a host of other
resources you will need.
• Activities are the tactical actions that occur
within the program such as events, various
types of services, workshops, lectures,
publications, and the like. Together, activities
make up your program’s overall design—it’s
the intervention package. This is where the
rubber hits the road. For example, one of the
activities of your social work program is the
courses you take. This is the “guts” of your
social work program.
Table 8.1: Features of Model Types.
Time frame: Theory of Change = no time; Program Logic = time bound
Level of detail: Theory of Change = low; Program Logic = high
Elements: Theory of Change = few ("do + get"); Program Logic = many
Primary display: Theory of Change = graphics; Program Logic = graphics + text
Focus: Theory of Change = generic; Program Logic = targets + specified results
• Outputs are descriptive indicators of what the
specific activities generate. For example, this
could simply be the number of students who
graduate each year after they complete the
activities (i.e., courses).
• Outcomes are changes in our clients’
awareness, knowledge levels, skills, and/or
behaviors. The impact reflects changes over a
longer period. For example, this could simply
be the number of students who found social
work jobs after graduating or the degree of
your effectiveness as a social worker.
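The five elements just described can also be sketched as a simple data structure. The chapter itself contains no code, so the Python class below is purely illustrative; the class name `ProgramLogicModel` and the sample values, loosely echoing the Community Leadership Program, are our own assumptions, not part of the authors' model.

```python
from dataclasses import dataclass

@dataclass
class ProgramLogicModel:
    """One field per logic-model column, read left to right."""
    resources: list[str]   # inputs needed for the program to operate as planned
    activities: list[str]  # tactical actions: services, workshops, events
    outputs: list[str]     # descriptive indicators of what activities generate
    outcomes: list[str]    # changes in clients' awareness, knowledge, skills, behavior
    impact: str            # the longer-term change the program ultimately seeks

# Hypothetical values loosely based on the Community Leadership Program example
clp = ProgramLogicModel(
    resources=["curriculum and materials", "faculty", "sponsors", "host and facility"],
    activities=["leadership curriculum", "leadership experiences"],
    outputs=["completion rate", "participant satisfaction"],
    outcomes=["new leadership attitudes, knowledge, and skills"],
    impact="community development",
)
print(clp.impact)
```

Holding the columns in one object makes it easy to check, for example, that every column has been filled in before the model is shared with stakeholders.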
Figure 8.2 displays a simple program logic model
for our Community Leadership Program shown as a
theory of change model in Figure 8.1.
The program logic model illustrated in Figure
8.2 suggests that the program’s desired results include
more and better community leaders, which in turn
will lead to better community development efforts.
It implies that the leadership development agenda centers on resolving community challenges and that, if those challenges are resolved, the program will contribute to better community development.
To “read” this model, first note on the far
right-hand column (column 6) the intended impact
(ultimate aim) of the program: community develop-
ment. Then move to the far left-hand column (col-
umn 1), where resources (or inputs) essential for the
program to operate are listed. As you should know
by now, program logic models employ "if–then" sequences among their elements.
When applied to the elements in each column in
Figure 8.2, it reads,
• IF we have these resources (column 1),
• THEN we can provide these activities
(column 2).
• IF we accomplish these activities (column 2),
• THEN we can produce these outputs
(column 3).
• IF we have these outputs (column 3),
• THEN we will secure these short-term
outcomes (column 4).
• and so on.
Box 8.1 illustrates another version of how this
“if-then” logic can be used.
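The column-by-column reading above is mechanical enough to automate. As a hedged sketch (nothing like this appears in the chapter), the short Python function below walks any ordered list of named columns and emits the corresponding if-then statements; the function name, column names, and contents are all hypothetical.

```python
def if_then_statements(columns):
    """Pair each logic-model column with the next and phrase the link as IF-THEN."""
    statements = []
    for (name_a, items_a), (name_b, items_b) in zip(columns, columns[1:]):
        statements.append(
            f"IF we have these {name_a} ({'; '.join(items_a)}), "
            f"THEN we can achieve these {name_b} ({'; '.join(items_b)})."
        )
    return statements

# Hypothetical columns for a minimal model
columns = [
    ("resources", ["staff", "materials"]),
    ("activities", ["alcohol prevention training"]),
    ("outcomes", ["youth gain avoidance strategies"]),
]
for statement in if_then_statements(columns):
    print(statement)
```

Each adjacent pair of columns yields one statement, so a model with four columns produces a chain of three if-then links.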
[Figure 8.1: Theory of change model for the Community Leadership Program. Two strategies (Academy Leadership Curriculum; Academy Leadership Experiences) lead to the results: "more and better" community leaders and, in turn, community development.]
[Figure 8.2: Program logic model for the Community Leadership Program (from Figure 8.1). Read left to right: Resources (curriculum and materials; faculty; sponsors; host and facility) lead to Activities (leadership curriculum content; leadership experiences processes; marketing/communication campaign; participants), which produce Outputs (participant description; completion rate; participant satisfaction), then Short-Term Outcomes (better leaders: new leadership attitudes, knowledge, skills, and behaviors), then Intermediate/Long-Term Outcomes (graduates use knowledge and skills obtained through the program to strengthen the community; increased community awareness and action bias), and finally Impact (community development).]
The program logic model depicted in Figure 8.2 is
just one very simple representation of how a program
might be designed. Many other variations of this exam-
ple also exist that would still be logical and plausible.
LOGIC MODELS AND
EVALUATION DESIGN
A clear and coherent program logic model provides
great assistance during an evaluation’s design. It points
out the key features and shows the relationships that
may or may not need to be evaluated. At this level, eval-
uation questions are the foundation for an evaluation’s
design. If we apply this to our Community Leadership
Program, for example, it’s more than appropriate to
focus on our program’s intended results.
As illustrated in Box 2.1, a summative evaluation
question could be: What difference did our program
make in the community’s development? Perhaps a
place to begin is in determining the contribution the
program made to the actual generation of more and
better community leaders.
In this example, an evaluation could consider
both changes in the awareness, knowledge, skills,
and behavior of the program’s participants as well
as the impact they had on community development.
Stakeholders might also want to know about the con-
tent of the two activities (i.e., leadership curriculum,
leadership experiences) and quality of training. They
might be curious about implementation fidelity and
adaptation too. Figure 8.3 demonstrates a program
logic model with typical evaluation questions.
The program logic model represented by Figure 8.3 serves as a concept map to guide the evaluation of the program. The five key evaluation questions are
contained at the bottom of their respective columns
in Figure 8.3. Key questions for our Community
Leadership Program include:
1. Is the program doing the right things?
(column 1)
2. Is the program doing things right? (column 3)
3. What difference has the program made
among participants? (column 4)
4. What difference has the program made
across the community? (columns 5 and 6)
5. What are the ways community needs can
and should be
addressed by the program?
(columns 3–6)
Positioning questions on the logic model iden-
tifies where the data might be found to address any
given inquiry:
BOX 8.1 USING “IF-THEN” STATEMENTS IN DEVELOPING LOGIC MODELS
IF a certain set of resources (such as staff, equipment, and materials) is available,
THEN the program can provide a certain set of activities or services to participants.
IF participants receive these services,
THEN they will experience specific changes in their knowledge, attitudes, or skills.
IF individuals change their knowledge, attitudes, or skills,
THEN they will change their behavior and usual practice.
IF enough participants change their behavior and practice,
THEN the program may have a broader impact on the families or friends of participants or on the community as a whole.
Thus a school-based alcohol prevention program could have the following theory:
As a result of the reduced alcohol use of individual youth, alcohol problems in schools will decline.
Social workers provide alcohol prevention training to youth → Youth gain knowledge of alcohol avoidance strategies → Youth practice alcohol avoidance strategies → Youth reduce alcohol initiation and use
[Figure 8.3: Program evaluation model for the Community Leadership Program (from Figure 8.2). The same logic model as Figure 8.2, annotated with five evaluation questions positioned over their respective columns: (1) Is the program doing the right things? (2) Is the program doing things right? (3) What difference has the program made among participants? (4) What difference has the program made across the community? (5) What are the ways that community needs can and should be addressed by the program?]
• Question 1 “tests” the logic constructed
during the planning phase of the program.
This question requires thoughtful
connections to be drawn across activity
accomplishment, implementation
fidelity, and the attainment of desired
outcomes/impact. It addresses the overall
effectiveness of the selected activities and
the related action in achieving the desired
results.
• Question 2 examines implementation fidelity/
variance as well as the scope, sequence,
penetration, and quality of activities.
• Questions 3 and 4 focus on the extent to
which outcomes and impact have been
achieved.
• Question 5, like Question 1, should span
the whole model to surface program
improvement needs. Questions 1 and 5
are more reflective but are essential to a
program’s improved effectiveness.
These evaluation questions can be very helpful
in the initial design and development of the program,
as they help to aim the program’s intervention(s). The
next step is establishing indicators. Models also help
us to guide the conversation and exploration needed
to determine outcome indicators (see previous chap-
ter), or the measures of progress, for any given social
work program.
Limitations
It’s important to note that the proper reference,
“logic model,” in no way guarantees that the model
is, in fact, logical. While many models do demonstrate some modicum of logic, a logical
representation does not always equal plausibility,
feasibility, or success. There’s some danger in seeing
a graphic display on paper and considering it “true.”
This notion of omnipotence can stem from a work-
er’s limited domain knowledge, vested interests, and
lack of perspective. Typically, models do not take
unintended consequences into account, although
every social work program has negative side effects.
Realistically, even when program theory and
logic models are constructed and build on the
insights of a broad representative stakeholder group,
can anyone be sure who’s right? Every model must
always be considered a draft. They are always incom-
plete and provide a simple illustration that makes
evaluation and program improvement more accessi-
ble to individuals and groups. The mere existence of
a model does not mean that the model—or the plans
it represents—is ready for immediate implementa-
tion or that it will readily deliver its intended results.
It’s essential to note that a logic model is a graphic
display of the program captured at one point in time.
It has to change in order to reflect best thinking and
current evidence as these evolve over time. Creating
and displaying variations of a model are experiences
that can develop thinking about strategies/activities
and their intended results. This development is a criti-
cal process in model quality and, ultimately, in the
feasibility of the efforts described.
One of the greatest values of logic models is their use
in an iterative, intentional process aimed at improving
the thinking they illustrate. This is best done through
a facilitated sequence with selected stakeholders.
Obviously, logic models do not ensure perfect pro-
gram implementation fidelity or even quality. Nor do
they remedy any of the many concerns about organi-
zational structure and culture that can deeply affect
the program’s effectiveness (see Chapters 5 and 6).
Important action steps associated with quality include
the identification of both the assumptions and the evi-
dence used when developing models.
Models Begin With Results
Determining the results you desire is the first step in
evaluating a program’s overall effectiveness, because
knowing where you are headed—or where you want
to go—is critical to picking the best route to use (see
quote at the beginning of this chapter). Logic models
always begin with results. Results consist of outcomes
and impact; each appears in a sequence over time.
While impact is the ultimate end sought, sometimes
synonymous with vision, outcomes are earlier indica-
tions of progress toward the results.
[Figure 8.4: The effectiveness continuum and models. Starting from Design, supported by a Theory of Change (TOC), the continuum runs through Planning and Implementation to Evaluation, supported by a Program Logic Model (PLM), in a continuous loop. Three questions span the continuum: Are we doing the right work? Can we make better decisions? Are we achieving superior results? Together these lead to improved program effectiveness.]
Results are the best place to begin when you are
struggling with deciding which interventions (strat-
egy) you should use to solve the social problem. It’s
important to avoid moving prematurely to specify
what you want to do without knowing where you
want to go. When it comes to program planning,
specifying those outcomes most likely to occur soon
and then those that will take more time to emerge
helps determine what route (action path) might be
best to use.
Social workers commonly complain their work is
both activity focused and frantic. Considerable time and
effort are spent on a flurry of tasks that frequently lack
a clear relationship to the program’s intended results.
Logic models can assist in sorting priorities because
they both rely on—and help build—a visual literacy that
makes action and expected consequences clear. Through
the models and modeling, stakeholders can identify
strong evidence-based interventions likely to contribute
to the results sought. And those interventions with less
(relative) value can be sidelined or discarded.
Logic Models and Effectiveness
In the workplace (and in life), almost everyone is
interested in effectiveness. To that end, you need to
ask—and answer—three questions:
• Are you doing the right work?
• Can you make better decisions?
• Are you getting superior results?
All three of these questions apply in any
context—whether it’s in government or the private
or nonprofit sector. They are among the most critical
questions for social work administrators and line-level
workers alike because they focus on key levers that
influence performance. Doing the “right work” along
with making “great decisions” secures “superior
results.”
Logic models can help to answer the three ques-
tions. Thus they are a useful tool for anyone interested
in developing more effective social work programs.
Figure 8.4 demonstrates key points of the
design, planning, implementation, and evaluation
that the two types of models can support. Theory
of change models are most helpful during the ini-
tial design of a program (left side of diagram). As
plans or evaluation require greater detail, program
logic models can make a substantial contribution
to these later stages of work (right side of diagram).
The types of models and their uses form a continu-
ous loop that can provide feedback about a single
program throughout its life cycle.
174 Part II: Designing Programs
Logic models as both a concept mapping tool and
a strategic process offer considerable value to your
program and, subsequently, its effectiveness. They can
be used for different purposes at different times in the
life cycle of a program. Theory of change models can
dramatically influence program planning because
they rely on knowledge to offer choices about doing
the right work. In this stage, the selection of interven-
tion strategies to produce the intended results occurs.
Program logic models help with more pre-
cise decisions about selecting the most promising
evidence-based interventions that will be the most
effective to achieve the intended results. They also
aid in the design of an evaluation. They can assist in
pointing to optimal areas of inquiry and help to deter-
mine whether progress is being made and what differ-
ence has occurred relative to results.
Some social service organizations use logic mod-
els routinely. They are a standard tool that promotes
alignment and synergy. For example, a program eval-
uation can be designed and implemented more eas-
ily when a clear theory of change model and program
logic model are already in existence.
BASIC PROGRAM LOGIC MODELS
The remainder of this chapter identifies the basic
elements of program logic models. Generally, these
models have enough detail to support a program’s
overall intervention strategy, design, implementation,
and evaluation.
As we know, theory of change models are the
foundation for program logic models. When well
developed, theory of change models can ensure
intellectual rigor for program logic models. Figure
8.5 illustrates the relationship of a theory of change
model (composed of strategies and results) to the pri-
mary elements of a program logic model (composed
of resources, activities, outputs, short-term outcomes,
intermediate-term outcomes, long-term outcomes,
and impact). The theory of change model is illustrated
in the top horizontal row, and the program logic
model is illustrated in the bottom horizontal row.
Notice that under the “Do” column in Figure
8.5, theory of change models use the term “strate-
gies” and program logic models use the three terms
“resources,” “activities,” and “outputs.” Under the
“Get” column, theory of change models use the term
“results,” and program logic models use the four
terms, “short-term outcomes,” “intermediate-term
outcomes,” “long-term outcomes,” and “impact.”
Assumptions Matter
It’s important to be aware that specific assumptions
are not illustrated in Figure 8.5. Recall that assump-
tions are informed by beliefs, past experiences,
intuition, and knowledge. Too often, program logic
models are built without the benefit of explicitly
naming the assumptions underlying the specific
theory of change. This omission can help explain why
tremendous conflict, even chaos, can erupt during
program development, planning, implementation,
and assessment.
[Figure 8.5 aligns the theory of change model's "Do" and "Get" columns (Strategies and Results, top row) with the program logic model's elements (bottom row): Resources, Activities, and Outputs under "Do," and Short-Term Outcomes, Intermediate-Term Outcomes, Long-Term Outcomes, and Impact under "Get."]
Figure 8.5: Relationship of program and theory of change models.
In the absence of explicitly named assumptions,
either a clear theory of change does not exist and/or
people hold multiple and conflicting variations that
reflect their deeply held views about what should, or
could, work and why. This can lead to diffused or
diluted social work programs that lack the focus and
intensity needed to produce their intended results.
Because of these implications, omitting this “founda-
tion” for your program undermines its potential for
success.
As noted previously, conceptualization and learn-
ing styles differ from person to person. Organizational
culture also affects how design, planning, monitor-
ing, and measuring occur within any given program.
Given these practical issues, we strongly suggest
that both theory of change and program logic mod-
els eventually be created to form the foundation of
shared meaning for all aspects of your program. The
sequence in which they are developed certainly will
reflect your stakeholders’ preferences.
Key Elements of Program Logic Models
Program logic models display what a social work pro-
gram might contain from start to finish. Its elements
consist of the recipe for a bounded investment of
financial and social capital for a specified result.
The level of detail within a logic model must show
the relationships that illustrate the essential linkages
that are needed in order to make a plan fully opera-
tional for each of the strategy strands identified in
the theory of change model. The primary elements
for each strand of a program logic model include
resources, activities, outputs, outcomes, and impact.
Figures 3.2 and 3.3 in Chapter 3 are the basic tem-
plates of the elements for most program logic models.
This is a good time to review these two figures. The ele-
ments within these two figures are as follows:
• Resources are essential for activities to
occur. They can include human, financial,
organizational, community, or systems
resources in any combination. They are used
to accomplish specific activities. Sometimes
resources are called inputs.
• Activities are the specific actions that make
up the program. They reflect tools, processes,
events, evidence-based interventions,
technology, and other devices that are
intentional in the program. Activities are
synonymous with interventions deployed
to secure the program’s desired changes or
results.
• Outputs are what specific activities will
produce or create. They can include
descriptions of types, levels, and audiences
or targets delivered by the program. Outputs
are often quantified and qualified in
some way.
• Outcomes are about changes in our client
system, often in program participants or
organizations, as a result of the program’s
activities. They often include specific changes
in awareness, knowledge levels, skills, and
behaviors. Outcomes are dependent on the
preceding resources, activities, and outputs.
Sometimes outcomes are deconstructed by
time increments into short, intermediate, and
long term (e.g., Figure 8.3).
Time spans for outcomes are relative and should
be specified for the program described. However,
short term is often 1 to 3 years, intermediate term 4
to 6 years, and long term 7 to 10 years. The intervals
specified for any given model would depend on the
size and scope of the effort.
For example, a small-scale program such as
an adult education typing class in one location
might produce knowledge and skill outcomes in 6
weeks, where behavioral changes, such as changes
in employment status, might take somewhat longer.
Alternatively, a program targeting changes in global
water quality might specify changes in the awareness
and knowledge of international policymakers within
1 to 3 years; actual environmental improvements
might not occur for several decades. Typically, divid-
ing the project duration into thirds works pretty well
as a starting point. Relying on additional evidence-
based material also helps to inform us as to what’s fea-
sible and realistic.
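The element chain described above (resources, activities, outputs, time-ordered outcomes, impact) can be sketched as a simple data structure. This is an illustrative sketch only, not part of the original text: the class and field names are hypothetical, and the sample values are borrowed from the chapter's improved-health example (Figure 8.7).

```python
from dataclasses import dataclass, field

@dataclass
class ActivityStrand:
    """One strand of a program logic model: the inputs it consumes,
    what it produces, and the changes it is expected to contribute."""
    resources: list[str]              # inputs: human, financial, organizational
    activities: list[str]             # the interventions themselves
    outputs: list[str]                # what the activities produce (often quantified)
    short_term_outcomes: list[str]    # e.g., changes in awareness or knowledge
    intermediate_outcomes: list[str]  # e.g., changes in skills or adherence
    long_term_outcomes: list[str]     # e.g., changes in behavior or condition

@dataclass
class ProgramLogicModel:
    impact: str                       # the ultimate intended change
    strands: list[ActivityStrand] = field(default_factory=list)

# One strand of the improved-health example from Figure 8.7:
exercise = ActivityStrand(
    resources=["coaches", "facility"],
    activities=["exercise activities"],
    outputs=["coaching tools and logs"],
    short_term_outcomes=["knowledge"],
    intermediate_outcomes=["skill", "adherence"],
    long_term_outcomes=["strength", "endurance"],
)
model = ProgramLogicModel(impact="improved health", strands=[exercise])
```

In a full model there would be one strand per strategy in the theory of change model, each depending on the preceding resources, activities, and outputs just as the text describes.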
Being exceedingly clear about timing and
expected results is of paramount importance. The
time span for outcomes is program specific. The logi-
cal sequencing of any given outcome chain also mat-
ters. Think about what will happen first, then what is
likely to happen next.
Also keep in mind that the sequence may or may
not be strictly lockstep. Under some conditions,
there may be different points of entry into a sequence.
The important thing is to explore the interconnec-
tions and dependencies that do exist among the out-
comes and impact you specify.
Impact is the ultimate intended change in an orga-
nization, community, or other client system. It carries
an implication about time. It varies in its relative tim-
ing to the actual program or change effort. Sometimes
impact occurs at the end of the program, but, more fre-
quently, the impact sought is much more distant.
For some efforts, this may mean impact can be
cited in 7 to 10 years or more. This can have important
implications, as it’s well beyond the funding cycle for
many typical grant-funded programs or the patience
of many managers or politicians. A program logic
model is one easy way to show how the work you do
(your activities) within these constraints will hope-
fully contribute to a meaningful impact (your desired
outcome that was obtained via your activities).
Nonlinear Program Logic Models
Just as in theory of change models, very few logic mod-
els of social work programs are developed in linear
progressions. Purposely, to aid learning, we simplified
the display of elements as a straight sequence. Reality
suggests cycles, iterations (additional attempts), and
interactions are very common. This more organic
development is shown in Figure 8.6.
In this circular display, there is no specific
starting point. Although the logic model elements
are constant, the work of design, planning, manag-
ing, or evaluating might begin with any element.
[Figure 8.6 arranges the same logic model elements (resources, activities, outputs, short-term, intermediate-term, and long-term outcomes, and impact) in interlocking circular loops rather than a straight line, with no single starting point.]
Figure 8.6: Nonlinear logic model.
In addition, this view shows how cycles of the
same activity might occur over time. Keep in mind
that Figure 8.6 groups activities together. A more
detailed view could be staggering to portray.
Sometimes capturing reality in a display impedes
communication.
Hidden Assumptions and Dose
As we know by now, a program logic model displays
the elements that are most critical to establishing
and operating a social work program. It specifies the
activities and their interdependent relationship as
well as what they are expected to achieve. Program
logic models do not necessarily include assumptions,
but they rely on them.
They offer a view of the map that can inform a
program’s action plan and, later, its implementation.
They can also quantify the “dosage” (e.g., number,
type, and duration of activities) and describe the
effects and benefits of the program for any given dos-
age, in addition to the ultimate change expected.
Getting the Dosage Right
Dosage is an important concept in effectiveness.
A diluted dosage can have the same impact as no
dosage at all. For example, if your mini-program’s
intended result is a large voter turnout in a local
election (outcome), a classified ad may not be the
best communication strategy (activity to achieve the
outcome). A comprehensive media plan (Activity 1),
for example, coupled with free transportation to
the voting booths (Activity 2) has a greater chance
of success (outcome). So it’s tremendously impor-
tant to design your program with enough of the
right activities and dosage to secure your intended
outcome.
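The notion of dosage (number, type, and duration of activities) can be made concrete with a small sketch. This is purely illustrative: the class is hypothetical, and the figures attached to the text's voter-turnout example are invented.

```python
from dataclasses import dataclass

@dataclass
class Dosage:
    """Dosage of one program activity: how much, what kind, how long."""
    activity: str
    sessions: int   # number of deliveries
    kind: str       # type of activity
    weeks: int      # duration

# The voter-turnout mini-program from the text, with hypothetical doses:
plan = [
    Dosage("comprehensive media plan", sessions=12, kind="media", weeks=6),
    Dosage("free transportation to voting booths", sessions=2,
           kind="logistics", weeks=1),
]

# A simple design check: is the combined dose more than a single
# classified ad (one session) would provide?
total_sessions = sum(d.sessions for d in plan)
```

Quantifying dosage this way makes it easier to see when a program is too diluted to plausibly produce its intended outcome.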
BUILDING A LOGIC MODEL
An example of a program logic model for an
improved-health program is displayed in Figure 8.7.
As can be seen in the second column from the far left,
the total intervention package, or overall interventive
strategy, if you will, is actually composed of four
activities. A program's intervention package rarely
relies on just one activity; programs usually rely on
multiple activities, as is evident in Figure 8.7.
The program logic model portrayed in Figure 8.7
suggests that IF we provide our participants with an
exercise activity, a nutrition activity, a stress-reduction
activity, and a retention activity, THEN their health
will improve. Notice the word activity in the previ-
ous sentence and the “if-then” logic. Thus there are
four activities (second column) that make up the com-
plete intervention package for the improved-health
(far right column) program. And we couldn’t do the
activities without the resources as outlined in the far
left column.
Activities are sometimes called components,
services, or interventions. The components of your
social work program, for example, are all the courses
you take in addition to other services your program
makes available to you such as advising, providing/
sponsoring a social work club, field trips, emergency
loan funds, a social work library, study area, computer
area, and so on.
Note the development of detail connecting
the four activities (i.e., the total intervention pack-
age) to results in this program’s logic model com-
pared to the theory of change model for the same
program. The program logic model simply provides
much more detail than the theory of change model
for the same program by explicating the elements
from a basic logic model for each activity strand. In
a program logic model, for example, the details rela-
tive to the program’s resources, activities, outcomes,
impact, and other elements are labeled and placed in
a sequential order.
Although still an overview and incomplete, the
logic model illustration provides a detailed view of
what this health-improvement program needs for
resources, wants to do, plans to measure, and hopes
to achieve. Beginning with the far left column with
resources, this program’s logic model includes funds,
facility, faculty, and coaches, as well as eligible and
willing participants, among its requisite inputs.
[Figure 8.7 lays out the improved-health program from left ("Do") to right ("Get"):
Resources: funds, facility, faculty, coaches, eligible and willing participants.
Activities: exercise, nutrition, stress-reduction, and retention and recruitment activities (the chain repeats for each strategy).
Outputs: curricula and staff, participant data, messages and media, coaching tools and logs.
Short-term outcomes: awareness, knowledge, motivation.
Intermediate-term outcomes: skill, adherence, retention.
Long-term outcomes: strength, endurance, flexibility, relaxation, nutrients, fat/calories.
Impact: improved health.]
Figure 8.7: Logic model for an improved-health program.
Once again, the program’s overall intervention
contains four activities, or components. Outputs from
the four activities could be numerous. For this illus-
tration, we show only the overarching categories of
information that could be considered.
Each activity would be repeated for each of the
strands. These would include details about the scope,
sequence, and quality of the curriculum; staffing qual-
ifications; and information about participants and
their participation. Activities “inside” these compo-
nent strands contribute to changes in the participants’
knowledge levels (short-term outcome), skills, and
adherence (intermediate-term outcomes). Eventually,
they can contribute to increases in the participants’
strength, endurance, nutrients, flexibility, and relax-
ation (long-term outcomes).
Concurrently, over time, these same activities
also yield reduced fat/calories (another long-term out-
come). In fact, reducing fat/calories could indeed have
a column of its own—to the immediate right of the
long-term-outcomes. It would come just to the left of
the program’s impact, or improved health.
The retention and recruitment activity strand also
generates some outputs and outcomes. Aggregated,
activities within this component secure and keep par-
ticipants in the program. Note that this model uses
arrows to show relationships. Sometimes they reflect a
cluster (indicating synergies) rather than just one-to-
one relationships.
As is typical of many social work programs, sev-
eral activities, or components, within an interven-
tion package are shown as contributing collectively
to outcomes rather than each component making its
individual contribution to distinct outcomes in iso-
lation. Collectively, the long-term outcomes gener-
ate improved health, which could be measured in a
variety of ways (e.g., blood pressure, blood lipid, sugar
profiles, weight, physical fitness).
In contrast to the big-picture view that theory of
change models offer, program logic models provide
a closer, more detailed picture of a program’s opera-
tions. This view of the program provides adequate
detail to create well-conceptualized and operation-
alized work plans. Program logic models provide a
reliable outline for work plans that are then used to
implement and manage a program. Just like theory
of change models, program logic models are based
on logic, but, here too, feasibility—given limited time
and resources—is the appropriate standard for assess-
ing their actual realistic value.
A common question about program logic mod-
els focuses on their level of detail. Essentially, their
detail level is determined by their intended use and
users. Although somewhat situational, they build out
an overall intervention into activities. Sometimes
they can even get into detailing the tasks that are con-
tained within the activities, although more often that
is described in the program’s operations manual or
action plan.
From Strategy to Activities
Some program logic models can be extremely com-
plex, but the steps to create them are generally the
same as for more simple efforts (see Figures 3.2 and
3.3 in Chapter 3). Large-scale programs or multi-
year change efforts (sometimes called “initiatives”)
often are composed of many activities aimed at tar-
get audiences across many sites over a considerable
time period. Often a single activity has numerous
components—and sometimes even subcomponents.
As previously stated, program logic models usu-
ally do not display underlying beliefs or assumptions.
They are nevertheless important elements in the
conscious exploration of multiple target audiences.
Sometimes social work programs are implemented in
a cascade with some overlap in time, which requires a
particular sequence of activities. When this is the cir-
cumstance, it can be helpful to focus on a function, a
given intervention, or one partner’s designated work.
The task is often simplified by thinking about
a single aspect and then connecting it back to the
whole with some of the inherent complexity reduced.
Ultimately, program execution relies on integrated
action—but the work that precedes it may require
focused developmental attention on smaller parts.
Using our health-improvement program exam-
ple, Figure 8.8 provides an orientation to how the
exercise activity strand is reduced to subactivities. It
breaks the activity into greater detail.
As can be seen in Figure 8.8, it becomes evi-
dent that exercise, as an activity, is made up of four
key subactivities: physical exercise (strength), physi-
cal exercise (endurance), education, and assessment.
Together, all four of the subactivities represent a com-
prehensive activity called exercise. And the exercise
activity is just one of the four activities to improved
health. Recall that the whole theory of change for this
example includes three other activities to improved
health: nutrition, stress reduction, and retention and
recruitment.
It’s the combination of the four activities reflected
in the whole program that is most likely to secure the
program’s desired results. Each strand of a compre-
hensive program logic model needs to illustrate the
contribution it makes to the overall desired result as
well as its interdependence.
As you specify the subactivities content of
your activity, you are naming more precisely what
makes up the given activity. Later, the whole model
is tested for feasibility—both practically before its
implementation and literally when the program
is evaluated. This may be a good time to reread
Chapter 7 in reference to how a client system’s prac-
tice objectives must be congruent with the pro-
gram’s objectives.
Action Steps for a Program Logic Model
The practical construction of a program logic model
often begins with one or more information sources
(e.g., research, interviews, past experiences, hunches,
documents):
• First, we recommend that you begin both a
theory of change model and a program logic
model with the named ends, where you are
most clear about your intended results
(outcomes and impact). Our experience
is that you must know what you want to
accomplish before beginning a logic model.
Put this on the far right in your model
(impact).
• Second, name the changes or outcomes that
will be part of your progress toward your
program’s intended impact. Unpacking this
sequence is important because it makes it
easier to see the strength of the connection
between what you do (activities) and what
you can get (outcomes).
• Third, we suggest tackling the specific
activities, or interventions, that are required
to achieve the outcomes you have specified in
the second step. Interventions/activities are
what causes the outcomes. Outcomes do not
change by osmosis. They change because of
interventions/activities.
• Fourth, list all the resources (inputs) that
you need to implement your intervention
package.
• Finally, outputs reflect the information
needed to verify that activities named earlier
in the process reach the right audiences and
are of the quality and quantity needed to
produce results.
So, according to Figure 8.9, the steps to draft a
program logic model are ordered in this way:
Step 1: Identify the results that your total
intervention package (various activities)
will ultimately generate—the impact of
your program.
[Figure 8.8 breaks the exercise activity ("Do") into four subactivities: strength activities, endurance activities, exercise education, and fitness assessment, together leading to results ("Get").]
Figure 8.8: The exercise activity with four subactivities.
Step 2: Describe the time-ordered series
of outcomes (or changes) that will show
progress toward your overall impact.
Step 3: Name all the activities needed to
generate the outcomes.
Step 4: Define the resources (inputs) that are
needed to produce the activities.
Step 5: Identify the outputs that reflect the
accomplishment of activities.
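The five steps above can be sketched as a function that fills in a model from right (impact) back to left, finishing with outputs. This is a minimal sketch: the step order mirrors Figure 8.9, the sample values come from the improved-health example, and everything else (the function and dictionary keys) is invented for illustration.

```python
def build_logic_model():
    """Draft a program logic model in the Step 1 to Step 5 order."""
    model = {}
    # Step 1: the impact, the far-right element, goes in first.
    model["impact"] = "improved health"
    # Step 2: the time-ordered outcomes that show progress toward it.
    model["outcomes"] = {
        "short_term": ["knowledge"],
        "intermediate": ["skill", "adherence"],
        "long_term": ["strength", "endurance"],
    }
    # Step 3: the activities needed to generate those outcomes.
    model["activities"] = ["exercise", "nutrition", "stress reduction",
                           "retention and recruitment"]
    # Step 4: the resources (inputs) needed to produce the activities.
    model["resources"] = ["funds", "facility", "faculty", "coaches",
                          "eligible and willing participants"]
    # Step 5: the outputs that verify the activities reach the right
    # audiences at the needed quality and quantity.
    model["outputs"] = ["curricula and staff", "participant data",
                        "coaching tools and logs"]
    return model

draft = build_logic_model()
```

Working in this order enforces the chapter's central advice: name the results before deciding what to do, then check that every activity and resource can be traced back to them.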
Creating Your Program Logic Model
As should be evident by now, the format of your logic
model helps you to organize your program’s infor-
mation in a useful way. Think of a program you are
affiliated with now or want to create and its intended
results. For each activity, brainstorm elements that
might be cited in short-term outcomes first but are
clearly linked to your intended results. Do the same
for resources, activities, and outputs. It’s important to
make choices about the outcomes that are realistically
and practically feasible with your limited financial
resources.
With some experience you will begin to recog-
nize commonly used activities that reflect knowledge
from our profession. For example, marketing/com-
munications, recruitment, retention, professional
development or education, advocacy, and policy are
activities often found in program logic models.
Examples of subactivities under a marketing/
communications activity could include preparing a
database of target markets, generating news releases,
creating and sending a newsletter, establishing a web-
site, and distributing public service announcements.
We suggest you tackle one activity at a time. Aim to
define the same level of detail for each activity. Box
8.2 presents some challenges when developing logic
models and provides some possible solutions to each
challenge.
Guiding Group Process
You can practice your group work skills when
you develop logic models. The best method for gen-
erating a program logic model is to work with your
stakeholders. Stakeholders are situational, but they
generally are those who have an interest in, or are
likely to benefit from, your program. As you know,
stakeholders often include funders, program staff,
and program participants. The facilitation of mod-
eling requires some advance planning and a com-
mitment to both discipline and quality during the
process.
If you’ve already constructed a theory of change
model, use it to catalyze the creation of a program
logic model. If not, defining shared understanding
for specified results gets your group process effort
started. It’s important to note that logic models need
to be continually updated to respond to the dynamics
of their external environment (context). They also reflect
living systems that are not mechanistic but are con-
stantly changing.
[Figure 8.9 overlays the five drafting steps on the model elements from Figure 8.5: Step 1 at impact, Step 2 at the short-, intermediate-, and long-term outcomes, Step 3 at activities, Step 4 at resources, and Step 5 at outputs.]
Figure 8.9: Steps in creating a program logic model.
BOX 8.2 CHALLENGES AND POSSIBLE SOLUTIONS OF LOGIC MODEL DEVELOPMENT
Oftentimes stakeholders may have doubts or concerns
about the logic model development process. There may be
concerns about the time and resources needed or the
usefulness of the product. To help you alleviate these fears,
we have listed some of the most common challenges to the
logic model effort and suggested some possible solutions.
Challenge: “We’ve had trouble developing a logic model
because our key stakeholders (e.g., staff, funders) cannot
agree on the right services or outcomes to include.”
• Although it might be difficult, keep key stakeholders
involved, including staff, program participants,
collaborators, or funders. Involving stakeholders does
not mean they need to be involved with all tasks, and
they do not need to have sign-off authority. Their role
can be as simple as inviting them to review materials
or help you think through some of your stickier
questions or issues.
• Focus on the process, not the product. Take time to
explore the reasons for disagreement about what should
be captured in the logic model. Look for the assumptions,
identify and resolve disagreements, and build
consensus. Agencies that work through disagreements
about the logic model typically end up with a stronger
model with which everyone can be satisfied.
Challenge: “We’re not really interested in developing a logic
model, but our funder requires it.”
• Look for examples of how other organizations have used
logic models in meaningful and interesting ways. Many
agencies have gone into the process with skepticism or
lack of interest but ultimately found the process valuable.
• Try to focus on the fun and interesting aspects of
the process. Building a logic model provides an
opportunity—all too rare in the everyday provision of
services—to discuss what it is about your work that
is most meaningful and to renew your appreciation
for the ways your program can change lives and
communities. Focusing on the importance of this
discussion—rather than seeing it as just a task to
complete—can increase engagement in the process.
Challenge: “I just want to get my logic model finished. I don’t
want to spend much time on it.”
• Logic models that are rushed often end up displaying
faulty logic, insufficient evidence, or models copied
from other programs that don’t quite fit yours. Keep
asking yourself “IF-THEN-WHY” questions to make sure
that the model is sound. IF you provide a service, THEN
what should be the impact for participants? WHY do
you think this impact will result? What evidence do
you have to support that connection?
• Make it more interesting by seeking a range of
evidence. If you already know the published research
by heart, look for additional types of evidence, such
as theoretical frameworks, unpublished evaluation
results, or experiences reported by program
participants.
• If possible, recruit a facilitator from outside your
agency who is trained and experienced in logic model
development.
Challenge: “The goal of my program is to change an entire
community, not just to influence the lives of a small group of
participants.”
• Think through each step that must occur. For instance,
how does each activity impact individuals? In what
ways does their behavior change? What has to occur
in order for these individual changes to result in
widespread community change?
• Consider issues or events outside the control of your
agency that may promote or impede the change
you are seeking. If needed, develop strategies for
monitoring or documenting these issues.
Challenge: “My logic model is so complicated that nobody
can understand it.”
• Focus on the most important activities and outcomes.
The model does not need to describe everything that
you do; it should show the services and goals that are
the most important to you.
• Avoid social work jargon at all costs. Describe your
activities and outcomes in “real-life” language that is
understood by a wide range of stakeholders. Try it out
on someone unfamiliar with your work—a neighbor or
a relative, for instance.
• Cut back on detail. Be specific enough to clearly
explain what will happen as a result of your activities
but without excessive detail.
Challenge: “I’m nervous about developing a logic model
because it might make funders hold us more accountable for
our results.”
• Include (and subsequently measure) only outcomes
that are realistic. If you do not want to be held
accountable for something, it must not be an essential
outcome goal. Outcomes are not hopes or wishes but
reasonable expectations.
• Incorporate time frames into the logic model to show
stakeholders the amount of time it will take to achieve
long-term goals. Example: If you have only 1 or 2 years
to show impact, you should not measure outcomes
that may take longer to emerge. Instead, measure the
intermediate steps toward those outcomes—the results
that your program can reasonably expect to achieve.
• Remember that a logic model should be a dynamic tool
that can and should be changed as needed; it is not a rigid
framework that imposes restrictions on what you can do.
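The IF-THEN-WHY questioning described above can be illustrated with a short sketch. This is purely a hypothetical illustration, not material from the book; the element names and the example program are invented:

```python
from dataclasses import dataclass

@dataclass
class LogicLink:
    """One IF-THEN-WHY link in a program logic model."""
    if_activity: str    # IF we provide this service or reach this result...
    then_outcome: str   # THEN we expect this impact for participants...
    why_evidence: str   # WHY we expect it: evidence supporting the link

# A hypothetical after-school tutoring program, expressed as a chain of links.
model = [
    LogicLink("weekly one-on-one tutoring",
              "improved homework completion",
              "unpublished agency evaluation results"),
    LogicLink("improved homework completion",
              "higher end-of-year grades",
              "published education research"),
]

# Each link's outcome should feed the next link's activity; a break in the
# chain signals the kind of faulty logic a rushed model often displays.
for link in model:
    print(f"IF {link.if_activity} THEN {link.then_outcome}"
          f" (WHY: {link.why_evidence})")
```

Laying the links out this way makes it easy to spot a THEN with no WHY behind it, which is exactly the gap that rushed models tend to leave.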
Grinnell, R. M., Gabor, P. A., & Unrau, Y. A. (2015). Program evaluation for social workers : Foundations of evidence-based programs. Oxford University Press, Incorporated.
Copyright © 2015. Oxford University Press, Incorporated. All rights reserved.
Chapter 8: Theory of Change and Program Logic Models
For these two reasons (and others), it's necessary to expect program logic models to be continually revised. In association with some public specification of time, outcomes and impact can be explored and selected. This can be accomplished in a number of ways.
We have had success in using the action steps noted, particularly when each participant contributed to brainstorming the model's elements by nominating contributions on sticky notes. This quickly generates a large number of possibilities for each element. Redundancies should be noted and celebrated as commonly held.
Then the group can sort them into those that must be kept, those that could be kept, and those that will not be kept (are not relevant). Once the results are named, then it's possible to compose content for the other elements. In this disciplined sequence, each stakeholder contributes to the whole, and each contribution has the benefit of an internal test relative to the program's design.
There are several variations on this approach. From a group, you could invite individuals or pairs to generate models in the sequence shown previously and then integrate and reconcile the variations. This approach helps avoid "groupthink" but requires strong process facilitation with content knowledge.
A generic model or template for a given program may be available. With some advance planning it's possible to identify one of these prototypes and introduce it to your group. Then the content adaptations can focus on improving it so that the content is relevant to your purposes, conditions, and planned results.
Regardless of the process, strategic decisions about your model's components and the relationships between its elements should be made from among all the content generated. It's important to consider criteria for choices that reflect context, target audience(s), research, practice, literature, and program benchmarking, as well as resource parameters. It can be very helpful to have draft models critically reviewed in a "mark-up."
Microsoft Visio is an excellent software program for constructing logic models, but many other applications such as Word and PowerPoint are also useful. These, as well as Inspiration software, are all readily available. Take care in using technology for model creation because it can exclude valuable participation from your stakeholders. Box 8.3 lists a few online resources you can use to help you in developing logic models.
SUMMARY
Logic models are simply a visual display of the pathways from actions to results. They are a great way to review and improve thinking, find common understandings, document plans, and communicate and explicate what works under what conditions.
Now that you have mastered the contexts of evaluations (Part I) and know how to construct social work programs via logic models (Part II), you are in an excellent position to evaluate the social work program you have constructed (Part III: Implementing Evaluations).
BOX 8.3 SELECTED ONLINE RESOURCES TO HELP CREATE LOGIC MODELS
• Everything You Want to Know About Logic Models
http://www.insites.org/documents/logmod.htm
• Logic Models and How to Build Them
http://www.uidaho.edu/extension/LogicModel
• Theory of Change Assistance and Materials
http://www.theoryofchange.org
• Logic Model Development Guide
http://www.wkkf.org/Pubs/Tools/Evaluation/Pub3669.pdf
• Community Tool Box
http://ctb.ku.edu/tools/en/sub_section_main_1877.htm
• Logic Model Builder
http://www.childwelfare.gov/preventing/developing/toolkit
• Using a Logic Model for Evaluation Planning
http://captus.samhsa.gov/western/resources/bp/step7/eval2.cfm#b
• How to Build Your Program Logic Model
http://captus.samhsa.gov/western/resources/bp/step7/eval3.cfm
• Developing a Logic Model: Teaching and Training Guide
http://www.uwex.edu/ces/pdande/evaluation/pdf/lmguidecomplete
Chapter 9: Preparing for an Evaluation
CENTERS FOR DISEASE CONTROL AND PREVENTION
The six chapters in Part I provided you with the
essential ingredients you need to digest before
you can undertake any kind of evaluation: the
evaluation process (Chapter 3) and standards (Chap-
ter 4), in addition to ethical (Chapter 5) and cultural
(Chapter 6) considerations. The two chapters in Part
II provided information on how to design social work
programs (Chapter 7) with the use of logic models
(Chapter 8).
Simply put, you need to know what goes into a
program before you can evaluate it. Otherwise, how
would you know what you’re evaluating? With Parts
I and II under your belt you’re now well on your way to
getting your “feet wet” by actually doing one or more
of the four different types of evaluations that are cov-
ered in the following four chapters: need (Chapter 10),
process (Chapter 11), outcome (Chapter 12), and effi-
ciency (Chapter 13).
No matter what type of evaluation your evalu-
ation team decides to do, you have to know what to
realistically expect before you start one—the topic of
this chapter. Thus you need to begin thinking about
how you are going to implement your evaluation
before you actually carry it out. The expression “look
before you leap” readily comes to mind here.
We have distilled the combined experience of a
number of evaluation practitioners into nine evalua-
tion implementation strategies contained in Box 9.1
that we believe will help support your evaluation’s
success.
PLANNING AHEAD
Although this chapter discusses evaluation implemen-
tation strategies, we still talk about planning. By doing
so, we are asking that you “plan for the implementa-
tion of your evaluation” by incorporating the nine
strategies in Box 9.1 to guide your evaluation team in
conducting a particular evaluation. In a nutshell, they
represent important steps you need to plan for that will
help you to implement your evaluation more smoothly.
Reading through this chapter during the evalua-
tion planning process will remind you of things you will
want to incorporate into your actual evaluation plans as
you think ahead toward implementing it. In addition to
discussing these helpful implementation strategies, we
also provide a checklist (see Table 9.3) that you can use
to keep track of your own progress in preparing for the
eventual implementation of your evaluation.
Each of the four types of evaluations can, at
times, be a complex undertaking that requires the
cooperation and coordination of multiple people and
other resources. By managing your evaluation care-
fully, paying attention to the evaluation standards
(i.e., utility, feasibility, propriety, accuracy), and
closely following the steps in the evaluation process
as presented in Chapter 3, you can facilitate a more
smoothly run evaluation. Once again, key strategies
developed by practitioners to minimize potential
challenges and promote effective evaluation imple-
mentation are listed in Box 9.1.
When you translate a dream into reality, it's never a full implementation. It's always easier to dream than to do.
~ Shai Agassi
Part III: Implementing Evaluations
On planning …
Planning is bringing the future into the
present so that you can do something about
it now.
~ Alan Lakein
In the pages that follow, we highlight what’s
involved in each of these general strategies, which
aspects of your evaluation they can help you address,
and what benefits you can expect from each strategy.
Luckily, the majority of these strategies are simply a
part of good project management, something most
social workers do on a daily basis.
STRATEGY 1: WORKING WITH
STAKEHOLDERS
Many of the causes of misunderstandings about
evaluations—and of barriers to productive use of
their findings—can be avoided or minimized when
your stakeholders are included in key discussions at
various points throughout your evaluation’s life cycle.
Including those who are important to your program
in conversations about the program, the evaluation
itself, and what you hope to learn from it can make
them feel included and less anxious about the results
(see Tool C on how to reduce evaluation anxiety). Their
involvement can also offer you fresh perspectives on
what your evaluation can potentially accomplish and
ways to make the entire evaluation process run more
smoothly.
Some stakeholders you may want to consider
involving in your evaluation (or with whom you
will want to communicate about it in other ways)
include all those folks we mentioned in Chapters 1
through 3. Table 9.1 presents a variety of ways to
work with them throughout your evaluation. Note
that to engage stakeholders effectively, you will first
need to gauge their level of knowledge and experi-
ence regarding evaluation. It may also be necessary
to provide them with an overview of program evalu-
ation basics.
Perhaps you are wondering how you will manage
the involvement of so many people in your evaluation,
such as program directors, program staff, partners,
evaluator(s), evaluation team members, and other
program stakeholders. Questions you need to ask and
answer are:
• Who will play what role(s)?
• Who is in charge of which aspects of the
evaluation?
• Who has decision-making authority over
which aspects of the evaluation?
As you explore working with your stakeholders,
it’s important to recognize that you have a range of
options for how you can structure these relation-
ships and that there’s no “correct” or “incorrect”
structure.
BOX 9.1 IMPLEMENTATION STRATEGIES TO MAKE YOUR EVALUATION RUN SMOOTHLY
Strategy 1 Work with all stakeholder groups throughout the
evaluation life cycle—from initial design through action
planning and implementation—in order to help focus on
questions of interest to them and to incorporate their
perspectives
Strategy 2 Develop a concrete process for managing the
tasks, resources, and activities necessary for your
evaluation
Strategy 3 Pilot-test data collection instruments and
procedures
Strategy 4 Train data collection staff
Strategy 5 Monitor the evaluation’s progress, budget,
timeline, and scope. Communicate frequently and
effectively with the evaluation implementation team and
key stakeholders
Strategy 6 Disseminate results to all stakeholders in an
accessible manner. Consider interim reporting where
appropriate
Strategy 7 Develop an action plan to implement
evaluation recommendations that includes clear roles,
responsibilities, timeline, and budget
Strategy 8 Document lessons learned throughout the
evaluation for use in future evaluations
Strategy 9 Link findings from the evaluation back to the
strategic evaluation plan in case there are implications
for the revision of the plan
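A progress checklist along the lines of the tracker the chapter mentions (Table 9.3) can be mocked up in a few lines. This is a hypothetical sketch, not the book's tool: the labels are abridged from Box 9.1 and the completed set is invented for illustration:

```python
# Abridged labels for the nine implementation strategies in Box 9.1.
strategies = {
    1: "Work with stakeholders",
    2: "Develop a process for managing the evaluation",
    3: "Pilot-test instruments and procedures",
    4: "Train data collection staff",
    5: "Monitor progress and communicate frequently",
    6: "Disseminate results accessibly",
    7: "Develop an action plan for recommendations",
    8: "Document lessons learned",
    9: "Link findings back to the strategic evaluation plan",
}

completed = {1, 2, 3}  # hypothetical: preparation is underway

# Print a simple checklist of what remains before implementation.
for number in sorted(strategies):
    mark = "x" if number in completed else " "
    print(f"[{mark}] Strategy {number}: {strategies[number]}")
```

Keeping even a lightweight record like this of which preparation steps are done supports the chapter's theme of planning ahead for implementation.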
Benefits of working with stakeholders:
• Encourages positive community response
to your evaluation
• Builds “political will” to support your
evaluation
• Develops support among program
leadership for the program and/or for your
evaluation in general
• Facilitates appropriate timing of your
evaluation in relation to information needs
• Leads to development of relevant
evaluation questions, which in turn
supports use
• Promotes findings that are credible, used,
understood, and accepted by all your
stakeholder groups.
Table 9.1: Ways to Work with Stakeholders.

Upfront discussions with stakeholders about …
• Plans for the evaluation (yours and theirs)
• Program priorities (yours and theirs)
• Information needs and evaluation questions to explore (yours and theirs)
• When information is needed
• What evidence would be considered credible
• How the data to be collected will answer the evaluation questions
• How findings can be used
• Community member perspectives to consider
• Privacy, confidentiality, and cultural sensitivity
• Limitations of evaluation
• What to do if findings suggest an immediate need for program modifications
• A proactive approach to public relations, referred to as issues management, if the evaluation may reflect negatively on the program or community

Frequent communication throughout the evaluation with stakeholders about …
• Results from pilot tests
• Implementation progress
• Early findings
• Successes achieved
• Challenges encountered
• Other topics

Postevaluation discussions with stakeholders about …
• Turning findings into conclusions
• Celebrating strengths
• Developing recommendations grounded in findings
• Developing strategies for disseminating results
• Lessons learned
• Limitations of the evaluation
• Implications of the current evaluation for changes needed in the strategic evaluation plan
• Designing an action plan with clear information on recommended strategies, roles and responsibilities, timeline, and budget
The first step is to consider upfront what you
want the roles, responsibilities, and lines of author-
ity for those involved in your evaluation to look like.
Here the evaluation literature can help you. For exam-
ple, King and Stevahn (2002) have put considerable
thought into the various roles an evaluator can play in
relation to other evaluation stakeholders, within the
organization sponsoring the evaluation, and in terms
of managing interpersonal conflict. Tools A and B
offer more information about these evaluator roles.
The second step is to clarify roles and responsibilities for everyone involved in order to avoid any misunderstandings. A "Roles and Responsibilities Table" lays out in detail who is responsible for what (shown as Table G.2 in Tool G).
As discussed further under Strategy 5, open and
ongoing communication among evaluation stake-
holders is paramount in conducting a successful eval-
uation. Tool G provides suggestions on ways to keep
team members and other stakeholders informed as
to the progress of your evaluation. Devising fair and
minimally burdensome ways to obtain feedback is
another important aspect of communication.
For example, depending on the size of your pro-
gram’s client catchment area and the dispersion of
your stakeholders throughout your state, you may
need to come up with creative ways for them to pro-
vide their input remotely, whether they are formally
serving on your evaluation team or their expertise is
being sought for other reasons.
Meeting by teleconference—rather than in
person—or allowing stakeholders to provide input
electronically are some ways to ease the burden of their
potential participation. Webinar software, should
you or one of your partners have access to it, allows
remote stakeholders to view graphics and other docu-
ments online during tele-discussions. Some computer
software packages of this type permit collaborative
editing of documents, whereby all stakeholders can
view edits on screen as they are being made.
Once you have drafted the final version of your
evaluation plan, you will want to revisit the compo-
sition of your evaluation team to see if you wish to
constitute it differently as you move toward the actual
implementation of your evaluation. The design may
have evolved in unexpected directions during plan-
ning, or new individuals or organizations may have
joined your partnership with a stake in the proposed
evaluation.
Should additional stakeholders review your draft
plan? Should some of them join the evaluation team
that will carry the evaluation forward—those able
to facilitate as well as those able to obstruct its prog-
ress? Addressing concerns these individuals raise will
help ensure your evaluation plan is feasible and that it
receives the support it needs.
STRATEGY 2: MANAGING THE
EVALUATION
Running a program evaluation is much like running
any other project. The things you “worry about” may
be a little different for an evaluation than for other
kinds of projects, but the good management prac-
tices that help you elsewhere in your professional life
will also work well for you with an evaluation. Good
management includes thinking ahead about what
is most important, which activities precede which
other activities, who will do what, what agreements
and clearances are needed, when important products
are due, how far your budget will stretch, and how to
make the budget stretch further.
You will also want to monitor progress and com-
municate frequently and efficiently with others on
your evaluation team throughout the entire evalua-
tion (see Strategy 5).
As part of your evaluation planning process, you
must think ahead to the eventual implementation of
your evaluation. We cannot stress this enough—think
ahead. This is the purpose of this chapter: to encourage
you to think ahead of what’s to come. For example, if
your own staff resources are lacking, either in terms of
skill level or time available, you may want to reach out
to partners and contractors to fill that gap. You may also
need to develop memoranda of agreement or contracts
to engage this external support in a timely fashion.
If required by your agency or one of the partners
engaged in your program, you may need clearances for
the protection of human subjects such as those that may
be needed for an institutional review board (IRB) and the Health Insurance Portability and Accountability Act (HIPAA). These can be requested as soon as your methodology has been finalized and your measuring instruments and the consent (e.g., Box 5.2) and assent (Box 5.3) forms required by these entities have been developed.
Finally, you need to anticipate things that could cause problems down the road, such as the potential evaluation challenges presented in Tool D. Having identified potential challenges, you then need to put in place as many safeguards as possible to prevent them from happening, with contingency plans in mind should things not go as planned. And yes, sometimes things do go south in an evaluation, way south.

Table 9.2: Management Evaluation Strategies.

Category: Logistics. What to look for:
• Staff have skills required for evaluation tasks and are aware of their roles and responsibilities
• Staff are available to work on evaluation activities or alternatives have been considered
• Estimates of the likely cost of the evaluation in the individual evaluation plans are complete and feasible
• Efficiencies possible across evaluations have been identified
• Other sources of financial or staff support for evaluation (e.g., partner organizations, local universities, grant funding) have been identified
• Actions to expand staff resources, such as contracting externally, training existing staff in needed skills, "borrowing" partner staff, or recruiting interns from local colleges and universities, have been established
• Agreements that may be needed to contract out a portion of the work (e.g., specific data collection activities, data analysis, development/distribution of reports), to access data sources, or to facilitate meetings with partners (schools, workplaces, etc.) are developed and executed
• Clearances/permissions that may be needed (such as IRB clearance, data-sharing agreements, permission to access schools or medical facilities) are in place

Category: Data collection. What to look for:
• Appropriate data storage, data system capacity, data cleaning, and data preparation procedures are established and communicated
• Procedures for protection of data are in place (considering such safeguards as frequent data backups and use of more than one audio recorder for interviews and focus groups)
• Safeguards for respondent confidentiality and privacy have been developed
• Those collecting or compiling data have been trained in the procedures
• Monitoring systems are in place to assess progress and increase adherence to procedures for data protection and assurance of privacy and confidentiality
• Cultural sensitivity of instruments has been tested
• Respondent burden has been minimized (e.g., length of instrument considered, data collection strategies designed to be optimally appealing and minimally burdensome)
• Ways to maximize respondent participation are in place
• Existing data useful for the evaluation have been identified and permission to access those data has been obtained

Category: Data analysis. What to look for:
• Procedures for how incoming data will be analyzed to answer the evaluation questions are in place
• Table shells showing analyses to be conducted are developed
Benefits of good evaluation management
practice:
• Maintains clarity among team members
about everyone’s roles and responsibilities
• Identifies and secures resources to
complete the evaluation
• Keeps your evaluation on track in terms of
timeline, budget, or scope
• Provides a sound plan for managing
incoming data
• Enables all evaluation team members to
follow clear procedures for working with
contractors, consultants, and evaluation
partners
This type of planning should be undertaken with
your evaluation team members, program stakehold-
ers, and individuals experienced in evaluation in the
areas outlined in Table 9.2. Depending on your own
level of familiarity with evaluation logistics, you may
or may not feel the need for outside help in working
through this process.
In either case, it’s important to consider how you
will document the decisions made as part of this pro-
cess so that you or others can refer back to them at
a later date. How you do this is up to you and your
evaluation team members.
You may find it helpful to integrate information
on managing evaluation logistics into the individual
evaluation plan, perhaps as an appendix. Or you may
want to produce a separate document containing this
information. The tips in Tool D will help you with
this process, though you are not required to use them;
they are there to use or not as you see fit.
STRATEGY 3: PILOT-TESTING
You should plan to pilot-test your data collection
instruments and procedures. This is one good way to
preempt some of the implementation challenges you
might otherwise face. This is important whether you
are conducting mail and/or telephone surveys; car-
rying out individual interviews, group interviews, or
focus groups; or abstracting data from archival sources.
Benefits of pilot-testing measuring
instruments and data collection procedures:
• Generates effective data collection
instruments that collect required data that
work with the designed data analysis plan
• Clarifies procedures for all data collection,
whether carried out by your staff,
contractors, consultants, or other data
collection partners
• Improves the validity and reliability of the
data collected
During the pilot test you will be looking at such
issues as clarity of instructions, appropriateness and
feasibility of the questions, sequence and flow of ques-
tions, and feasibility of the data collection procedures.
Use lessons learned during the pilot test to modify
your data collection instruments and/or your train-
ing materials for your data collectors. See Tool I for
additional information on training data collectors.
STRATEGY 4: TRAINING DATA
COLLECTION STAFF
Even if you are working with experienced individu-
als who are evaluation savvy, training those who will
be involved in data collection on the specific mea-
suring instruments and data collection procedures
you will use in this evaluation is another good way
to avoid difficulties during the data collection phase.
Training helps to ensure that all staff with data collec-
tion responsibilities are familiar with the instruments
and other forms that are part of your evaluation plan,
as well as the procedures that will be followed and
the safeguards that will be employed in implement-
ing the plan. It will also promote consistency in data
collection procedures across data collectors, thereby
increasing the reliability of the data gathered.
Benefits of training data collection staff:
• Promotes a consistent message about your
evaluation to outside audiences
• Maintains consistency in data collection
procedures
• Prevents loss of data and corruption of
data integrity
• Guards against ethical breaches
• Improves the validity and reliability of the
data collected
Training should be required whether data col-
lection is being done by your own staff, by partner
staff, or by contractors/consultants. Training ses-
sions should cover not only the logistics of the work
but also the ethical aspects, such as issues in human
subjects protection, maintenance of confidentiality
(Chapter 5), and observance of cultural sensitivity
(Chapter 6). Tool I presents guidelines to help you
develop and deliver training to data collection staff.
STRATEGY 5: MONITORING
PROGRESS
As mentioned earlier, an evaluation, like any other project, needs to be carefully managed. This includes not only thinking ahead during planning about what needs to be accomplished, who will do what, and what time and budget constraints exist (per Strategy 2); it also includes monitoring progress and maintaining open lines of communication among members of your evaluation team as your evaluation proceeds.
Benefits of tracking and ongoing
communication:
• Maintains clarity among all your
team members over their roles and
responsibilities
• Keeps your evaluation on track in terms of
timeline, budget, and scope
• Promotes effective communication with
your stakeholders and maintains their
engagement
Strategies such as those found in the Evaluation
Management Tool (Tool G) are useful for project
tracking and ongoing communication. These tools are
equally helpful in managing an evaluation with lots
of “moving parts.” You are not required to use these
tools. However, you may find them helpful in identi-
fying emerging issues that require your attention and
in making sure you stay on track in terms of timeline
and budget.
The tools are designed to help you track prog-
ress overall and against your established budget and
timeline, identify performance issues by your staff
or your contractor, identify implementation issues
such as data access and data collection, and moni-
tor the quality of your evaluation. Information to
help you budget for your evaluation is included in
Tool F.
STRATEGY 6: REPORTING RESULTS
Interim Reporting
Where appropriate, sharing interim findings derived
from your evaluation not only helps maintain stake-
holder interest in the evaluation process but also
increases the likelihood that your stakeholders have
the information they need in a timely manner. If you
decide to share findings midway through the evaluation, be sure to couch the interim findings in caveats making clear that the data are only preliminary at this point. Furthermore:
• Share only what information you are
comfortable sharing at any given point
in time
• Focus on information you feel is important
for stakeholders to begin thinking about
• Consider presenting the information as “food
for thought” based on what you are seeing
thus far
Disseminating Final Results
Dissemination of an evaluation’s final results to stake-
holders should be a process tailored to the informa-
tion needs of your different stakeholder groups. While
final reports are a common way to share findings, it’s
Grinnell, R. M., Gabor, P. A., & Unrau, Y. A. (2015). Program evaluation for social workers: Foundations of evidence-based programs. Oxford University Press.
Copyright © 2015 Oxford University Press, Incorporated. All rights reserved.
198 Part III: Implementing Evaluations
important to consider whether a large, formal final
report is the most appropriate way to disseminate
findings to the specific stakeholders with whom you
are working.
By “appropriate way” we mean a tailoring of
both message and format to the information needs
of a given audience; that is, you need to consider the
best way(s) to make the information you plan to share
accessible to that particular audience. For exam-
ple, some stakeholders may strongly desire a final
report—they may even need it for documentation or
accountability purposes. However, keep in mind that
for other stakeholders a final report may include more
information than they need or want.
Benefits of interim and final reporting:
• Facilitates appropriate timing of your
evaluation in relation to information needs
• Facilitates the comprehension and use of
the findings that were derived from your
evaluation
• Helps ensure, through interim reporting,
that there are few or no “surprises” in the
final evaluation report
Figure 9.1 presents a list of some alternative means
to disseminate evaluation findings. Depending on the
composition of your stakeholder groups, you may want
to experiment with one or more of these alternative
approaches. Additional guidance for presenting the
results of an evaluation is provided in Tool J.
Remember to set aside resources in your budget
to support communication activities—something
that is easy to forget to do. The communications por-
tion of your budget can be based on the communica-
tion ideas put forward in your evaluation plans.
Depending on the communication venue(s) you
choose, costs for communication activities might
include such things as staff time for materials devel-
opment and attendance at stakeholders’ meetings,
meeting space, refreshments, printing costs, or web-
site maintenance.
Also remember to check with your funders about
which of these costs are allowable under your grant(s).
Communication may be something your partners can
help with in various ways, but if tight resources limit
you, then focus on your primary stakeholders.
STRATEGY 7: DEVELOPING AN ACTION PLAN
Another important step in linking evaluation to
action involves developing an action plan containing
strategies for implementing evaluation recommenda-
tions. The action plan should, at a minimum, contain
the following items:
• Rationale for recommended strategies
• Clear roles and responsibilities for
implementing the elements of the action plan
Figure 9.1: Alternative communication formats, ordered roughly from least interactive with the audience (memos and postcards) to most interactive (impromptu or planned meetings with individuals):
• Memos and postcards
• Comprehensive written reports
• Executive summaries
• Newsletters, bulletins, brochures
• News media communications
• Verbal presentations
• Videotape or computer-generated presentations
• Posters
• Internet communications
• Working sessions
• Impromptu or planned meetings with individuals
Chapter 9: Preparing for an Evaluation 199
• Timeline
• Sources of funding for program or
intervention modifications, if needed
Define roles for stakeholders and community
members in the action planning and the action imple-
mentation processes. For example, you can convene a
“working session” that combines a briefing on find-
ings for stakeholders with joint planning on next steps
and development of an action plan.
Benefits of action planning:
• Facilitates the comprehension and use of
the evaluation’s findings
• Engages stakeholders in the improvement
of your program
• Promotes accountability for use of your
evaluation’s findings
Involving a variety of stakeholders in the action
planning process will help facilitate stakeholder and
decision-maker buy-in and thereby facilitate imple-
mentation of any recommendations that make sense
for your program. Tool K contains an Action Plan
template you can adapt to the needs of your own
program.
STRATEGY 8: DOCUMENTING
LESSONS LEARNED
History repeats itself—because we weren’t listening
the first time. That’s as true for evaluation as it is any-
where else. Yet by documenting lessons learned from
one evaluation for use in future evaluations you can
begin building a historical record of knowledge about
evaluation to pass on to future “generations” in your
program. Consider adopting the habit of closing your
evaluation team meetings by asking attendees:
• What have we learned?
• What can we do better next time?
Document these discussions in your meet-
ing minutes for later reference. In this way you are
encouraging your team members to reflect on their
evaluation practice, and this will lead to evaluation
capacity building.
Benefits of documenting lessons learned:
• Avoids repeating past mistakes
• Builds evaluation capacity among you
and your stakeholders
• Transfers knowledge to those who come
after you
• Creates an archive of good evaluation
practices over time
As your various evaluations proceed and as
you “learn by doing,” make sure you and your team
members pause occasionally to reflect upon what
you have learned and document those things you
want to remember to make your next evaluation go
more smoothly. In some cases, you may learn things
you would like to share more broadly, for example,
through presentations at a grantee meeting, a profes-
sional conference, or even in a peer-reviewed journal
article.
STRATEGY 9: LINKING BACK TO
YOUR EVALUATION PLAN
Linking your evaluation findings back to your evalu-
ation plan is a critical final strategy in ensuring an
evaluation’s use and promoting ongoing program
improvement. It’s not uncommon that an evalua-
tion report raises more questions than it answers.
This is actually a good thing. In a sense, each evalua-
tion you conduct helps you set the agenda for future
evaluations.
On planning . . .
Failing to plan is planning to fail.
~Alan Lakein
Findings from your evaluation may suggest, for
example, that a particular component of the program
was functioning well (e.g., a parent training compo-
nent) but that another component you touched on
only tangentially is functioning less well and should
be looked into more closely (e.g., community aware-
ness of available parent training classes). Or findings
may demonstrate that another component of your
program is not working well yet not really explain
why that is so or how the problem could be remedied.
The why and how of what isn’t working may then
become grist for the mill of a future evaluation. Further,
findings regarding issues encountered with the logis-
tics of the evaluation itself may suggest that alternative
approaches need to be tried in upcoming evaluations.
This is not to say that you need to completely
revamp your evaluation plan every time you complete
another individual evaluation. Rather, we propose
that new information gleaned from each succes-
sive evaluation be viewed within the context of your
long-range evaluation plans to see if any midcourse
corrections are warranted.
While it’s possible that recently compiled find-
ings may occasionally imply that a planned evaluation
should be scrapped and replaced with one of greater
urgency, it’s far more likely that your revised approach
will involve only minor modifications to one or more
proposed evaluations.
Findings may also help you generate ideas for an
evaluation “wish list” pending the next evaluation
cycle—or the sudden availability of additional evalu-
ation funds. What you want is for your evaluation to
continually inform not only your immediate program
improvement efforts but also your longer range strat-
egies for evaluations. That’s why linking evaluation
findings back to your strategic evaluation plan is so
critical.
As a last check, before you call an evaluation plan
“final” and begin to implement your evaluation, use
the checklist in Table 9.3 to see if you have covered all
the steps that will help lead to a successful implemen-
tation of your evaluation.
Table 9.3: Preevaluation Checklist for the Successful Implementation of an Evaluation Plan. (Answer Yes or No to each item.)
Do we have an evaluation planning team composed of individuals with the knowledge, skills, and experience relevant to planning this evaluation?
Do we have an evaluation implementation team of individuals who will take responsibility for implementing the evaluation, providing access to data, overseeing data collection, analyzing the data, and preparing the evaluation report?
Have we identified our key stakeholders for this evaluation? (See Chapters 1–3.)
Have we thought about how to work with our stakeholders? (Table 9.1)
• Preevaluation?
• During the evaluation?
• Postevaluation?
• To develop the Action Plan (Tool K)?
• To manage public relations?
• To minimize evaluation anxiety (Tool C)?
Will the evaluation design (Tool E) and data collection methods (Tool H) result in …
• Methodology that is feasible given resource and practical constraints?
• Data that are credible and useful to stakeholders?
• Data that are accurate?
• Data that will help answer the evaluation questions in a timely manner?
Table 9.3: Continued
Are we prepared logistically? (Table 9.2)
Do we have plans for …
• Staffing?
• Budget (Tool F)?
• Funding?
• Data sharing and other types of contracts/agreements?
• Human subjects (IRB), HIPAA, and organizational clearances/permissions?
Are we prepared for data collection? (Table 9.2)
Have we addressed …
• Finalization and approval of data collection instruments?
• Propriety of the evaluation, including protection of human subjects?
• Cultural sensitivity, clarity, and user-friendliness of instruments?
• Respondent burden?
• Methods to obtain high response rates or complete data?
• Data handling, processing, storage?
• Data confidentiality, security?
Did we pilot-test our measuring instruments and data collection procedures?
Did we train the data collection staff? (Tool I)
Will the data analyses answer our evaluation questions?
Have we specified the …
• Analyses to answer each evaluation question?
• Table shells that show how the results will be presented?
Do we have methods in place (Tool G) to track evaluation implementation and to promote communication within the evaluation implementation team?
For example, do we have a …
• Timeline?
• Budget?
• Roles and responsibilities table?
• Project description?
• Project status form?
Have we planned for sharing interim results (if appropriate) and for disseminating the final results? (See Tool J.)
Spending some "quality time" over a glass of wine—or two—with your evaluation plan will pay off in the long run as you move forward to its implementation. With a solid individual evaluation plan in hand, you will be in the best possible position to implement an evaluation that meets the standards of utility, feasibility, propriety, and accuracy that were covered in Chapter 4.
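If it helps, the Yes/No checklist above can be kept in machine-readable form so outstanding items are easy to surface before you declare the plan "final." A minimal sketch (the abbreviated item wording is ours, not the book's):

```python
# Illustrative only: record each checklist answer and list the "No" items
# that still need attention before implementing the evaluation plan.
checklist = {
    "Planning team in place": True,
    "Implementation team in place": True,
    "Key stakeholders identified": True,
    "Instruments pilot-tested": False,
    "Data collection staff trained (Tool I)": False,
    "Interim/final reporting planned (Tool J)": True,
}

# Collect every item still answered "No."
unresolved = [item for item, answered_yes in checklist.items() if not answered_yes]
for item in unresolved:
    print("Still to address:", item)
# → Still to address: Instruments pilot-tested
# → Still to address: Data collection staff trained (Tool I)
```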
Also, by following the strategies described here
that relate to stakeholder engagement and sharing
results—“Working with Stakeholders,” “Monitor-
ing Progress and Promoting Ongoing Communi-
cation,” “Interim Reporting and Dissemination of
Final Results," "Developing an Action Plan," and "Linking Back to the Evaluation Plan"—you will be
better able to translate your evaluation findings into
shared action by you and your stakeholders alike.
SUMMARY
This chapter briefly provided the nine basic strategies
that need to be followed when you are going to do any
type of evaluation. Therefore, when reading the fol-
lowing four chapters, keep in mind that the strategies
outlined in this chapter must be applied to each one.
In a nutshell, they represent important steps you can
take during the planning stages of your evaluation that
will help you to implement your plans more smoothly.
Study Questions Chapter 9
The goal of this chapter is to provide you with a beginning knowledge base for you to feel comfortable in answering the
following questions. AFTER you have read the chapter, indicate how comfortable you feel you are in answering each
question on a 5-point scale where 1 = Very uncomfortable, 2 = Somewhat uncomfortable, 3 = Neutral, 4 = Somewhat comfortable, and 5 = Very comfortable.
If you rated any question between 1 and 3, reread the section of the chapter where the information for the question is found. If
you still feel that you’re uncomfortable in answering the question, then talk with your instructor and/or your classmates for
more clarification.
For each question, circle one number (1 2 3 4 5) to indicate your degree of comfort.
1. List the nine strategies that you need to consider before doing any type of program evaluation. (1 2 3 4 5)
2. Pretend, for the moment, that you have been hired to evaluate your social work program. However, you fully realize that you need to address several issues in reference to your program's stakeholders before you actually carry out your evaluation (Strategy 1). List, and then discuss, the issues you feel need to be addressed in relation to your stakeholder groups. Provide as many examples as you can throughout your discussion. And, more important, don't forget to utilize the tools contained in the Toolkit when appropriate. (1 2 3 4 5)
3. Pretend, for the moment, that you have been hired to evaluate your social work program. However, you fully realize that you need to address several issues in reference to developing a good process for managing your evaluation before you actually carry out your evaluation (Strategy 2). List, and then discuss, the issues you feel need to be addressed in relation to developing a process for managing your evaluation. Provide as many examples as you can throughout your discussion. And, more important, don't forget to utilize the tools contained in the Toolkit when appropriate. (1 2 3 4 5)
4. Pretend, for the moment, that you have been hired to evaluate your social work program. However, you fully realize that you need to address several issues in reference to pilot-testing your data collection instruments before they are used to collect data for your evaluation (Strategy 3). List, and then discuss, the issues you feel need to be addressed in relation to pilot-testing your measuring instruments and data collection procedures. Provide as many examples as you can throughout your discussion. And, more important, don't forget to utilize the tools contained in the Toolkit when appropriate. (1 2 3 4 5)
5. Pretend, for the moment, that you have been hired to evaluate your social work program. However, you fully realize that you need to address several issues in reference to training the folks who will be collecting data before they actually collect them (Strategy 4). List, and then discuss, the issues you feel need to be addressed in relation to training your data collection staff. Provide as many examples as you can throughout your discussion. And, more important, don't forget to utilize the tools contained in the Toolkit when appropriate. (1 2 3 4 5)
6. Pretend, for the moment, that you have been hired to evaluate your social work program. However, you fully realize that you need to address several issues in reference to how you are going to monitor the progress of your evaluation in addition to how you are going to promote ongoing communication with your stakeholder groups before you actually carry out your evaluation (Strategy 5). List, and then discuss, the issues you feel need to be addressed in relation to monitoring your evaluation's progress in addition to promoting ongoing communication with your stakeholder groups. Provide as many examples as you can throughout your discussion. And, more important, don't forget to utilize the tools contained in the Toolkit when appropriate. (1 2 3 4 5)
7. Pretend, for the moment, that you have been hired to evaluate your social work program. However, you fully realize that you need to address several issues in reference to how you are going to handle interim reporting procedures and the dissemination of your findings before you actually carry out your evaluation (Strategy 6). List, and then discuss, the issues you feel need to be addressed. Provide as many examples as you can throughout your discussion. And, more important, don't forget to utilize the tools contained in the Toolkit when appropriate. (1 2 3 4 5)
8. Pretend, for the moment, that you have been hired to evaluate your social work program. However, you fully realize that you need to address several issues in reference to how you are going to develop an action plan before you even begin your evaluation (Strategy 7). List, and then discuss, the issues you feel need to be addressed in relation to developing an action plan for your evaluation. Provide as many examples as you can throughout your discussion. And, more important, don't forget to utilize the tools contained in the Toolkit when appropriate. (1 2 3 4 5)
9. Pretend, for the moment, that you have been hired to evaluate your social work program. However, you fully realize that you need to address several issues in reference to how you are going to document what you have learned from your evaluation (Strategy 8). List, and then discuss, the issues you feel need to be addressed in relation to documenting what you have learned from your evaluation. Provide as many examples as you can throughout your discussion. And, more important, don't forget to utilize the tools contained in the Toolkit when appropriate. (1 2 3 4 5)
10. Pretend, for the moment, that you have been hired to evaluate your social work program. However, you fully realize that you need to address several issues in reference to how you are going to link your findings back to your original evaluation plan even before you begin the evaluation (Strategy 9). List, and then discuss, the issues you feel need to be addressed in relation to linking your evaluation findings back to your original evaluation plan. Provide as many examples as you can throughout your discussion. And, more important, don't forget to utilize the tools contained in the Toolkit when appropriate. (1 2 3 4 5)
Chapter 9 Assessing Your Self-Efficacy
AFTER you have read this chapter AND have completed all of the study questions, indicate how knowledgeable you feel you
are for each of the following concepts on a 5-point scale where 1 = Not knowledgeable at all, 2 = Somewhat unknowledgeable, 3 = Neutral, 4 = Somewhat knowledgeable, and 5 = Very knowledgeable.
For each concept, circle one number (1 2 3 4 5) to indicate your knowledge level.
1. Overall, the nine strategies that can be implemented to increase the overall success of an evaluation (1 2 3 4 5)
2. Working with stakeholders in an effort to increase the overall success of an evaluation (1 2 3 4 5)
3. Developing a process for managing an evaluation in an effort to increase its overall success (1 2 3 4 5)
4. Pilot-testing data collection instruments in an effort to increase the overall success of an evaluation (1 2 3 4 5)
5. Training data collection staff in an effort to increase the overall success of an evaluation (1 2 3 4 5)
6. Monitoring the progress of an evaluation in an effort to increase its overall success (1 2 3 4 5)
7. Writing interim and final evaluation reports in an effort to increase the overall success of an evaluation (1 2 3 4 5)
8. Developing an action plan in an effort to increase the overall success of an evaluation (1 2 3 4 5)
9. Documenting the lessons learned from an evaluation in an effort to increase its overall success (1 2 3 4 5)
10. Linking evaluation findings back to the evaluation's strategic plan (1 2 3 4 5)
Add up your scores (minimum = 10, maximum = 50). Your total score = ____
A (47–50) = Professional evaluator in the making
A– (45–46) = Senior evaluator
B+ (43–44) = Junior evaluator
B (41–42) = Assistant evaluator
B– (10–40) = Reread the chapter and redo the study questions
CHAPTER OUTLINE
WHAT ARE NEEDS ASSESSMENTS?
DEFINING SOCIAL PROBLEMS
Social Problems Must Be Visible
DEFINING SOCIAL NEEDS
The Hierarchy of Social Needs
FOUR TYPES OF SOCIAL NEEDS
Perceived Needs
Normative Needs
Relative Needs
Expressed Needs
SOLUTIONS TO ALLEVIATE SOCIAL NEEDS
STEPS IN DOING A NEEDS ASSESSMENT
STEP 3A: FOCUSING THE PROBLEM
Example
STEP 4A: DEVELOPING NEEDS
ASSESSMENT QUESTIONS
STEP 4B: IDENTIFYING TARGETS FOR
INTERVENTION
Establishing Target Parameters
Selecting Data Sources (Sampling)
STEP 4C: DEVELOPING A DATA
COLLECTION PLAN
Existing Reports
Secondary Data
Individual Interviews
Group Interviews
Telephone and Mail Surveys
STEP 4D: ANALYZING AND
DISPLAYING DATA
Quantitative Data
Qualitative Data
STEP 6A: DISSEMINATING AND
COMMUNICATING EVALUATION RESULTS
SUMMARY