Please see attached.
5340: U8 D1 Using a Logic Model as a Strategic Planning Tool
In your readings for this week, Watson and Hoefer (2014) provide a general
overview of using logic models to define a problem and identify inputs, activities,
outputs, and outcomes.
In your initial post, discuss how application of a logic model would be different for
a small nonprofit organization, a large federal agency, or the development of a
policy for a specific human service program. How would the leader for each of
these organizations engage the participants in the process? Cite examples from
the reading.
Note: Must be a minimum of 250 words and cite at least 1 scholarly journal.
7 Logic Models and Program Evaluation

INTRODUCTION
Nonprofit administrators both develop and evaluate programs. A logic model is useful for
both, even though development happens before the program begins and evaluation happens
after it has been in operation. A good evaluation, however, is planned at the same time that the
program is designed so that necessary data is collected along the way, rather than annually or
after the program finishes. This chapter first describes the process of logic modeling using an
example of the logic model. Then, it discusses how to use the logic model to plan an evaluation.
LOGIC MODELS
The idea of logic models as an adjunct to program evaluation extends at least as far back as
2000 when the Kellogg Foundation published a guide to developing logic models for program
design and evaluation. According to Frechtling (2007), a logic model is “a tool that describes
the theory of change underlying an intervention, product or policy” (p. 1). While one can find
many variations on how a logic model should be constructed, it is a versatile tool that is used
to design programs, assist in their implementation, and guide their evaluation. This chapter
describes one basic approach to logic modeling for program evaluation and links the planning
and evaluation aspects of human service administration.
You should understand that not all programs have been designed with the aid of a logic
model, although that is becoming less common every year. Federal grants, for example, often
require applicants to submit a logic model, and their use throughout the human services sec-
tor is growing through academic education and in-service training. If there is no logic model
for a program you are working with, it is possible to create one after a program has been
implemented. You can thus bring the power of the tool to bear when changing a program or
creating an evaluation plan.
Logic model terminology is borrowed from systems theory. Because logic models are said to
describe the program’s “theory of change,” it is possible to believe that this refers to something
such as social learning theory, cognitive-behavioral theory, or any one of a number of psycho-
logical or sociological theories. In general, though, logic models have a much less grand view
of theory. We begin with the assumption that any human services program is created to solve
a problem. The problem should be clearly stated in a way that does not predetermine how
the problem will be solved. The utility of a logic model is in showing how the resources used
(inputs) are changed into a program (activities) with closely linked products (outputs) that
[Source: Watson, L. D., & Hoefer, R. A. (2013). Developing nonprofit and human service leaders: Essential knowledge and skills. SAGE Publications. Copyright © 2013 SAGE Publications, Incorporated. All rights reserved.]
then lead to changes in clients in the short, medium, and long terms. The net effect of these
client changes is that the original problem is solved or at least made better for the clients in
the program. An example of a logic model is shown as Figure 7.1.
The problem being addressed by the example program is, “School-aged youth have anger
management problems leading to verbal and physical fights at school and home.” This prob-
lem statement is specific about who has a problem (school-aged youth), what the problem is
(anger management problems leading to verbal and physical fights), and where it is a problem
(school and home). It also does not prejudge what the solution is, allowing for many possible
programs to address the problem. An example problem statement that is not as good because
it states the problem in a way that allows only one solution is, “There is a lack of anger man-
agement classes in schools for school-aged youth.”
Another way to strengthen the problem statement is to phrase it in such a way
that almost anyone can agree that it is actually a problem. The example problem statement
might make this point more clearly by saying, “There are too many verbal and physical fights
at school and home among school-aged youth.” Phrased this way, there would be little doubt
that this is a problem, even though the statement is not specific about the number of such
fights or the cause of the fights. Note, however, that if the program personnel want to focus on
anger management problems, this broader way of stating the problem could lead to a host of
other issues being addressed instead, such as overcrowding in the halls, gang membership,
conflict over curfews at home, or anything else that might conceivably cause youth to fight at
school or home. Be prepared to revisit your first effort at the problem statement and seek
input from interested stakeholders to be sure that you are tackling what is really considered
the reason for the program. The problem statement is vital to the rest of the logic model and
evaluation, so take the time to make several drafts to get full agreement.
After the problem statement, the logic model has six columns. Arrows connect what is
written in one column to something else in the next column to the right or even within the
same column. These arrows are the “logic” of the program. If the column to the left is
achieved, then we believe that the element at the end of the arrow will be achieved. Each
arrow can be considered to show a hypothesis that the two elements are linked. (The example
presented here is intentionally not “perfect” so that you can see some of the nuances and
challenges of using this tool.)
The first column is labeled “Inputs.” In this column, you write the major resources that will
be needed or used in the program. Generically, these tend to be funds, staff, and space, but
can include other elements such as type of funds, educational level of the staff, and location
of the space (on a bus line, for example), if they apply to your program. The resource of
“staff,” for example, might mean MSW-level licensed counselors. In the end, if only staff
members with bachelor’s degrees in psychology are hired, this would indicate that the “staff”
input was inadequate.
The second column is “Activities.” In this area, you write what the staff members of the
program will be doing—what behaviors you would see them engage in if you sat and watched
them. Here, as elsewhere in the logic model, there are decisions about the level of detail to
include. It would be too detailed, for example, to have the following bullet points for the case
management activity:
• Answer phone calls about clients
• Make phone calls about clients
• Learn about other agencies’ services
• Write out referral forms for clients to other agencies
This is what you would see, literally, but the phrase “case management” is probably
enough. Somewhere in program documents, there should be a more detailed description of
the duties of a case manager so that this level of detail is not necessary on the logic model,
which is, after all, a graphical depiction of the program’s theory of change, not a daily
to-do list.
Figure 7.1  Example of a Logic Model

Problem: School-aged youth have anger management problems leading to verbal and physical fights at school and home.

Inputs: Funding; Staff; Space

Activities: Case management; Individual counseling

Outputs: Referrals to other agencies; Counseling sessions

Outcomes (short-term): Better recognition of role anger plays in their lives; Beginning level use of skills to handle anger

Outcomes (medium-term): Higher level use of skills to handle anger; Reframe situations so anger occurs less frequently

Outcomes (long-term): Fewer fights at school; Fewer fights at home
The other danger is being too general. In this case, a phrase such as “provide social work
services” wouldn’t be enough to help the viewer know what the employee is doing as there
are so many activities involved in social work services. Getting the correct level of specificity
is important in helping develop your evaluation plan here and throughout the logic model.
As you can see from the arrows leading from the inputs to the activities, the program the-
ory indicates that, given the proper funds, staff, and space, the activities of case management
and individual counseling will occur. This may or may not happen, however, which is why a
process evaluation is needed and will be discussed later in this chapter.
The third column lists “Outputs.” An output is a measurable result of an activity. In this
example, the activity of “case management” results in client youth being referred to other
agencies for services. The output of the activity “individual counseling” is counseling sessions.
It is important to note that outputs are not changes in clients—outputs are the results of
agency activities that may or may not then result in changes to clients. The connection
between agency activity and outputs is perhaps the most difficult part of putting together a
logic model because many people mistakenly assume that if a service is given and docu-
mented, then client changes are automatic. This is simply not true.
The next three columns are collectively known as “Outcomes.” An outcome is a change in the
client and should be written in a way that is a change in knowledge, attitude, belief, status, or
behavior. Outcomes are why programs are developed and run—to change clients’ lives. Outcomes
can be developed at any level of intervention—individual, couple or family, group, organization,
or community of any size. This example uses a program designed to make a change at an indi-
vidual youth level, but could also have changes at the school or district level if desired.
Outcomes are usually written to show a time dimension with short-, medium-, and long-
term outcomes. The long-term outcome is the opposite of the problem stated at the top of the
logic model and thus ties the entire intervention back to its purpose—to solve a particular
problem. The division of outcomes into three distinct time periods is obviously a helpful fic-
tion, not a tight description of reality. Still, some outcomes are expected to come sooner than
others. These short-term outcomes are usually considered the direct result of outputs being
developed. On the example logic model, the arrows indicate that referrals and individual
counseling are both supposed to result in client youth better recognizing the role that anger
plays in their life. After that is achieved, the program theory hypothesizes that clients will use
skills at a beginning level to handle their anger. This is a case where one short-term outcome
(change in self-knowledge) leads the way for a change in behavior (using skills).
OUTCOMES AND GOALS AND OBJECTIVES:
WHAT’S THE DIFFERENCE?

Logic models use the term outcome, but many people use the terms goals and objectives
to talk about what a program is trying to achieve. In the previous chapter, you were told
that an outcome objective answers the question, “What difference did it make in the lives
of the people served?” In this chapter, you are told that an outcome is a “change in the
client.” What’s the difference?

In reality, there is not much difference. Goals and objectives are one way of talking
about the purpose of a program. This terminology is older than the logic model terminol-
ogy and more widespread. But it can be confusing, too, because an objective at one level
of an organization may be considered a goal at another level or at a different time.
Outcomes are easier to fit into the logic model approach to showing program theory by
relating to resources, activities, and outputs. Systems theory terminology is more wide-
spread than before and avoids some of the conceptual pitfalls of goals-and-objectives
thinking.

We present both sets of terms so that you can be comfortable in all settings. But you
should realize that both approaches are ultimately talking about the same thing: the
ability of an organization to make people’s lives better.
The element “beginning level use of skills to handle anger” has two arrows leading to
medium-term outcomes. The first arrow leads to “higher level use of skills to handle anger.”
In this theory of change, at this point, there is still anger, but the youth recognize what is
occurring and take measures to handle it in a skillful way that does not lead to negative con-
sequences. The second arrow from “beginning level use of skills to handle anger” indicates
that the program designers believe that the skills youth learn will assist them to reframe situ-
ations they are in so that they feel angry less frequently. This is a separate behavior from
applying skills to handle anger, so it receives its own arrow and box.
The final column represents the long-term outcomes. Often, there is only one element
shown in this column, one indicating the opposite of the problem. In this logic model, since
the problem is seen to occur both at school and at home, each is looked at separately. A youth
may reduce fights at home but not at school, or vice versa, so it is important to leave open
the possibility of only partial success.
This example logic model shows a relatively simple program theory, with two separate
tracks for intervention but with overlapping outcomes expected from the two intervention
methods. It indicates how one element can lead to more than one “next step” and how dif-
ferent elements can lead to the same outcome. Finally, while it is not necessarily obvious just
yet, this example shows some weak points in the program’s logic that will emerge when we
use it as a guide to evaluating the program.
PROGRAM EVALUATION
As you can see from this discussion, we have used a logic model to represent what we believe
will happen when the proper inputs are applied to the correct client population. In the end,
if all goes well, clients will no longer have the problem the program addresses, or at least the
degree or extent of the problem will be less.
Evaluation is a way to determine the worth or value of a program (Rossi, Lipsey, &
Freeman, 2003). There are two primary types of evaluation: process and outcome. The first,
process evaluation, examines the way a program runs. In essence, a process evaluation exam-
ines the first three columns of a logic model to determine whether required inputs were avail-
able, the extent to which activities were conducted, and the degree of output accomplishment.
Another aspect of a process evaluation, called fidelity assessment, examines whether the
program being evaluated was conducted in accord with the way the program was supposed
to be conducted. If all components of a program are completed, fidelity is said to be high.
Particularly with evidence-based and manualized programs, if changes are made to the pro-
gram model during implementation, the program’s effectiveness is likely to be diminished.
The value of the logic model for evaluation is that most of the conceptual information
needed to design the evaluation of a program is in the logic model. The required inputs are
listed, and the evaluator can check to determine which resources actually came into the pro-
gram. Activities are similarly delineated, and an evaluator can usually find a way to count the
number of activities that the program completed. Similarly, the logic model describes what
outputs are expected, and the evaluator merely has to determine how to count the number of
completed outputs that result from the program activities.
Looking at the example logic model shows us that we want to have in our evaluation plan
at least one way to measure whether funding, staff, and space (the inputs) are adequate; how
much case management occurred and individual counseling was conducted (the activities);
and the extent to which referrals were made (and followed up on) and the number of indi-
vidual counseling sessions that happened (the outputs). This information should be in pro-
gram documents to compare what was planned for with what was actually provided. Having
a logic model from the beginning allows the evaluator to ensure that proper data are being
collected from the program’s start, rather than scrambling later to answer some of these basic
questions.
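As a sketch of what this planned-versus-actual comparison might look like in practice, the snippet below compares planned counts against program records. The category names and all numbers are invented for illustration, not taken from the chapter's example program.

```python
# Hypothetical process-evaluation check: compare the counts planned in the
# logic model with the counts found in program records. Numbers invented.

planned = {"counseling sessions": 400, "referrals made": 120,
           "referrals followed up": 120}
actual  = {"counseling sessions": 300, "referrals made": 90,
           "referrals followed up": 60}

def completion_rates(planned, actual):
    """Fraction of each planned input/activity/output actually delivered."""
    return {item: actual.get(item, 0) / planned[item] for item in planned}

rates = completion_rates(planned, actual)
for item, rate in rates.items():
    print(f"{item}: {rate:.0%} of plan")
```

A table like this makes shortfalls visible at a glance; in this invented example, the follow-up on referrals lags well behind the referrals themselves, exactly the kind of gap a process evaluation is meant to surface.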
As noted earlier, this is not a perfect logic model. The question in the process evaluation at
this stage might be to determine how to actually measure “case management.” The output is
supposed to be “referrals to other agencies,” but there is much else that could be considered
beneficial from a case management approach. This element may need careful delineation and
discussion with stakeholders to ascertain exactly what is important about case management
that should be measured.
The second primary type of evaluation examines program outcomes. Called an outcome
evaluation, it focuses on the right half of the logic model, where the designated short-,
medium-, and long-term outcomes are listed. The evaluator chooses which outcomes to assess
from among the various outcomes in the logic model. Decisions need to be made about how
to measure the outcomes, but the logic model provides a quick list of what to measure. In the
example logic model, the short-term outcome “better recognition of the role anger plays in
their lives” must be measured and could be accomplished using a set of questions asked at
intake into the program and after some time has passed after receiving services. One standard-
ized anger management instrument is called the “Anger Management Scale” (Stith & Hamby,
2002). A standardized instrument, if it is appropriate for the clients and program, is a good
choice because you can find norms, or expected responses, to the items on the instrument. It
is helpful to you, as the evaluator, to know what “average” responses are so you can compare
your clients’ responses to the norms. Sometimes, however, it can be difficult to find a stan-
dardized instrument that is fully appropriate and relevant to your program.
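If a standardized instrument with published norms were available, the comparison described above could be sketched as follows. The norm value and client scores are invented for illustration and are not taken from the Anger Management Scale.

```python
# Hypothetical norms comparison: how far does the client group's average
# score sit from an instrument's published norm? All values are invented.
from statistics import mean, stdev

norm_mean = 42.0                    # assumed published "average" score
client_scores = [55, 48, 60, 51, 47]

sample_mean = mean(client_scores)
gap = sample_mean - norm_mean       # positive gap = clients score above the norm
spread = stdev(client_scores)       # how varied the client group is

print(f"client mean {sample_mean:.1f} vs norm {norm_mean:.1f} (gap {gap:+.1f})")
```

Knowing both the gap and the spread helps the evaluator judge whether the client group genuinely differs from the instrument's normed population or whether a few extreme scores are driving the average.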
Another way of measuring is to use an instrument you make up yourself. This has the
advantage of simplicity and of being directly connected to your evaluation. In this case, for
example, you could approach this outcome in at least two ways. First, you could request a
statement from the case worker or counselor indicating that the client has “recognized the
role that anger plays” in his or her life, without going into any detail. A second approach
would be to have the client write a statement about the role anger plays in his or her life.
Neither of these measurements will have a lot of practical utility. Going through the logic
model in this way actually shows that this link in program logic is difficult to measure and
may not be totally necessary.
WHAT IS AN UNANTICIPATED OUTCOME?

Outcome evaluations also sometimes include a search for unanticipated outcomes. An
unanticipated outcome is a change in clients or the environment that occurs because of
the program, intervention, or policy, but that was not expected to result and so is not
included in the logic model.
While it may seem startling to have an example in a text that shows a less-than-perfect
approach, it is included here to demonstrate how useful a logic model can be in revealing weak
spots in the program logic. This link to “better recognition” is not a fatal problem, and may
indeed be an important cognitive change for the client. The issue for evaluation is how to
measure it, and whether it really needs to be measured at all.
Of more importance is the next link, which leads to “beginning level use of skills to handle
anger.” The evaluation must ensure that clients understand skills to help them handle anger and so document
these skills. It is not enough to indicate that skills were taught, as in a group or individual
session. Teaching a class is an activity and so would be documented in the process evaluation
portion of the overall evaluation, but being in a class does not guarantee a change in the client.
In this evaluation, we would like to have a measure of skill that can show improvement in the
ability to perform the anger management skill. This attribute of the measure is important
because we expect the clients to get better in their use over time and include more skillful use
of the techniques as a medium-term outcome in the logic model.
The other medium-term outcome expected is that clients will be able to reframe situations
so that they actually get angry less frequently. The program logic shows this outcome occur-
ring as a result of both beginning and higher level use of skills. Because this element is broken
out from the use of skills to “handle anger,” it will need a separate measure. As an evaluator,
you can hope that an established, normed instrument is available, or that this is a skill that is
measured by a separate item on a longer scale. If not, you will need to find a way to pull this
information from staff members’ reports or client self-assessments.
The final links in the logic model connect the medium-term outcomes to the long-term
outcomes of fewer fights at school and fewer fights at home. Because youth having too many
fights was identified as the problem this program is addressing, we want to know to what
degree fights decreased. The measure here could be client self-reports, school records, or
reports from people living in the home.
Implicit in the discussion of the use of this logic model for evaluation purposes is that
measurements at the end will be compared to an earlier measure of the same outcome. This
is called a single group pretest-posttest evaluation (or research) design. It is not considered a
strong design due to the ability of other forces (threats to internal validity) to affect the
results. The design could be stronger if a comparison group of similar youth (perhaps at a
different school) were chosen and tracked with the same measures. The design could be much
stronger if youth at the same school were randomly assigned to either a group that received
the program or a different group that did not receive the program. It is beyond the scope of
this book to cover in detail all the intricacies of measurement and evaluation design, but we
hope this brief overview whets your appetite for learning more.
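A minimal sketch of these designs, using invented scores: each pair below is one youth's pretest and posttest count of fights, and the comparison-group version nets out changes that would have happened anyway.

```python
# Hypothetical pretest-posttest comparison. All scores are invented:
# each pair is (fights before program, fights after program) for one youth.
from statistics import mean

def mean_change(scores):
    """Average within-person change from pretest to posttest."""
    return mean(post - pre for pre, post in scores)

program_group = [(14, 8), (11, 7), (16, 12), (9, 5), (13, 9)]
comparison_group = [(13, 12), (12, 11), (15, 14), (10, 10), (12, 11)]

program_change = mean_change(program_group)      # single-group design stops here
comparison_change = mean_change(comparison_group)
net_effect = program_change - comparison_change  # change beyond outside forces

print(f"program change {program_change:+.1f}, net of comparison {net_effect:+.1f}")
```

The single-group design reports only the first number; adding the comparison group shows how much of the improvement might have occurred without the program, which is why the stronger designs described above subtract the comparison group's change.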
Measurement of outcomes, while alluded to earlier, is an important part of any evaluation
effort. If measures are not appropriate or have low validity and reliability, the value of the
evaluation will be seriously compromised. It is suggested that anyone designing an evaluation
look at a book on research methods such as Rubin and Babbie (2012), and also have access
to books about measures, such as Fischer and Corcoran (2007). (The cost of a new book on
research methods may be pretty high, but used editions contain much the same information
and can be found for much lower prices.)
SUMMARY
Using an example, this chapter has covered the components of a logic model and how to
develop one. It has also demonstrated how to use a logic model to design an evaluation plan,
including how doing so raises issues of program logic, measurement, and evaluation design.
REFERENCES
Fischer, J., & Corcoran, K. (2007). Measures for clinical practice and research: A sourcebook (4th ed.).
New York: Oxford University Press.
Frechtling, J. (2007). Logic modeling methods in program evaluation. San Francisco: Jossey-Bass.
Preskill, H., & Russ-Eft, D. (2004). Building evaluation capacity. Thousand
Oaks, CA: Sage.
Rossi, P., Lipsey, M., & Freeman, H. (2003). Evaluation: A systematic approach (7th ed.). Thousand
Oaks, CA: Sage.
Rubin, A., & Babbie, E. (2012). Essential research methods for social work (3rd ed.). Brooks/Cole.
Stith, S., & Hamby, S. (2002). The anger management scale: Development and preliminary psychomet-
ric properties. Violence and Victims, 17(4), 383–402.
HELPFUL TERMS
Activities—elements of a logic model that describe what is done in the program, interven-
tion, or policy with the inputs allocated.
Fidelity evaluation or fidelity assessment—a type of process evaluation specifically designed
to determine the fidelity with which a program, intervention, or policy was implemented. In
other words, a fidelity evaluation (or fidelity assessment) determines the degree to which the
program was conducted in the way it was supposed to be conducted.
Goals—descriptions of future outcomes or states of being that typically are not measurable
or achievable. Instead, goal statements are focused on outcomes and are ambitious and ide-
alistic (see Chapter 6).
Inputs—elements of a logic model that describe the resources that will be used to address
the problem described in the problem statement. Inputs typically include funding, staff, and
space.
Logic model—“[a] tool that describes the theory of change underlying an intervention, prod-
uct or policy” (Frechtling, 2007, p. 1). Using systems theory concepts, relationships between
resources, activities, and desired outcomes are displayed.
Measurement—the act of operationalizing concepts (such as a particular change in clients)
and assigning a score or value to the level of that concept.
Objectives—the results that are expected as the organization works toward its stated goals.
Objectives are the steps that will be taken to reach the stated goals (see Chapter 6).
Outcome evaluation—a type of evaluation where the focus is answering questions about the
achievement of the program’s stated desired outcomes. Sometimes, efforts are included to
measure “unanticipated outcomes,” that is, effects of the program that were not included in
the logic model.
Outcomes—elements in logic models that describe changes in recipients’ knowledge, atti-
tudes, beliefs, status, or behavior. These are often divided into short-, medium-, and long-
term outcomes to show that some outcomes come before others.
Outputs—elements of a logic model describing the measurable results of program, interven-
tion, or policy activities.
Problem statement—an element of a logic model that describes the problem that the pro-
gram, intervention, or policy is trying to improve.
Process evaluation—a type or part of a larger evaluation that examines the way a program,
intervention, or policy is run or is implemented.
Program evaluation—using a set of research-based methods to determine the worth or value
of a program.
EXERCISES
1. In-Basket Exercise
Directions
For this exercise, you are Roberta McIntosh, a social work intern who has received a memo
from Jonas Sigurdson, a grant writer. He asks you to develop a logic model using the
information from the chapter and the memo.
Memo
Date: October 30, 20XX
To: Roberta McIntosh; Social Work Intern
From: Jonas Sigurdson, Grantwriter
Subject: Logic Model Needed
As you know, we are working away on a grant application for a program. The funder wants
to see a logic model as part of the application. Since I know you have studied how to do a
logic model in your classes, and I am a bit unsure what is required, I would like you to read
over the draft program description and develop a logic model by Thursday. Be sure to
include all the required elements of a logic model, and use only outcomes that we can mea-
sure without too much trouble. I should warn you that this is a fairly complicated program,
so developing a logic model will likely take you a pretty good chunk of time and several
drafts before you get one that really captures what we’re trying to do.
Serve More Project Program Description
The Serve More Project of Urban AIDS Services, Inc. (UAS), the largest nonprofit provider
of HIV/AIDS case management services in the Northwestern U.S., is designed to reduce the
incidence of HIV infection and increase the engagement of at-risk King County, Washington
Blacks and Latinos, including subpopulations of women and children, returning prisoners,
injection drug users, and men having sex with men (MSM) who are not injection drug users
in preventive, substance abuse treatment, and medical services.
The Serve More Project expands and enhances UAS’ Integrated HIV/AIDS-Substance
Abuse Treatment (IHASAT) program, funded by the Substance Abuse and Mental Health
Services Administration (SAMHSA), with special focus on the burgeoning Hispanic popula-
tion. UAS’ robust HIV/AIDS services are linked with dual disorders treatment providers.
Serve More’s objectives are to (1) apply evidence-based practices; (2) enhance cultural
competency of services; (3) create new bilingual outreach and case management positions
located strategically in collaborating agencies; (4) create a bilingual community resource spe-
cialist position; (5) create a community action council to plan collaborative response to the
focus population needs; (6) improve services to returning prisoners; (7) increase outreach to
women and engagement services; and (8) increase and enhance collaborative partnerships.
Serve More’s project goals include both process and client outcomes. Process goals are to
(1) expand outreach efforts, with emphasis on Latinos, Blacks and women, to reach an addi-
tional 3,060 (1,440 to 5,000) of the focus populations each grant year; (2) increase the num-
ber of individuals receiving HIV testing by UAS’ staff by 220 annually (280–500); (3) provide
case management to at least 160 project clients annually; (4) provide substance abuse treat-
ment to at least 60 project clients annually; (5) track for evaluation follow up 75 individuals
each year of the grant period, focusing on Latinos and Latinas; (6) refer 100% of outreach
contacts requesting substance abuse and/or mental health treatment; (7) link 95% of all
positive HIV case findings who receive their test results to HIV care and services; (8) achieve
at least 50% medical adherence; and (9) achieve 80% substance abuse treatment adherence.
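Because each process goal above carries an explicit numeric target, goal attainment can be tracked mechanically. The Python sketch below is not from the chapter; the goal names are paraphrased from the list above, and the year-one "actual" figures are invented purely for illustration.

```python
# Hypothetical sketch: comparing annual actuals against the numeric
# process-goal targets stated in a program description.

def goal_attainment(targets, actuals):
    """Return each goal's attainment ratio (actual / target; 1.0 = met)."""
    return {name: actuals.get(name, 0) / target
            for name, target in targets.items()}

targets = {
    "outreach_contacts": 3060,        # additional contacts per grant year
    "hiv_tests": 220,                 # additional tests per year
    "case_management_clients": 160,
    "substance_abuse_clients": 60,
    "evaluation_followups": 75,
}

actuals = {                           # invented year-one figures
    "outreach_contacts": 2800,
    "hiv_tests": 240,
    "case_management_clients": 165,
    "substance_abuse_clients": 55,
    "evaluation_followups": 75,
}

attainment = goal_attainment(targets, actuals)
unmet = [goal for goal, ratio in attainment.items() if ratio < 1.0]
```

Reporting attainment as a ratio rather than a yes/no flag lets a program manager see how close an unmet goal came, which is often more useful mid-year than a simple pass/fail count.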
Project client outcomes are to increase the percentage of clients who (1) do not use alcohol
or illegal drugs; (2) are currently employed or attending school; (3) have a permanent place to
live in the community; (4) experience no alcohol or illegal drug-related health, behavioral or
social consequences; or (5) have no involvement with the criminal justice system; and to
decrease the percentage of clients who (6) inject illegal drugs; (7) engage in unprotected sexual
contact; or engage in unprotected sexual contact with (8) a person high on drugs; (9) an
injection drug user; or (10) a person who is HIV+ or has AIDS.
Target Population and Geographic Area Served—The focus populations to be served by
the Serve More Project are Hispanics/Latinos and Blacks/African Americans.
Subpopulations include (1) men who inject drugs and non-injection-drug-using (non-IDU)
men who have sex with men (MSM); (2) women and children;
and (3) the recently incarcerated. The geographic area for the project is King County,
Washington.
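One way to start the logic model the memo asks for is to rough out the standard elements (problem, inputs, activities, outputs, and outcomes) as structured data before drawing anything. The sketch below is an assumption-laden illustration, not the text's model: the entries are loose paraphrases of the Serve More description, and a real logic model would need several drafts, as the memo warns.

```python
# A minimal skeleton for capturing a logic model's standard elements.
# Entries are paraphrased from the Serve More description for
# illustration only; they are not a complete or vetted model.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    problem: str
    inputs: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)

serve_more = LogicModel(
    problem=("High HIV incidence and low service engagement among "
             "at-risk Black and Latino residents of King County, WA"),
    inputs=["SAMHSA funding", "bilingual outreach and case management staff",
            "collaborating agencies"],
    activities=["street outreach", "HIV testing", "case management",
                "substance abuse treatment referral"],
    outputs=["3,060 additional outreach contacts per year",
             "220 additional HIV tests per year"],
    outcomes=["increased treatment adherence",
              "reduced HIV risk behavior"],
)
```

Writing the elements down this way forces the problem statement to come first, which mirrors the chapter's advice that knowing the purpose is essential before the boxes and arrows are drawn.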
2. The Chocolate Chip Cookie Evaluation Exercise [This idea is adapted from
Preskill and Russ-Eft (2004). It is one of my students’ favorite exercises, and they now use
it with their students.]
Have participants get into small groups of no more than four people. This works well in
work settings as well as in classes. The task of each group is to develop an evaluation system
to determine the “ideal” chocolate chip cookie. The only caveat is that all members of the
group must agree to the process developed. Each group should develop a set of criteria that
individual members will be able to use to rate how closely any individual chocolate chip
cookie nears “perfection.” This means the criteria must be understood similarly by all, with
an agreed-on benchmark. For example, one student group I gave this exercise to indicated
that "shape" was an important attribute of the perfect cookie. I challenged them on this:
every cookie has a shape, so without specifying which shape is preferred, the criterion
cannot distinguish one cookie from another. After all group members have agreed to a set of criteria, the
leader gives each group a cookie from several different varieties of store-bought or home-
made cookies. Group members must then individually go through all of the criteria for each
cookie and, based on the criteria chosen, choose the “best” cookie from amongst those they
were given.
Often, groups come up with very different criteria and individuals, using the same criteria,
rate the same cookie very differently. This variation in criteria and ratings provides a very
good basis for understanding the underlying principles of criterion-based evaluation and
measurement issues.
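The scoring step the exercise describes can be sketched in a few lines: each rater scores each cookie on every agreed criterion, and the group compares mean scores. The criteria and numbers below are invented for illustration only.

```python
# Criterion-based rating sketch: scores are 1-5 per rater per criterion.
# The "best" cookie is the one with the highest mean score.

def best_cookie(ratings):
    """ratings maps cookie name -> flat list of scores
    (all raters x all criteria). Returns (winner, mean_score)."""
    means = {cookie: sum(scores) / len(scores)
             for cookie, scores in ratings.items()}
    winner = max(means, key=means.get)
    return winner, means[winner]

ratings = {
    # each list: two raters x three criteria (e.g., chewiness,
    # chip density, size) -- all figures invented
    "store_brand": [3, 4, 2, 3, 3, 4],
    "homemade":    [5, 4, 5, 4, 5, 3],
}
winner, score = best_cookie(ratings)
```

Even this toy version surfaces the exercise's real lesson: averaging hides disagreement, so two groups with identical criteria can still crown different cookies once individual ratings diverge.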
3. Design Your Own Logic Model
Constructing a logic model for a program or intervention you know well is a very helpful
way to learn the technique. With one or two other people who share your knowledge,
develop a logic model (if you are a student, you might choose your educational program;
if you are employed, use a program at your agency). Be sure
to construct a problem statement—what problem is being addressed? (Knowing the purpose
is sometimes a difficult question to answer, but it is essential.) When you are done, show
your work to another group or talk about it in class. Which parts of the process were
easier, and which were more challenging?
Watson, L. D., & Hoefer, R. A. (2013). Developing nonprofit and human service leaders: Essential knowledge and skills. SAGE Publications.
Created from capella on 2023-03-02 22:49:07.
4. Using a Logic Model to Plan an Evaluation
Using the logic model that you just created in Exercise 3, discuss how you would ideally
evaluate this intervention. What are the most important process and client outcomes to
measure? What measures will you use? Who will collect the information? How will it be
analyzed to determine whether the recipients of the intervention changed? Which were the
easier parts of the process, and which were the more challenging parts?
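For the analysis question above, one minimal illustration (one possible approach, not the chapter's prescription) is a paired pre/post comparison of client scores. The measure and all figures below are invented.

```python
# Paired pre/post sketch: did recipients change on a chosen measure?
# Scores are hypothetical values on an invented risk-behavior scale,
# collected at intake and again at follow-up for the same clients.

def mean_change(pre, post):
    """Mean post-minus-pre difference across paired client scores."""
    diffs = [after - before for before, after in zip(pre, post)]
    return sum(diffs) / len(diffs)

pre_scores = [10, 12, 9, 15, 11]   # at intake
post_scores = [8, 9, 9, 12, 10]    # same clients at follow-up

change = mean_change(pre_scores, post_scores)
# On this scale, a negative mean change would indicate improvement.
```

A real evaluation would go further (significance testing, attrition checks, a comparison group where feasible), but the pairing itself is the key design choice: it ties each follow-up score to the same client's baseline, which is why collecting data from the start of the program matters.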
ASSIGNMENTS
1. Do an online search for “logic model” (you’ll find millions of results, including
YouTube videos). Find three different sources (not including Wikipedia) describing
what a logic model is and how to develop one. Compare and contrast these with the
information in this chapter. Make two different logic models of a simple program
you’re familiar with, following the guidelines from two different sources. Which vari-
ations make the most sense to you?
2. Conduct an online search for a program evaluation of a program that you are either
familiar with or that you would like to know more about. Write a paper about the
evaluation, answering these questions: What is the program being evaluated? What are
the results of the evaluation, both process and outcome? How strong do you think the
evaluation research design was? What measures were used, and what are their
strengths and weaknesses? How much credence do you place in the results? Finally,
how does this evaluation affect your willingness to try the program with the clients in
your (possibly hypothetical) agency?
3. Find a set of three to four program objectives or outcomes for a program you’re famil-
iar with and briefly describe them. Look for standardized instruments to be able to
accurately measure these objectives. Describe the source of the instruments and what
makes them good measures for the objectives or outcomes you have found. Write up
the information as a memo recommending these instruments to the lead researcher of
the program evaluation team.