RADIANCE BLOG

Category: Evaluations

Defining Best Practices and Evidence-Based Programs

The field of evaluation, like any field, has a lot of jargon.  Jargon provides a shorthand for people in the field to talk about complex things without having to use a lot of words or background explanation, but for the same reason, it can be confusing to people outside the field.  A couple of phrases that we get frequent questions about are “best practices” and “evidence-based programs”.

“Evidence-based programs” are those that have been found by a rigorous evaluation to result in statistically significant improvements in outcomes for participants.  Similarly, “best practices” are evidence-based programs, or aspects of evidence-based programs, that have been demonstrated through rigorous evaluation to result in the best outcomes for participants.  Sometimes, however, “best practices” is used as an umbrella term to refer to a continuum of practices with varying degrees of support, where the label “best practices” anchors the high end of the continuum.  For example, the continuum may include the subcategory of “promising practices,” which typically refers to program components with some initial support, such as a marginally significant statistical finding, suggesting those practices may help to achieve meaningful outcomes.  Those practices may or may not hold up to further study, but they may be seen as good candidates for additional study.

Does following “best practices” mean your program is guaranteed to have an impact on your participants?  No, it does not.  Similarly, does using the curriculum and following the program manual for an evidence-based program ensure that your program will have an impact on your participants?  Again, no.  Following best practices and using evidence-based programs may improve your chances of achieving measurable results for your participants, but if your participants differ demographically from the participants in the original study (e.g., they are older or younger, or higher or lower in socioeconomic status), or if your implementation fidelity does not match the original study, the program or practices may not have the same impact as they did in that study.  (Further, the original finding may have been a Type I error, but that’s a topic for another day.)  That is why granting agencies ask you to evaluate your program even when you are using an evidence-based program.

To know whether you are making the difference you think you’re making, you need to evaluate the impact of your efforts on your participants.  If you are using an evidence-based program with a different group of people than have been studied previously, you will be contributing to the knowledge base for everyone about whether that program may also work for participants like yours.  And if you want your program to be considered evidence-based, a rigorous evaluation must be conducted that meets the criteria established by a certifying organization such as the Blueprints program at the University of Colorado Boulder’s Institute of Behavioral Science, Center for the Study and Prevention of Violence, or the Substance Abuse and Mental Health Services Administration’s (SAMHSA) National Registry of Evidence-based Programs and Practices (NREPP).

So, it is a best practice to use evidence-based programs and practices that have been proven to work through rigorous, empirical study, but doing so doesn’t guarantee success on its own. Continued evaluation is still needed.


When experiences can lead you astray

Many organizations tell me that they hear all the time from participants about how much the program changed their lives.  Understandably, those experiences matter a lot to organizations, and they want to capture those experiences in their evaluations.

Recently I heard a podcast that perfectly captured the risks of relying too heavily on those kinds of reports.  There are two related issues here.  The first is that while your program may have changed the lives of a few participants, your evaluation is looking to determine whether you made a difference for the majority of participants.  The second is that you are most likely to hear from participants who feel very strongly about your program, and less likely to hear from those who were less affected by it.  An evaluation will ensure that you are hearing from a representative sample of participants (or all participants) and not just a small group that may be biased in a particular direction.

An evaluation plan can ensure you capture both qualitative and quantitative measures of your impact in a way that accurately reflects the experiences of your participants.


Engagement in evaluation

Engaging program participants in the evaluation is known as participatory evaluation.  (See Matt Bruce’s recent blog on participatory research for more detail about this approach.) The logic of participatory evaluation often resonates with human services providers.  It empowers service recipients to define their needs and goals for the program.

It can be eye-opening for program staff to hear participants’ views of what is most important to them and what they’re hoping to get out of the program.  For example, program aspects that are critical to participants may be only incidental to program staff.  This kind of input can lead to improved understanding of the program logic, as well as changes to desired outcomes.

In what ways could you bring participants into your evaluation process?



Writing an RFP

So you’ve finally reached a point where you feel like you need more information to move forward as an organization, and, even more importantly, you’ve been able to secure some amount of funding to do so. Suddenly you find yourself elbow-deep in old requests for proposals (RFPs), both from your organization and others, trying to craft an RFP for your project. Where do you start?

We write a lot of proposals in response to RFPs at Corona, and based on what we’ve seen, here are a few suggestions for what to include in your next RFP:

  • Decision to be made or problem being faced. One of the most important pieces of information, and one that is often hard to find (if not missing entirely) in an RFP, is what decision the organization is trying to make or what problem it is trying to overcome. Instead, we often see RFPs asking for a specific methodology without describing what the organization plans to do with the information. Specifying the methodology can sometimes be important (e.g., you want to replicate an online survey of donors, or you need to perform an evaluation as part of a grant), but it can also limit what bidders suggest in their proposals.

Part of the reason why you hire a consultant is to have them suggest the best way to gather the information that your organization needs. With that in mind, it might be most useful to describe the decision or problem that your organization is facing in layman’s terms and let bidders propose different ways to address it.

  • Other sources of data/contacts. Do you have data that might be relevant to the proposals? Did your organization conduct similar research in the past that you want to replicate or build upon? Do you have contact information for people who you might want to gather information from for this project? All these might be useful pieces of information to include in an RFP.
  • Important deadlines. If you have key deadlines that will shape this project, be sure to include them in the RFP. Timelines can impact proposals in many ways. For example, if a bidder wants to propose a survey, a timeline can determine whether to do a mail survey, which takes longer, or a phone survey, which is often more expensive but quicker.
  • Include a budget, even a rough one. Questions about the budget are the most common questions I see people ask about an RFP. While stating a budget might scare off a more expensive firm, it is more likely that including a budget in an RFP helps firms propose tasks that are financially feasible.

Requesting proposals can be a useful way to get a sense of what a project might cost, which might be useful if you are trying to figure out how much funding to secure. If so, it’s often helpful to state in your RFP that you’re considering different options and would like pricing for each recommended task, along with the rationale for why each might be useful.

  • Stakeholders. Who has a stake in the results of the project, and who will be involved in decisions about the project?  Will the contractor report to a single internal person or to a small team?  Are there others in the organization who will be using the results of the project?  Do you have external funders whose goals or reporting needs you hope the project will meet?  Clarifying who has a stake in the project and what role they will play, whether providing input on goals or approving questionnaire design, is very helpful; it lets the consultant plan to make sure everyone’s needs are addressed.

Writing RFPs can be daunting, but they can also be a good opportunity for thinking about and synthesizing an important decision or problem into words. Hopefully these suggestions can help with that process!


Beyond the logic model: Improve program outcomes by mapping causes of success and failure

Logic modeling is common in evaluation work, but did you know there are a variety of other tools that can help visualize important program elements and improve planning to ensure success?

One such tool is success mapping.  A success map can be used to outline the steps needed to implement a successful program.  It can also be used to outline the steps needed to accomplish a particular program improvement.  In a success map the steps are specific activities and events to accomplish, and arrows between steps indicate the sequence of activities, in a flow chart style. Compared to a logic model, a success map puts more emphasis on each step of implementation that must occur to ensure that the program is a success.  This can help the program team ensure that responsibilities, timelines, and other resources are assigned to all of the needed tasks.

A related tool, called fault tree analysis, takes an inverse approach to the success map.  Fault tree analysis starts with a description of an undesirable event (e.g., the program fails to achieve its intended outcome), and then reverse engineers the causal chains that could lead to that failure.  For example, a program may fail to achieve intended outcomes if any one of several components fails (e.g., failure to recruit participants, failure to implement the program as planned, failure of the program design, etc.).  Step-by-step, a fault tree analysis backs out the reasons that particular lines of failure could occur.  This analysis provides a systematic way for the program team to think about which failures are most likely and then to identify steps they can take to reduce the risk of those things occurring.
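
To make the idea concrete, here is a minimal sketch in Python of how the example fault tree above might be represented and scored.  The failure probabilities are made-up placeholders for illustration, not estimates from any real program.

```python
# Minimal fault-tree sketch. "OR" means the parent failure occurs if any
# child failure occurs; "AND" means all children must fail. Probabilities
# below are hypothetical placeholders, assuming independent failures.

def p_event(node):
    """Recursively estimate the probability of a fault-tree node occurring."""
    if "prob" in node:                         # basic (leaf) failure
        return node["prob"]
    child_ps = [p_event(child) for child in node["children"]]
    if node["gate"] == "OR":                   # fails if any child fails
        p_none = 1.0
        for p in child_ps:
            p_none *= (1.0 - p)
        return 1.0 - p_none
    else:                                      # "AND": fails only if all children fail
        p_all = 1.0
        for p in child_ps:
            p_all *= p
        return p_all

program_fails = {
    "gate": "OR",
    "children": [
        {"name": "failure to recruit participants", "prob": 0.10},
        {"name": "failure to implement the program as planned", "prob": 0.15},
        {"name": "failure of the program design", "prob": 0.05},
    ],
}

print(round(p_event(program_fails), 3))        # rough chance of the top-level failure
```

Even rough numbers like these can help a program team see which branch of the tree contributes most to the overall risk, and therefore where mitigation effort is best spent.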

These are just two of many tools that can help program teams ensure success.  Do you have other favorite tools to use?


Fresh Evaluation Ideas for Spring

As an evaluator, it’s really easy to draft a thousand lines of questioning to capture every nuance in every conceivable outcome that might result from a particular program.  I want to know everything, and I want to understand everything deeply, and so do the organizations I work with.

Yet collecting too much data burdens both program participants and the evaluation team (and in some cases can change how participants respond to particular items – see details here).  The hard part of evaluation work is distilling the goals down to their essence and choosing the highest-impact measures.

This is especially important in situations where frequent measures are needed because services are adapting to meet changing needs.  A recent NPR story illustrates this, and also shows how technology makes continuous monitoring possible.  The story points out that organizations providing aid in disasters often decide what to do at the outset, and then don’t have much information about how it went until they evaluate it formally after the program ends.  But in a few recent cases, including the international response to the Ebola epidemic, a group has implemented a short survey administered weekly via text message to the cell phones of residents in affected communities.  The survey measures only around five key objectives of the Ebola response (e.g., impacts on travel, trust in communications, etc.), and because it is administered weekly, it provides ongoing updates on progress toward the desired objectives.  The data helps steer the program activities to best meet the most pressing needs.

It is both useful and inspiring to learn about thoughtful, creative solutions that other evaluators have developed to help organizations reach their goals.  We’re always looking to learn and grow so that we can best serve organizations who are themselves continuously improving as they work to make the world a better place.



The Power of Ranking

One of the fun tools of our trade is index development.  It’s a way to rank order things on a single dimension that takes into account a number of relevant variables.  Let’s say you want to rank states with respect to their animal welfare conditions, or rank job candidates with regard to their experience and skills, or rank communities with respect to their cost of living.  In each of these cases, you would want to build an index (and indeed, we have, for several of those questions).

Index-based rankings are all the rage.  From the U.S. News & World Report ranking of Best Colleges to the Milken Institute’s Best Cities for Successful Aging, one can find rankings on almost any topic of interest these days.  But these rankings aren’t all fun and games (as a recent article in The Economist points out), so let’s take a look at the stakeholders in a ranking and the impacts that rankings have.

  1. The Audience/User. Rankings are a perfect input for busy decision makers.  They offer a quick way to compare options with very little effort.  As such, they influence behavior, driving decisions about where to apply to college, whom to hire, where to go on vacation, where to move in retirement, and so on.  But if a ranking is based on variables other than the ones that matter to its users, those users can be misled.
  2. The “Ranked”. For the ranked, impacts reflect the collective decisions of the users.  Rankings impact colleges’ applicant pools, cities’ tourism revenues, and local economies.  And on the flip side, rankings influence the behavior of those being ranked, who will work to improve their standing on the variables included in the index.  As the old adage goes, “what gets measured gets done.”
  3. The “Ranker”. The developer of the index holds a certain amount of power and responsibility.  There are both mathematical and conceptual competencies required (in other words, it’s a bit of a science and an art).  The developer has to decide which variables to include and how to weight them, and those decisions are often based on practical concerns as much as, or more than, on relevance to the goal of the measurement.  (There is usually a strong need to use existing data sources and data that are available for all of the entities being ranked.)  Selecting certain variables and not others to include in the index can have downstream impacts on where ranked entities focus their efforts for improvement, even when those variables were chosen for expediency rather than impact.

To illustrate, I built an index to rank “The Best Coffee Shops in My Neighborhood.”  I identified the five coffee shops I visit the most frequently in my neighborhood and compiled a data set of six variables: distance from my home, presence of “latte art,” amount of seating, comfort of seating, music selection, and lighting.

My initial data set is below.  First, take note of the weight assigned to each variable.  Music selection and seating comfort are less important to my ranking than distance from home, latte art, amount of seating, and lighting.  Those weights reflect what is most important to me, but might not be consistent with the preferences of everyone else in my neighborhood.

[Image: Index Table – initial data set with variable weights]

Next, look at the data.  Distance from home is recorded in miles (note that smaller distances are considered “better” to me, so this will require transformation prior to ranking).  Latte art is coded as present (1) or absent (0).  This is an example of a measure that is a proxy for something else.  What is important is the quality of the drink, and the barista’s ability to make latte art is likely correlated with their training overall – since I don’t have access to information about years of experience or completion of training programs, this will stand in instead as a convenience measure.  Amount of seating is pretty straightforward.  Shop #5 is a drive-through.   Seating comfort is coded as hard chairs (1) and padded seats (2).  Music selection is coded as acceptable (1) and no music (0).  Lighting is coded as north-facing windows (1), south-facing windows (2), and east- or west-facing windows (3), again, because that is my preference.
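
For readers who want to see the mechanics, here is a minimal sketch in Python (using pandas) of how an index like this can be computed.  The shop values and the exact weights below are illustrative stand-ins rather than my actual data, but the steps mirror the ones described here: invert distance, scale each variable, apply weights, sum, and rank.

```python
# Hypothetical sketch of a weighted ranking index; values are illustrative.
import pandas as pd

shops = pd.DataFrame(
    {
        "distance_miles":  [0.5, 1.2, 0.8, 2.0, 1.5],  # smaller is better
        "latte_art":       [1, 1, 0, 1, 0],            # 1 = present, 0 = absent
        "seating_amount":  [20, 35, 15, 25, 0],        # Shop #5 is a drive-through
        "seating_comfort": [2, 2, 1, 1, 1],            # 1 = hard chairs, 2 = padded
        "music":           [1, 1, 1, 0, 0],            # 1 = acceptable, 0 = no music
        "lighting":        [2, 3, 1, 2, 1],            # window-orientation coding
    },
    index=[f"Shop #{i}" for i in range(1, 6)],
)

# Illustrative weights: music and seating comfort count for less than the rest.
weights = {
    "distance_miles": 0.20, "latte_art": 0.20, "seating_amount": 0.20,
    "seating_comfort": 0.10, "music": 0.10, "lighting": 0.20,
}

# Transform: invert distance so larger values are better, like the other variables.
shops["distance_miles"] = shops["distance_miles"].max() - shops["distance_miles"]

# Scale: min-max scale every variable to 0-1 so no single unit dominates the index.
scaled = (shops - shops.min()) / (shops.max() - shops.min())

# Aggregate and rank: weighted sum of the scaled variables, then rank (1 = best).
score = sum(scaled[col] * w for col, w in weights.items())
rank = score.rank(ascending=False).astype(int)

print(pd.DataFrame({"score": score.round(2), "rank": rank}))
```

Changing a single weight, or swapping one variable for another, can reshuffle the order, which is exactly the power (and responsibility) of the ranker described above.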

After I transform, scale, aggregate, and rank the results, here is what I get.

[Image: Index Table 2 – final scores and rankings]

These results correspond approximately with how often I visit each shop, suggesting that these variables have captured something real about my preferences.

Now, let’s say I post these rankings to my neighborhood’s social media site and my neighbors increase their visits to Shop #2 (which ranked first).  My neighbors with back problems who prefer hard chairs may be disappointed with their choices based on my ranking.  The shop owners might get wind of this ranking and want to know how to improve their standing.  Shops #3 and #5 might decide to teach their employees how to make latte art (without providing any additional training on espresso preparation), which would improve their rankings but would be inconsistent with my goal for that measure, which is to capture drink quality.

With any ranking, it’s important to think about what isn’t being measured (in this example, I didn’t measure whether the shop uses a local roaster, whether they also serve food, what style of music they play, what variety of drinks they offer, etc.), and what is being measured that isn’t exactly what you care about, but is easy to measure (e.g., latte art).  These choices demonstrate the power of the ranker and have implications for the user and the ranked.

Perhaps next we’ll go ahead and create an index to rank Dave’s top ski resorts simultaneously on all of his important dimensions.

What do you want to rank?


Who you gonna call?

With Halloween approaching, we are writing about scary things for Corona’s blog. This got us thinking about some of the scary things that we help to make less scary.  Think of us as the people who check under the bed for monsters, turn on lights in dark corners, bring our proton packs and capture the ectoplasmic entities … wait, that last one’s the Ghostbusters.  But you get the idea.

As an evaluator, I find that we often have a scary reputation.  There is a great fear that evaluators will conclude your programs aren’t working, and that this will mean the end of funding and the death of your programs.  In reality, a good evaluator can be an asset to your programs (a fear-buster, if you will) in a number of ways:

  1. Direction out of the darkness.  Things go wrong … that’s life.  Evaluation can help figure out why and provide guidance on turning it around before it’s too late.  Maybe implementation wasn’t consistent, maybe some outcome measures were misunderstood by participants (see below), maybe there’s a missing step in getting from A to B.  Evaluators have a framework for systematically assessing how everything is working and pinpointing problems quickly and efficiently so you can address them and move forward.
  2. Banisher of bad measures.  A good evaluator will make sure you have measures of immediate, achievable goals (as well as measures of the loftier impacts you hope to bring about down the road), and that your measures are measuring what you want (e.g., questions that are not confusing for participants or being misunderstood and answered as the opposite of what was intended).
  3. Conqueror of math.  Some people (like us) love the logic and math and analysis of it all.  Others, not so much.  If you’re one of the math lovers, it’s nice to have an evaluation partner to get excited about the numbers with you, handle the legwork for calculating new things you’ve dreamed up, and generally provide an extra set of hands for you.  If you’re not so into math, it’s nice to be able to pass that piece off to an evaluator who can roll everything up, explain it in plain language, and help craft those grant application pieces and reports to funders that you dread.  In either case, having some extra help from good, smart people who are engaged in your work is never a bad thing, right?

This fall, don’t let the scary things get in your way.  Call in some support.


Four questions to ask before starting your evaluation

Evaluation is a helpful tool to support many different decisions within an organization.  Evaluation can take many forms (e.g., summative, formative, developmental, outcomes, process, implementation, etc.), and the first step is to identify what kind of evaluation will be most useful to you right now.  Regardless of whether you need to measure your outcomes or refine your processes, in order to plan your evaluation you will first need to get a handle on these four questions:

  1. What are you trying to accomplish with your work?  What are your goals?  How do you hope to change the community, the individuals you serve, the policies or systems in which you operate?
  2. What are you doing to get there?  What are the activities you’ve chosen to work toward your goals?  Do you operate one program or many?  Do you lobby for policy changes?  Do you run educational campaigns?  How do your activities align with your goals?
  3. How stable are your activities year over year?  Does your program run like a well-oiled machine with clear rules for operation?  Are you looking to make improvements to how you carry out your activities, or changes to your mix of activities?  Do you plan to remain nimble in your actions, responsive to changes in the environment, rather than pursuing a fixed set of activities?
  4. What are you hoping to gain from the evaluation?  Do you need to document your outcomes for a sponsor or granting agency? Are you looking for ways to improve your internal communications or efficiencies? Do you need to determine which of your strategies is the most effective to pursue going forward?

Answering these questions will help determine the kind of evaluation you need, and also help to identify any gaps between what you’re doing and where you’re trying to go.  Together they will put you on the path to a productive evaluation plan.