RADIANCE BLOG

Category: Evaluations

There’s No Place Like Home


If you have walked through downtown Denver recently, you know it is hard to miss the growing homeless population. Civic Center Park has become a meeting place for many in the homeless population—a place where they can gather to share stories, food, and cell phones. Each year, a “Point-in-Time” (PIT) survey counts the number of people experiencing homelessness; the US Department of Housing and Urban Development (HUD) uses these annual counts to track the rate of homelessness across the nation. Individual cities are responsible for collecting the data, with assistance from local homeless coalitions, providing it to HUD, and publishing a local report. In Denver, the Metro Denver Homeless Initiative (MDHI) oversees the PIT survey. The 2018 Denver PIT survey found that 5,317 people were experiencing homelessness in the city and county of Denver, competing for approximately 1,000 emergency shelter beds (MDHI 2018). This number is up from the PIT count of 3,336 homeless persons in 2017.

Over time, the city of Denver has taken various approaches to “solving” the issue of homelessness. In 2003, the Denver Department of Human Services published a report titled “A Blueprint for Addressing Homeless in Denver,” which outlined a ten-year action plan aimed at ending “chronic homelessness in Denver that will also address homeless prevention and the enhancement of services for populations with special needs” (Denver Homeless Planning Group 2003: 4). In 2005, then-mayor John Hickenlooper signed Proclamation 53, expressing official support for Denver’s Road Home program—an initiative to secure housing for the city’s homeless population. Despite these and other city initiatives, the homeless population in Denver continues to grow, and housing costs surge past national averages. While the numbers may seem bleak, one Denver non-profit has followed the path laid out by other major cities such as Seattle, WA and Austin, TX and searched for an innovative solution. That solution came in the form of tiny houses.

Tiny homes burst onto the scene in the early 2000s. Small, sometimes mobile, homes with sleek designs offered a minimalist housing solution to people seeking a break from the materialism of the modern world. In Denver, tiny homes are now being used to provide a safe housing solution for some of Denver’s homeless population. Beloved Community Village, located in Denver’s River North (RiNo) district, consists of 11 tiny homes housing up to 22 people. The self-governing community opened in July 2017, operating as a 180-day pilot project. In January 2018, Beloved Community Village was forced to relocate after its six-month lease with the Urban Land Conservancy expired. Luckily, the community was able to relocate only 200 feet away onto another property owned by the Urban Land Conservancy. Unfortunately, the Urban Land Conservancy and the city of Denver have only officially approved another 180-day lease agreement for the tiny house village, leaving the permanency of Beloved Community Village in question.

According to the Beloved Community Village website, the village’s purpose “is to provide a home base and safe place for those who are presently in Denver and have no other place to live. With this collection of secure and insulated homes, we provide a viable solution in the midst of the current housing crisis.” While Beloved Community Village has been successful thus far in living and embodying that purpose, one has to wonder whether the tiny home model can be expanded to accommodate even more homeless residents in the Denver metro area and throughout the state of Colorado. In May 2018, the organization behind Beloved Community Village, the Colorado Village Collaborative, revealed that it is actively working to open another tiny home community at St. Andrew’s Episcopal Church in downtown Denver. The new village will have eight tiny homes, designated specifically for women and transgender homeless residents.

Affordable housing remains a crucial need in Denver and across the nation, as housing costs continue to rise and wages continue to stagnate. Cities and towns must face this problem head-on and work to understand how and why their communities are affected in order to develop strategies and initiatives to tackle homelessness. Homelessness is only one problem, though, and it does not exist in isolation; cities need to ensure they understand the greater context and the vulnerabilities unique to their community. The issues involved span everything from zoning laws and development to population growth and migration to mental health and criminal justice services. In 2016 and 2017, Corona Insights conducted a needs assessment for the city of Longmont. The research found that among the greatest needs facing community residents were finding affordable housing options and, in turn, paying for housing. Furthermore, between 2010 and 2014, the availability of rental properties with a monthly rent below $800 decreased by 33%. The completion of the needs assessment and its subsequent report equipped Longmont’s city government with knowledge to better meet the human service needs of its residents.

Homelessness is a pervasive issue in many urban centers and rural areas across the country, with no end in sight. Local governments and non-profit organizations both have roles to play in addressing it. Communities and organizations interested in addressing homelessness may benefit from commissioning a community needs assessment to uncover systemic challenges in their local area, and from committing to enact changes informed by the assessment findings. Armed with information and compassion, we can begin to dismantle the barriers that lead to homelessness. The time is now.


Creative Ways to Get Useful and Actionable Data for a Small Budget Needs Assessment

The American Evaluation Association invited its Topical Interest Groups (TIGs) to each take over its blog for a week in 2018. As part of the Needs Assessment TIG, Beth and Kate were invited to write one of the posts with tips for doing needs assessments. With help from Matt Bruce, they wrote about how to do a needs assessment on a small budget. This post originally appeared on the AEA365 blog on March 21, 2018.


Hello! We’re Beth Mulligan and Kate Darwent from Corona Insights, a firm that provides research, evaluation, and strategic consulting services for government and nonprofit organizations.  We are often contacted by clients who have both very limited resources and a very strong desire to understand and address the needs in their community (whether “their community” is low-income residents of a city or county, library patrons, Latinx children in their school district, or some other group). Here are some suggestions for creative ways to get useful and actionable data for a small budget needs assessment.

  1. Use secondary data sources.  Start by searching for and reviewing relevant existing reports or datasets.  This may include reports from state agencies or national organizations that reveal insights about your target population, relevant Census data, or previous studies conducted by your client.  Making sure you know what is already known before collecting new data is the first step to managing limited resources.
  2. Use your client’s resources creatively.  Although the client may have a limited budget to pay for outside help, they may be able to offer their own time and effort, or may have volunteer staff available, or may have other budgets for materials like printing or mailing that they can use.  Help the client to determine where they most need your help and expertise, and where they can take on tasks themselves with your guidance.
  3. Remember that perfect is the enemy of good.  Although we may prefer to conduct 15 key person interviews, would conducting two be better than zero?  Oftentimes, yes.  And though we would like to survey everyone in the community by mail, and send no fewer than two follow-up mailings, is the information we will get from a single mailing better than nothing? Would the information from an open-link survey or an intercept survey at some community events be better than nothing?  The judgment about whether to use what we may think of as lower-quality methods depends on the trade-offs in each situation.  In a situation where the population is relatively small and engaged, it may be reasonable to post an open-link survey on social media.  In other situations, it may be acceptable to do two interviews with service recipients rather than a representative sample survey.  No one solution will fit all situations, but be open to various non-optimal solutions that find the best compromise between quality and cost, especially when you have difficult-to-reach target populations.

Sometimes budget restrictions shrink or disappear when the client understands the value of more expensive options.  Don’t hesitate to communicate the benefits of things like greater coverage, higher response rates, participation from more stakeholder groups, expertise in data analysis, mapping, and so on.  Hopefully you won’t have to make tradeoffs because of financial resources, but in case you do, we hope these suggestions help you maximize the resources available to help a client serve their community better.

Rad Resource:

Conducting Quality Impact Evaluations Under Budget, Time and Data Constraints.  Independent Evaluation Group, World Bank

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


How do you measure the value of an experience?

When I think about the professional development I did last week, I would summarize it this way: an unexpected, profound experience.

I was given the opportunity to attend RIVA moderator training, and I walked away with more than I ever could have dreamed I would get. Do you know that experience where you think back to your original expectations and realize just how much you truly didn’t understand what you would get out of something? That was me, as I sat on a mostly empty Southwest plane (156 seats and yet only 15 passengers) flying home. While you can expect a RIVA blog post to follow, I was struck by the following thought:

What does it mean to understand the impact your company, product, or service has on your customers?

I feel like I was born and raised to think quantitatively. I approach what I do with as much logic as I can (sometimes that isn’t saying much). When I think about measuring the impact a company, product, or service has on its customers, my mind immediately jumps to numbers – e.g., who the customers are (demographically) and how satisfied they are. But am I really measuring impact? I think yes and no. I’m measuring an impersonal impact, one that turns people into consumers and percentages. The other kind of impact, largely missed in quantitative research, is the impact on the person.

If I were to fill out a satisfaction or brand loyalty survey for RIVA, I would almost be unhappy that I couldn’t convey my thoughts and feelings about the experience. I don’t want them to know just that I was satisfied. I want them to understand how profound this experience was for me. When they talk to potential customers about this RIVA moderator class, I want them to be equipped with my personal story. If they listen and understand what I say to them, I believe they would be better equipped to sell their product.

This is one of the undeniable and extremely powerful strengths of qualitative research. Interviews, focus groups, and anything else that allows a researcher to sit down and talk to people generate some of the most valuable data there is. We can all think of a time when a friend or family member had such a positive experience with some company, product, or service that they just couldn’t help but gush about it. Qualitative research ensures that the value of that feedback is captured and preserved. If you want to truly understand who is buying your product or using your service, I cannot stress the importance of qualitative research enough.


Measurement Ideas in Evaluation

Kate Darwent and I are just back from the annual conference of the American Evaluation Association (AEA), which was held in Washington, DC this year.  We attended talks on a wide variety of topics and sat in on business meetings for two interest groups (Needs Assessment and Independent Consulting).  Below, I discuss some of the current thinking around two very different aspects of measuring outcomes in evaluation.

Selecting Indicators from a Multitude of Possibilities

One session that I found particularly interesting focused on how to select indicators for an evaluation – specifically, what criteria should be used to decide which indicators to include. (This is a recurring topic of interest for me; I mentioned the problem of too many indicators in a long-ago blog post, here.) In evaluation work, indicators are the measures of desired outcomes.  Defining an indicator involves operationalizing variables, or finding a way to identify a specific piece of data that will indicate whether an outcome has been achieved.  For example, if we want to measure whether a program increases empathy, we have to choose a specific survey question, scale, or behavior that we will use to measure empathy at baseline and again after the intervention, to see whether scores go up over that period (a minimal sketch of that kind of baseline-versus-follow-up comparison appears after the list below).  For any given outcome there are many possible indicators, and as a result, it is easy to get into a situation known as “indicator proliferation.”  At AEA, a team from Kimetrica gave a talk proposing a set of criteria for selecting indicators.  They proposed eight criteria that, if used, would result in indicators that serve each of the five common evaluation paradigms. Their criteria feel intuitively reasonable to me; if you like them, here’s the reference so you can give the authors full credit for their thinking: Watkins, B. & van den Heever, N. J. (2017, November). Identifying indicators for assessment and evaluation: A burdened system in need of a simplified approach. Paper presented at the meeting of the American Evaluation Association, Washington, DC.  Their proposed criteria are:

  1. Comparability over time
  2. Comparability over space, culture, projects
  3. Standardized data collection/computation
  4. Intelligibility
  5. Sensitivity to context
  6. Replicability/objectivity
  7. Scalability and granularity
  8. Cost for one data point
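
To make the empathy example above concrete, here is a minimal sketch of a baseline-versus-follow-up comparison on a hypothetical empathy scale. The scores, variable names, and sample size are invented purely for illustration; a real evaluation would use the program’s actual indicator data and a sample size chosen for adequate statistical power.

```python
# A minimal sketch of a pre/post indicator comparison, using hypothetical
# empathy-scale scores for the same participants at baseline and follow-up.
from scipy import stats

baseline  = [3.1, 2.8, 3.5, 3.0, 2.6, 3.2]   # hypothetical scores before the program
follow_up = [3.4, 3.0, 3.6, 3.3, 2.9, 3.5]   # the same participants after the program

# Average change per participant on the empathy scale.
mean_change = sum(f - b for f, b in zip(follow_up, baseline)) / len(baseline)

# A paired t-test asks whether that average change is larger than
# chance alone would plausibly produce.
result = stats.ttest_rel(follow_up, baseline)

print(f"Average change: {mean_change:.2f} points, p = {result.pvalue:.3f}")
```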

Propensity Score Matching

In a very different vein is the issue of how best to design an evaluation and analysis plan so that outcomes can be causally attributed to the program.  The gold standard for this is a randomized controlled trial, but in many situations that’s impractical, if not impossible, to execute.  As a result, there is much thinking in the evaluation world about how to statistically compensate for a lack of random assignment of participants to treatment or control groups.

This year, there were a number of sessions on propensity score matching, a statistical technique used to select a control group that best matches a treatment group that was not randomly assigned.  For example, suppose we are evaluating a program that was offered to students who were already assigned to a particular health class, and we want to find other students in their grade who match them at baseline on important demographic and academic variables.  Propensity score matching can be used to find that set of best-matched students (i.e., “the controls”) from among the kids in the grade who weren’t in the class, so that we can compare them to the students who got the program (i.e., “the treatment group”).
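
For readers who like to see the mechanics, here is a minimal sketch of one common approach: a logistic-regression propensity model followed by 1:1 nearest-neighbor matching. The data, variable names, and matching choices below are purely illustrative assumptions, not from any actual evaluation, and a real analysis would also check covariate balance after matching.

```python
# A minimal sketch of propensity score matching on hypothetical student data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical data: one row per student; 'treated' marks program participants.
df = pd.DataFrame({
    "treated":        [1, 1, 1, 0, 0, 0, 0, 0],
    "age":            [14, 15, 14, 14, 15, 16, 14, 15],
    "baseline_score": [72, 65, 80, 70, 66, 90, 75, 64],
})
covariates = ["age", "baseline_score"]

# Step 1: model each student's probability of being in the program
# (the propensity score), given the baseline covariates.
model = LogisticRegression().fit(df[covariates], df["treated"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# Step 2: for each treated student, find the untreated student with the
# closest propensity score (1:1 nearest-neighbor matching, no caliper).
treated  = df[df["treated"] == 1]
controls = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_controls = controls.iloc[idx.ravel()]

# The outcome analysis would then compare the treated students to these
# matched controls rather than to all non-participants.
print(matched_controls)
```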

Propensity score matching is not a particularly new idea, but there are a variety of ways to execute it, and, like all statistical techniques, it requires some expertise to implement appropriately.  A number of sessions at the conference provided tutorials and best practices for using this analysis method.

In our work, one of the biggest challenges to using this method is simply the need to get data on demographic and outcome measures for non-participants, let alone getting all of the variables that are relevant to the probability of being a program participant.  But even assuming the necessary data can be obtained, it is important to be aware that there are many options for how to develop and use propensity scores in an outcome analysis, and there is some controversy about the effectiveness and appropriateness of various methods.  On top of it all, the process of finding a balanced model can feel a lot like p-hacking, as can the temptation to try scores from multiple propensity score models in the prediction model.  So, although it’s a popular method, users need to understand its assumptions and limitations and do their due diligence to ensure they’re using it appropriately.

~

All in all, we had an interesting learning experience at AEA this year and brought back some new ideas to apply to our work. Attending professional conferences is a great way to stay on top of developments in the field and get energized about our work.


Keeping it constant: 3 things to keep in mind with your trackers

When conducting a program evaluation or customer tracker (e.g., brand, satisfaction, etc.), we are often collecting input at two different points in time and then measuring the difference. While the concept is straightforward, the challenge is keeping everything as consistent as possible so we can say that any observed change is NOT a result of how we conducted the survey.

Because we can be math nerds sometimes, the idea can be expressed as a rough equation:

Questions + Mode (how we ask) + Sample (who we ask) + Real-world change = Results

A change to any part of the left side of the equation will show up as a change in your results. Our goal, then, is to attribute any change in results to the one thing you actually want to measure (real-world change), which means keeping the other survey components constant.

These include:

  1. Asking the same questions
  2. Asking them the same way (i.e., research mode)
  3. And asking them to a comparable group

Let’s look at each of these in more detail.

Asking the same questions

This may sound obvious, but it’s too easy to have slight (or major) edits creep into your survey. The problem is, we then cannot say if the change we observed between survey periods is a result of actual change that occurred in the market, or if the change was a result of the changing question (i.e., people interpreted the question slightly differently).

Should you never add or change a question? Not necessarily. If the underlying goal of that question has changed, then it may need to be updated to get you the best information going forward. Sure, you may not be able to compare it looking back, but getting the best information today may outweigh the goal of measuring change on the previous question.

If you are going to change or add questions to the survey, try to keep them at the end of the survey so the experience of the first part of the survey is similar.

Asking them the same way

Just as changing the actual question can cause issues in your tracker, changing how you’re asking them can also make an impact. Moving from telephone to online, from in-person to self-administered, and so on can cause changes due to how respondents understand the question and other social factors. For instance, respondents may give more socially desirable answers when talking to a live interviewer than they will online. Reading a question yourself can lead to a different understanding of the question than when it is read to you.


Similarly, training your data collectors with consistent instructions and expectations makes a difference for research via live interviewers as well. Just because the mode is the same (e.g., intercept surveys, in-class student surveys, etc.) doesn’t mean it’s being implemented the same way.

Asking a comparable group

Again, this may seem obvious, but small changes in who you are asking can impact your results. For instance, suppose you’re researching your customers, and on one survey you only get feedback from customers who have contacted your help line, while on another you survey a random sample of all customers. The two groups, despite both being customers, are not in fact the same: the ones who have contacted your help line likely had different experiences – good or bad – than the broader customer base.

~

So, that’s all great in theory, but we recognize that real life sometimes gets in the way.

For example, one of the key issues we’ve seen involves changing survey modes (i.e., Asking them the same way) and who we are reaching (i.e., Asking a comparable group). Years ago, many of our public surveys were done via telephone. It was quick and reached the majority of the population at a reasonable budget. As cell phones became more dominant and landlines started to disappear, we could have held the mode constant, but the group we were reaching would have changed as a result. Our first adjustment was to include cell phones along with landlines. This increased costs significantly, but it brought us back closer to reaching the same group as before while also keeping the overall mode the same (i.e., interviews via telephone).

Today, depending on the exact audience we’re trying to reach, we commonly combine modes, meaning we may do phone (landline + cell), mail, and/or online all for one survey. This increases our coverage (https://www.coronainsights.com/2016/05/there-is-more-to-a-quality-survey-than-margin-of-error/), though it does introduce other challenges, as we may have to ask questions a little differently between survey modes. In the end, we feel it is a worthy tradeoff to have a quality sample of respondents. When we have to change modes midway through a tracker, we work to diminish the possible downsides while drawing on the strengths of each mode to improve our sampling accuracy overall.


Defining Best Practices and Evidence-Based Programs

The field of evaluation, like any field, has a lot of jargon.  Jargon provides a short-hand for people in the field to talk about complex things without having to use a lot of words or background explanation, but for the same reason, it’s confusing to people outside the field. A couple of phrases that we get frequent questions about are “best practices” and “evidence-based programs”.

“Evidence-based programs” are those that have been found by a rigorous evaluation to result in statistically significant outcomes for participants. Similarly, “best practices” are evidence-based programs, or aspects of evidence-based programs, that have been demonstrated through rigorous evaluation to result in the best outcomes for participants.  Sometimes, however, “best practices” is used as an umbrella term to refer to a continuum of practices with varying degrees of support, where the label “best practices” anchors the high end of the continuum.  For example, the continuum may include the subcategory of “promising practices,” which typically refers to program components that have some initial support, such as a weakly significant statistical finding, suggesting those practices may help to achieve meaningful outcomes.  Those practices may or may not hold up to further study, but they may be seen as good candidates for additional study.

Does following “best practices” mean your program is guaranteed to have an impact on your participants?  No, it does not.  Similarly, does using the curriculum and following the program manual for an evidence-based program ensure that your program will have an impact on your participants? Again, no.  Following best practices and using evidence-based programs may improve your chances of achieving measurable results, but if your participants differ demographically (i.e., are older or younger, higher or lower SES, etc.) from the participants in the original study, or if your implementation fidelity does not match the original study, the program or practices may not have the same impact as they did in the original study.  (Further, the original finding may have been a Type I error, but that’s a topic for another day.)  That is why granting agencies ask you to evaluate your program even when you are using an evidence-based program.

To know whether you are making the difference you think you’re making, you need to evaluate the impact of your efforts on your participants.  If you are using an evidence-based program with a different group of people than has been studied previously, you will be contributing to the knowledge base for everyone about whether that program may also work for participants like yours.  And if you want your program to be considered evidence-based, a rigorous evaluation must be conducted that meets the criteria established by a certifying organization, such as the Blueprints program at the University of Colorado Boulder’s Institute of Behavioral Science, Center for the Study and Prevention of Violence, or the Substance Abuse and Mental Health Services Administration’s (SAMHSA) National Registry of Evidence-based Programs and Practices (NREPP).

So, it is a best practice to use evidence-based programs and practices that have been proven to work through rigorous, empirical study, but doing so doesn’t guarantee success on its own. Continued evaluation is still needed.


When experiences can lead you astray

Many organizations tell me that they hear from participants all the time about how much the program changed their lives.  Understandably, those experiences matter a lot to organizations, and they want to capture them in their evaluations.

Recently I heard a podcast that perfectly captured the risks of relying too heavily on those kinds of reports.  There are two related issues here.  The first is that while your program may have changed the lives of a few participants, your evaluation is looking to determine whether you made a difference for the majority of participants.  The second is that you are most likely to hear from participants who feel very strongly about your program, and less likely to hear from those who were less affected by it.  An evaluation will ensure that you are hearing from a representative sample of participants (or all participants), and not just a small group that may be biased in a particular direction.

An evaluation plan can ensure you capture both qualitative and quantitative measures of your impact in a way that accurately reflects the experiences of your participants.


Engagement in evaluation

Engaging program participants in the evaluation is known as participatory evaluation.  (See Matt Bruce’s recent blog on participatory research for more detail about this approach.) The logic of participatory evaluation often resonates with human services providers.  It empowers service recipients to define their needs and goals for the program.

It can be eye opening for program staff to hear participants’ views of what is most important to them, and what they’re hoping to get out of the program.  For example, program aspects that are critical to participants may be only incidental to program staff.  This kind of input can lead to improved understanding of the program logic, as well as changes to desired outcomes.

In what ways could you bring participants into your evaluation process?



Writing an RFP

So you’ve finally reached a point where you feel like you need more information to move forward as an organization, and, even more importantly, you’ve been able to secure some amount of funding to do so. Suddenly you find yourself elbow-deep in old requests for proposals (RFPs), both from your organization and others, trying to craft an RFP for your project. Where do you start?

We write a lot of proposals in response to RFPs at Corona, and based on what we’ve seen, here are a few suggestions for what to include in your next RFP:

  • Decision to be made or problem being faced. One of the most important pieces of information, and one that is often difficult to find, if not missing entirely from an RFP, is what decision an organization is trying to make or what problem it is trying to overcome. Instead, we often see RFPs asking for a specific methodology while not describing what the organization plans to do with the information. While specifying the methodology can sometimes be important (e.g., you want to replicate an online survey of donors, or you need to perform an evaluation as part of a grant), sometimes specifying it might limit what bidders suggest in their proposals.

Part of the reason why you hire a consultant is to have them suggest the best way to gather the information that your organization needs. With that in mind, it might be most useful to describe the decision or problem that your organization is facing in layman’s terms and let bidders propose different ways to address it.

  • Other sources of data/contacts. Do you have data that might be relevant to the proposals? Did your organization conduct similar research in the past that you want to replicate or build upon? Do you have contact information for people who you might want to gather information from for this project? All these might be useful pieces of information to include in an RFP.
  • Important deadlines. If you have key deadlines that will shape this project, be sure to include them in the RFP. Timelines can impact proposals in many ways. For example, if a bidder wants to propose a survey, a timeline can determine whether to do a mail survey, which takes longer, or a phone survey, which is often more expensive but quicker.
  • Include a budget, even a rough one. Questions about the budget are the number one thing I see people ask about an RFP. While a budget might scare off a more expensive firm, it is more likely that including a budget in an RFP helps firms propose tasks that are financially feasible.

Requesting proposals can be a useful way to get a sense of what a project might cost, which might be useful if you are trying to figure out how much funding to secure. If so, it’s often helpful to state in your RFP that you’re considering different options and would like pricing for each recommended task, along with the arguments for why it might be useful.

  • Stakeholders. Who has a stake in the results of the project and who will be involved in decisions about the project?  Do you have a single internal person that the contractor will report to or perhaps a small team?  Are there others in the organization who will be using the results of the project?  Do you have external funders who have goals or reporting needs that you hope to be met by the project?  Clarifying who has a stake in the project and what role they will play in the project, whether providing input on goals, or approving questionnaire design, is very helpful. It is useful for the consultant to know who will need to be involved so they can plan to make sure everyone’s needs are addressed.

Writing RFPs can be daunting, but they can also be a good opportunity for thinking about and synthesizing an important decision or problem into words. Hopefully these suggestions can help with that process!


Beyond the logic model: Improve program outcomes by mapping causes of success and failure

Logic modeling is common in evaluation work, but did you know there are a variety of other tools that can help visualize important program elements and improve planning to ensure success?

One such tool is success mapping.  A success map can be used to outline the steps needed to implement a successful program.  It can also be used to outline the steps needed to accomplish a particular program improvement.  In a success map the steps are specific activities and events to accomplish, and arrows between steps indicate the sequence of activities, in a flow chart style. Compared to a logic model, a success map puts more emphasis on each step of implementation that must occur to ensure that the program is a success.  This can help the program team ensure that responsibilities, timelines, and other resources are assigned to all of the needed tasks.

A related tool, called fault tree analysis, takes an inverse approach to the success map.  Fault tree analysis starts with a description of an undesirable event (e.g., the program fails to achieve its intended outcome), and then reverse engineers the causal chains that could lead to that failure.  For example, a program may fail to achieve intended outcomes if any one of several components fails (e.g., failure to recruit participants, failure to implement the program as planned, failure of the program design, etc.).  Step-by-step, a fault tree analysis backs out the reasons that particular lines of failure could occur.  This analysis provides a systematic way for the program team to think about which failures are most likely and then to identify steps they can take to reduce the risk of those things occurring.
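
As a purely illustrative sketch, a simple fault tree can be represented as nested AND/OR gates and checked against a set of basic failure events. The events, structure, and helper function below are hypothetical assumptions for demonstration, not part of any formal fault tree analysis tool.

```python
# A minimal, hypothetical fault tree: the top event occurs if ANY branch
# fails (an OR gate); a branch may itself require ALL of its sub-events
# to occur (an AND gate).
fault_tree = {
    "gate": "OR",
    "event": "Program fails to achieve intended outcomes",
    "children": [
        {"event": "Failure to recruit participants"},
        {"event": "Failure to implement the program as planned"},
        {
            "gate": "AND",
            "event": "Failure of the program design",
            "children": [
                {"event": "Needs assessment missed key needs"},
                {"event": "No pilot testing before full rollout"},
            ],
        },
    ],
}

def occurs(node, observed_failures):
    """Return True if this node's event occurs, given a set of basic failures."""
    children = node.get("children")
    if not children:                      # a basic event (leaf node)
        return node["event"] in observed_failures
    results = [occurs(child, observed_failures) for child in children]
    return any(results) if node["gate"] == "OR" else all(results)

# Example: a single recruiting failure is enough, because of the top OR gate.
print(occurs(fault_tree, {"Failure to recruit participants"}))  # True
```

Walking through a structure like this with the program team makes it easy to ask which basic failures are most likely and which single points of failure deserve mitigation first.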

These are just two of many tools that can help program teams ensure success.  Do you have other favorite tools to use?