Sign up for the Corona Observer quarterly e-newsletter

Stay current with Corona Insights by signing up for our quarterly e-newsletter, The Corona Observer. In it, we share insights into current topics in market research, evaluation, and strategy that are relevant regardless of your position or field.

We hope you find it to be a valuable resource. Of course, if you wish to unsubscribe, you may do so at any time via the link at the bottom of the newsletter.

 



Nonprofit Data Heads

Here at Corona, we gather, analyze, and interpret data for all types of nonprofits. While some of our nonprofit clients are a little data shy, many are data heads like us! Indeed, several nonprofits (many of which we have worked for or partnered with) have developed amazing websites full of easy-to-access datasets.

Here are four of my favorite nonprofit data sources. Check them out!

The Data Initiative at the Piton Foundation

Not only does the Piton Foundation sponsor Mile High Data Day, it also produces a variety of user-friendly data interfaces. I really like the creative ways they let website visitors explore data: not just static pie and bar charts, but a dynamic, highly customizable interface. While their community facts tool pulls most (but not all) of its data from the US Census, it is easy and fun to use. Further, they have already defined and labeled neighborhoods across the Denver metro area, making it easy to compare geographies without having to aggregate census tract or block group numbers yourself. That is an invaluable feature for data users who don't have access to GIS. I also appreciate the option to display the margin of error on bar charts when it's available.

Highlights:

  • Easy to use for novice and expert data users alike
  • Data available by labeled neighborhood
  • 7-County Denver Metro focus


OpenColorado

With over 1,500 datasets, OpenColorado is a treasure trove of raw data. While the site doesn't have a fancy user interface, it provides access to data in many different file types, making it a great resource for intermediate to advanced data users with access to software such as GIS, AutoCAD, or Google Earth. Most data on OpenColorado comes from Front Range cities (e.g., Arvada, Boulder, Denver, Westminster) and counties (e.g., Boulder, Denver, Clear Creek), but coverage is unfortunately far from comprehensive, so you'd need to look elsewhere if you're searching for information on Arapahoe County, for example.

There are over 200 datasets specific to the City and County of Denver. I opened a few that caught my eye, including the City's "Checkbook" dataset, which shows every payment made by the City (by department) to payees by year. Kudos to Denver and OpenColorado for facilitating this type of fiscal transparency. I also downloaded a dataset (CSV) of all Denver Police pedestrian and vehicle stops for the past four years, which included the outcome of each stop along with the address, latitude, and longitude. For a GIS user, this is especially helpful for finding patterns of police activity relative to other social and geographic factors. Even without access to spatial software, the dataset is useful because it includes neighborhood labels. I created a quick pivot table in Excel to see the top ten neighborhoods for cars being towed (so don't park illegally in those neighborhoods).
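The same pivot-table tally can be reproduced in a few lines of Python. This is a minimal sketch with invented rows and hypothetical field names (the real CSV's headers and outcome codes will differ):

```python
from collections import Counter

# Hypothetical rows standing in for the police-stops CSV.
stops = [
    {"neighborhood": "Five Points", "outcome": "Tow"},
    {"neighborhood": "Capitol Hill", "outcome": "Citation"},
    {"neighborhood": "Five Points", "outcome": "Tow"},
    {"neighborhood": "Baker", "outcome": "Tow"},
]

# Equivalent of the Excel pivot table: count tow outcomes per neighborhood,
# then keep the most frequent ones.
tow_counts = Counter(
    row["neighborhood"] for row in stops if row["outcome"] == "Tow"
)
top_neighborhoods = tow_counts.most_common(10)
```

With the real file you would read the rows with `csv.DictReader` first; the counting logic stays the same.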

Highlights:

  • Tons of raw data
  • Various file types, including shapefiles and geodatabases that are compatible with GIS, and KML files that are compatible with Google Earth
  • Search for data by geography, tags, or custom search words

Kids Count from the Colorado Children’s Campaign

Kids Count is a well-respected data resource for all things kids. Each year, the Colorado Children's Campaign (disclosure: they are also our neighbors, working just two floors below us) produces the Kids Count in Colorado report, which communicates important child well-being indicators and indices statewide and, when available, by county. The neat thing about Kids Count is that it's also a national program, so you can see how indicators in a specific county compare to the state and the nation. In addition to the full report, available as a PDF, you can interact with a state map and point and click to access a summary of indicators by county. Most of their data is not available in raw form, but the report explains how they calculated their estimates and provides tons of contextual information that makes the key findings much more insightful.

Highlights:

  • Compare county data to state and national trends
  • Reports include easy to understand analysis and interpretation of data
  • Learn about trends over time and across demographic groups

Outdoor Foundation

If you're looking for information about outdoor recreation of any type in any state, there is probably an Outdoor Foundation report with the data you're seeking. Based in Boulder, Colorado, the Outdoor Foundation most commonly reports on participation rates by activity type, both at a top level and for selected activities such as camping, fishing, and paddle sports (haven't heard of stand-up paddleboarding yet? It's one of the fastest-growing activities in terms of participation). The top-line reports show trends over the past ten years, while the more detailed Participation Reports break out participation, and other factors such as barriers to participation, by various demographics. Multiple other special reports, focusing on topics such as youth and technology, round out what's available from this site.

The participation and special reports are helpful, but I’m most impressed with the Recreation Economy reports, which are available nationwide and within each state.  These reports estimate the economic contribution of outdoor recreation, including jobs supported, tax revenue, and retail sales.  For example, the outdoor recreation economy supported about 107,000 jobs in Colorado in 2013.  Unfortunately, the raw data is not available for further analysis, but the summary results are still interesting and helpful.



Art meets architecture in Denver this weekend

Looking for something fun to do this weekend in-between rides on the new A Line to DIA? Check out the arts and cultural activities during Doors Open Denver. Art meets architecture through pop-ups ranging from a nomadic art gallery to poetry, drama, and music performances among the 11 offerings. My favorite? Graffiti art. If you’ve been secretly wanting to learn the art of graffiti painting – and you’re 55 or older – then we’ve got the creative outlet for you. Bust through stereotypes as you create graffiti art inspired by two of Denver’s architectural gems.

  • April 23rd, 1-3 pm – Saturday's pop-up will be hosted by the Clyfford Still Museum on their front lawn. The museum will give four 20-minute architectural tours each day, at 11:00, 11:30, 2:00, and 2:30.
  • April 24th, 1-3 pm – Sunday's pop-up will be hosted by the new Rodolfo "Corky" Gonzales Library and will include three tours led by architect Joseph Montalbano of Studiotrope, a Denver-based architecture and design agency. DPL staff will share how the library's design informs their work. Since Sunday is Día del Niño, the artist will be prepared to host a multi-generational event at the library.

Thanks to our collaborative partners: VSA Colorado/Access Gallery, Studiotrope Design, Denver Public Library, and the Clyfford Still Museum. I'd like to give a special shout-out to Damon McLeese of Access Gallery; Joseph Montalbano of Studiotrope; Ed Kiang, Viviana Casillas, and Diane Lapierre of DPL; and Sonia Rae of the Clyfford Still Museum.

Please join me in thanking the Bonfils-Stanton Foundation for funding this engaging spotlight on art and architecture.

For more information visit this Doors Open Denver link. 


Happy or not

A key challenge for the research industry – and any company seeking feedback from its customers – is gaining participation.

There have been books written on how to reduce non-response (i.e. increase participation), and tactics often include providing incentives, additional touch points, and finely crafted messaging. All good and necessary.

But one trend we’re seeing more of is the minimalist “survey.” One question, maybe two, to gather point-in-time feedback. You see this on rating services (e.g., Square receipts, your Uber driver, at the checkout, etc.), simple email surveys where you respond by clicking a link in the email, and texting feedback, to name a few.

Recently, while travelling through Iceland and Norway, I came across this "Happy or Not" stand at various points in the airport (e.g., check-in, bathrooms, and luggage claim). Incredibly simple and easy to do. You don't even have to stop; just hit the button as you walk by.

A great idea, sure, but did people use it? In short, yes. While I don’t know the actual numbers, based on observation alone (what else does a market researcher do while hanging out in airports?), I did see several people interact with it as they went by. Maybe it’s the novelty of it and in time people will come to ignore it, but it’s a great example of collecting in-the-moment feedback, quickly and effortlessly.

Now, asking one question with a checkbox response won't tell you as much as a 10-minute survey will, but if it gets people to actually participate, it's a solid step in the right direction. Nothing says this has to be your only data, either. I would assume that, in addition to the score, each response carries a time and date stamp (and excessive button pushes at one time should probably be cleaned from the data). Taken together, that makes investigating problems easier (maybe people are only dissatisfied during the morning rush at the airport?), and if necessary, additional research can always be conducted to dig further into issues.

What other examples, successful or not, are you seeing organizations use to collect feedback?


Your Baby Is Increasingly Special and Unique, Apparently

It seems like when I’m in the mall and hear parents talking to their kids, I hear unusual names more and more often.  I’ve been developing a theory that parents are enjoying creativity more and valuing tradition less when that birth certificate rolls around, so in keeping with Corona Insights tradition, I thought I’d explore it a little more with some data analysis.  Off I went to the Social Security Administration website to put together a database of names.

I took a look at the most popular baby names in 2014, and compared them with those of 2004, 1994, 1984, and so on, all the way back to 1884.  Are unusual names more common in 2014?  It was straightforward to analyze, even if it meant sifting through a lot of data.

First, I looked at the 30 most popular names in each decade, and compared them to the total number of babies born.  If there’s a trend toward giving babies more unusual names, then we would expect a smaller concentration of babies with the most common names.
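The concentration measure just described is simple to compute. Here's a minimal sketch with invented counts; the SSA's real yearly files list a name, sex, and count per row, and have tens of thousands of rows:

```python
# Tiny stand-in for one year of SSA data: (name, count) pairs for girls.
# These numbers are invented for illustration.
births = [("Mary", 700), ("Emma", 200), ("Zelda", 100)]

def top_n_share(name_counts, n):
    """Share of babies whose name is among the n most common that year."""
    counts = sorted((c for _, c in name_counts), reverse=True)
    return sum(counts[:n]) / sum(counts)

share = top_n_share(births, 2)  # (700 + 200) / 1000 = 0.9
```

Running this per decade, with n = 30, 100, or 500, produces exactly the trend lines discussed below.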

And wow, is that true, particularly for girls.  Let’s examine female names first.

If we look at the 30 most common female baby names, they constituted 41 percent of baby girl names in 1884.  There was some variation over the next 70 years but not much, ranging from 36 to 43 percent.  In 1954, the figure still stood at 40 percent for the girls destined to duck under their desks in the Cold War.  (As an important methodological note, recognize that these aren’t the same 30 names that were most common in 1884 – I adjusted the top 30 in each decade to reflect the most popular names of each particular decade.  This holds true throughout the analysis – I’m not tracking the popularity of a specific set of names, but rather I’m examining the likelihood of parents following popular trends in naming.)

But then something happened. By 1964, the figure had declined to 32 percent. It stayed roughly at that level until 1994, when it dropped further to 24 percent. Since then, it has declined dramatically: 18 percent in 2004 and 16 percent in 2014. The most common female names in 2014 are not very widespread.

If the most common names are less widely used, the next question is: what other names are being used? Are parents merely spreading their wings a little to other relatively well-recognized names, or are they pushing the boundaries? To test this, I broadened my analysis to the 100 most common female names. In 1884, the most common 100 names covered 70 percent of girls born that year. Moving forward in time, we see a pattern very similar to the one for the top 30. The figure declined slightly through 1954 (65%), and then those hippies from the 60s started becoming parents. The figure dropped to 58% by 1964, to 51% by 1974, and it continues to decline. In 2014, the top 100 female names covered only 31 percent of births.

So how much dispersion do we actually have here?  Let’s look at the top 500 female names in each decade.  Most of us probably couldn’t even come up with 500 different names, so surely they’re covering almost the entire female population, right?

Well, that certainly used to be the case.  In 1884, the top 500 names covered 90 percent of the female baby population, and sure enough, it follows the same pattern as my earlier analyses.  The figure floated between 87 and 89 percent up until 1954, with remarkable consistency.  After all, who can’t find a favorite name among the top 500?

A lot of modern people, apparently. The figure dropped to 85 percent in 1964, 75 percent in 1974, and currently stands at 58 percent. Think about that for a moment: 42 percent of girls today have a name that does not fall into the top 500 most common names of their decade.

How does such a phenomenon happen? One might speculate that it is due to a trend toward spelling variants. Evelyn, for example, has branched into both Evelyn and Evelynn. While I suspect this is a significant factor, it does not appear to be the main one. Instead, many of the top 500 names for 2014 appear to be newly created, or at least exceedingly rare in past decades; they have never appeared on a top-500 list until now. Names like Brynlee and Cataleya and Myla and Phoenix have replaced more standard names.

Another theory that I can’t confirm at this point is that perhaps the United States has more diverse immigration these days, which could be producing a greater diversity of baby names.

Now let’s take a look at male names.

The first thing we see is that male names have historically been compressed relative to female names.  Looking across all of the decades since 1884, there are 1,286 male names that have placed in the top 500 in popularity, while there are 1,601 female names.  So are male names still more concentrated among fewer options?  We’ll repeat the analysis we just did for female names.

If we look at the 30 most common male baby names, they constituted 56 percent of baby boy names in 1884.  Per our earlier observation, this is much more concentrated than the 41 percent that we saw for females.  Similar to female trends, though, the proportion was relatively stable for decades afterwards, still standing at 54 percent in 1954.

The proportion began dropping in the 1960s, but was more stable than female names.  By 1964, the figure had declined gracefully to 51 percent, then 46 and 45 percent in the 1970s and 1980s.  The major decentralization for boys began in earnest in 1994, when the figure dropped to 35 percent, then 25 percent in 2004 and 20 percent in 2014, which isn’t notably higher than the female figure at this point.

An interesting difference by gender appears when we examine the top 100 male names. Whereas the dispersion of female names changed only slightly through the 1950s, male names actually became more concentrated during that era. Male names did grow more dispersed through the 1920s, with the proportion of boys having top-100 names dropping from 74 percent in 1884 to 69 percent in 1924, but the trend then reversed, and the figure rose back to 76 percent by 1954. Perhaps during hard times of depression and war, parents get more conservative when naming boys. Or maybe mothers working on World War II assembly lines became enamored with mass production.

However, from 1954 on, male names paralleled the diffusion of female names, dropping steadily to only 42 percent today.  This is still more concentrated than the 31 percent figure for females, but is far lower today than at any time in the past 130 years.

Finally, we look at the top 500 male names.  Have males had the same dispersal as females?

Contrary to other findings, male names were actually slightly more dispersed among the top 500 than female names in 1884.  The 500 most popular male baby names constituted 89 percent of births, compared to 90 percent for females.  But this discrepancy didn’t last long.  While the top 500 female names dispersed slightly from 1884 through 1954, male names actually converged, reaching a high point of 94 percent in 1954.  So while parents were practicing more creativity in female names over this period, they were becoming less daring with male names, choosing more often to follow popular trends.

However, creativity took hold soon thereafter. Male concentration dropped slightly to 93 percent by 1964, then fell steadily to 71 percent in 2014. So again, parents are increasingly choosing uncommon names for their babies, though still to a greater extent for girls than for boys. As with the girls, these boys' names appear to be a combination of new spellings and entirely new names that have never before shown up in the top 500, names such as Daxton and Finnegan and Kasen.

This is all well and interesting, but what does it mean?

My first question is about the difference between girls and boys. Why do parents feel greater freedom to give a female child an uncommon name? Do they feel a greater need to make a female child stand out from the crowd, and if so, why? Are males better situated to succeed with a more traditional name, or do more boys simply get named after their fathers or other family members? Is the difference sexism in a very indirect form, or is there some logical reason? I'm at a loss to come up with an explanation that doesn't reflect different attitudes toward girl babies than boy babies, but I'd love to hear your theories.

While the level of standardization differs between males and females, the patterns are moving in the same direction, and strongly so. Why are babies, both boys and girls, increasingly likely to be given uncommon names? One can surmise that this describes a society in which individualism is increasingly sought out. It may also point toward a lesser desire or obligation to pass down family names, and a lesser emphasis on tradition. So are we increasingly a nation of creative individualists, or are we increasingly lost and rootless? Or both?


Building Empathy

In the past year I’ve been involved with a few projects at Corona that involve evaluating programming for teenagers. One commonality across these projects is that the organizations have been interested in building empathy in teenagers. As I’ve been reading through the literature on empathy, I’ve been thinking about how building empathy should be a goal of most nonprofits.

Perhaps not surprisingly, there’s research demonstrating that people are more likely to donate when they feel empathy for the recipient. This research builds upon the classic psychology research demonstrating that empathy increases the likelihood of altruism, especially when there are costs to being altruistic. It’s clear that empathy can play an important role in motivating people to give altruistically, but how can we build empathy especially for others who are not very similar to ourselves?

One useful way to build empathy in marketing materials is to create stories that allow people to connect to those who need help or to those who are helping. The idea that organizations should engage in storytelling to attract and engage stakeholders has recently gained traction. Stories are most powerful when people are able to lose themselves in a character. This is why reading or seeing a story from the first-person perspective can be so powerful.

While you don’t necessarily need research to write an empathy-building story to use in marketing materials, research can provide useful information for creating those stories. Any data or information that you have collected about your donors or your recipients can provide a great foundation for creating a story. And if you develop new, empathy-building marketing materials, you might consider testing the impact of those materials.


DIY Tools: Network Graphing

Analyzing Corona's internal data for our annual retreat is one of my great joys in life. (It's true; I know, I'm a strange one.) For the last few years I've included an analysis of teamwork at Corona. Our project teams form organically around interests, strengths, and capacity, so over the course of a year most of us have worked with everyone else at the firm on a project or two, and because of positions and other specializations, some pairs work together more than others. Visualizing this teamwork network is useful for thinking about efficiencies that may have developed around certain partnerships, about cross-training needs, and so on. I'm describing all of this because, in the course of the analysis, I've tried out a few software tools that others might find useful for their own data analysis (teamwork or otherwise).

For demonstration purposes, I’ve put together a simple example dataset with counts of shared projects.  In reality, I prefer to use other metrics like hours worked on shared projects because our projects are not all of equal size, and I might have worked with someone on one big project where we spent 500 hours each on it, and meanwhile I worked on 5 different small projects with another person where we logged 200 hours total.

But to keep it simple here, I start with a fairly straightforward dataset.  I have three columns: the first two are the names of pairs of team members (e.g., Beth – Kate, though I’m using letters here to protect our identities), and the third column has the number of projects that pair has worked on together in the last year.  To illustrate:

My dataset contains all possible staff pairs.  We have 10 people on staff, so there are 45 pairs.  I want to draw a network graph where each person is a vertex (or node), and the edge (or line) between them is thicker or thinner as a function of either the count of shared projects or the hours on shared projects.
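Before reaching for a graphing tool, it can help to see this vertex/edge structure in code. Here's a minimal sketch with invented initials and counts (not the tool I actually used); each pair row becomes a weighted edge, and a node's drawn size corresponds to its weighted degree:

```python
from collections import defaultdict

# Invented pair data: (person 1, person 2, shared project count).
pairs = [
    ("A", "B", 5), ("A", "C", 2), ("B", "C", 7), ("C", "D", 1),
]

# Weighted adjacency map: each person is a vertex, each pair an edge.
graph = defaultdict(dict)
for a, b, count in pairs:
    graph[a][b] = count
    graph[b][a] = count

# A vertex's "bubble size" is its weighted degree: the total
# shared-project count across all of its edges.
weighted_degree = {v: sum(edges.values()) for v, edges in graph.items()}
```

The same pair/weight structure feeds directly into graphing libraries such as NetworkX (each row becomes an `add_edge` call with a `weight` attribute) as well as the tools described here.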

This year I used Google Fusion Tables to create the network graph. This is a free web application from Google. I start by creating a fusion table and importing my data from a Google spreadsheet. (You can also import an Excel file from your computer, or start with a blank fusion table and enter your data there.) The new file opens with two tabs at the top: one called Rows that looks just like the spreadsheet I imported, and the other called Cards that looks like a bunch of notecards, each containing the info in one row of data. To create the chart, I click the plus button to the right of those tabs and select "Add chart". In the new tab I select the network graph icon in the lower left, and then ask to show the link between "Name 1" and "Name 2", weighted by "Count of Shared Projects". It looks like this:

There are a few things I don't love about this tool. First, it doesn't seem to be able to show recursive links (from me back to me, for example). We have a number of projects that are staffed by a single person, and being able to add a weighted line indicating how many projects I worked on by myself would be helpful. As it is, those projects aren't included in the graph (I tried including rows in the dataset where Name 1 and Name 2 are the same, but to no avail). As a result, the bubble sizes (indicating total project counts) for senior staff tend to be smaller on average, because more senior people have more projects where they work alone, and those projects aren't represented. Also, the tool doesn't have options for 2D visualizations, so if you need a static image you are stuck with something like the one above, which is quite messy.

However, the interactive version is quite fun as you can click and drag the nodes to spin the 3D network around and highlight the connections to a particular person.

Another tool option that I’ve used in the past (and that is able to show recursive links and 2D networks) is an Excel template called NodeXL.  You can download the template from their website – you’ll need to install it (which requires a restart of your computer) – and then to use it just open your Windows start menu and type NodeXL. Instructions here.  I had some difficulties using it with Office 2016, but in Office 2013 it worked quite well.

If you try these out, share your examples with us!

 


Making improvements through A/B testing

Did you know that when you visit Amazon.com, the homepage you see may be different from the one someone else sees, even beyond the normal personalized recommendations? It's been widely reported that Amazon continually tweaks its homepage by running experiments, or A/B tests (sometimes referred to as split tests), to tease out what makes a meaningful impact on sales. Should this button be here or there? Does this call to action work?

For some research questions, asking people their opinion yields significant insight. For others, people just cannot give you an accurate answer. Would you be more likely to open an email with a question as a subject line or with a bold statement? You don’t really know until you try.

So, how does this work? In essence, you're running experiments, and as with any scientific experiment, you want a control group (for which you change nothing) and an experiment group (for which you alter one variable). Ideally, you randomize people into each group so you don't inadvertently influence your results through how people were selected.

So now you have two groups. While you may want to test several items, it is easiest to test one item at a time (and run multiple experiments for each subsequent item). This helps you isolate the impact of your change: change too many things and you won't know what made the difference, or whether some changes were working against each other.

Finally, launch the tests and measure what happens. Did open rates differ between the two? Did engagement increase? Differences aren't always dramatic, but even a slight change at scale can have a significant impact. For instance, if we increase response on a survey by 2%, that could mean 100 additional responses for essentially no additional cost. If the change costs money (for instance, one marketing piece costs more than the other), then a cost-benefit analysis is needed. Sure, "B" performed better, but was it enough better to cover the additional expense?
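For the curious, the random split and the "did open rates really differ" check can be sketched in a few lines of code. This is a generic illustration with invented numbers and a standard two-proportion z statistic, not a feature of any particular platform:

```python
import math
import random

def assign_groups(customers, test_size, seed=42):
    """Randomly peel off `test_size` customers for the test variant."""
    rng = random.Random(seed)
    shuffled = list(customers)
    rng.shuffle(shuffled)
    return shuffled[test_size:], shuffled[:test_size]  # (control, test)

def two_proportion_z(opens_a, n_a, opens_b, n_b):
    """z statistic for the difference between two open rates."""
    p_pooled = (opens_a + opens_b) / (n_a + n_b)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    return (opens_b / n_b - opens_a / n_a) / se

# 2,000 customers; only 500 are peeled off for the test variant.
control, test = assign_groups(range(2000), 500)

# Invented results: 20% open rate for A (control), 25% for B (test).
z = two_proportion_z(300, 1500, 125, 500)  # |z| > 1.96 suggests a real difference
```

Note that the groups don't need to be the same size; the z statistic accounts for the smaller test group through its standard error.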

A few final quick tips: A/B testing is an ongoing endeavor. Maximum learning will occur over time by running many experiments. Remember, things change, so running even the same experiment over and over can still yield new insights. Finally, you don’t always have to split your groups in half. If you have 2,000 customers, you don’t need to split them into two groups of 1,000. Peeling off just 500 for an experiment may be enough and lower the chance of adverse effects.

Ok, enough with the theoretical. How does this work in real-life?

Take our own company as an example. Corona engages in A/B testing, both for our clients and for our own internal learning. For instance, we may tweak survey invitations, incentive options, or other variables to gauge the impact on response rates. Through such tests we've teased out the ideal placement for the survey link within an email, whom such requests should come from, and many other seemingly insignificant variables (though they are anything but insignificant).

How about your organization? Let’s say you’re a nonprofit, since many of our clients are in the nonprofit sector. Here are a few ideas to get you started:

  • eNewsletters. Most newsletter platforms have the ability to do A/B testing. Test subject lines, content, colors, everything. Test days and send times.
  • Website. Depending on your platform, this may be easy or more difficult. Test appeals, images, and donation calls to action.
  • Ad testing. Facebook ads, Google ads, etc. Most platforms allow you to make tweaks to continually optimize your performance.
  • Mailings. Alter your mailing to change the appeal, call to action, images, or even form of the mailing (e.g., letter vs. postcard).
  • Programming. In addition to marketing and communications, even your services could possibly be tested. What service delivery model works best? Creates the biggest change?

What other ideas would you want to test?


Where are we now? The new next era nonprofit

I spent the other afternoon sitting around a large table chatting with professionals from across the sector about leadership, and the competencies that an effective leader will need in 2025. As we were chatting about today's realities, and the social, political, technical, and economic factors affecting nonprofits, it struck me that we've been here before. Or at least I have. Where's that, you may ask? Contemplating the "next era" of the sector.

While our social consciousness is slow to evolve and too slow to change (think social equity and gender identity), we are witnessing change in the form of driverless cars, "smart" cities, neuroscience, and the record number of Americans not in the workforce. Those topics weren't showing up in my Facebook feed five years ago. Back then we weren't contemplating car-free micro-apartments in Denver either.

What else is on the nonprofit leader’s to-do list today? Six recurring topics with new twists.

  1. $ - Figure out what impact investing really is and whether or not we can do it. I know you are secretly wondering if this really is a game changer or simply a spin on the same old, same old. It’s a game changer.
  2. Inclusiveness – Learn how we can create inclusive and accessible organizations that welcome and engage diverse people. We can’t keep kicking this can down the road.
  3. Innovation – Explore the edges of our work, seeking new ideas from unexpected places leveraging tools like design thinking.
  4. Mission impact – Admit to ourselves that we don’t really understand our customers or how to positively impact their lives in a meaningful way and that we may need to toss out some of our favorites.
  5. Engagement – Realize that too often we treat people transactionally. We think of them in buckets: volunteers, Facebook followers, donors, etc. We haven't optimized our business models to cultivate engagement. Check out my Synergistic Business Model™ if you'd like to learn more about this all-too-often ignored cornerstone of the nonprofit business model.
  6. Sustainability – Fess up that our business models aren’t really sustainable and that we need thoughtful, committed and generous people to stand by us for the next few years while we invest in figuring things out – or, more bravely, exit the market and let someone new and fresh bring 2025 solutions to the marketplace.

There are no bright, defining lines between the sectors, only smudges that get fainter every time we step on them. Younger generations couldn't care less about your tax status. They want to know you are authentic, relevant, impactful, and efficient. They expect you to do good. Period. Gen Y and the boomers are learning from them.

What competencies will a nonprofit leader need in 2025? My list begins with "intelligence" and the courage to explore, experiment, and collaborate. Higher education is looking at multidisciplinary learning. Perhaps nonprofits need to consider busting their siloed approaches too.

What’s on your list?

2025 will be here before we know it. Are you ready?


Turning Passion into Actionable Data

Nonprofits are among my favorite clients to work with here at Corona for a variety of reasons, but one of the things I love most is the passion that surrounds them. That passion shines through most in our work when we do research with a nonprofit's internal stakeholders: donors, board members, volunteers, staff, and program participants. These groups, already invested in the organization, are passionate about helping to improve it, which is good news when conducting research, as it makes them more likely to participate and boosts response rates.

Prior to joining the Corona team, I worked in the volunteer department of a local animal shelter.  As a data nerd even then, I wanted to know more about who our volunteers were, and how they felt about the volunteer program.  I put together an informal survey, and while I still dream about what nuggets could have been uncovered if we had gone through a more formal Corona-style process, the data we uncovered was still valuable in helping us determine what we were doing well and what we needed to improve on.

That’s just one example, but the possibilities are endless.  Maybe you want to understand what motivated your donors to contribute to your cause, how likely they are to continue donating in the future, and what would motivate them to donate more.  Perhaps you want to evaluate the effectiveness of your programs.  Or, maybe you want to know how satisfied employees are with working at your organization and brainstorm ideas on how to decrease stress and create a better workplace.

While you want to be careful about being too internally focused and ignoring the environment in which your nonprofit operates, there is huge value in leveraging passion by looking internally at your stakeholders to help move your organization forward.