Sign up for the Corona Observer quarterly e-newsletter

Stay current with Corona Insights by signing up for our quarterly e-newsletter, The Corona Observer.  In it, we share insights into current topics in market research, evaluation, and strategy that are relevant regardless of your position or field.

We hope you find it to be a valuable resource. Of course, if you wish to unsubscribe, you may do so at any time via the link at the bottom of the newsletter.

 


The Race to the Rockies – Colorado Migration Part 1

I admit, I am one of many in the horde of people who have recently migrated to Colorado. Indeed, there are tens of thousands of us moving here each year at one of the highest rates in the country. But who really is “us”? Who are the people moving into Colorado in droves? This blog will be part 1 of a 2-part blog series exploring who is moving into Colorado. In this first blog, we’ll be looking at generational migration patterns over time and migration by race and ethnicity.

Generational Movement in Colorado

Using the Census Bureau’s Population Estimates for 2010 to 2015, I broke this question down by two basic demographics: age and sex.  In the following graph, generations were grouped roughly according to Pew Research Center’s definition of each generation.1  For each generation, net migration (those moving to Colorado minus those who left) is graphed from 2011 through 2015.

Unsurprisingly, we see that net migration has been positive for each generation since 2013.  Most recently, each generation in Colorado saw a net increase of 16,000 or more.  Millennials have been moving in at the highest rate, with over 30,000 having moved into Colorado between 2014 and 2015.  Baby Boomers are also moving into Colorado faster than they are moving out, with net migration of over 25,000.

As a non-native, what I find most interesting is how this has changed over time.  From 2010 to 2012, we see more Generation Xers moving out of Colorado than in, with a net migration loss of nearly 10,000 in 2011.  It wasn’t until 2013 that we saw more moving in, with a large uptick (over 10,000 more) in 2015.  Also unexpected was the net gain of nearly 40,000 Baby Boomers in 2011.

Millennials, on the other hand, have been consistently moving into Colorado, with 2015 seeing a strong increase in the number moving into the state.  They are also the only generation that shows a substantial difference between genders, with about 4,000 more males than females moving into Colorado in 2015.  Needless to say, as a male Millennial who has moved into Colorado, I am thankful to already be married.
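For readers curious about the arithmetic, net migration here is essentially a residual: total population change minus natural change (births minus deaths).  A minimal sketch, using made-up round numbers rather than the actual Census figures:

```python
def net_migration(pop_start, pop_end, births, deaths):
    """Net migration = total population change minus natural change."""
    natural_change = births - deaths
    return (pop_end - pop_start) - natural_change

# Hypothetical one-year figures for a single generation in Colorado
example = {"pop_start": 1_200_000, "pop_end": 1_232_000,
           "births": 10_000, "deaths": 8_000}

print(net_migration(**example))  # positive => more people moved in than out
```

A positive result means more people moved in than out over the period; the Population Estimates tables provide the age-grouped inputs that let you compute this per generation and year.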

Race and Ethnicity Movement in Colorado

Using the same data source (Population Estimates), I looked at those moving into Colorado from 2010 to 2015 by race and Hispanic/non-Hispanic ethnicity.

The total percentage change in population from 2010 to 2015 was 8.5%, making Colorado the third-ranked state by population growth rate since 2010.  In percentage terms, Native Hawaiians and Other Pacific Islanders appear to be moving into the state in large numbers, though 2015 saw an increase of only about 2,000 over 2010.  Many Asians have also been moving into the state, with a 22 percent increase equating to about 30,000 new residents.  Those who identify as multi-racial have moved into Colorado in similar numbers.

Colorado also has a large Hispanic population.  In fact, we are one of nine states with a Hispanic population of over 1 million.  Between 2010 and 2015, we saw the Hispanic population increase by just over 12 percent, an additional 125,000 residents.  The Hispanic population currently represents approximately 21% of Colorado’s total population.
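As a quick sanity check on figures like these, the percent-change arithmetic is just (new − old) / old.  The inputs below are round, hypothetical numbers near the cited values, not the official estimates:

```python
def pct_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100

# Hypothetical round figures for Colorado's Hispanic population
old, new = 1_040_000, 1_165_000          # +125,000 residents
print(round(pct_change(old, new), 1))    # ~12 percent, consistent with the text
```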

Now that we have a better idea of the age, race, and ethnicity of those moving into Colorado, we can start to see some of the characteristics of our newest residents.  In my second and final blog on the topic, I will explore additional characteristics to help complete the picture of these new Coloradans.

1 Due to the data available in the Population Estimates tables, some generations in the graph include ages +/- 1 or 2 years from Pew’s definition, and the Silent Generation was combined with the Greatest Generation.  The graph also doesn’t include those 19 and younger, as the age cutoff between Millennials and the following generation has not yet been determined.


Do you have kids? Wait – let me restate that.

Karla Raines and I had dinner last week with another couple who share our background and interest in social research.  We were talking about the challenges of understanding other people’s decisions if you don’t understand their background, and how we can hold biases that we don’t even realize.

It brought me back to the topic of how we design and ask questions on surveys, and my favorite example of unintentional background bias on the part of the designer.

A common question, both in research and in social conversations, is the ubiquitous, “Do you have kids?”  It’s an easy question to answer, right?  If you ask Ward and June Cleaver, they’ll immediately answer, “We have two, Wally and Beaver”.  (June might go with the more formal ‘Theodore’, but you get the point.)

When we ask the question in a research context, we’re generally asking it for a specific reason.  Children often have a major impact on how people behave, and we’re usually wondering if there’s a correlation on a particular issue.

But ‘do you have kids’ is a question that may capture much more than the classic Wally and Beaver household.  If we ask that question, the Cleaver family will answer ‘yes’, but so will a 75 year-old who has two kids, even if those kids are 50 years old and grandparents of their own.  So ‘do you have kids’ isn’t the question we want to ask in most contexts.

What if we expanded the question to ‘do you have children under 18’?  It gets a bit tricky here if we put ourselves in the minds of respondents, and this is where our unintentional background bias may come into play.  Ward and June will still answer yes, but what about a divorced parent who doesn’t have custody?  He or she may accurately answer yes, but there’s not a child living in their home.  Are we capturing the information that we think we’re capturing?

And what about a person who’s living with a boyfriend and the boyfriend’s two children?  Or the person who has taken a foster child into the home?  Or the grandparent who is raising a grandchild while the parents are serving overseas?  Or the couple whose adult child is temporarily back home with her own kids in tow?

If we’re really trying to figure out how children impact decisions, we need to observe and recognize the incredible diversity of family situations in the modern world, and how that fits into our research goal.  Are we concerned about whether the survey respondent has given birth to a child?  If they’re a formal guardian of a child?  If they’re living in a household that contains children, regardless of the relationship?

The proper question wording will depend on the research goals, of course.  We often are assessing the impact of children within a household when we ask these questions, so we find ourselves simply asking, “How many children under the age of 18 are living in your home?”, perhaps with a follow-up about the relationship where necessary.  But it’s easy to be blinded by our own life experiences when designing research, and the resulting blind spots can lead to errors in our conclusions.

So the next time you’re mingling at a party, we suggest not asking “Do you have kids”, and offer that you should instead ask, “How many children under the age of 18 are living in your home?”  It’s a great conversation starter and will get you much better data about the person you’re chatting with.


How representative is that qualitative data anyway?

When we do qualitative research, our clients often wonder how representative the qualitative data is of the target population they are working with.  It’s a valid question.  To answer, I have to go back to the purpose of conducting qualitative research in the first place.

The purpose of qualitative research is to understand people’s perceptions, opinions, and beliefs, as well as what is causing them to think that way.  Unlike quantitative research, the purpose is not to generalize the results to the population of interest.  If eight out of ten participants in a focus group share the same opinion, can we say that 80% of the population holds that opinion?  No, definitely not – but you can be reasonably confident that it will be a prevalent opinion in the population.

While qualitative data is not statistically representative of a population, we still have guidelines that we follow to make sure we are capturing reliable data.  For example, we suggest conducting at least three focus groups per unique segment.  Qualitative research is fluid by nature, so data gathered from across three groups allows us to see consistent themes and patterns across groups, and assess if there are any outliers or themes exclusive to one group that may not be representative of the unique segment as a whole.

Still not sure which methodology will best be able to answer your research questions?  We can help you choose!


Car vs. Bike

I like to ride my bike whenever I get a chance.  I ride to the store, to the park, to take my son to preschool, and sometimes just for fun.  While I’ve never been in an accident with a moving car, I’ve witnessed several bike vs. car accidents, and it’s something I want to avoid.

Do you know what Denver neighborhoods have the most bike vs. car accidents?  I wanted to find out.  Luckily, we can use special mapping software to analyze existing accident data to determine where these types of accidents are statistically more or less likely to happen.  Here’s how I did it.

  1. I downloaded all traffic accidents in the City and County of Denver from the last five years (accessed here).
  2. I filtered to all hit-and-run accidents involving a bike, a total of 345 incidents in their database.
  3. I mapped the accident locations using our mapping software.
  4. I added the City and County of Denver boundary to the map (I excluded the DIA neighborhood because it is disproportionately large compared to its area of bikeable road).
  5. To find locations around Denver with statistically higher or lower clusters of bike vs. car hit-and-runs, I ran a hot-spot analysis using location point counts and a fishnet grid.
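For those without dedicated mapping software, the fishnet idea in steps 3–5 can be approximated in plain Python: bin the accident points into a regular grid, then flag cells whose counts deviate sharply from the citywide average.  This is only a rough stand-in for a formal hot-spot statistic such as Getis-Ord Gi*, and the coordinates below are invented:

```python
from collections import Counter
from statistics import mean, pstdev

def grid_counts(points, cell_size):
    """Bin (x, y) points into square grid cells, counting points per cell."""
    return Counter((int(x // cell_size), int(y // cell_size)) for x, y in points)

def flag_cells(counts, threshold=1.5):
    """Return cells whose count sits more than `threshold` standard
    deviations above or below the mean cell count (hot or cold spots)."""
    values = list(counts.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return {}
    return {cell: round((n - mu) / sigma, 2) for cell, n in counts.items()
            if abs(n - mu) / sigma > threshold}

# Hypothetical projected coordinates: one dense cluster plus scattered points
accidents = [(1, 1), (2, 2), (3, 1), (4, 3), (2, 4), (5, 5), (6, 2), (3, 3),
             (25, 25), (45, 45), (65, 65), (85, 85)]
print(flag_cells(grid_counts(accidents, cell_size=20)))  # the dense cell stands out
```

A real fishnet analysis would also account for spatial neighbors and significance testing, which is what the dedicated GIS tools add.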

The result was a large hot spot of accidents in Central Denver, stretching from Baker neighborhood to Sunnyside, and from Sloans Lake to Colorado Blvd.  It’s clear that a lot of accidents happen on Broadway/Lincoln Street and a lot are on Colfax Avenue.  Many Denver neighborhoods fall into the neutral category (light yellow), meaning accidents happen here, but we cannot find any clusters where accidents are statistically more or less likely.  If you look at the edges of the city limits, we find a handful of neighborhoods overlapping cold-spot clusters.  Specifically, Fort Logan (just south of Bear Valley), Hampden, University Hills, North Stapleton, and Gateway/Green Valley Ranch are all neighborhoods where hit-and-run bike vs. car accidents are statistically uncommon, according to this analysis.

While it’s helpful to know that central Denver has a lot more accidents than the surrounding neighborhoods, this isn’t surprising considering how many more bikes and cars are simultaneously navigating Denver’s core.  I wanted more specific and useful insights, so I re-ran the hot-spot analysis focusing just on the area east of Federal, south of 49th, west of Colorado, and north of Alameda.  Downtown Denver still lights up as a hot spot for hit-and-run bike accidents, but again not at a scale that I found useful.  So I zoomed in once more, this time exploring the area within Blake, Speer, 6th, and Downing.

The result: the greatest concentration of these accidents happens in the Central Business District (especially along 20th Street) and east-west along Colfax Avenue and 16th Street.  If I were in charge of reducing the number of car-bike accidents in Denver, I would prioritize these two areas.  Of course, this analysis can’t suggest what type of actions would reduce accidents (e.g., new rules, more enforcement, better education), but it gives us a place to start.

I am interested in biking, so this dataset was of interest to me.  What interests you and your organization?  Do you want to know whether your challenges (e.g., crime, complaints, reduced sales) or opportunities (e.g., compliments, desired behavior, brand recognition) are really clustered?  If so, give us a call and we can discuss how mapping may be a good way to gain a new perspective and help answer your important questions.

 

 


Shhhhhhhh!! Did you know there are three secrets to strategic success?

I’m often asked, “How can we ensure our strategic plan doesn’t sit on a shelf?” The question typically arises as executives and boards consider how best to approach the planning process – and which consultant can facilitate a successful outcome.

By its very nature, a strategic planning process raises expectations and anxiety. And no plan is worth the investment if it sits on a shelf.

While the question is spot-on, it’s being asked of the wrong person. The next time I find that question directed at me I think I’ll pull the small mirror out of my purse as I say, “What a great question. It’s simple really. Success actually begins with you. You’ll need three things: committed leadership, access to resources and accountability for results. I can help you get there, but you’ve got to keep the dust off the plan.”

Wanna know a secret? Strategic success is as easy as 1, 2, 3…

There is more to a quality survey than margin of error

One of the areas in which Corona excels is helping our clients who aren’t “research experts” to understand how to do research in a way that will yield high-quality, reliable results.  One of the questions we are frequently asked is how many completed surveys are necessary to ensure a “good survey.”  While the number of surveys definitely has an impact on data quality, the real answer is that there are many things beyond sample size that you have to keep in mind in order to ensure your results are reliable.  Here is an overview of four common types of errors you can make in survey research.

Sampling Error

Sampling error is the one type of error that can be easily summarized with a number.  Because of this, many tend to think of it as the main way of reporting a survey’s quality.  Sampling error refers to the “margin of error” of the results of a survey caused by the fact that you didn’t survey everyone in the population you are surveying – only a sample.  The “error” occurs when you draw a conclusion based on a survey result that may have been different if you’d conducted a larger survey to gather a wider variety of opinions.  As an example, imagine that you wanted to conduct a survey of people at a concert about their reasons for attending.  In an extreme case, you could collect 10 random responses to your question and draw conclusions about the population from that, but chances are the next 10 people you hear from might have very different opinions.  As you collect more and more surveys, this becomes less of an issue.

It’s important to keep in mind, however, that the calculations for sampling error assume that 1) your sample was random (see coverage error below) and 2) everyone you chose for the survey ended up responding to the survey (see non-response error below) – neither of which is true a lot of the time.  Because of that, it is important to realize that any margin of error calculation is likely only telling you part of the story about a survey’s quality.

Still, it is certainly true that, when obtained properly, larger sample sizes tend to yield more reliable results.  Generally speaking, a sample of 100 responses will give you a general feel for your population’s opinions (with margins of error around ±10 percent), 400 responses will give you reasonably reliable results (with margins of error around ±5 percent), and larger samples will allow you to examine the opinions of smaller segments of your audience.  Obtaining an adequate sample size will therefore always be a key consideration in survey research.
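Those rule-of-thumb figures follow directly from the standard margin-of-error formula for a proportion at 95% confidence, assuming simple random sampling and a worst-case 50/50 split:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion p from a simple random sample of
    size n, at the confidence level implied by z (1.96 => 95%)."""
    return z * sqrt(p * (1 - p) / n)

print(round(margin_of_error(100) * 100, 1))  # ~9.8 points, i.e. about ±10 percent
print(round(margin_of_error(400) * 100, 1))  # ~4.9 points, i.e. about ±5 percent
```

Note that quadrupling the sample only halves the margin of error, which is why gains in precision get expensive quickly.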

Measurement Error

Measurement error is one of the most difficult types of errors to identify and is probably the most common mistake made by amateur survey researchers.  Measurement error occurs when respondents don’t understand how to properly respond to a question due to the way it’s worded or the answer options you’ve provided.  For example, if you were to ask concert goers how long it took them to get to the venue that night, you might get a set of answers that look reasonable, but what if some picked up their friends on the way?  What if some went to dinner before the concert?  Similarly, if you asked whether they came to the concert because of the band or because they like the venue, you might conclude that a majority came for the band when the real reason was that many just wanted to hang out with their friends.

So how do you protect against this?  Whenever possible, it’s important to test your survey – even if it’s just with a friend or coworker who is not involved in the research.  Have them take the survey, then have them talk you through how they interpreted each question and how they decided which answer fit best.  Then, if necessary, make changes to your survey in any areas that were unclear.

Coverage Error

Once you have a well-designed set of survey questions developed, the next step is to determine how you are going to get people to take your survey.  It might be tempting to just put a link on the concert venue’s Facebook and Twitter pages to ask people their opinions, but the results of such a survey likely wouldn’t reflect the opinions of all concert goers because it’s unlikely that all of them use social media.  If you were to do a survey in this fashion, you might find that attendees tended to be very tech savvy and open to new ideas and approaches, when the reality was that those just happen to be characteristics of people who use social media. The results of your survey might be skewed because you didn’t “cover” everyone in the audience (not to mention the possible issue of some taking the survey that didn’t actually attend the concert).

In order to ensure your survey is as high quality as possible, look for ways to ensure that everyone you are trying to represent is included in your sampling frame (even if you randomly choose a subset to actually receive an invitation).  If it’s truly not possible to do so, be sure to at least keep the potential bias of your respondent pool in mind as you interpret the results.

Non-response Error

As the final type of common error, non-response error is caused by the fact that, no matter how well you have designed and implemented your survey, there are a lot of people out there who simply aren’t going to respond.  Similar to coverage error discussed previously, non-response can cause you to draw conclusions from your survey’s results that may not be reflective of the entire population you are studying.  For example, many concert goers wouldn’t want to be bothered to take a survey, so the results you get would likely only be representative of a type of person who either 1) didn’t value their time as highly or 2) was particularly interested in the survey topic.

Unfortunately, non-response error is extremely difficult to eliminate entirely and will be a concern with just about any survey.  The most common approach is to try to boost your response rate as much as possible through a combination of frequent reminders, incentives, and messaging that appeals to respondents’ desire to be helpful, but even the best surveys typically only achieve response rates of 30-40 percent.  If budget is no issue, perhaps the best solution is to conduct follow-up research with those who didn’t originally respond, but even then, there will always be some who simply refuse to participate.

~

When it comes down to it, there is no such thing as a perfect survey, so any study will necessarily need to balance data quality with your timeline and budget.  Many of Corona’s internal debates involve discussing ways to reduce these errors as effectively as possible for our clients, and we are always happy to discuss various tradeoffs in approaches and how they will impact data quality.  Regardless, we hope that the next time you see a headline about a survey with a margin of error used to represent its quality, you will keep in mind that there is a lot more to determining the quality of a survey than that one number.



Nonprofit Data Heads

Here at Corona, we gather, analyze, and interpret data for all types of nonprofits.  While some of our nonprofit clients are a little data shy, many are data heads like us!  Indeed, several nonprofits (many of which we have worked for or partnered with) have developed amazing websites full of easy-to-access datasets.

Here are four of my favorite nonprofit data sources – check them out!

The Data Initiative at the Piton Foundation

Not only does it sponsor Mile High Data Day, but the Piton Foundation also produces a variety of user-friendly data interfaces.  I really like the creative ways they allow website visitors to explore data – not just static pie and bar charts.  Instead, their interface is dynamic and extremely customizable.  While their community facts tool pulls most (but not all) of its data from the US Census, the tool is very easy and fun to use.  Further, they have already defined and labeled neighborhoods across the Denver Metro area, making it easy for users to compare geographies without trying to aggregate census tract or block group numbers.  This is an invaluable feature for data users who don’t have access to GIS.  I also appreciate the option to display the margin of error on bar charts when it’s available.

Highlights:

  • Easy to use from novice to expert data user
  • Data available by labeled neighborhood
  • 7-County Denver Metro focus

Explore

OpenColorado

With over 1,500 datasets, OpenColorado is a treasure trove of raw data.  While this site doesn’t have a fancy user interface, it does provide access to data in many different file types, making it a great website for the intermediate to advanced data user with access to software such as GIS, AutoCAD, or Google Earth.  Most data on OpenColorado is from Front Range cities (e.g., Arvada, Boulder, Denver, Westminster) and counties (e.g., Boulder, Denver, Clear Creek), but unfortunately it is far from a comprehensive list, so you’d need to look elsewhere if you’re searching for information from Arapahoe County, for example.

There are over 200 datasets specific to the City and County of Denver.  I opened a few that caught my eye, including the City’s “Checkbook” dataset that shows every payment made from the City (by City department) to payees by year.  I give kudos to Denver and OpenColorado for facilitating this type of fiscal transparency.  I also downloaded a dataset (CSV) of all Denver Police pedestrian and vehicle stops for the past four years, which included the outcome of each stop along with the address, latitude and longitude.  For a GIS user, this is especially helpful if you want to search for patterns of police activity compared to other social and geographic factors.  Even without access to spatial software, this dataset is useful because it includes neighborhood labels.  I created a quick pivot table in Excel to see the top ten neighborhoods for cars being towed (so don’t park your car illegally in these neighborhoods).
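The pivot-table step can be reproduced without Excel in a few lines of Python.  The column and outcome names below are hypothetical stand-ins for whatever the Denver dataset actually uses:

```python
from collections import Counter

def top_tow_neighborhoods(rows, n=10):
    """Tally tow outcomes per neighborhood and return the top n.
    `rows` are dicts with 'neighborhood' and 'outcome' keys (hypothetical)."""
    tows = Counter(r["neighborhood"] for r in rows
                   if r["outcome"] == "vehicle towed")
    return tows.most_common(n)

# Illustrative records standing in for rows of the downloaded CSV
sample = [
    {"neighborhood": "Five Points", "outcome": "vehicle towed"},
    {"neighborhood": "Five Points", "outcome": "vehicle towed"},
    {"neighborhood": "Capitol Hill", "outcome": "vehicle towed"},
    {"neighborhood": "Capitol Hill", "outcome": "warning"},
]
print(top_tow_neighborhoods(sample))  # [('Five Points', 2), ('Capitol Hill', 1)]
```

With the real CSV, you’d read the rows with `csv.DictReader` and pass them straight to the same function.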

Highlights:

  • Tons of raw data
  • Various file types, including shapefiles and geodatabases that are compatible with GIS, and KML files that are compatible with Google Earth
  • Search for data by geography, tags, or custom search words

Kids Count from the Colorado Children’s Campaign

Kids Count is a well-respected data resource for all things kids.  Each year, the Colorado Children’s Campaign (disclaimer: they are also our neighbor, working just two floors below us) produces the Kids Count in Colorado report, which communicates important child well-being indicators and indices statewide and, when available, by county.  The neat thing about Kids Count is that it’s also a national program, so you can see how indicators in a specific county compare to the state and nation.  In addition to the full report available as a PDF, you can interact with a state map and point and click to access a summary of indicators by county.  Most of their data is not available in raw form, but the report does explain how they calculated their estimates and provides tons of contextual information that makes the key findings much more insightful.

Highlights:

  • Compare county data to state and national trends
  • Reports include easy to understand analysis and interpretation of data
  • Learn about trends over time and across demographic groups

Outdoor Foundation

If you’re looking for information about outdoor recreation of any type in any state, there is probably an Outdoor Foundation report that has the data you’re seeking.  Based in Boulder, Colorado, the Outdoor Foundation’s most common reports communicate studies of participation rates by activity type, both at a top level and by selected activity types such as camping, fishing, and paddle sports (haven’t yet heard of stand-up paddle boarding?  It’s one of the fastest-growing activities in terms of participation).  The top-line reports show trends over the past ten years, while the more detailed Participation Reports break out participation, and other factors such as barriers to participation, by various demographics.  Multiple other special reports, focusing on topics such as youth and technology, round out what’s available from this site.

The participation and special reports are helpful, but I’m most impressed with the Recreation Economy reports, which are available nationwide and within each state.  These reports estimate the economic contribution of outdoor recreation, including jobs supported, tax revenue, and retail sales.  For example, the outdoor recreation economy supported about 107,000 jobs in Colorado in 2013.  Unfortunately, the raw data is not available for further analysis, but the summary results are still interesting and helpful.

Explore:


Art meets architecture in Denver this weekend

Looking for something fun to do this weekend between rides on the new A Line to DIA? Check out the arts and cultural activities during Doors Open Denver. Art meets architecture through pop-ups ranging from a nomadic art gallery to poetry, drama, and music performances among the 11 offerings. My favorite? Graffiti art. If you’ve been secretly wanting to learn the art of graffiti painting – and you’re 55 or older – then we’ve got the creative outlet for you. Bust through stereotypes as you create graffiti art inspired by two of Denver’s architectural gems.

  • April 23rd, 1-3 pm – Saturday’s pop-up will be hosted by the Clyfford Still Museum on their front lawn. The museum will give four 20-minute architectural tours each day at 11:00, 11:30, 2:00 and 2:30.
  • April 24th, 1-3 pm – Sunday’s pop-up will be hosted by the new Rodolfo “Corky” Gonzales Library and include three tours led by architect Joseph Montalbano of studiotrope, a Denver-based architecture and design agency. DPL staff will share how the library’s design informs their work. Since Sunday is Día del Niño, the artist will be prepared to host a multi-generational event at the library.

Thanks to our collaborative partners: VSA Colorado/Access Gallery, studiotrope design, Denver Public Library, and Clyfford Still Museum. I’d like to give a special shout-out to Damon McLeese of Access Gallery; Joseph Montalbano of studiotrope; Ed Kiang, Viviana Casillas, and Diane Lapierre of DPL; and Sonia Rae of Clyfford Still Museum.

Please join me in thanking the Bonfils-Stanton Foundation for funding this engaging spotlight on art and architecture.

For more information visit this Doors Open Denver link. 


Happy or not

A key challenge for the research industry – and any company seeking feedback from its customers – is gaining participation.

There have been books written on how to reduce non-response (i.e. increase participation), and tactics often include providing incentives, additional touch points, and finely crafted messaging. All good and necessary.

But one trend we’re seeing more of is the minimalist “survey.” One question, maybe two, to gather point-in-time feedback. You see this on rating services (e.g., Square receipts, your Uber driver, at the checkout), simple email surveys where you respond by clicking a link in the email, and text-message feedback, to name a few.

Recently, travelling through Iceland and Norway, I came across this stand asking “happy or not” at various points in the airport (e.g., check-in, bathrooms, and luggage claim). Incredibly simple and easy to do. You don’t even have to stop – just hit the button as you walk by.

A great idea, sure, but did people use it? In short, yes. While I don’t know the actual numbers, based on observation alone (what else does a market researcher do while hanging out in airports?), I did see several people interact with it as they went by. Maybe it’s the novelty of it and in time people will come to ignore it, but it’s a great example of collecting in-the-moment feedback, quickly and effortlessly.

Now, asking one question and getting a checkbox response will not tell you as much as a 10-minute survey will, but if it gets people to actually participate, it is a solid step in the right direction. Nothing says this has to be your only data source, either. I would assume that, in addition to the score, there is also a time and date stamp associated with each response (and excessive button pushes at one time should probably be “cleaned” from the data). Taken together, this makes investigating problems easier (maybe people are only unsatisfied during the morning rush at the airport?), and if necessary, additional research can always be conducted to investigate issues further.
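Here is one possible way to “clean” excessive button pushes, sketched under the assumption that each response carries a timestamp and a terminal location (I don’t know the vendor’s actual data layout; timestamps are plain seconds for simplicity):

```python
def dedupe_presses(presses, window=3):
    """presses: list of (timestamp, terminal, score) tuples, sorted by time.
    Keep a press only if the same terminal hasn't registered one within
    `window` seconds; a continuous mashing run thus collapses to one press."""
    last_seen = {}
    kept = []
    for ts, terminal, score in presses:
        if terminal not in last_seen or ts - last_seen[terminal] > window:
            kept.append((ts, terminal, score))
        last_seen[terminal] = ts  # update even for dropped presses
    return kept

presses = [(0, "bathroom", 4), (1, "bathroom", 4), (2, "bathroom", 4),  # one masher
           (60, "bathroom", 2), (61, "check-in", 3)]
print(dedupe_presses(presses))  # three rapid presses collapse into one
```

Updating `last_seen` even for dropped presses is a deliberate choice: it treats an unbroken run of mashing as a single response rather than one response per window.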

What other examples, successful or not, are you seeing organizations use to collect feedback?