RADIANCE BLOG

Category: In Action


Corona Summer Camp 2015: AAPOR

Ah, summer camp. For those of us who were generally allergic to the outdoors as kids, summer camp did not necessarily mean cabins, building fires, and outdoor recreation. In my case, summer camp usually meant summer orchestra, which was the best. Not only did I get to play music for hours every day, I also got to spend time with a bunch of other orchestra kids who would not even bat an eye if I wanted to discuss which Suzuki book they were on or the pros and cons of catgut strings.

In many ways, AAPOR feels similar to those years of summer orchestra—conference attendees discuss sampling issues and response rates as if they were discussing the weather. This year over one thousand people met at AAPOR to discuss all the intricacies of public opinion research. Beth and I had a really difficult time choosing which talks to go to because they all sounded so relevant and interesting.

This conference in particular gives us a good sense of the changing landscape of survey research and of the big issues in the field. One big issue was which mode to use for survey research. Phone surveys have been the gold standard for opinion research for a while, but with the increasing rate of cell phone use (especially cell-phone-only households) and declining response rates, it has become harder and more expensive to maintain phone survey quality. There was at least one presentation about a survey that has transitioned entirely to cell phones. Other talks discussed the quality of online panel surveys or using multiple survey modes (e.g., a paper survey with a phone follow-up, an online survey with a paper option, etc.) to increase response rates.

Another big issue was incorporating all the available data into public opinion research. New research has started looking at whether social media and other big datasets can be analyzed as another way of measuring public opinion.  A final big issue had to do with transparency in survey research. Although there is a ton of survey data floating around, it is not always clear who is following correct and ethical survey methodology standards and who is not. Moreover, a lot of survey data reports do not include enough information to even judge what standards are being used. AAPOR’s Transparency Initiative encourages organizations to include certain survey methodology information with the public release of data, so that people can judge the quality of the data.

Beth and Kate with Nate Silver at AAPOR 2015

On top of all the great discussions of survey methodology and of the future of public opinion research, Beth and I also got to meet (briefly) Nate Silver, who received an award from AAPOR this year. (Someone may have texted this photo to her mom.) After the awards reception, AAPOR held a casino night to raise money for student awards. Silver participated in the poker tournament and kindly posed for many photos. He has also posted his own thoughts about some issues raised at the conference.

PHOTO CREDIT: DAVID QUACH


The Lifesavers Conference

As with any industry, it is important in market research to keep up with the latest thinking and practices by regularly attending workshops and conferences.  For this reason, members of the Corona staff can occasionally be found at conferences put on by the American Association for Public Opinion Research (AAPOR), the Marketing Research Association (MRA), and the American Marketing Association (AMA).  However, here at Corona, we have the additional challenge of keeping up not only on the latest and greatest research practices, but also on the issues most important to our clients.  For that reason, we also try to make occasional appearances at conferences focused on subjects such as parks and recreation, traffic safety, and more.

Kevin Presenting @ Lifesavers Conference

In March of this year, Kevin and I made the trip to Chicago to participate in the annual Lifesavers Conference.  This conference has been held annually for decades and brings together individuals from around the country whose jobs are dedicated to keeping Americans safe on our nation’s highways.  The minds present at these conferences have been instrumental in making changes, both legislative and in communications with the public, that have dramatically reduced traffic fatalities over the years, including laws aimed at requiring child safety seats and punishing drunk drivers, and communications aimed at increasing seat belt usage, reducing impaired driving, and more recently, reducing distracted driving.

The things we learned at this year’s conference were enlightening to say the least, and it would require a whole series of blog posts to cover them all.  However, a few highlights included:

  • Learning about how state traffic safety departments are effectively using social media to reach a new generation for whom traditional television advertising simply isn’t effective.
  • Learning about how small nonprofit organizations can conduct their own evaluations to make the case to funders that their work is making a true impact.
  • Understanding the efforts being made by law enforcement in Colorado and Washington to keep drivers safe in the age of legalized marijuana.

This year’s conference was particularly special for us, as Kevin had the opportunity to present the results of research we conducted with the Minnesota Office of Traffic Safety aimed at better understanding some of the characteristics of high-risk drivers (those who exhibit a combination of risky traffic behaviors, including drinking and driving, speeding, texting while driving, and not wearing a seat belt).  A few of the key findings of that research included:

  • High-risk drivers tend to overestimate how common their behaviors are among their peers (drinking and driving isn’t nearly as common as one might think), overestimate their own driving ability (almost everyone believes they are “above average”), and underestimate the risk of their driving behaviors (those who speed regularly are considerably more likely to be in a crash than those who do not).
  • Those who text and drive know they shouldn’t and worry about being in an accident, but they do it anyway. (Another presentation at the conference suggested that texting and driving should be treated as an addiction rather than a rational decision.)  On the other hand, those who speed regularly are relatively unlikely to think their behavior is a problem and are more worried about getting a ticket than being in a crash.

Overall, the conference was a great chance to catch up on some of the things going on in traffic safety and lend our own expertise as well, so we hope we have the opportunity to attend again in the future!


Colorado’s New Statewide Child Abuse Hotline

We were pleased yesterday to attend the unveiling of Colorado’s new statewide hotline to report suspected child abuse or neglect.  Governor Hickenlooper and other dignitaries spoke on the Capitol steps, and we think it’s a great step forward for Colorado.

1-844-CO-4-KIDS

Our partners at Heinrich Marketing came up with several great concepts, and we were delighted to conduct concept testing that helped lead to the selection of the campaign that was announced yesterday.  We conducted focus group research in urban and rural Colorado to weigh the strengths and weaknesses of five different concepts.  We think the selected theme is a great way to convey a complex and challenging message.

You can see examples of the campaign theme put to use here.


How Researchers Hire

Corona Insights recently went through a round of hiring (look for a blog post soon about our newest member) and, while many of our hiring steps may be common, it occurred to me that our process mirrors the research process.

  • Set goals. Research without an end goal in mind will get you nowhere fast.  Hiring without knowing what you’re hiring for will ensure an inappropriate match.
  • Use multiple modes.  Just as approaching research from several methodologies (e.g., quant, qual) yields a more complete picture, so too does a hiring process with multiple steps: reviewing resumes (literature review), screening (exploratory research), testing (quantitative), several rounds of interviews (qualitative), and a mock presentation.
  • Consistency.  Want to compare differences over time or between different segments? Better be consistent in your approach.  Want to compare candidates? Better be consistent in your approach.

I could go on about the similarities (drawing a broad sample of applicants?), but you get the idea.  The principles of research apply to a lot more than just research.

And as with any recurring research, we reevaluate what worked and what can be improved before iterating.  Our process therefore changes a little each time, but the core of it remains the same – asking the right questions and analyzing data with our end goals in mind.  Just like any good research project.

Stay tuned for a blog post about our new hire.


Data for your sleepless nights

A few months ago, I purchased a fancy pedometer to start collecting more data about myself. For those of you fortunate enough to know my slothful self in real life, I’d like to interrupt your laughter to point out that one of the features I was most interested in was the pedometer’s ability to track my sleep. I’m not sure exactly how it tracks my sleep, nor how precise its measurements are, but it has pushed me to think a lot more about my sleep and about sleep in general. I decided I wanted to know how I compared to other people and to look for patterns in my own sleep data.

First of all, diving into the world of sleep data is like diving into a crazy rabbit hole. (Rabbits, by the way, sleep 8.4 hours per day, but only 0.9 hours of that are spent dreaming. Humans, however, sleep roughly 8 hours, of which 1.9 hours are spent dreaming.[1] Take that, rabbits.) Questions related to sleep are not necessarily where you would expect them to be. They are shockingly absent from the Behavioral Risk Factor Surveillance System (which measures many behaviors related to health) and yet appear on the American Time Use Survey (ATUS).

Even more interesting, you see different patterns in the data depending on the question format. In the ATUS, people track their time use via an activity diary. Basically, the diary has you record what activities you were doing, when, with whom, and where. Based on these diaries, the Bureau of Labor Statistics estimates that in 2012 Americans were getting more than eight hours of sleep per day on average. These data also show that women were getting more sleep than men, and that a woman my age was averaging almost 9 hours of sleep per night.[2]

Sadly, these numbers seemed a little high to me, and a Gallup survey seemed to agree. In a 2013 survey, when people were asked how many hours of sleep they usually got per night, people reported getting fewer than 7 hours.[3] The difference between the diary findings and the survey findings reminded me of a similar pattern in reports of what people eat. Basically, people are really bad at remembering what they ate during a week, so a daily food diary tends to be the more accurate measurement. So maybe people are also bad at recalling how much sleep they get on average during the week? However, I wonder if filling out the diary for ATUS sometimes feels embarrassing. For example, do people feel too embarrassed to admit to all the T.V. they watch/internet browsing they do, so they end up reporting more sleep?

Another sleep data source I found was this chart of the sleeping habits of geniuses.[4] I imagine that getting enough sleep probably helps all of us reach our genius potential. Based on the chart, the average amount of sleep across this sample of geniuses is about 7.5 hours, which seems reasonable. It is super interesting, though, to see how and when geniuses spread out their sleep across the day.

Back to my own data, I noticed two important things. One, the social context can have a big impact on my sleep. Beth and I went to AAPOR in May, and every night we stayed up too late discussing nerdy things that we had learned/ideas for our own analyses. This resulted in many nights of less than 7 hours of sleep. The week after AAPOR, I went to visit my sister. My sister would readily admit that she finds it almost impossible to be a functioning human being on anything less than 8 hours of sleep. Not surprisingly, I averaged more than 8 hours of sleep during that visit. So, insight number one is that I should only share hotel rooms and/or sleep at the homes of people who value sleep. Unfortunately, I don’t group my friends and family based on their sleeping habits and often I like staying up late to debate nerdy things, like what rabbits even dream about during their roughly one hour of dreaming each day. So, I’m not sure how actionable this insight really is.

Second, I noticed that I slept better when I had walked more during the day. Apparently the last laugh is on me because I’m beginning to suspect that even for those of us who really love sleep, being more physically active might be a critical component of the sleep routine.


Corona wins Gold Peak Award for Market Research

Last night, the Colorado American Marketing Association (CO+AMA) celebrated Colorado’s first-class marketers at their annual Colorado Peak Awards. Corona Insights was honored to take home our 4th Gold Peak Award in the category of Market Research.  This year, we won the award for our member engagement and brand assessment research for the American College of Veterinary Internal Medicine (ACVIM).

Market research is fundamentally different from the other categories honored at the CO+AMA Peak Awards: market research prepares brands and marketing campaigns for take-off. By doing proper research, companies are able to develop a sound marketing strategy that effectively reaches their target audience.

In 2013, we were recognized with a Gold Peak for the research we did for Donor Alliance, which resulted in a marketing campaign that addressed the trends in the data we helped uncover. In 2010, Corona took home a Silver Peak award for our own rebranding, and in 2011, Corona won a Gold Peak award for our market research work to inform the University of Denver Sturm College of Law’s strategic plan.

The 26th annual gala was held at Wings Over the Rockies and featured an aerospace theme. Kevin Raines, CEO, and Kassidy Benson, Marketing and Project Assistant, accepted the award on behalf of the firm.



A dose of data for your springtime allergies

Like many people, I have “seasonal allergies.”  March and April bring sneezing fits and foggy-brain days for me.  Often I get a sore throat and headaches.  One year I went through three strep throat tests and a course of antibiotics before my doctor decided my swollen throat was caused by allergies.

Knowing you’re allergic to “something” isn’t all that helpful.  Sure, you can keep antihistamines on hand and treat the symptoms as they arise, but you have no way to predict when symptoms will hit or minimize your exposure to the allergen.

A common first step in identifying the cause is to do a skin allergy test.  Typically, this involves getting pricked in the back with approximately 20 solutions containing the most common allergens.  The doctor marks off a grid pattern on your skin, each box gets pricked with one item, and then you wait to see whether any of the pricked areas swell up or show other signs of allergic reaction.

I’ve had this done, but unfortunately (though not uncommonly) I didn’t react to any of the items tested.  That doesn’t mean you’re not allergic to anything, just that you’re not allergic to one of the things tested.

Research on myself hadn’t provided any usable information, so recently I turned to external data instead.  Where I live, the city provides daily pollen counts for the highest pollen sources from about February through November.  They don’t provide aggregated data, however, so I had to build my own database of their daily postings.  In the part of town where I live, Ash, Juniper, and Mulberry are the most prevalent allergens during the time when my symptoms are greatest.
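For the curious, here is a minimal sketch of what that homegrown database could look like in Python with pandas. The specifics are all assumptions for illustration: I’m pretending each daily posting was saved as a small CSV named like pollen_2013-04-01.csv, with “species” and “count” columns.

```python
import glob

import pandas as pd

# Hypothetical layout: one small CSV per daily posting, e.g.
# pollen_2013-04-01.csv with columns "species" and "count"
frames = []
for path in sorted(glob.glob("pollen_*.csv")):
    day = pd.read_csv(path)
    # Recover the date from the (assumed) file name
    day["date"] = pd.to_datetime(path.split("_")[1].removesuffix(".csv"))
    frames.append(day)

# Stack the daily postings into one tidy table: date, species, count
pollen = pd.concat(frames, ignore_index=True)

# Pivot so each species becomes a column of daily counts
daily = pollen.pivot(index="date", columns="species", values="count")
daily.to_csv("daily_pollen.csv")
print(daily[["Ash", "Juniper", "Mulberry"]].describe())
```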

Last year, my worst day was April 1.  Even with my allergy pills, I sneezed the entire day.  Here’s what the pollen count showed for my area of town during that time:

[Chart: daily pollen counts by species, spring 2013]

Ash pollen counts peaked on April 1.  Juniper and Cottonwood were also relatively high, but Juniper had been fairly high for weeks without me having corresponding symptoms.

This year, my allergies were not bad at all.  I was out of town for a week in mid-March and for two separate weeks in early and mid-April, which certainly helped, but I only had a few foggy-brain days in late March and mid-April.  The pollen counts for this year:

[Chart: daily pollen counts by species, spring 2014]

Ash was lower overall compared to the previous year, and once again seemed to line up best with my symptoms.  This is a correlational analysis, so it doesn’t provide a definitive diagnosis, but because different allergens peak at different times, it offers some ability to rule out other culprits.  And it’s more efficient (and painless!) than the skin test.
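If you keep a simple daily symptom log alongside the pollen table, the correlational analysis itself is only a few lines. This sketch continues the hypothetical files from the earlier snippet and assumes a symptoms.csv with a date column and a self-rated 0-3 “score” column; a noticeably higher coefficient for Ash would match the pattern in the charts, though it would still be suggestive rather than diagnostic.

```python
import pandas as pd

# Hypothetical files: the aggregated pollen table from the earlier
# sketch, plus a simple daily log with a 0-3 symptom score
daily = pd.read_csv("daily_pollen.csv", index_col="date", parse_dates=True)
symptoms = pd.read_csv("symptoms.csv", index_col="date", parse_dates=True)

# Line up the two logs on date
merged = daily.join(symptoms["score"], how="inner")

# Correlate each species' daily count with the symptom score
print(merged.corr(numeric_only=True)["score"].sort_values(ascending=False))
```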

Armed with this information, I did some additional research on the predominant types of Ash trees where I live (Modesto and Green Ash), and the geographic range for those species.  If I’m planning to travel to Ash-free zones, I can try to schedule those trips for the spring.  And otherwise, I can keep an eye on the pollen counts and try to stay inside with the windows closed when Ash counts are particularly high.

It’s not perfect data, but as with most tough decisions, we have to do the best we can with limited data and our powers of educated inference.  Hopefully less sneezing awaits!

 


How to ask demographic questions

Asking demographic questions (e.g., age, gender, marital status) should be the easiest of survey questions to ask, right?  What if I told you asking someone how old they are will yield different results than asking in what year they were born, or that asking a sensitive question (e.g., How much money did you make last year?) in the wrong way or at the wrong time may cause a respondent to abandon the survey?

Considering today’s identity security concerns, social desirability bias, and dipping response rates, asking demographic questions is full of potential pitfalls. Although gathering them might be tricky, demographic data are often critical to revealing key insights.  In this post, we present three tips on how best to ask demographic questions on a survey.

  • When to ask demographic questions: Our general rule of thumb is to ask demographic questions at the end of a survey, when survey fatigue is less likely to influence answers. Respondents are more likely to answer demographic questions honestly, and will have a better survey-taking experience, if they have already viewed the other questions in the survey. However, we sometimes find it is best to ask a few easy demographic questions at the beginning, so survey-takers start to feel comfortable and see that their feedback will be useful. For example, when researching a specific place (like a city or county), I like asking participants how long they have lived in that place as the first question on the survey.
  • How you ask the question will determine the type of data you will collect: It is important to consider how demographic data will be used in analysis before finalizing a survey instrument; not doing so might make it difficult to answer the study’s research questions. One consideration is whether to collect specific numbers (e.g., Please enter the number of years you have lived in your current home) or to provide a range of values and ask participants to indicate which range best describes them (e.g., Have you lived in your current home for less than 1 year, 1-2 years, 3-5 years, etc.?).  This decision depends on several factors, the primary factor being how the data will be used in analysis.  Collecting specific numbers (i.e., continuous data) typically allows for more advanced analyses than responses within a range of numbers (i.e., categorical data), but these advanced analyses may not be needed, or even suitable, to answer your research questions.  The sensitivity level inherent in the question is also a factor; for example, survey-takers are more likely to indicate that they are within a range of income levels than they are to provide their exact income. In our experience, the benefit of collecting continuous income data is not worth the cost of causing participants to skip the income question.
  • Information needed to ensure the results represent the population: It is common for certain groups of people to be more likely to respond to a survey than other groups. To correct for this, we often calculate and apply weights so that the data more closely resemble the actual study population.  If you plan to collect demographic data in order to weight your results, then you will want to match your survey question categories with the categories in the data you will use for weighting.  For example, if you would like to weight using data from the U.S. Census, then you will want to use the same ranges that are available at the geographic extent of your analysis.  Keep in mind that some demographic variables are broken into finer categories for larger geographic areas (e.g., states) and lumped into coarser categories for smaller geographies (e.g., census tracts). All of these factors must be considered before collecting data from the population. (A simplified sketch of this weighting calculation appears after this list.)
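To make that last point concrete, here is a minimal sketch of post-stratification weighting in Python with pandas. The file name, the age ranges, and the population shares are all made up for illustration; the one real requirement, as noted above, is that the survey’s categories match the ranges available in your weighting source (e.g., the Census).

```python
import pandas as pd

# Hypothetical survey data with an "age_category" column whose labels
# were chosen up front to match the Census ranges for the study area
survey = pd.read_csv("survey_responses.csv")

# Hypothetical population proportions for those same ranges
population_share = {
    "18-34": 0.30,
    "35-54": 0.35,
    "55+":   0.35,
}

# Share of respondents who fall in each category
sample_share = survey["age_category"].value_counts(normalize=True)

# Weight = population share / sample share, so over-represented
# groups count less and under-represented groups count more
# (assumes every category has at least one respondent)
weights = {cat: population_share[cat] / sample_share[cat]
           for cat in population_share}
survey["weight"] = survey["age_category"].map(weights)

# Weighted estimates then use this column, e.g. a weighted mean:
# (survey["satisfaction"] * survey["weight"]).sum() / survey["weight"].sum()
```

In practice we would also check for empty or tiny categories and cap extreme weights, but the core idea is just this ratio.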

Haphazard demographic questions can decrease, rather than increase, the value of a survey. At Corona, we always carefully consider research goals, potential analyses, and the population when we design survey questions. Designing a survey might not be as easy as it appears, and we have many more tips and insights than we could share here.  If you would like guidance on how best to ask demographic questions on your survey, contact us.  We promise that asking us for guidance is easier than asking demographic questions.


Societal values in music

We stumbled across an interesting data visualization today, which shows how commonly different words or phrases have appeared in Billboard’s Top 100 songs over the past 50 years or so.

As we scroll through the tables, the most obvious pattern is the increase in profanity (described as “foul words”) since 1990.  Prior to that era, it was almost unheard of to include profanity in a popular song, but … times have changed.

However, we find some of the more subtle patterns to be of greater interest.  The word “love” has become notably less common since the turn of the century, along with the word “home”, and in their place we now hear more references to “sex” and “money”.  Is this a reflection of a less grounded society?  Or is it a contributing factor?