I serve on an advisory committee for a college program that trains organizational leaders, and at our last meeting there was a discussion about the curriculum for a research class. The committee chair looked at me and said, “Hey, you work in this field. What’s the most valuable research topic we can teach leaders?”

I’ve waited for that question my whole life, or at least for the twenty years I’ve worked as a research consultant. My answer was swift and enthusiastic: “The ability to tell good research from bad research.”

Not everyone is a researcher

It’s great to make data-driven decisions, and in the modern world there’s more data than ever to help us. But if it’s going to add value, that data obviously needs to be correct, and unfortunately, bad research is more common than you might think. The proliferation of do-it-yourself data collection tools such as SurveyMonkey, along with the increased availability of pre-digested data online, has made everyone a researcher, whether they have the skills or not. Marketing firms, technology firms, grant-writing firms, and even money-conscious executive directors now do research on top of their core jobs. Unfortunately for everyone, that often results in research that is unreliable and inaccurate. If you base strategic or tactical decisions on bad data, you’ll probably make some bad decisions, even if your intent is good.

Good vs. bad research

There’s a simple rule I often preach to clients: It’s better to have good research than no research, but it’s better to have no research than bad research.

So how do you tell the good research from the bad? When asked to write this article, I struggled a bit, because I could write a book on how to critique a research report and it still wouldn’t cover everything a good research reader needs to know. So what value can I provide in 1,000 words or less?

The more I thought about it, though, the more I realized there’s one clue that almost always exists in bad research, and it’s easy to recognize if you know to look for it. When you read a research report, just ask yourself the following question:

“Did the researcher study the right population, or just a convenient one?”

This is far and away the most common problem I see in weak research. A wannabe researcher may say, “We need to do a survey to see what people think about this issue. I know: let’s send a SurveyMonkey survey to the people in my Outlook contacts!” Or they’ll say, “Let’s do a focus group. I’ll get some of my friends together and we’ll see what they think.”

Well, the good news is that this is a cheap way to do things. The bad news is that you get what you pay for: meaningless results that may actually take you in the wrong direction. The key is to recognize who the research is supposed to represent and who it actually represents. If those two groups differ, you may have a problem.
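
To make that concrete, here is a minimal simulation, a sketch in Python with entirely made-up numbers, of what happens when the people you can conveniently reach differ from the population you want to represent:

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical population of 10,000 people. Suppose 25% of the broader
# public supports an issue, but support runs 60% among the kind of people
# who end up in a professional's contact list (young, salaried,
# college-educated).
population = (
    [{"in_contacts": True, "supports": random.random() < 0.60} for _ in range(1500)]
    + [{"in_contacts": False, "supports": random.random() < 0.25} for _ in range(8500)]
)

def pct_support(people):
    """Percentage of a group that supports the issue."""
    return 100 * sum(p["supports"] for p in people) / len(people)

# The number we actually want: support in the whole population.
print(f"Whole population:     {pct_support(population):.1f}%")

# Convenience sample: survey 500 people from the contact list.
contacts = [p for p in population if p["in_contacts"]]
print(f"Contact-list survey:  {pct_support(random.sample(contacts, 500)):.1f}%")

# Random sample: survey 500 people drawn from the whole population.
print(f"Random-sample survey: {pct_support(random.sample(population, 500)):.1f}%")
```

In this toy example, the contact-list survey roughly doubles the apparent support for the issue, not because anyone did anything dishonest, but simply because of who was reachable.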

As an example, I’ll walk through a couple of research-gone-wrong cases from the past few years.

Case in point #1

A local grant-writing firm was hired a few years back to conduct a survey of nonprofits for a governmental agency. Instead of drawing a random sample of area nonprofits (the proper approach), it sent a do-it-yourself survey to the nonprofits on its marketing list. So what’s wrong with this? Plenty. First, that list was undoubtedly skewed toward larger and older organizations, because that’s who usually hires consultants. Second, is the grant writer’s marketing list skewed toward a particular type of nonprofit it typically works with, such as human services, arts, or animal welfare? Is it skewed toward foundations, agencies, or organizations that aggressively pursue grant funding?

A savvy research reader will ask, “Would we have gotten different results if we had hired a different consultant and surveyed that consultant’s marketing list instead?” Probably. And if that’s true, what did the grant writer really measure? Why not just draw a random sample of nonprofits and do it right?
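
Drawing that random sample is not hard. Here is a sketch in Python, assuming you can obtain a reasonably complete list of area nonprofits (public sources such as the IRS exempt-organization files are one common starting point); the file name and the “name” column are hypothetical:

```python
import csv
import random

random.seed(7)  # fixed seed so the draw is reproducible

# Hypothetical file: one row per nonprofit in the area, drawn from a
# full registry rather than anyone's marketing list.
with open("area_nonprofits.csv", newline="") as f:
    nonprofits = list(csv.DictReader(f))

# A simple random sample gives every organization (large or small,
# old or new, grant-seeking or not) the same chance of being surveyed.
sample = random.sample(nonprofits, k=min(400, len(nonprofits)))

for org in sample:
    print(org["name"])  # assumes the file has a "name" column
```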

Case in point #2

In another example, with a happier ending, an agency wished to do a survey about health issues. It had a marketing firm on board, and the marketing firm had dollar signs in its eyes. The firm offered to conduct the survey, despite having no expertise in the field, and made plans to field a do-it-yourself survey by sending links to the people in its Outlook contacts. So what was wrong with that? Pretty much everything. Who’s going to be in the Outlook contacts of a marketing firm? I can pretty much guarantee it’s going to be working-age people, likely skewing young. It’s going to be people who work in professional services like…you know…marketing. And it’s probably going to consist mostly of salaried people with college degrees. How many oilfield workers are going to be in that contact list? Fast-food workers? Retirees?

It would have been a disaster, but fortunately the client in this case was savvy and sought outside advice. The do-it-yourself survey was canceled, and the money was spent in better ways.

No research is better than bad research

Obviously, there are many different types of research and many other tips I could offer for critiquing it. However, asking yourself a couple of simple questions can go a long way toward telling you how trustworthy a research report is: Who are we trying to study, and who exactly did we study? There are always compromises to be made, but they should be kept to a minimum, and amateur researchers often miss the boat on this simple but powerful truth.

So the next time you need to make a data-driven decision, think about this as a clue to the quality of your data. Because after all, it’s better to have good research than no research, but it’s better to have no research than bad research.

This post was originally featured on Causeplanet.org.