Is your Neighbor an Engineer?

While Kevin has an engineering degree, I do not; my degree is in social sciences. After reading Kevin’s blogs about the income patterns of folks with engineering degrees, I was inspired to take a fresh look at degrees from a spatial perspective. I wondered where engineers are most likely to live, where social scientists are likely to live, and whether there is a relationship between the two. What better place to explore than our own backyard?

Since Kevin and I both like maps, I pulled some data from the American Community Survey into our mapping software to take a look. The universe of this data is all adults, 25 years or older, who have obtained a bachelor’s degree or higher.

Interestingly, in terms of raw numbers, the census tracts with the greatest number of engineering degrees are on the south to northwest outskirts of the Denver area, especially around the Boulder area (see map 1).  Maybe people with engineering degrees like to live near the foothills? Out of adults with a bachelor’s degree or higher in the Denver area, about eight percent have an engineering degree.

Adults with Engineering Degrees

However, some of these census tracts are rather large, so I looked at the density of people with engineering degrees by tract (see map 2). When we look at the number of engineering degrees per square mile, we start seeing dense pockets in the heart of Denver and Boulder, but still some good representation in the Southern suburbs.  In case you are curious, there is an average of 106 people with engineering degrees per square mile in Denver and its immediate suburbs (i.e., area within the blue box).

Density of Engineering Degrees

As I previously mentioned, I have a degree in social sciences, and I wanted to know whether people with my degree are likely to live near people with engineering degrees. First, I mapped the number of social science degrees (see map 3). Around Denver, about nine percent of people with a bachelor’s degree have one in social sciences. By visually comparing maps 1 and 3, I didn’t see any strong similarities in the number of degrees by census tract. Social science degrees appear to be most numerous on the southeast and east sides of Denver, with some in Boulder too.

Adults with Social Science Degrees

What about the density of social science degrees? There is an average of 139 people with social science degrees per square mile in the Denver area (see map 4). We see some dense areas near downtown Denver, and I can visually start to pick out similarities between the densities on maps 2 and 4. Remarkably, the census tract with the greatest density of social science degrees is just east of the Capitol building. Considering this census tract is just a few blocks from our office, we seem to have a nice supply of potential labor nearby.

Density of Social Science Degrees

So what does this tell us?  We know that, on average, there is a slightly greater proportion of people with social science degrees than engineering degrees (i.e., 9% vs. 8%), and there is a greater average density of social scientists than engineers.  I guess engineers like room to spread their elbows and social scientists like to live near other people.

Are engineers likely to be neighbors with social scientists?   When analyzed by raw number, there is a positive correlation between the two degrees. For about every one step up in the number of social science degrees by census tract, there is about a half-step up in the number of engineering degrees.

Does density play a role?  It appears so.  When looking at the relationship between degrees based on density (i.e., number of degrees per square mile), we see that the correlation is stronger than the correlation based on raw numbers of degrees.  This means social scientists and engineers are more likely to live in the same census tract in dense urban areas than in rural areas.

Now that I’ve answered this question, it’s on to my next project, where I aim to prove that my proximity to a doughnut shop has a strong, positive correlation with my personal happiness.

The Wages of Engineering Grads

In two previous blogs (Once an Engineer, Always an Engineer? & The Career Evolution of Engineers), we examined the long-term occupational patterns of people with engineering degrees, looking at a sample of 1,250 Colorado residents who hold bachelor’s degrees in engineering.

We saw that many people with engineering degrees do not actually become working engineers.  In fact, only about half of engineering degree holders are working in technical fields.  So the question arises of whether they sacrifice income to do so, or whether they thrive in other occupations.

We conducted an analysis of pay by occupation and degree type (engineer/non-engineer) to explore this issue.  In order to develop as close a comparison as possible, we considered only full-time workers (40+ hours per week) in Colorado who reported making more than minimum wage and who hold a bachelor’s degree or higher.  This eliminates workers in family businesses, entrepreneurial startups, retired people, and workers who choose to work less than full-time.  By limiting the analysis to people who hold college degrees,  we also provide a comparison of the value of an engineering degree against other degrees in fields other than engineering.

So we can now address the question of “Is an engineering degree portable if a graduate decides not to pursue engineering as an occupation?”

The following exhibit shows the mean wage of full-time degreed workers in different major occupational categories.  Partway down the table, we see that full-time engineering grads working in engineering occupations have a mean income of slightly less than $106,000 per year.  We can draw several other conclusions by comparing this figure to other occupations.

While the available data are very limited in numbers in many cases, we can still see patterns.

  • First, engineering grads make similar or even higher incomes in several other fields. Engineers who have gone into business, computers, construction, legal fields, management, medical fields, personal services, and sales have managed to secure incomes similar to what they would have earned in engineering.  So in some cases there’s no financial sacrifice in leaving engineering for another career.
  • Second, when engineering grads go into lower-paying fields, they tend to out-earn their coworkers who hold other types of degrees. This is particularly intriguing, because one would theorize that engineers are not receiving specialized training for those fields, while at least some other degree holders are.
  • Third, there are some fields where engineering degrees are not portable, but for the most part these are fields where college degrees do not add as much value. Cleaning, protection (security), and food service are the three fields where engineering graduates actually earned less than their other degreed peers.
  • Fourth, we see a large body of people who list their professions as engineers, and yet do not hold engineering degrees (or at least not bachelor’s degrees). These people have notably lower wages than those who hold engineering degrees.  Given the specificity of the field, it’s possible that this is an indicator of other occupations co-opting the term “engineer” when they aren’t engineers in a technical sense.

So what conclusions can we draw from this?  For people like myself who have left engineering careers, and for others who are considering leaving the field, the results are heartening.  We can conclude that engineering degrees are highly portable to other fields.  In many fields, an engineering grad can completely replace his or her engineering income, leveraging those skills in new arenas.  And even if an engineering grad decides to pursue a field that pays less than engineering does, the degree itself is still a good thing, rewarding them with higher earnings compared to their peers in the new profession.

Overall, it appears that you can’t really go wrong with an engineering degree, so I’ll continue to display my diploma proudly.

The career evolution of engineers

In a previous blog, I examined the long-term occupational patterns of people with engineering degrees.  Looking at a sample of 1,250 Colorado residents who hold bachelor’s degrees in engineering, I examined their current occupations.

Now, one might expect that career choices might grow more diverse with age.  My theory before examining the data was that younger engineering grads would be more likely to work in classic engineering jobs, while older grads are more likely to find other opportunities as their interests evolve and they become exposed to other opportunities.

Annnnd, my theory was wrong.  When we examine the proportion of jobs held by engineering graduates of different ages, we don’t see a strong pattern.  Regardless of age, roughly half of engineering grads are working directly in technical fields.

Technical Occupation of CO Engineers by Age
Note: Data includes only employed persons.

While employment in technical occupations doesn’t vary much by age, there are some minor differences, as seen above.  Younger grads are more likely to be classic engineers and haven’t worked up to management levels yet.  In fact, their profile is very similar to that of the 50-to-64 group if we assume that the rise to management occurs over time.

Those in the 65+ age category are similar to the next generation after them, but without a significant presence in computer fields.

Perhaps the most interesting group is the 30-44 age group.  They are more likely than any other age group to be in computer occupations, and less likely to be classic engineers.  Is this a function of their generation in particular, or will the younger engineers migrate toward computer occupations as they age?  Only time will tell.

Most Common Occupational Categories of CO Engineers by Age
Note: Figures include both working and non-working individuals.  Computer and manager occupations in this table do not differentiate between technical and non-technical occupations.

The above table shows the rise to management among some engineering grads, and also movement into sales for those in the middle age categories, as well as the bulge in computer workers in the 30 to 44 group.  Among the Under 30 group, we see a significant portion not in the work force, which is primarily students going straight to graduate school, as well as teachers, mostly at the college level.

Once an engineer, always an engineer?

The two founding principals at Corona Insights began our careers as engineers.  We both worked in the aerospace industry for about five years before we met on a Top Secret military project.  We fell in love, got married, and together we left engineering as a profession.  We went in different professional directions for several years before once again becoming coworkers at Corona Insights.

I’ve always felt a little guilty about leaving engineering as a profession.  I still use the greatest skills I gained in engineering school, those of critical thinking, problem solving, and speaking the language of math and data.  But I also learned a lot of interesting stuff that I don’t use any more.  Getting my aerospace engineering degree was the biggest intellectual challenge I’ve ever had, and most of it is just a faint memory now.  I get surprisingly few opportunities to use that senior aerothermochemistry class.

By leaving the field, were Karla Raines and I an oddity?  I’ve always assumed so.  Engineering is a good-paying field that has a pretty high barrier to entry and generally stable employment, so why would somebody leave?

Well, I recently had a chance to test that theory.  I mined some federal census data to see what happens to people with engineering degrees once they get that diploma.  I identified 1,250 Colorado residents who hold bachelor’s degrees in engineering and examined their current occupations.

What I found surprised me a lot.  While occupations can be defined many ways, I discovered that only 30 percent of employed people with engineering degrees describe their occupation as “engineering”.

Now, that doesn’t necessarily mean that 70 percent have left the field.  Some may just describe their jobs differently.  For example, another 14 percent described an occupation that was computer-related, with 11 percent in fields that are potentially related to engineering (programming, operations research, etc.), so they’re likely still in a technical field that could be related to engineering.

On top of that, another 1 percent describe themselves as scientists of some sort, so they could still be in fields directly arising from their degree.

And we don’t want to discount people who have moved into management.  Approximately 21 percent of working people with engineering degrees describe themselves as managers.  Looking more deeply, I can separate more technical management fields such as engineering managers, operations managers, and industrial managers from other fields like public relations managers, human resources, and such to estimate that 9 percent are still in technical jobs.  (I threw CEOs into that mix as well for those engineers who rise to the top or are entrepreneurs.)

So if we add all of those up, we have 30 percent of engineering grads working as engineers, 11 percent working in technical computer fields, 9 percent working as technical managers, and 1 percent working as scientists. That means that 51 percent of engineering grads are working in technical fields.

So what are the rest of them doing?  I provide a table below that includes the major occupational fields of engineering graduates of all ages.  In my next blog post, I’ll break it down by age.

Most Common Major Occupational Categories of Colorado Residents With Engineering Degrees
Note: Figures include both working and non-working individuals.  Computer and manager occupations in this table do not differentiate between technical and non-technical occupations.

While engineering graduates wander far and wide in their career choices, we see that many of them maintain their analytical interests.

To provide more context, I also examined the most common detailed occupations of engineering grads in Colorado.  The top 20 detailed occupations are shown in the following table.

Most Common Occupations of Colorado Residents With Engineering Degrees
* Primarily older engineering grads who are retired or younger grads who go straight into graduate school.

Again, engineers wander far and wide.  While many maintain their engineering focus, we see general business managers, art designers, marketers, and even lawyers in the mix.  In a future post, we’ll see how that translates to income.

Which Corona blog posts did you read this year? We have a guess.

It should come as little surprise that anyone who manages a blog keeps track of their website stats. It’s probably even less of a surprise that a research and strategy firm does so.

So, what was our most popular content from our blog in 2014?

  1. Putting the Pieces Together: Combining Qualitative and Quantitative Research Methods.
  2. How to ask demographic questions
  3. The cautionary tale of 5 scary strategic planning mistakes; Part I: Don’t self-sabotage (be sure to check out all five parts)
  4. Can you spot statistically significant differences in your data?
  5. Millionaires at McDonalds

Did you have any other favorites from this year?

Thank you for taking the time to read this year.

~ Your Blogging Coronerds

The Corona-Nerd Gift Guide

Time is ticking on holiday gift buying.  Don’t panic, though: Corona has you covered, at least for the nerds in your life.

Here are 10 gift ideas for your nerds, or at least those with an offbeat sense of humor.

1.  The complete National Geographic Atlas set.

Who doesn’t like a high-quality map?  Daydream of far-off places for hours and stay up on current geopolitical boundaries.

2. Data visualization book

Maybe you want to give something even more visually striking.  David McCandless’s stunning infographics help you visualize data and connect the dots.

3.  How to program data visualization

Or, perhaps, you want to program your own.  Learn how from a reporter and visual journalist.

4. ACME Klein Bottle

For the mathematician.  Don’t know what a Klein bottle is? 

5.  A normal (distribution) pillow

For the statistician.    Other distributions also available (perhaps for the not-so-normal?).

6. “Correlation does not equal causation” t-shirt

Everyone who works in research should have one.


7. Tetris lamp

Instead of those 8-bit gifts, get the gift that keeps on entertaining.

8. Gallium spoon

Gallium “melts” in hot liquid. Perfect for scaring your coworkers during their morning tea or coffee.

9. Delayed Gratification

Give the gift that keeps on giving, albeit slowly.  A leading publication of “slow journalism,” they report with the power of hindsight.

10. The official Coronerd

Beth Mulligan, Principal, knitted our unofficial mascot last year for all employees. Unfortunately, not for sale at this time.



To Force or not to Force (an answer): It’s a complicated question

Survey design can be a complex and nuanced process.  We have written a multitude of posts on the subject, including how to ask the right people to participate and how to ask the right questions, but one area we don’t talk about a lot is how the answer options you provide in a survey can influence your results.  This is less of an issue in a verbal survey (such as a telephone survey), since interviewers are able to accept answers that aren’t directly given, but it can be a critical issue in online and mail surveys, since respondents can read all of the answers available to them before deciding how to respond.  Here are a few of the decisions you may need to consider when designing questions for these types of surveys:

None of the above and Other

In cases where you provide respondents with a list of options to choose from, it is important that every respondent is able to answer the question. For example, consider the following hypothetical question for a survey of college students:

  • Which of the following best describes your year in school?
    • Freshman
    • Sophomore
    • Junior
    • Senior

In many cases, that question may be perfectly reasonable.  However, what if a student is in their fifth year?  Or what if the school has non-traditional students?  In these cases, respondents will likely either select a random answer or simply abandon your survey.  Neither of these possibilities is ideal, so it may be useful to include a “none of the above” option or an “other” option so that everyone can feel comfortable selecting an answer.

Our recommendation: Be comprehensive in your answer options, and give “other” and “none of the above” choices any time you aren’t sure you have all of the possibilities covered.

Don’t know

Another way in which this issue manifests itself is with regard to providing a “don’t know” option.  For example, consider the following hypothetical question:

  • How would you rate your opinion of Organization X?
    • Very positive
    • Somewhat positive
    • Neutral
    • Somewhat negative
    • Very negative

By not including a “don’t know” option, you are in effect forcing people to make a decision about the organization.  However, if someone in fact doesn’t know enough about the organization, they may potentially choose an answer that they don’t truly believe or abandon their response altogether.

On the other hand, including a “don’t know” option in your survey may not always be ideal either.  While a portion of your respondents may legitimately need to choose that answer, a number of others may choose that answer simply because it’s an easy option that doesn’t require a lot of thought.  If you are confident that all respondents know the organization well, including a “don’t know” option may actually reduce the quality of your data due to respondents taking the easy way out.  (As an aside, this is a similar issue as including a “neutral” option, but we’ll save that for another blog post!)

Our recommendation: Leave “don’t know” off if you believe that respondents should be able to form an opinion about the question, but include the option if there’s a good chance that many respondents may truly not have an answer.

Prefer not to answer

Similar to the above, when asking questions about sensitive topics, it can be beneficial to allow respondents to choose not to answer the question.  Household income, for example, is often valuable as a segmentation criterion for understanding consumers, but many survey respondents are reluctant to share such information.  Including a “prefer not to answer” option may make respondents more comfortable with participating in your survey.  However, even more so than with the “don’t know” option, suggesting that it’s OK to not answer the question will undoubtedly increase the number of respondents who do so, dramatically increasing the amount of missing data for the question.

Our recommendation: Put sensitive questions toward the end of the survey so that respondents are already comfortable with answering, but don’t include a “prefer not to answer” option unless the question is particularly objectionable for respondents.  Instead, consider allowing respondents to simply skip the question as described below.

Not forcing responses

That brings us to our final topic which is, in a sense, an answer option that you can’t even see: not requiring responses to survey questions in an online survey.   Even if you’ve tried to be comprehensive in your answers and provide alternate answers such as those described above, there is always a possibility that a respondent may simply be unwilling to answer a question.   If you require them to do so, they will likely either choose a random response or just abandon your survey entirely.  If having missing data will cause problems with your survey (for example, if meeting some criteria is required to direct respondents to appropriate questions) or in your analysis (for example, in a segmentation group you will use in your analysis), it may be necessary to require a response, but be sure to consider the implications of doing so.

Our recommendation: Use forced responses sparingly.  It’s usually better to allow someone to skip a question than it is to have data for a question that isn’t accurate.

The Power of Ranking

One of the fun tools of our trade is index development.  It’s a way to rank order things on a single dimension that takes into account a number of relevant variables.  Let’s say you want to rank states with respect to their animal welfare conditions, or rank job candidates with regard to their experience and skills, or rank communities with respect to their cost of living.  In each of these cases, you would want to build an index (and indeed, we have, for several of those questions).

Index-based rankings are all the rage.  From the U.S. News & World Report ranking of Best Colleges to the Milliken Institute’s Best Cities for Successful Aging, one can find rankings on almost any topic of interest these days.  But these rankings aren’t all fun and games (as a recent article in The Economist points out), so let’s take a look at the stakeholders in a ranking and the impacts that rankings have.

  1. The Audience/User. Rankings are a perfect input for busy decision makers.  They help decision makers maximize their choices with very little effort.  As such, they influence behavior, driving decisions about where to apply to college, whom to hire, where to go on vacation, where to move in retirement, and so on.  But if the rankings are based on different variables than are important to the users, users can be misled.
  2. The “Ranked”. For the ranked, impacts reflect the collective decisions of the users.  Rankings impact colleges’ applicant pools, cities’ tourism revenues, and local economies.  And on the flip side, rankings influence the behavior of those being ranked who will work to improve their standing on the variables included in the index.  As the old adage goes, “what gets measured gets done.”
  3. The “Ranker”. The developer of the index holds a certain amount of power and responsibility.  There are both mathematical and conceptual competencies required (in other words, it’s a bit of a science and an art).  The developer has to decide which variables to include and how to weight them, and those decisions are often based on practical concerns as much or more than on relevance to the goal of the measurement.  (There is usually a strong need to use existing data sources and data that is available for all of the entities being ranked.)  Selecting certain variables and not others to include in the index can have downstream impacts on where ranked entities focus their efforts for improvement, even when those included variables were chosen for expediency rather than impact.

To illustrate, I built an index to rank “The Best Coffee Shops in My Neighborhood.”  I identified the five coffee shops I visit the most frequently in my neighborhood and compiled a data set of six variables: distance from my home, presence of “latte art,” amount of seating, comfort of seating, music selection, and lighting.

My initial data set is below.  First, take note of the weight assigned to each variable.  Music selection and seating comfort are less important to my ranking than distance from home, latte art, amount of seating, and lighting.  Those weights reflect what is most important to me, but might not be consistent with the preferences of everyone else in my neighborhood.

Index Table

Next, look at the data.  Distance from home is recorded in miles (note that smaller distances are considered “better” to me, so this will require transformation prior to ranking).  Latte art is coded as present (1) or absent (0).  This is an example of a measure that is a proxy for something else.  What is important is the quality of the drink, and the barista’s ability to make latte art is likely correlated with their training overall – since I don’t have access to information about years of experience or completion of training programs, this will stand in instead as a convenience measure.  Amount of seating is pretty straightforward.  Shop #5 is a drive-through.   Seating comfort is coded as hard chairs (1) and padded seats (2).  Music selection is coded as acceptable (1) and no music (0).  Lighting is coded as north-facing windows (1), south-facing windows (2), and east- or west-facing windows (3), again, because that is my preference.

After I transform, scale, aggregate, and rank the results, here is what I get.

Index Table 2

These results correspond approximately with how often I visit each shop, suggesting that these variables have captured something real about my preferences.
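The transform, scale, weight, aggregate, and rank steps described above can be sketched in a few lines of Python. The shop values and weights below are hypothetical stand-ins for the tables in this post (which I haven’t reproduced exactly), but they follow the same codings: distance in miles (inverted, since closer is better), latte art 0/1, seat count, comfort 1–2, music 0/1, and lighting 1–3:

```python
# Hypothetical data following the post's codings; not the actual table values.
shops = {
    "Shop 1": {"distance": 0.8, "latte_art": 1, "seats": 20, "comfort": 2, "music": 1, "lighting": 2},
    "Shop 2": {"distance": 0.3, "latte_art": 1, "seats": 30, "comfort": 2, "music": 1, "lighting": 3},
    "Shop 3": {"distance": 1.2, "latte_art": 0, "seats": 15, "comfort": 1, "music": 1, "lighting": 1},
    "Shop 4": {"distance": 0.5, "latte_art": 1, "seats": 10, "comfort": 1, "music": 0, "lighting": 2},
    "Shop 5": {"distance": 2.0, "latte_art": 0, "seats": 0,  "comfort": 1, "music": 0, "lighting": 1},
}

# Hypothetical weights: music and comfort matter less, per the post.
weights = {"distance": 0.2, "latte_art": 0.2, "seats": 0.2,
           "comfort": 0.1, "music": 0.1, "lighting": 0.2}

def scaled(var, value):
    """Min-max scale a variable to [0, 1]; invert distance (closer is better)."""
    vals = [s[var] for s in shops.values()]
    lo, hi = min(vals), max(vals)
    x = (value - lo) / (hi - lo) if hi > lo else 0.0
    return 1 - x if var == "distance" else x

# Aggregate: weighted sum of scaled variables, then rank from best to worst.
scores = {name: sum(weights[v] * scaled(v, data[v]) for v in weights)
          for name, data in shops.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

With these made-up inputs, the drive-through with no seating and no latte art lands at the bottom, which mirrors the intuition in the post; changing a weight or swapping a proxy variable reshuffles the ranking, which is exactly the “power of the ranker” discussed below.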

Now, let’s say I post these rankings to my neighborhood’s social media site and my neighbors increase their visits to Shop #2 (which ranked first).  My neighbors with back problems who prefer hard chairs may be disappointed with their choices based on my ranking.  The shop owners might get wind of this ranking and will want to know how to improve their standing.  Shops #3 and #5 might decide to teach their employees how to make latte art (without providing any additional training on espresso preparation), which would improve their rankings but would be inconsistent with my goal for that measure, which is to capture drink quality.

With any ranking, it’s important to think about what isn’t being measured (in this example, I didn’t measure whether the shop uses a local roaster, whether they also serve food, what style of music they play, what variety of drinks they offer, etc.), and what is being measured that isn’t exactly what you care about, but is easy to measure (e.g., latte art).  These choices demonstrate the power of the ranker and have implications for the user and the ranked.

Perhaps next we’ll go ahead and create an index to rank Dave’s top ski resorts simultaneously on all of his important dimensions.

What do you want to rank?

Big Insights can come in Little Numbers

On many of our research projects, the sample size (i.e., number of people who are surveyed) directly relates to research cost.  Costs typically increase as we print and mail more surveys or call more people. Normally, the increase in sample size is worth the extra cost because the results are more likely to accurately reflect the population of interest; however, is a big sample size always necessary?

Here is an example of when the sample size does not need to be very large in order to draw valuable insights.  Let’s say you are the communications manager for an organization that wants to improve public health by increasing broccoli consumption.  For the past year, you have diligently been publicizing the message that broccoli is good for your digestive system because it is high in fiber.  Lately, your director has wondered if another message may be more effective at persuading people to eat broccoli, maybe a message that touts broccoli’s ample Vitamin-C, which can help fight off the common cold.  Switching your communication campaign’s key message would be expensive, but probably worth it if the new message were dramatically more effective at changing behavior.  However, if the Vitamin-C message were only marginally more effective, then it might not be worth spending the money to switch.  Your boss tasks you with conducting research to compare the effectiveness of the fiber message to the Vitamin-C message.

If you have undertaken message research in the past, you may have heard that you need a large and randomly drawn sample in order to draw reliable insights.  For example, you might survey your population and collect 400 responses from a group who saw the original fiber message and 400 responses from those who saw the new Vitamin-C message.  While collecting up to 800 responses might be valuable for some types of analysis, it is probably unwarranted to answer the research question described above. Indeed, you might only need to collect about 130 responses (65 from each group) to answer the question “Which message is more effective?”  Why so few?

Sixty-five responses from each group should reveal a statistically significant difference if the effect size is moderate or greater.  (In social science research, we use the term effect size as a way to measure effectiveness.  For example, is a new message more or less effective than an old message?  A small effect requires careful analysis to detect, while a large effect is obvious and easy to detect.)

So what does moderate mean?  A helpful way (although not technically accurate) to understand effect size is to think of it as a lack of agreement between two groups (e.g., those who saw the fiber message and those who saw the Vitamin-C message).  With 65 responses from each group, a statistically significant result would mean there was no more than 66 percent agreement between the groups (technically, we mean less than 66 percent distribution overlap). For most communication managers, that is a substantial effect.  If the result pointed to the new Vitamin-C message being more effective, it’s probably worthwhile to spend the money to switch messaging!  If analysis did not find a statistically significant difference between the messages, then it’s not advisable to switch because the increased effectiveness (if any) of the new message would be marginal at best.
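As a back-of-the-envelope check, the roughly-65-per-group figure can be reproduced with a standard power calculation. This sketch uses the normal approximation and assumes “moderate” means a Cohen’s d of 0.5 (an assumption on my part; the post doesn’t specify its effect-size convention, and the exact t-test calculation nudges the answer up slightly):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means (normal approximation; effect_size is Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z(power)           # z-score for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

moderate = n_per_group(0.5)  # moderate effect: about 63 per group
small = n_per_group(0.2)     # small effect: hundreds per group
```

Note how quickly the required sample grows as the effect shrinks, which is why detecting a marginal difference between two messages costs so much more than detecting a substantial one.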

If cost is no factor, then a bigger sample size is usually better, but I have not yet met a client who said cost didn’t matter. Rather, our clients are typically looking for insights that are going to help them produce meaningful and substantial impacts. They look for good value and timely results.  By understanding the intricacies of selecting an appropriate sample size, we stretch our clients’ research dollars.  Give us a call if you would like to discuss how we could stretch your research dollars.

Welcome, Mollie!

We are delighted to welcome Mollie Boettcher as the newest member of the Corona Insights team!

As our newest Associate, Mollie will specialize in qualitative research practices including, but certainly not limited to, recruiting research participants, conducting focus groups and interviews, and analyzing and interpreting qualitative data for clients seeking data-driven guidance.

Mollie attended the University of Wisconsin-La Crosse, where she majored in Business Management and minored in Chemistry. She graduated with her B.S. in 2010.

When Mollie is not hard at work in her office, you can find her out hiking or snowshoeing in the Rocky Mountains. She also likes to explore Denver, including taking her dog on walks in Wash Park, and enjoying the many unique restaurants and breweries that Denver has to offer.