Research Methodology for the books The Male Factor and For Women Only in the Workplace


by Charles D. Cowan, PhD, Managing Partner, Analytic Focus LLC

When Shaunti Feldhahn contacted me to help her with her first relationship book (For Women Only: What You Need to Know About the Inner Lives of Men), we began by discussing both her goals and my standards for conducting a quality survey. We agreed on many points; we’re both researchers and wanted the facts, unvarnished and untainted.

Which is why it’s been a pleasure to continue working with Shaunti over the years as she has pursued her research and published on important issues regarding the interactions between individuals. But when Shaunti contacted me for help on the book you’re now reading, she added another qualification: the sophisticated business reader had to be confident that the research was of the highest quality and find the survey unassailable.

As with the other surveys we had worked on together, the reliability of the results was paramount: they had to be statistically defensible. A survey, like any research effort, is a series of complex, interrelated steps, and at no point in the process could biases be introduced.

That is why, for each survey, I have insisted that Shaunti start by writing down the hypotheses she would be testing. There are several reasons for this. The most important is that once we know the issues underlying the hypotheses, we can begin the arduous and time-consuming process of developing the questions that will be asked of every survey participant. Each hypothesis focuses on a single issue to be examined and gives us a way to test it; it also gives us a basis for discussing what the survey will be about and what the relevant population would be.

Once Shaunti began developing these hypotheses, we set up parallel tracks as we created the process that would eventually culminate in the survey: we had to develop the questions that would be asked and had to create the best mechanics for the survey itself.

Developing the Survey Questions

On the first track, Shaunti was responsible for starting to develop the survey questionnaire. Based on the feedback from her interviews with men in the workplace, Shaunti wrote out a question or a set of questions that would get at each hypothesis she wanted to test (each area of “surprise” for the woman reader) and then submitted them to me. My job was to review the questions, push back on wording or content, and make sure they were unbiased. Any potentially leading questions were rewritten, and the language was clarified so that anyone from an entry-level staff person to a CEO could understand each question, find it relevant, and respond.

Then Shaunti tested her questions in real-world environments, made changes, submitted the new questionnaire to me, and started the whole process over again. And again. Just this process of developing and testing the hypotheses and questions took at least six months of concentrated effort, and is a large part of why the research for this book (as well as her others) is precise and reliable.

Choosing the Survey Company and Developing the Mechanics

On a parallel track, I worked with a team of experts to set up the survey mechanics. The first and most important decision was which company would conduct the survey, and that choice was easy: Decision Analyst, which had conducted most of Shaunti’s other surveys and has that “unassailable” reputation we were looking for.

One reason Decision Analyst is so well respected is that their rigorous methods and quality control ensure reliability for online surveys. For this survey, we knew that the respondents would have to be contacted and given the survey online so they could complete the questionnaire on their own computers. That gave the respondents the privacy needed to answer sensitive questions that otherwise might have been biased by the presence (either in person or on the phone) of an interviewer. Although the standardized nature of multiple-choice questionnaires makes them cost efficient, the format can limit a respondent’s point of view and possibly force participants to opt for an answer that does not quite fit. To provide some flexibility without giving up the necessary standardization, our questionnaire consisted of a mixture of thirty-eight open- and closed-ended questions (dichotomous, multiple-response, multiple-choice, nominal, and verbatim questions).
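
For readers unfamiliar with the terminology, the sketch below (in Python) gives a purely hypothetical example of each question format named above; these are not the actual survey questions, which are not reproduced in this appendix.

```python
# Purely hypothetical examples of the five question formats named above;
# these are NOT the actual survey questions.
questions = [
    {"format": "dichotomous",        # exactly two answer options
     "text": "Are you employed full-time?", "options": ["Yes", "No"]},
    {"format": "multiple choice",    # choose exactly one option
     "text": "How large is your company?",
     "options": ["1-49", "50-499", "500-4,999", "5,000+"]},
    {"format": "multiple response",  # choose all options that apply
     "text": "Which of the following apply to your role?",
     "options": ["Manage a budget", "Manage staff", "Meet clients"]},
    {"format": "nominal",            # unordered categories
     "text": "In which region do you work?",
     "options": ["Northeast", "Central", "South", "West"]},
    {"format": "verbatim",           # open-ended, free-text answer
     "text": "Describe a recent workplace interaction.", "options": None},
]
```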

To increase response rates, lower participant drop-off, and maintain rigorous quality control of participants, most credible survey companies (including Decision Analyst) offer monetary as well as nonmonetary incentives to those who agree to take surveys. In this case, nonmonetary incentives included anonymity and confidentiality, as well as appeals to participants regarding the importance and magnitude of the research project they would be part of. Monetary incentives included entry into a $10,000 monthly cash-award sweepstakes and a small check once the survey was completed.

When I worked with Decision Analyst to design the mechanics of the survey, we first had to determine a sampling frame that covered the population and then select a sample from that frame to be questioned. For this survey, we did not draw a “simple” random sample. We set up controls on the sampling process to ensure a proportional distribution of the population across regions of the United States (northeast, central, south, and west) and across age groups. Our goal was to have completed surveys from six hundred employed men (including one hundred executives), ages twenty-five to sixty-five, with a relevant mix of demographics (white collar/blue collar, full-time/part-time, and a spectrum of occupations, company sizes, and seniority levels).

We also needed to survey one hundred white-collar women ages twenty-five to sixty-five as a control group. Ensuring this sort of proportional distribution is much more time-consuming and expensive than drawing a simple random sample, but it also delivers very reliable raw data. The sample is random, but it is not “simple”: it is complex and designed to meet a number of important goals.
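
As a rough sketch of how such quota controls operate, consider the Python fragment below. The regional shares and the accept/reject logic are illustrative assumptions on my part, not the actual quotas or software Decision Analyst used.

```python
# A minimal sketch of quota-controlled sampling. The regional shares
# below are invented for illustration; the actual quotas used by
# Decision Analyst are not published in this appendix.
REGION_SHARE = {"northeast": 0.18, "central": 0.22, "south": 0.37, "west": 0.23}
TARGET_MEN = 600  # completed surveys sought, including 100 executives

# Translate proportional shares into per-region completion quotas.
quotas = {region: round(share * TARGET_MEN) for region, share in REGION_SHARE.items()}
completed = {region: 0 for region in REGION_SHARE}

def accept(respondent):
    """Accept a screened respondent only while their region's quota is open."""
    region = respondent["region"]
    if completed[region] < quotas[region]:
        completed[region] += 1
        return True
    return False  # quota already filled: screen out politely

# Example: the first southern respondent is accepted.
print(accept({"region": "south"}))  # True
```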

Once the questionnaire was designed, it was programmed into a computer system so that it could be taken online. Before the official survey was launched, Decision Analyst conducted a thorough quality-control process, including pretesting the online version of the questionnaire with a small group of randomly selected individuals who were not part of the overall sample. Several rounds of pretesting ensured that the programming was done correctly, that all answers could be recorded and collected, that there were no typos or ambiguous instructions, and that all “skip instructions” worked properly. (For example, if the respondent was a woman, Decision Analyst had to be sure that she would skip the male version of certain questions and see only the female version.)
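
Conceptually, a skip instruction is a small routing rule. The sketch below is a minimal illustration in Python, with hypothetical question IDs; it is not Decision Analyst’s actual survey software.

```python
# A sketch of a "skip instruction" as a routing table. Question IDs
# are hypothetical; the real questionnaire's structure is not shown here.
SKIP_RULES = {
    ("Q1_GENDER", "female"): "Q5_FEMALE_VERSION",  # women skip the male version
    ("Q1_GENDER", "male"): "Q5_MALE_VERSION",
}

def next_question(current_id, answer, default_next):
    """Return the next question ID, honoring any skip rule for this answer."""
    return SKIP_RULES.get((current_id, answer), default_next)

# Pretesting would verify routes like this one: a female respondent
# is steered past the male version of the question.
assert next_question("Q1_GENDER", "female", "Q5_MALE_VERSION") == "Q5_FEMALE_VERSION"
```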

Once the survey was good to go, invitations were sent out to the sample population and the survey was actually conducted. I’ll let Felicia Rogers of Decision Analyst explain how.

Felicia Rogers of Decision Analyst, on Conducting the Survey Itself

The survey fieldwork and data processing were carried out by Decision Analyst Inc., a full-service marketing research and consulting firm.

Upon receiving the questionnaire from Shaunti, Decision Analyst programmed the survey instrument to be administered online with members of its American Consumer Opinion online panel. This is a proprietary, double-opt-in panel of households that have agreed to participate in Internet surveys exclusively for Decision Analyst. The panel currently includes more than eight million men, women, and children throughout the United States, Canada, Europe, Latin America, and Asia. Decision Analyst’s panels are recognized within the industry for the high quality of their participants and of the results they yield. For this project, approximately fifty thousand panelists were invited to participate, and several thousand were screened for qualification.

The survey itself was very comprehensive, requiring twelve to fifteen minutes for respondents to complete, on average. Members of American Consumer Opinion were invited via e-mail to complete a brief screening questionnaire. Qualified respondents (those who met the criteria for proportional distribution) were then invited to continue immediately through the full survey on Decision Analyst’s secure Web server. Data was collected over a period of two weeks during late August and early September 2008, until each demographic quota was met.

Once data collection was complete, a full set of cross tabulations was provided via Decision Analyst’s Logician Online Reporting System, and the raw data was delivered to Analytic Focus so that Shaunti could perform her comprehensive analysis.

Once the data was gathered, it was checked for accuracy and consistency, then processed using a statistical software package designed to tabulate the responses and provide reliability measures. Derivative variables were created by cross-tabulating the data; this in turn permitted us to group more than one variable into various subgroups, providing a more specific characterization of participants and their responses.
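
To illustrate what cross-tabulating derivative variables looks like in practice, here is a minimal sketch using Python’s pandas library, with invented column names and toy data standing in for the proprietary survey file.

```python
import pandas as pd

# Illustrative only: invented column names and toy data standing in
# for the real (proprietary) survey data set.
df = pd.DataFrame({
    "collar":    ["white", "white", "blue", "white", "blue", "white"],
    "seniority": ["executive", "staff", "staff", "executive", "staff", "staff"],
    "response":  ["agree", "disagree", "agree", "agree", "disagree", "agree"],
})

# Derive a subgroup variable by combining two demographics, then
# cross-tabulate responses within each subgroup as row proportions.
df["subgroup"] = df["collar"] + "_" + df["seniority"]
xtab = pd.crosstab(df["subgroup"], df["response"], normalize="index")
print(xtab)
```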

My staff and I tabulated the survey results for Shaunti, providing both comprehensive Excel worksheets and specifically requested cross-tabs as she proceeded in her analysis and uncovered facts she wanted to investigate further. We also gave Shaunti indicators for the reliability of the results.

Understanding What Is Statistically Significant

Reliability in this context does not mean that there was any question regarding the veracity of the respondents. Reliability for a survey has to do with the variation in responses that comes from using a sample. From a large population of men (several million), I can draw millions of different samples of six hundred men. Each sample is meant to be reflective of the overall population, but each varies from it in some small way. The larger the sample, the less it is likely to differ from the general population, but it will still differ in some fashion.

Statistics and probability allow me to compute how much a sample is likely to differ from the underlying population. With a sample size of six hundred, one would expect, in the worst case, a variation of plus or minus 4 percent, with 95 percent confidence. This means that on two separate questions (“I prefer chocolate to vanilla” and “I prefer iced tea to coffee”), if 52 percent prefer chocolate and only 48 percent prefer iced tea, I cannot say with 95 percent certainty that these numbers are statistically different, given my variation. A difference of 54 percent versus 46 percent, on the other hand, is one I can detect ninety-five times out of one hundred: if I drew one hundred independent samples, each with six hundred respondents, and the true values in the population were 54 percent and 46 percent, I would be able to detect that difference in ninety-five of the hundred samples. To achieve the desired reliability, you have to select a large enough sample at the outset of the survey.
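
In formula terms, this is the standard worst-case margin of error for a proportion p estimated from a sample of size n (a textbook result, not anything unique to this survey):

\[
\text{MOE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{600}} \approx 0.04,
\]

where p = 0.5 is the worst case (the proportion with the greatest variance) and z = 1.96 is the critical value for 95 percent confidence.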

One other key value to note is what happens to the reliability when comparing subgroups in the sample. If half of the respondents are in one group and half are in the other, the reliability decreases, but not by as much as you might expect: the variability for results from a group of three hundred is plus or minus 5.6 percent.
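
As a quick, illustrative check (using the usual normal approximation; this snippet is mine, not part of the original analysis), the same formula reproduces both figures:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95 percent margin of error for a proportion from n responses."""
    return z * sqrt(p * (1 - p) / n)

print(round(margin_of_error(600), 3))  # 0.04  -> plus or minus 4 percent
print(round(margin_of_error(300), 3))  # 0.057 -> about the 5.6 percent cited
```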

As we tabulated the survey results for Shaunti, and as she broke her analysis down into various subgroups (for example, the results among white-collar executives at large companies), we also gave her guidance on what constituted a real difference versus what would not be statistically different for that subsample; in other words, what would be “statistically significant,” and thus a reportable result, as opposed to a result falling within the range of sampling variability for that subsample.

In the end, most of the main results were clearly statistically significant. Shaunti set aside those that weren’t and did not pursue those areas of inquiry further.

It was a delight to work with Shaunti again. Although she is passionate about the subjects she is investigating, she has the researcher’s desire to be true to the data. She also has the humility to understand that there are technical areas outside her knowledge base where I might be able to help. I hope I have.