Improving Survey Questions by Fowler, F. J. (summary by Jiroj)

Summary, application, and questions

Improving Survey Questions
1. Questions as Measures: An Overview
What is a Good Question?
Characteristics of Questions and Answers That Affect Measurement
Question Evaluation
2. Designing Questions to Gather Factual Data
Question Objectives
Definition of Concepts and Terms
Knowing and Remembering
The Form of the Answer
Reducing the Effect of Social Desirability on Answers
3. Questions to Measure Subjective States
Describing and Evaluating People, Places, and Things
Measuring Responses to Ideas
Measuring Knowledge
Multi-Item Measures
The Relativity of Answers About Subjective States
4. Some General Rules for Designing Good Survey Instruments
What to Ask About
Wording Questions
Formatting Survey Instruments
Training Respondents
5. Pre-survey Evaluation of Questions
Focus Group Discussions
Intensive Individual Interviews
Field Pretesting
Adapting Pretest Strategies to Self-Administered Questionnaires
Tabulating Answers
6. Assessing the Validity of Survey Questions
Studying Patterns of Association
Validating Against Records
Comparing Alternative Question Forms
Consistency as a Measure of Validity
7. Question Design and Evaluation Issues in Perspective
8. Application in Finance
9. Questions
10. Reference

Improving Survey Questions

1. Questions as Measures: An Overview

1.1 What is a Good Question?
• A good question is one that produces answers that are reliable and valid measures of something we want to describe.

1.2 Characteristics of Questions and Answers That Affect Measurement
• There are five basic characteristics of questions and answers that are fundamental to a good measurement process: 1) questions need to be consistently understood, 2) questions need to be consistently administered or communicated to respondents, 3) what constitutes an adequate answer should be consistently communicated, 4) unless measuring knowledge is the goal of the question, all respondents should have access to the information needed to answer the question accurately, and 5) respondents must be willing to provide the answers called for in the questionnaire.

1.3 Question Evaluation
• There are two types of question evaluation: those aimed at evaluating how well questions we propose to ask meet the five standards, which can be thought of as process standards, and those aimed at assessing the validity of answers that result.
• There is a set of evaluative strategies to find out how well answers to questions produce valid measurements: 1) analysis of resulting data to evaluate the strength of predictable relationships among answers and with other characteristics of respondents, 2) comparisons of data from alternatively worded questions asked of comparable samples, 3) comparison of answers against records, and 4) measuring the consistency of answers of the same respondents at two points in time.
• If two forms of the same question are asked of the same sample, or of comparable samples, the distributions of answers should be the same if measurement is error free.
• Re-interviewing a sample of respondents, asking the same questions twice and comparing the results, can provide useful information about the validity and reliability of answers.

2. Designing Questions to Gather Factual Data
• Five challenges to writing a good question: 1) defining objectives and specifying the kind of answers needed to meet the objectives, 2) ensuring that all respondents have a shared, common understanding of the meaning of the questions, 3) ensuring that people are asked questions to which they know the answers, 4) asking questions that respondents are able to answer in the terms required by the questions, and 5) asking questions that respondents are willing to answer accurately.

2.1 Question Objectives
• The objective defines the kind of information that is needed. Designing the particular questions to achieve the objective is another step.
• The soundest advice any person beginning to design a survey instrument could receive is to produce a good, detailed list of question objectives and an analysis plan that outlines how the data will be used.
• By relating proposed questions to an outline of objectives, weaknesses in the specified objectives can be identified. By stating the objectives in advance, researchers are reminded that designing questions that people are able and willing to answer is a separate task, distinct from defining research objectives.

2.2 Definition of Concepts and Terms
• One basic part of having people accurately report factual or objective information is ensuring that all respondents have the same understanding of what is to be reported, so that the researcher is sure that the same definitions have been used across all respondents.
• If definitions are very complex, it may not make sense to try to communicate the full definition to all respondents; instead, respondents can be asked a series of simpler questions that together apply the definition.
• Proper question design means making certain that the researcher and all respondents are using the same definitions when people are classified or when events are counted.

2.3 Knowing and Remembering
• There are three possible sources of problems concerning whether or not respondents have the information needed to answer the questions: 1) the respondent may not have the information needed to answer the question, 2) the respondent may once have known the information but have difficulty recalling it, and 3) for questions that require reporting events that occurred in a specific time period, respondents may recall that the events occurred but have difficulty accurately placing them in the time frame called for in the question.
• Three principles about recalling are 1) the more recent the event, the more likely it is to be recalled, 2) the greater the impact or current salience of the event, the more likely it is to be recalled, and 3) the more consistent an event was with the way the respondent thinks about things, the more likely it is to be recalled.

2.4 The Form of the Answer
• Most questions specify a form the answers are supposed to take. The form of the answer must fit the answer the respondent has to give; it is important to ask people questions they can answer in the form requested.

2.5 Reducing the Effect of Social Desirability on Answers
• Studies of response accuracy suggest a tendency for respondents to distort answers in ways that make them look better or avoid making them look bad.
• People vary in what they consider to be sensitive.
• The interview should be set up to minimize the pressures on respondents to distort answers.
• There are three general classes of steps a researcher can take to reduce response distortion: 1) assure confidentiality of responses and communicate effectively that protection is in place, 2) communicate as clearly as possible the priority of response accuracy, and 3) reduce the role of an interviewer in the data collection process.
• Some suggestions for designing good questions: 1) avoid ambiguous words; define the key terms in questions, 2) minimize the difficulty of the recall and reporting tasks given to respondents, 3) for objectives that pose special definitional or recall challenges, use multiple questions, 4) give respondents help with recall and placing events in time by encouraging the use of associations and other memory aids, 5) make sure the form of the answer to be given fits the reality to be described, and 6) design all aspects of the data collection to minimize the possibility that any respondent will feel his or her interests will be best served by giving an inaccurate answer to a question.

3. Questions to Measure Subjective States
• The largest number of survey questions asks about respondents' perceptions of or feelings about themselves or others.

3.1 Describing and Evaluating People, Places, and Things
• When designing questions, it is important that everyone be answering the same question.
• Researchers have designed numerous strategies for evoking answers from respondents. The most common task is some variation of putting the object of the answer on a continuum.
• Two key issues in the design of categories or scales are: 1) how many categories to offer and 2) whether to use scales defined by numbers or by adjectives.
• When long, complex scales are presented by telephone, they can produce biases simply because respondents cannot remember the categories well.
• The open-ended approach has several advantages. It does not limit answers to those the researcher thought of, so there is an opportunity to learn the unexpected, but the diversity of answers may make the results hard to analyze.
• When defining response alternatives on scales, it is useful to use adjectives that can be differentiated and do not mean the same thing.

3.2 Measuring Responses to Ideas
• The agree-disagree question form and its variants are widely used in survey research. They are appropriate when asking about ideas or policies, but they require considerable care to produce good measures.
• Three problems of agree-disagree question forms are 1) many questions in this form do not produce interpretable answers, 2) these questions usually sort people into only two groups, and 3) they often are cognitively complex.

3.3 Measuring Knowledge
• Knowledge is measured in four ways: 1) asking people to self-report what they know, 2) true-false questions, 3) multiple-choice questions, and 4) open-ended short-answer questions.
• The strength of open questions is that there are virtually no false positives; respondents either give the right answer or they do not. In contrast, in the multiple choice or true-false format, a random guess will pick the right answer at some rate.
• The disadvantage of the open form of the question is that it may provide a low estimate of active knowledge, because some people who could recognize the correct answer, or could retrieve it given more time, will fail to retrieve it in a survey situation.
• Three points that apply to all survey questions are 1) for the short-answer question form, it is particularly important that what constitutes an adequate answer is clearly specified by the question, 2) measures of knowledge are question-specific, just like measures of other subjective states, and 3) the value of a measure of knowledge is usually dependent on how well answers are distributed.
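The guessing problem noted above can be quantified with the standard correction-for-guessing formula. The sketch below is an illustration, not from Fowler; the function name and figures are invented, and it assumes that respondents who do not know the answer guess uniformly at random.

```python
def corrected_knowledge(p_observed, n_options):
    """Estimate the share of respondents who actually know the answer
    to a multiple-choice item, assuming non-knowers guess uniformly
    at random among n_options alternatives."""
    g = 1.0 / n_options              # chance rate for a blind guess
    return (p_observed - g) / (1.0 - g)

# If 70% answer a 4-option item correctly, the corrected estimate of
# active knowledge is only 60%:
print(round(corrected_knowledge(0.70, 4), 2))  # 0.6
```

With an open-ended short-answer format no such correction is needed, which is the sense in which open questions yield virtually no false positives.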

3.4 Multi-Item Measures
• One important way to improve the measurement of subjective states is to combine the answers from more than one question into an index.

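A common way to check whether a set of items holds together well enough to be combined into an index is an internal-consistency coefficient such as Cronbach's alpha. The sketch below is illustrative and not from the book; the ratings are invented.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item index.
    items: one list of respondent answers per question."""
    k = len(items)

    def var(xs):                     # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # per-respondent index score = sum of that respondent's answers
    totals = [sum(resp) for resp in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three 1-5 rating items answered by five hypothetical respondents:
q1 = [4, 5, 3, 4, 2]
q2 = [4, 4, 3, 5, 2]
q3 = [5, 4, 2, 4, 1]
print(round(cronbach_alpha([q1, q2, q3]), 2))  # 0.92
```

Values near or above 0.7 to 0.8 are commonly taken to mean the items can reasonably be summed into a single index.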
3.5 The Relativity of Answers About Subjective States
• It is important to differentiate between effects of question form on the distribution of answers and effects on the validity of the answers.
• By changing wording, response order, or other things about data collection it is possible to change the distribution of answers to a question in a positive or negative direction.

4. Some General Rules for Designing Good Survey Instruments

4.1 What to Ask About
• The strength of survey research is asking people about their firsthand experiences: what they have done, their current situations, their perceptions.
• Beware of asking about information that is acquired only secondhand.
• Beware of hypothetical questions.
• Beware of asking about causality.
• Beware of asking respondents about solutions to complex problems. Ask one question at a time. Avoid asking two questions at once.
• Avoid questions that impose unwarranted assumptions.
• Beware of questions that include hidden contingencies.

4.2 Wording Questions
• A survey question should be worded so that every respondent is answering the same question. To the extent possible, the words in questions should be chosen so that all respondents understand their meaning, and all respondents have the same sense of what that meaning is.
• To the extent that words or terms must be used that have meanings that are likely not to be shared, definitions should be provided to all respondents.
• The time period referred to by a question should be unambiguous. Questions about feelings or behaviors must refer to a period of time.
• If what is to be covered is too complex to be included in a single question, ask multiple questions.
• If a survey is to be interviewer administered, wording of the questions must constitute a complete and adequate script such that, when interviewers read the question as worded, respondents will be fully prepared to answer the question.
• If definitions are to be given, they should be given before the question itself is asked.
• A question should end with the question itself. If there are response alternatives, they should constitute the final part of the question.
• Clearly communicate to all respondents the kind of answer that constitutes an adequate answer to a question.
• Specify the number of responses to be given to questions for which more than one answer is possible.

4.3 Formatting Survey Instruments
• Design survey instruments to make the tasks of reading questions, following instructions, and recording answers as easy as possible for interviewers and respondents.

4.4 Training Respondents
• Measurement will be better to the extent that people answering questions are oriented to the task in a consistent way.
• Three areas in which respondents require training are 1) the goals of the task, 2) the data collection process and how to play the respondent role, and 3) the purposes or goals of any particular subpart of the survey.

5. Pre-survey Evaluation of Questions
• Before a question is asked in a full-scale survey, testing should be done to find out if people can understand the questions, if they can perform the tasks that the questions require, and if interviewers can and will read the questions as worded.
• Three main kinds of question evaluation activities are the focus: 1) focus group discussions, 2) intensive individual interviews, and 3) field pretesting.

5.1 Focus Group Discussions
• Focus groups most often have been used to help define the topic and the research questions.
• Two main aspects of survey instrument design to which focus group discussions could contribute: 1) to help examine the assumptions about the reality about which people will be asked, and 2) to evaluate assumptions about vocabulary, the way people understand terms or concepts that will be used in survey instruments.
• An interview schedule consists of a set of questions about a series of experiences, behaviors, and subjective states.
• When the goal of a focus group discussion is to aid the design and evaluation of survey questions, and if the focus group discussion is undertaken at the time that there is at least a draft of a survey instrument, the product of the focus group discussion should be a question by question review of the survey instrument that is drafted.
• The best strategy is to videotape the group rather than take notes.

5.2 Intensive Individual Interviews
• Cognitive or intensive interviews take several forms, but there are some common elements: 1) the priority is to find out how respondents understand questions and perform the response tasks, 2) respondents often are brought into a special setting in which interviews can be recorded and observed; such interviews are often referred to as laboratory interviews, 3) the people who conduct cognitive interviews usually are not regular survey interviewers; they are thoroughly familiar with the objectives of the research and of individual questions so that they can be sensitive to discrepancies between the way respondents perform their tasks and the way researchers envision the tasks being performed, and 4) the basic protocol involves reading questions to respondents, having them answer the questions, and then using some strategy for finding out what was going on in respondents' minds during the question and answer process.
• The selection of respondents parallels the selection criteria for setting up focus groups; the goal is to get respondents representative of the group in the actual survey.
• Three common procedures for trying to monitor the cognitive processes of a respondent who is answering questions: 1) think-aloud interviews, 2) asking probe or follow-up questions after each question or short series of questions, and 3) going through the questions twice, first having respondents answer them in the usual way, then returning to the questions and having a discussion with respondents about the response tasks.
• After focus groups and the intensive individual interviews, it is necessary to test survey questions under a realistic data collection process.

5.3 Field Pretesting
• Most often interviewers meet with investigators to discuss their pretest experience. Interviewers report on practical problems.
• Some limitations of field pretests as systematic evaluations of survey questions are 1) the criteria for evaluation are not well articulated, and interviewers may have different perceptions of the problems, 2) it is difficult to carry out a good interview while also observing it, and interviewers may smooth over problems even when questions are poorly designed, 3) interviewers in pretests have small samples of respondents, 4) debriefing sessions may not be a good way to get information about interviewer evaluations, and 5) some question problems may not be apparent in the course of a pretest interview.

5.4 Adapting Pretest Strategies to Self-Administered Questionnaires
• Behavior coding and systematic interviewer ratings depend on the fact that the question and answer process is carried out orally in an interviewer-administered survey.
• Field pretests without observation: one obvious way to pretest a mail survey procedure is to replicate the mail survey: questionnaires can be mailed, and respondents may be asked to answer a few debriefing questions.
• Using observation to evaluate self-administered questions: watching people fill out self-administered questionnaires can provide useful information about question problems.
• The most universal strategy for evaluating self-administered questionnaires is to have respondents complete them, then carry out a brief interview with the respondents about the survey instrument.

5.5 Tabulating Answers
• A final step that researchers can take to help evaluate questions at the pretest stage is to tabulate the distribution of answers.
• A limitation of pretests is that the samples are typically not representative.
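Tabulating pretest answers is easy to automate. A minimal sketch (the answer data are invented) that shows the percent distribution for one question, which helps flag items where nearly all respondents fall into a single category:

```python
from collections import Counter

def tabulate(answers):
    """Percent distribution of answers to one pretest question."""
    n = len(answers)
    return {cat: round(100 * c / n, 1) for cat, c in Counter(answers).items()}

pretest = ["good", "good", "fair", "excellent", "good", "poor",
           "fair", "good", "good", "good"]
print(tabulate(pretest))
# {'good': 60.0, 'fair': 20.0, 'excellent': 10.0, 'poor': 10.0}
```

A distribution this skewed toward one category would suggest the response scale discriminates poorly among respondents.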

6. Assessing the Validity of Survey Questions
• Four approaches to evaluating the validity of survey measures: 1) studying patterns of association, 2) validating against records, 3) comparing alternative question forms, and 4) consistency as a measure of validity.

6.1 Studying Patterns of Association
• Three closely related approaches to assessing validity are 1) construct validity, 2) predictive validity, and 3) discriminant validity.
• Construct validity: if several questions are measuring the same or closely related things, then they should be highly correlated with one another.
• Predictive validity: the extent to which a measure predicts the answers to some other question or a result to which it ought to be related.
• Discriminant validity: the extent to which groups of respondents who are thought to differ in what is being measured, in fact, do differ in their answers.
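All three checks ultimately rest on correlations between answers. A minimal sketch (the respondent data are invented): two questions intended to measure the same construct should correlate highly, which is the construct-validity check.

```python
def pearson_r(x, y):
    """Pearson correlation between answers to two questions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two invented 1-5 self-rated health items for six respondents;
# a high correlation supports construct validity:
health_overall = [5, 4, 4, 2, 3, 1]
health_limits = [5, 5, 4, 2, 2, 1]
print(round(pearson_r(health_overall, health_limits), 2))  # 0.93
```

For predictive validity the same calculation is applied to a measure and an outcome it ought to predict; for discriminant validity, group means are compared instead.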

6.2 Validating Against Records
• Record-check studies are the best way to learn how well people report events and the characteristics of events that are reported inaccurately. The most important limitation is that only certain kinds of errors can be detected with such designs. The kind of reporting that can be checked with such designs may not be representative of all the events in which a researcher is interested, and many of the most interesting and important things we ask people to report are virtually impossible to validate.
• In some cases it is possible to evaluate the quality of data collected by a survey by comparing survey results against independent aggregate estimates for the same population.

6.3 Comparing Alternative Question Forms
• One important way to evaluate survey questions as measures is to ask essentially the same question in two different forms, then compare the results.
• When two parallel forms of questions do not yield the same results, it clearly implies error in at least one question as a measure.
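Whether two forms yield "the same results" can be tested formally. A sketch (the counts are invented) computing the chi-square homogeneity statistic for a split-ballot comparison; with one degree of freedom, a statistic above about 3.84 indicates the two forms differ at the 5% level.

```python
def chi_square_two_forms(counts_a, counts_b):
    """Chi-square statistic testing whether two question forms, asked
    of comparable samples, yield the same distribution of answers.
    counts_a, counts_b: answer counts per category for each form."""
    total_a, total_b = sum(counts_a), sum(counts_b)
    grand = total_a + total_b
    stat = 0.0
    for a, b in zip(counts_a, counts_b):
        for obs, tot in ((a, total_a), (b, total_b)):
            expected = (a + b) * tot / grand   # expected count if forms agree
            stat += (obs - expected) ** 2 / expected
    return stat

# A yes/no item asked in two alternative wordings, 200 respondents each:
form_a = [120, 80]   # yes, no
form_b = [90, 110]
print(round(chi_square_two_forms(form_a, form_b), 2))  # 9.02
```

Here 9.02 exceeds 3.84, so the two distributions differ, implying measurement error in at least one wording.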

6.4 Consistency as a Measure of Validity
• Usually, the consistency of answers over time is considered to be reliability. Two main ways to measure consistency of survey answers are 1) the same person can be asked the same questions twice and 2) two people can be asked the same questions.
• Consistency is clearly an important way to gain information about validity. Re-interviews are a particularly straightforward, and underutilized, strategy to measure data quality.
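For a re-interview study, the simplest consistency index is the share of respondents who give the same answer in both waves. A minimal sketch with invented answers:

```python
def agreement_rate(wave1, wave2):
    """Share of respondents giving the same answer in both interviews."""
    same = sum(1 for a, b in zip(wave1, wave2) if a == b)
    return same / len(wave1)

# Eight hypothetical respondents asked the same yes/no question twice:
t1 = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]
t2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes"]
print(agreement_rate(t1, t2))  # 0.75
```

Raw agreement can overstate reliability when one answer dominates; chance-corrected measures such as Cohen's kappa address this.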

7. Question Design and Evaluation Issues in Perspective
• The key principles for questions designed to measure facts and objective events are asking questions people can answer, clearly defining all key terms and concepts, and providing a context in which people see that answering accurately serves their interests.
• Defining objectives is the primary problem for designers of measures of subjective states. Three key standards for subjective questions are 1) the terms of a question must be clear, 2) the response task must be easy for most people to perform, and 3) the response alternatives must be designed so that respondents who differ in fact in their answers will be distributed across the response alternatives.
• A scale format is generally better than the agree-disagree format.
• Three important premises for the evaluation of survey questions: 1) questions need to be consistently understood, 2) questions need to pose tasks that most people can perform, and 3) questions need to constitute an adequate protocol for a standardized interview.
• Validity is the degree of correspondence between a measure and what is measured.

8. Application in Finance
• Survey questions could be used in the field of behavioral finance.
• Survey questions could be used to measure market sentiment at each period of time.
• We could use the knowledge from surveys to make better investment decisions.

9. Questions
• What types of research in finance need surveys?
• Why are surveys needed in finance?
• In finance, who conducts surveys and who benefits most from doing them?

10. Reference:
Fowler, F. J. (1995). Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage Publications.
