Survey Design & Data Collection


Lecture Overview

  • What should you measure?
  • What makes a good measure?
  • Measurement
  • Data Collection
  • Piloting

Designing Indicators
1. Survey Design & Data Collection
2. Lecture Overview
  • What should you measure?
  • What makes a good measure?
  • Measurement
  • Data Collection
  • Piloting
3. WHAT SHOULD YOU MEASURE?
  What do we measure, and where does it fit into the whole project?
4. What Should You Measure?
  • Follow the Theory of Change
    • Characteristics: Who are the people the program works with, and what is their environment? Sub-groups, covariates, predictors of compliance
    • Channels: How does the program work, or fail to work?
    • Outcomes: What is the purpose of the program?
    • Assumptions: What should have happened in order for the program to succeed?
  • List all indicators you intend to measure
    • Use a participatory approach to develop indicators (existing instruments, experts, beneficiaries, stakeholders)
    • Assess them based on feasibility, time, cost, and importance
5. Methods of Data Collection
  • Administrative data
  • Surveys (household/individual)
  • Logs/diaries
  • Qualitative methods (e.g. focus groups, RRA)
  • Games and choice problems
  • Observation
  • Health/education tests and measures
6. INDICATORS
  What makes a good measure?
7. The Main Challenge in Measurement: Getting Accuracy and Precision
  (Diagram: target plots arranged along two axes, "More accurate" and "More precise")
8. Terms "Biased" and "Unbiased" Used to Describe Accuracy
  • "Biased": on average, we get the wrong answer
  • "Unbiased": on average, we get the right answer
9. Terms "Noisy" and "Precise" Used to Describe Precision
  • "Noisy": random error causes the answer to bounce around
  • "Precise": measures of the same thing cluster together
10. Choices in Real Measurement Are Often Harder
  • "Noisy" but "unbiased": random error makes the answer bounce around, but around the right value
  • "Precise" but "biased": measures cluster tightly together, but around the wrong value
11. The Main Challenge in Measurement: Getting Accuracy and Precision (recap of the accuracy/precision diagram)
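To make the bias/noise distinction concrete, here is a minimal simulation sketch in Python. The true value, bias sizes, and noise levels are illustrative assumptions rather than figures from the lecture; each "instrument" corresponds to one of the four target diagrams.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100  # hypothetical true quantity (e.g. monthly spending)

# Four stylized measurement instruments, matching the four target diagrams:
# bias shifts every measurement by the same amount; noise spreads the draws.
instruments = {
    "unbiased, precise": dict(bias=0, sd=1),
    "unbiased, noisy":   dict(bias=0, sd=15),
    "biased, precise":   dict(bias=20, sd=1),
    "biased, noisy":     dict(bias=20, sd=15),
}

for name, p in instruments.items():
    draws = true_value + p["bias"] + rng.normal(0, p["sd"], size=1000)
    print(f"{name:18s} mean={draws.mean():6.1f}  sd={draws.std():5.1f}")

# A biased instrument is wrong on average no matter how many measurements
# you take; a noisy but unbiased one converges to the truth as n grows.
```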
12. Accuracy
  • In theory: How well does the indicator map to the outcome? (e.g. intelligence → IQ tests)
  • In practice: Are you getting unbiased answers?
    • Social desirability bias (response bias)
    • Anchoring bias (Strack and Mussweiler, 1997): "Did Mahatma Gandhi die before or after age 9?" vs. "Did Mahatma Gandhi die before or after age 140?"
    • Framing effects: "Given that violence against women is a problem, should we impose nighttime curfews?"
13. Precision and Random Error
  • In theory: the measure is consistent and precise, but not necessarily valid
  • In practice, sources of random error include:
    • Questionnaire length and respondent fatigue
    • Poor proxies: "How much did you spend on broccoli yesterday?" (as a measure of annual broccoli spending)
    • Ambiguous wording, e.g. definitions of terms such as "household" or "income"
    • Recall period and units of the question
    • Type of answer (open vs. closed)
    • Choice of options for closed questions: Likert scales (strongly disagree, disagree, neither agree nor disagree, ...) or rankings
    • Surveyor training and quality
14. MEASUREMENT
  Challenges of measurement
15. The Basics
  • Data that should be easy? E.g. age, number of rooms in the house, number of household members
  • What is the survey question identifying? E.g. are household members people who are related to the household head? People who eat in the household? People who sleep in the household?
  • Pre-test questions in local languages
16. The Basics: Units of Observation
  Often the unit of observation is simple: sex and age are clearly attributes of individuals, and roofing material is an attribute of the dwelling. It is not always obvious, though: to collect information on credit, one could ask households about
  • all current outstanding loans;
  • all loans taken and repaid in the last year; or
  • all "borrowing events" (all the times a household tried to borrow, whether successfully or not).
  The choice is determined by the expected analytical use and the reliability of the information (see the record-layout sketch below).
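As a rough illustration of how this choice shapes the data, here is a sketch of the three credit-module layouts as plain Python records. All field names and values are hypothetical.

```python
# Hypothetical records illustrating the three credit-module designs.

# (a) One row per current outstanding loan
outstanding = [
    {"hh_id": 101, "lender": "bank", "balance": 500},
    {"hh_id": 101, "lender": "relative", "balance": 50},
]

# (b) One row per loan taken and repaid in the last year
repaid_last_year = [
    {"hh_id": 101, "lender": "moneylender", "amount": 200, "repaid_month": 3},
]

# (c) One row per borrowing event, successful or not;
# this captures credit rationing that (a) and (b) miss
borrowing_events = [
    {"hh_id": 101, "lender": "bank", "applied": True, "approved": False},
    {"hh_id": 101, "lender": "relative", "applied": True, "approved": True},
]

# Household-level analysis then aggregates over these rows,
# e.g. total outstanding debt per household:
from collections import defaultdict

debt = defaultdict(int)
for loan in outstanding:
    debt[loan["hh_id"]] += loan["balance"]
print(dict(debt))  # {101: 550}
```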
17. The Basics: Deciding Who to Ask
  • "Target respondent": should be the most informed person for each module; the respondent can vary across modules
  • For example: to measure the use of Teaching Learning Materials, should we survey the headmaster? Teachers? The SMC? Parents? Students?
  • The choice of modules determines the target respondent, and the target respondent shapes the design of the questions
18. What is Hard to Measure in a Survey?
  (1) Things people do not know very well
  (2) Things people do not want to talk about
  (3) Abstract concepts
  (4) Things that are not (always) directly observable
  (5) Things that are best directly observed
19. How much tea did you consume last month? A. <2 liters B. 2-5 liters C. 6-10 liters D. >10 liters
20. (1) Things People Do Not Know Very Well
  • What: anything that must be estimated, particularly across time; prone to recall error and poor estimation
    • Examples: distance to the health center, profit, consumption, income, plot size
  • Strategies (a consistency-check sketch follows below):
    • Consistency checks: "How much did you spend in the last week on x?" vs. "How much did you spend in the last 4 weeks on x?"
    • Multiple measurements of the same indicator: "How many minutes does it take to walk to the health center?" vs. "How many kilometers away is the health center?"
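A minimal sketch of the consistency-check strategy, assuming hypothetical variable names and an illustrative tolerance: scale the 1-week recall up to 4 weeks, compare it with the 4-week recall, and flag large gaps for surveyor follow-up.

```python
def flag_recall_inconsistency(spend_last_week, spend_last_4_weeks, tol=1.5):
    """Flag respondents whose 4-week recall is implausibly low or high
    relative to their 1-week recall scaled to 4 weeks.
    The tolerance factor of 1.5 is an illustrative assumption."""
    if spend_last_week == 0 and spend_last_4_weeks == 0:
        return False
    implied = spend_last_week * 4
    lo, hi = implied / tol, implied * tol
    return not (lo <= spend_last_4_weeks <= hi)

# A 1-week recall of 10 implies roughly 40 over 4 weeks, so a 4-week
# answer of 12 is flagged, while 45 is within tolerance.
print(flag_recall_inconsistency(10, 12))  # True  (inconsistent)
print(flag_recall_inconsistency(10, 45))  # False (consistent)
```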
21. How many cups of tea did you consume yesterday? A. 0 B. 1-3 C. 4-6 D. >6
22. What is Hard to Measure?
  (1) Things people do not know very well
  (2) Things people do not want to talk about
  (3) Abstract concepts
  (4) Things that are not (always) directly observable
  (5) Things that are best directly observed
23. How frequently do you yell at your partner? A. Daily B. Several times per week C. Once per week D. Once per month E. Never
24. (2) Things People Don't Want to Talk About
  • What: anything socially "risky" or painful
    • Examples: sexual activity, alcohol and drug use, domestic violence, conduct during wartime, mental health
  • Strategies:
    • Don't start with the hard stuff!
    • Consider asking questions in the third person
    • Always ensure the comfort and privacy of the respondent
    • Think of innovative techniques: vignettes, list randomization (see the sketch below)
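For list randomization, the standard estimator is the difference in mean item counts between the treatment group (whose list includes the sensitive item) and the control group (whose list does not). A sketch with made-up counts:

```python
import numpy as np

def list_experiment_estimate(counts_control, counts_treatment):
    """Difference-in-means estimator for a list experiment: the gap in
    mean item counts between the two groups estimates the share of
    respondents for whom the sensitive item is true."""
    c = np.asarray(counts_control, dtype=float)
    t = np.asarray(counts_treatment, dtype=float)
    est = t.mean() - c.mean()
    se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
    return est, se

# Illustrative fake data: number of list items each respondent says are true
control = [1, 2, 1, 0, 2, 1, 2, 1]    # innocuous list only
treatment = [2, 2, 1, 1, 3, 2, 2, 2]  # same list plus the sensitive item
est, se = list_experiment_estimate(control, treatment)
print(f"estimated prevalence: {est:.2f} (SE {se:.2f})")
```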
25. How frequently does your partner yell at you? A. Daily B. Several times per week C. Once per week D. Once per month E. Never
26. What is Hard to Measure?
  (1) Things people do not know very well
  (2) Things people do not want to talk about
  (3) Abstract concepts
  (4) Things that are not (always) directly observable
  (5) Things that are best directly observed
27. “I feel more empowered now than last year” A. Strongly disagree B. Disagree C. Neither agree nor disagree D. Agree E. Strongly agree
28. (3) Abstract Concepts
  • What: potentially the most challenging and interesting type of difficult-to-measure indicator
    • Examples: empowerment, bargaining power, social cohesion, risk aversion
  • Strategies: three key steps when measuring abstract concepts
    • Define what you mean by your abstract concept
    • Choose the outcome that you want to serve as the measurement of your concept
    • Design a good question to measure that outcome
  • There is often a choice between a self-reported measure and a behavioral measure; both can add value! (An index-construction sketch follows below.)
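One common way to turn several self-reported items into a single measure of an abstract concept is an average of z-scored items. This is a generic sketch rather than the lecture's prescribed method; the "empowerment" items and responses are hypothetical.

```python
import numpy as np

def z_score_index(item_matrix):
    """Average of z-scored items: combines several Likert-type questions
    into one standardized index. Rows are respondents, columns are items,
    all coded so that higher values mean more of the concept."""
    X = np.asarray(item_matrix, dtype=float)
    z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    return z.mean(axis=1)

# Hypothetical 5-point Likert responses to three "empowerment" items
responses = [[4, 5, 3],
             [2, 2, 1],
             [3, 4, 4],
             [5, 5, 5]]
print(z_score_index(responses).round(2))  # one index value per respondent
```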
29. What is Hard to Measure?
  (1) Things people do not know very well
  (2) Things people do not want to talk about
  (3) Abstract concepts
  (4) Things that are not (always) directly observable
  (5) Things that are best directly observed
30. (4) Things That Aren't Directly Observable
  • What: outcomes that you can't ask about directly or observe directly
    • Examples: corruption, fraud, discrimination
  • Strategies:
    • Audit studies, e.g. CVs and racial discrimination
    • Multiple sources of data, e.g. inputs of funds vs. outputs received by recipients, or pollution reports by different parties (see the cross-check sketch below)
    • Don't worry: there have been lots of clever people before you, so do literature reviews!
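A sketch of the multiple-sources strategy: compare administrative records of funds disbursed against what recipients report receiving, and flag large gaps for investigation. The unit names, amounts, and 10% threshold are all invented for illustration.

```python
# Cross-check administrative disbursements against recipient reports.
disbursed = {"school_A": 1000, "school_B": 1000, "school_C": 1000}
reported_received = {"school_A": 980, "school_B": 600, "school_C": 1010}

for unit, sent in disbursed.items():
    got = reported_received.get(unit, 0)
    if (sent - got) / sent > 0.10:  # illustrative discrepancy threshold
        print(f"{unit}: disbursed {sent}, reported {got} -> investigate")
```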
31. (5) Things That Are Best Directly Observed
  • What: behavioral preferences; anything that is more believable when done than when said
  • Strategies:
    • Develop detailed protocols
    • Ensure data collection of behavioral measures is done under the same circumstances for all individuals
32. DATA COLLECTION
33. Use of Data
  • Reporting: on inputs and outputs (achievement of physical and financial targets)
  • Monitoring: of processes and implementation (doing things right)
  • Evaluation: of outcomes and impact (doing the right thing)
  • Management and decision making: using relevant and timely information (reporting and monitoring for mid-term correction; evaluation for planning and scale-up)
  ALL OF THE ABOVE DEPEND ON THE AVAILABILITY OF RELIABLE, ACCURATE AND TIMELY DATA
34. Problems in Data Collection
  • Data reliability: will we get the same data if it is collected again?
  • Data validity: are we measuring what we say we are measuring?
  • Data integrity: is the data free of manipulation?
  • Data accuracy/precision: is the data measuring the indicator accurately?
  • Data timeliness: are we getting the data in time?
  • Data security/confidentiality: risk of loss of data or loss of privacy
35. Reliability of Data Collection
  • The process of collecting "good" data requires a lot of effort and thought
  • Need to make sure that the data collected is precise and accurate, to avoid false or misleading conclusions
  • The survey process: questionnaire design → survey printed on paper or loaded onto an electronic device → filled in by the enumerator interviewing the respondent → data entry → electronic dataset
  • Where can this go wrong? (A sketch of automated checks follows below.)
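One answer to "where can this go wrong?" is to automate checks on incoming data: duplicate IDs, implausible values, missing fields. A minimal sketch, with field names and valid ranges as illustrative assumptions:

```python
# Hypothetical incoming survey submissions.
submissions = [
    {"resp_id": 1, "age": 34, "rooms": 3},
    {"resp_id": 2, "age": 150, "rooms": 2},    # implausible age
    {"resp_id": 2, "age": 28, "rooms": None},  # duplicate ID, missing field
]

seen = set()
for s in submissions:
    if s["resp_id"] in seen:
        print(f"duplicate respondent ID: {s['resp_id']}")
    seen.add(s["resp_id"])
    if not 0 <= s["age"] <= 110:  # illustrative valid range
        print(f"resp {s['resp_id']}: implausible age {s['age']}")
    if any(v is None for v in s.values()):
        print(f"resp {s['resp_id']}: missing fields")
```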
36. Reliability of Survey Data
  • Start with a pilot
  • Paper vs. electronic survey
  • Surveyors and supervision
  • Following up with respondents
  • Problems with respondents
  • Neutrality
37. PILOTING
  The questionnaire is ready, so what's next?
38. Importance of Piloting
  • Finding the best way to procure the required information:
    • Choice of respondent
    • Type and wording of questions
    • Order of sections
  • Piloting and fine-tuning different response options and components
  • Understanding the time taken, respondent fatigue, and other constraints
39. Steps in Piloting
  ALWAYS allow time for piloting and for back-and-forth between the team in the field and the researchers.
  There are two phases of piloting. Phase 1: early stages of questionnaire development
  • Understand the purpose of the questionnaire
  • Test and develop new questions, adapt questions to the context, and build options and skips
  • Re-work, share, and re-test
  • Build familiarity, adapt local terms, and get a sense of the time required
40. Steps in Piloting
  Phase 2: field testing just before surveying
  • Final touches to the translation, questions, and instructions
  • Keep it as close to the final survey as possible
41. Things to Look for During the Pilot
  • Comprehension of questions
  • Ordering of questions (priming)
  • Variation in responses
  • Missing answers
  • More questions needed for clarification? Questions to cut? Consistency checks to add?
  • Is the choice of respondent appropriate?
  • Respondent fatigue or discomfort
  • Need to add or correct filters? Need to add clear surveyor instructions?
  • Is the format (phone or paper) user-friendly? Does it need to be improved?
  (A pilot-diagnostics sketch follows below.)
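Two of these checks, variation in responses and missing answers, are easy to compute from pilot data. A sketch, with hypothetical questions and responses (None means no answer):

```python
# Hypothetical pilot responses, one dict per respondent.
pilot = [
    {"q1": "yes", "q2": 3,    "q3": None},
    {"q1": "yes", "q2": 4,    "q3": None},
    {"q1": "yes", "q2": None, "q3": "a"},
]

for q in pilot[0]:
    answers = [r[q] for r in pilot]
    missing = sum(a is None for a in answers) / len(answers)
    distinct = len({a for a in answers if a is not None})
    print(f"{q}: {missing:.0%} missing, {distinct} distinct answer(s)")

# q1 shows no variation at all (a candidate for rewording or cutting);
# q3 is mostly missing (maybe respondents don't understand it).
```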
42. Discuss Potentially Difficult Questions with the Respondent
  Example 1: simplify/clarify questions
  "Do you use Student Evaluation Sheets in your school?" (Yes / No / Don't know or not sure / No response)
  • Respondents might not know the sheets by this name (show them a sample)
  • You may need to break the question up into several questions to get at what you want:
    • Do you have them?
    • Have you been trained on how to use them?
    • Do you use them?
43. Discuss Potentially Difficult Questions with the Respondent
  Example 2: ordering questions and priming
  1. Yesterday, how much time did you spend cooking, cleaning, playing with your child, and teaching or doing homework with your child?
  2. Do you think it's important for mothers to play with children?
  3. Do you think mothers or fathers should be more responsible for a child's education?
  If Questions 2 and 3 had come before Question 1, they could have biased the answers; the order and wording of questions matter.
44. Importance of Language and Translation
  • The local language is probably not English, which makes the wording of certain questions tricky
    • But people may be familiar with "official" words in English rather than the local language
  • Translate: ensures that every surveyor knows the exact wording of the questions, instead of having to translate on the fly
  • Back-translate: helps catch local-language words that don't have the same meaning as the original English
45. Documentation and Feedback
  • Notes: time taken, difficulties, required or suggested changes
  • Meetings to share inputs
  • Draft document
  • Keep different versions of the questionnaire