GBR_2000_MCSS_v01_M
Multi Country Study Survey 2000-2001
Name | Country code |
---|---|
United Kingdom | GBR |
Other Household Health Survey [hh/hea]
In order to develop comparable methods of data collection on health and health system responsiveness, WHO launched a scientific survey study in 2000-2001. The study used a common survey instrument, with a modular structure, in nationally representative populations to assess the health of individuals in various domains, health system responsiveness, and household health care expenditures, with additional modules in other areas such as adult mortality and health state valuations.
The health module of the survey instrument was based on selected domains of the International Classification of Functioning, Disability and Health (ICF) and was developed after a rigorous scientific review of existing assessment instruments. The responsiveness module was the result of ongoing work over the previous two years, involving international consultations with experts and key informants and informed by the scientific literature and pilot studies.
Questions on household expenditure and proportionate expenditure on health were borrowed from existing surveys. The survey instrument was developed in multiple languages using cognitive interviews and cultural applicability tests, stringent psychometric tests for reliability (i.e. test-retest reliability to demonstrate the stability of application) and, most importantly, novel psychometric techniques for cross-population comparability.
The study was carried out in 61 countries, completing 71 surveys, because two different modes were intentionally used for comparison purposes in 10 countries. Surveys were conducted in different modes: 90-minute in-person household interviews in 14 countries; brief face-to-face interviews in 27 countries; computerized telephone interviews in 2 countries; and postal surveys in 28 countries. All samples were selected from nationally representative sampling frames with known probabilities so that estimates of general population parameters could be made.
The survey study tested novel techniques to control for reporting bias between different cultural or demographic groups (i.e. differential item functioning) so as to produce comparable estimates across cultures and groups. To achieve comparability, individuals' self-reports of their own health were calibrated against well-known performance tests (e.g. self-reported vision was measured against the standard Snellen visual acuity test) or against short descriptions in vignettes that marked known anchor points of difficulty (e.g. people with different levels of mobility, such as a paraplegic person or an athlete who runs 4 km each day), and the responses were adjusted accordingly. The same method was used for self-reports assessing the responsiveness of health systems, where vignettes describing different levels of responsiveness in each responsiveness domain were used to calibrate the individual responses.
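As an illustration of the anchoring idea only (the study itself relied on more elaborate psychometric models), a self-report can be recoded relative to the same respondent's ratings of vignettes ordered by severity. The sketch below is a minimal, hypothetical version of such a nonparametric recode; the scale values and function name are assumptions, not the WHO procedure.

```python
def recode_against_vignettes(self_rating, vignette_ratings):
    """Recode a self-reported rating relative to the same respondent's
    ratings of vignettes ordered from least to most severe.

    All ratings share one ordinal scale (e.g. 1 = none ... 5 = extreme).
    The result places the self-report on a scale anchored by the vignettes:
    1 = below the mildest vignette, 2 = tied with it, 3 = between the first
    and second vignette, and so on (2 * len(vignette_ratings) + 1 levels).
    """
    recoded = 1
    for v in vignette_ratings:      # vignettes ordered by severity
        if self_rating < v:
            return recoded          # self-report falls below this anchor
        if self_rating == v:
            return recoded + 1      # tied with this anchor
        recoded += 2                # strictly above: move past this anchor
    return recoded                  # above the most severe vignette


# Example: a mobility self-report of 3 against vignettes rated 2 and 4
print(recode_against_vignettes(3, [2, 4]))  # -> 3 (between the two anchors)
```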
These data are useful in their own right to standardize indicators for different domains of health (such as cognition, mobility, self-care, affect, usual activities, pain and social participation) and also provide a better measurement basis for assessing the health of populations in a comparable manner. The data from the surveys can be fed into composite measures such as "Healthy Life Expectancy" and improve the empirical inputs to health information systems in different regions of the world. Data from the surveys were also useful in improving the measurement of the responsiveness of different health systems to the legitimate expectations of the population.
Sample survey data [ssd]
The scope of the MCSS includes:
HEALTH MODULE
RESPONSIVENESS MODULE
Topic | Vocabulary |
---|---|
Multi Country Study Survey (MCSS) | Survey |
Name |
---|
World Health Organization (WHO) |
Name |
---|
National Centre for Social Research |
5,350 named individuals in the United Kingdom were systematically selected from the Electoral Register, which was stratified by local authority, and ordered by postcode.
Addresses were checked against Laing & Buisson's Care Home and Hospital Information database and 14 addresses were removed. A further 336 named individuals were systematically selected and removed from the remaining sample, using a random start and fixed interval method on the sample sorted by local authority and postcode, leaving 5,000 named individuals as the usable sample.
The 5,000 sampled individuals were sorted by local authority and postcode.
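A minimal sketch of the random-start, fixed-interval selection described above, under the assumption of a frame already sorted by local authority and postcode (the variable names and placeholder records are illustrative only):

```python
import random

def systematic_sample(frame, n):
    """Select n units from an ordered frame using a random start
    and a fixed sampling interval (systematic sampling)."""
    interval = len(frame) / n
    start = random.uniform(0, interval)
    return [frame[int(start + i * interval)] for i in range(n)]

# Illustrative use: remove 336 names from the 5,336 remaining records
# to leave 5,000 for the usable sample.
frame = [f"person_{i:04d}" for i in range(5336)]
removed = set(systematic_sample(frame, 336))
usable = [p for p in frame if p not in removed]
assert len(usable) == 5000
```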
Start | End |
---|---|
2000 | 2001 |
Implementation of Survey in the Field
Surveys were conducted in the various countries in three different modes. Sampling plans approved by WHO were implemented with specifications of the sampling units and stratification procedures at each sampling stage (primary, secondary and tertiary sampling levels). Several contact calls (at least four in the brief face-to-face (BFTF) mode and ten in the household mode) were attempted, and interviewers tried to contact each selected household at different times of the day and on different days of the week. Each contact call was recorded, together with the reasons for any non-response.
Interviewers were supervised on a regular basis during fieldwork to ensure that expectations and production requirements were met, that interviewers were performing well, that information was kept confidential and professional ethics were followed, that questionnaires and other materials were completed accurately and submitted on time, and, lastly, that any problems were reported as soon as they arose. WHO asked supervisors to sit in on at least 10 interviews during the pilot phase to check that interviews were conducted in a standardized way. Data were entered in the days following finalization of the paper-and-pencil instrument, after editing and approval by the supervisors. Each country reported on key aspects of the survey implementation.
Quality Assurance
In order to monitor the quality of the data and to ensure that countries complied with WHO guidelines, in all household surveys the conditions under which the interviews were conducted and the problems that survey teams encountered were observed first hand by supervisors. Supervisors reviewed 10% of the questionnaires to check whether response options had been recorded appropriately and whether questions had been skipped correctly. About 10% of respondents were called or visited by the supervisor to confirm that the interview had taken place, and 10% of all interviews were repeated by another interviewer within one week to check the reliability of the interview.
In addition, a site visit was scheduled to all full-length household survey sites during data collection.
During these site visits several activities were undertaken:
• Overall survey management: sampling procedures; training/supervision; selection of respondent; and timing of survey were discussed.
• Interview assessment: WHO staff sat in on at least 4 interviews to observe how the interview was conducted, the interaction between interviewer and respondent, and the timing of the interview.
• A meeting with the survey team was held to discuss contacting procedures, interviews, data and logistics.
• The data in the questionnaires were checked by examining the survey records and the data entry program.
Site visits made in the early phases of the data collection detected any problems, ensured that the questionnaire was administered and completed correctly, and confirmed that calibration tests were performed according to the instructions provided by WHO.
Feedback During Data Collection
Data were sent to WHO on a weekly or fortnightly basis so that a quick assessment could be made of each country's survey in terms of missing information, reliability, use of appropriate skips, etc. Following data submission, computerized algorithms were run to identify possible errors while the survey teams were still in the field. Feedback on data quality was routinely given to the site coordinator, who took the relevant action to ensure good quality data.
Data Coding
At each site the data were coded by investigators to indicate the respondent's status and the selection of modules for each respondent within the survey design. After the interview had been edited by the supervisor and considered adequate, it was entered locally.
Data Entry Program
A data entry program was developed at WHO specifically for the survey study and provided to the sites. It was developed using a database program called I-Shell (short for Interview Shell), a tool designed for the easy development of computerized questionnaires and data entry (34). This program allows for easy data cleaning and processing.
The data entry program checked for inconsistencies and validated the entries in each field by checking for valid response categories and ranges. For example, the program did not accept an age greater than 120. For almost all variables there was a range or a list of possible values that the program checked against.
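The kind of range and category check described can be pictured as below. This is a hedged sketch only: I-Shell is a separate tool, and the rules and field names shown here are illustrative assumptions.

```python
# Illustrative validation rules; the actual I-Shell checks and field
# names are not reproduced here.
RULES = {
    "age": {"type": "range", "min": 0, "max": 120},
    "sex": {"type": "categories", "values": {1, 2}},   # 1 = male, 2 = female
}

def validate(field, value):
    """Return True if the entered value is acceptable for the field."""
    rule = RULES.get(field)
    if rule is None:
        return True                      # no rule defined for this field
    if rule["type"] == "range":
        return rule["min"] <= value <= rule["max"]
    return value in rule["values"]       # category check

print(validate("age", 121))   # False: ages above 120 are rejected
print(validate("sex", 2))     # True
```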
In addition, the data were entered twice to capture other data entry errors. The data entry program warned the user whenever a value entered at the second data entry did not match the first entry. In that case the program asked the user to resolve the conflict by choosing either the first or the second value before continuing. After the second data entry was completed successfully, the program placed a mark in the database so that it could be checked whether this process had been completed for each and every case.
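The double-entry comparison can be sketched as follows. This is a simplified illustration, assuming each keying is available as a field-to-value mapping; the real program prompted the operator interactively at each mismatch.

```python
def compare_entries(first_entry, second_entry, resolve):
    """Compare two independent keyings of the same questionnaire.

    first_entry / second_entry: dicts mapping field name -> keyed value.
    resolve: callback invoked on each mismatch; it must return the value
    to keep (in the real program the operator chose entry 1 or entry 2).
    Returns the verified record and a flag marking verification complete.
    """
    verified = {}
    for field, value1 in first_entry.items():
        value2 = second_entry.get(field)
        verified[field] = value1 if value1 == value2 else resolve(field, value1, value2)
    return verified, True   # mark the case as double-entered

record, verified_flag = compare_entries(
    {"age": 34, "sex": 2}, {"age": 43, "sex": 2},
    resolve=lambda field, v1, v2: v1,   # here: always keep the first entry
)
print(record, verified_flag)            # {'age': 34, 'sex': 2} True
```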
Data Transfer
The data entry program could export the entered data into a single compressed database file, which could easily be sent to WHO as an email attachment or via a file transfer program to a secure server, regardless of how many cases were in the file.
Sites were allowed to use as many computers and data entry personnel as they wanted. Each computer used for this purpose produced one file, and the files were merged once they were delivered to WHO, with the help of other programs built to automate the process. Sites sent the data periodically as they collected it, enabling checking procedures and preliminary analyses in the early stages of data collection.
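A minimal sketch of such a merge step, assuming each computer's export has been converted to a CSV file with a common layout; the directory, file pattern and "id" column are hypothetical, not the actual WHO merge programs.

```python
import glob
import pandas as pd

# Each data entry computer produced one export; merge them into one table.
parts = [pd.read_csv(path) for path in sorted(glob.glob("exports/*.csv"))]
merged = pd.concat(parts, ignore_index=True)

# Flag respondent IDs that appear in more than one export file
# ("id" is an assumed column name for the respondent identifier).
duplicates = merged[merged.duplicated(subset="id", keep=False)]
print(f"{len(merged)} cases merged, {len(duplicates)} rows with duplicate IDs")
```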
Data quality checks
Once the data were received, they were analyzed for missing information, invalid responses and representativeness. Inconsistencies were also noted and reported back to the sites.
Data Cleaning and Feedback
After receipt of cleaned data from the sites, another program was run to check for missing information, incorrect information (e.g. wrong use of center codes), duplicated data, etc. The output of this program was fed back to the sites regularly. Mainly, it consisted of cases with duplicate IDs, duplicate cases (where the data for two respondents with different IDs were identical), wrong country codes, and missing age, sex, education or other important variables.
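A hedged sketch of these checks on a merged table is shown below. The column names ("id", "country", "age", "sex", "education") are assumptions for illustration, and the real program also applied many survey-specific rules.

```python
import pandas as pd

def basic_cleaning_report(df, country_codes):
    """Count the problems listed above in a merged data file."""
    non_id_columns = [c for c in df.columns if c != "id"]
    report = {
        "duplicate_ids": df[df.duplicated(subset="id", keep=False)],
        # duplicate cases: different IDs but identical data elsewhere
        "duplicate_cases": df[df.duplicated(subset=non_id_columns, keep=False)],
        "wrong_country_codes": df[~df["country"].isin(country_codes)],
        "missing_key_variables": df[df[["age", "sex", "education"]].isnull().any(axis=1)],
    }
    return {name: len(rows) for name, rows in report.items()}

# Illustrative data: two respondents, one with education missing
example = pd.DataFrame({"id": [1, 2], "country": ["GBR", "GBR"],
                        "age": [34, 51], "sex": [2, 1],
                        "education": [3, None]})
print(basic_cleaning_report(example, {"GBR"}))
```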
Use of the dataset must be acknowledged with an appropriate citation.
The user of the data acknowledges that the original collector of the data, the authorized distributor of the data, and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses.
Name | Affiliation | Email | URL |
---|---|---|---|
Multi-Country Studies, Measurement and Health Information Systems | World Health Organization (WHO) | sagesurvey@who.int | http://www.who.int |
DDI_GBR_2000_MCSS_v01_M
2012-03-22
Version 01 (March 2012)