Available:

Item Barcode | Call Number | Material Type | Item Category 1
---|---|---|---
30000010304361 | HM538 B57 2012 | Open Access Book | Book
30000010298021 | HM538 B57 2012 | Open Access Book | Book

On Order
Summary
This is a book for any researcher using any kind of survey data. It introduces the latest methods of assessing the quality and validity of such data, providing new ways of interpreting variation and measuring error. By demonstrating these techniques practically and accessibly, especially those derived from multiple correspondence analysis, the authors develop screening procedures that search for variation in observed responses that does not correspond to actual differences between respondents. Using well-known international data sets, the authors show how to detect all manner of non-substantive variation, whose sources include response styles such as acquiescence, respondents' failure to understand questions, inadequate fieldwork standards, interview fatigue, and even the manufacture of (partly) faked interviews.
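To make the screening idea concrete, here is a minimal, purely illustrative sketch of detecting one response style the summary mentions: "straight-lining", where a respondent gives the same answer to every item in a battery. The data and the zero-variation rule here are hypothetical simplifications; the book's actual screening procedures are based on multiple correspondence analysis, not this simple check.

```python
# Toy survey battery: each row is one respondent's answers to five
# Likert-type items (hypothetical data for illustration only).
responses = [
    [4, 2, 5, 3, 1],  # varied answers
    [3, 3, 3, 3, 3],  # straight-liner: identical answer to every item
    [5, 4, 4, 5, 3],  # varied answers
    [1, 1, 1, 1, 1],  # straight-liner
]

# Flag respondents whose answers show no variation across the battery,
# i.e. their observed variation cannot reflect real attitude differences.
straight_liners = [i for i, row in enumerate(responses) if len(set(row)) == 1]
print(straight_liners)  # -> [1, 3]
```

In practice such a flag would be one of several indicators combined with the multivariate methods covered in Chapters 3 to 7.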
Author Notes
Jörg Blasius is a Professor of Sociology at the Institute for Political Science and Sociology at the University of Bonn, Germany.
Victor Thiessen is Professor Emeritus and Academic Director of the Atlantic Research Data Centre at Dalhousie University, Canada.
Table of Contents
About the authors | p. vii |
List of acronyms and sources of data | p. viii |
Preface | p. ix |
Chapter 1 Conceptualizing data quality: Respondent attributes, study architecture and institutional practices | p. 1 |
1.1 Conceptualizing response quality | p. 2 |
1.2 Study architecture | p. 8 |
1.3 Institutional quality control practices | p. 10 |
1.4 Data screening methodology | p. 11 |
1.5 Chapter outline | p. 12 |
Chapter 2 Empirical findings on quality and comparability of survey data | p. 15 |
2.1 Response quality | p. 15 |
2.2 Approaches to detecting systematic response errors | p. 22 |
2.3 Questionnaire architecture | p. 26 |
2.4 Cognitive maps in cross-cultural perspective | p. 30 |
2.5 Conclusion | p. 31 |
Chapter 3 Statistical techniques for data screening | p. 33 |
3.1 Principal component analysis | p. 35 |
3.2 Categorical principal component analysis | p. 41 |
3.3 Multiple correspondence analysis | p. 46 |
3.4 Conclusion | p. 55 |
Chapter 4 Institutional quality control practices | p. 57 |
4.1 Detecting procedural deficiencies | p. 58 |
4.2 Data duplication | p. 64 |
4.3 Detecting faked and partly faked interviews | p. 67 |
4.4 Data entry errors | p. 74 |
4.5 Conclusion | p. 79 |
Chapter 5 Substantive or methodology-induced factors? A comparison of PCA, CatPCA and MCA solutions | p. 81 |
5.1 Descriptive analysis of personal feelings domain | p. 84 |
5.2 Rotation and structure of data | p. 87 |
5.3 Conclusion | p. 97 |
Chapter 6 Item difficulty and response quality | p. 99 |
6.1 Descriptive analysis of political efficacy domain | p. 100 |
6.2 Detecting patterns with subset multiple correspondence analysis | p. 100 |
6.3 Moderator effects | p. 113 |
6.4 Conclusion | p. 122 |
Chapter 7 Questionnaire architecture | p. 124 |
7.1 Fatigue effect | p. 124 |
7.2 Question order effects | p. 129 |
7.3 Measuring data quality: The dirty data index | p. 133 |
7.4 Conclusion | p. 138 |
Chapter 8 Cognitive competencies and response quality | p. 140 |
8.1 Data and measures | p. 141 |
8.2 Response quality, task simplification, and complexity of cognitive maps | p. 147 |
8.3 Conclusion | p. 156 |
Chapter 9 Conclusion | p. 158 |
References | p. 164 |
Index | p. 173 |