| Item Barcode | Call Number | Material Type | Item Category |
|---|---|---|---|
| 30000010215635 | HD69.P75 P446 2008 | Open Access Book | Book |
Summary
Data Collection
Data Collection is the second of six books in the Measurement and Evaluation Series from Pfeiffer. The proven ROI Methodology, developed by the ROI Institute, provides a practical system for evaluation planning, data collection, data analysis, and reporting. All six books in the series offer the latest tools, current research, and practical advice for measuring ROI in a variety of settings.
Data Collection offers an effective process for collecting data that is essential to the implementation of the ROI Methodology. The authors outline the techniques, processes, and critical issues involved in successful data collection. The book examines the various methods of data collection, including questionnaires, interviews, focus groups, observation, action plans, performance contracts, and monitoring records. Written for evaluators, facilitators, analysts, designers, coordinators, and managers, Data Collection is a valuable guide for collecting data that are adequate in quantity and quality to produce a complete and credible analysis.
Author Notes
Patricia Pulliam Phillips is an internationally recognized author, consultant, and president and CEO of the ROI Institute, Inc.
Cathy A. Stawarski is program manager of the Strategic Performance Improvement and Evaluation program at the Human Resources Research Organization (HumRRO) in Alexandria, Virginia.
Table of Contents
Acknowledgments from the Editors | p. xxi |
Principles of the ROI Methodology | p. xxiii |
Chapter 1 Using Questionnaires and Surveys | p. 1 |
Types of Questions | p. 1 |
Questionnaire Design Steps | p. 2 |
Determine the Specific Information Needed | p. 2 |
Involve Stakeholders in the Process | p. 3 |
Select the Types of Questions | p. 3 |
Develop the Questions | p. 3 |
Check the Reading Level | p. 3 |
Test the Questions | p. 4 |
Address the Anonymity Issue | p. 4 |
Design for Ease of Tabulation and Analysis | p. 4 |
Develop the Completed Questionnaire and Prepare a Data Summary | p. 5 |
Improving the Response Rate for Questionnaires and Surveys | p. 5 |
Provide Advance Communication | p. 5 |
Communicate the Purpose | p. 6 |
Describe the Data Integration Process | p. 6 |
Keep the Questionnaire as Simple as Possible | p. 6 |
Simplify the Response Process | p. 6 |
Use Local Manager Support | p. 7 |
Let the Participants Know That They Are Part of a Sample | p. 7 |
Consider Incentives | p. 7 |
Have an Executive Sign the Introductory Letter | p. 8 |
Use Follow-Up Reminders | p. 8 |
Provide a Copy of the Results to the Participants | p. 8 |
Review the Questionnaire with Participants | p. 9 |
Consider a Captive Audience | p. 9 |
Communicate the Timing of Data Flow | p. 9 |
Select the Appropriate Medium | p. 10 |
Consider Anonymous or Confidential Input | p. 10 |
Pilot Test the Questionnaire | p. 10 |
Explain How Long Completing the Questionnaire Will Take | p. 11 |
Personalize the Process | p. 11 |
Provide an Update | p. 11 |
Final Thoughts | p. 12 |
Chapter 2 Using Tests | p. 13 |
Types of Tests | p. 13 |
Norm-Referenced Tests | p. 13 |
Criterion-Referenced Tests | p. 14 |
Performance Tests | p. 14 |
Simulations | p. 16 |
Electromechanical Simulation | p. 17 |
Task Simulation | p. 17 |
Business Games | p. 17 |
In-Basket Simulation | p. 17 |
Case Study | p. 18 |
Role-Playing | p. 18 |
Informal Tests | p. 19 |
Exercises, Problems, or Activities | p. 19 |
Self-Assessment | p. 20 |
Facilitator Assessment | p. 20 |
Final Thoughts | p. 21 |
Chapter 3 Using Interviews, Focus Groups, and Observation | p. 23 |
Interviews | p. 23 |
Types of Interviews | p. 24 |
Interview Guidelines | p. 24 |
Develop the Questions to Be Asked | p. 24 |
Test the Interview | p. 24 |
Prepare the Interviewers | p. 25 |
Provide Clear Instructions to the Participants | p. 25 |
Schedule the Interviews | p. 25 |
Focus Groups | p. 25 |
Applications of Focus Groups | p. 26 |
Guidelines | p. 27 |
Plan Topics, Questions, and Strategy Carefully | p. 27 |
Keep the Group Size Small | p. 27 |
Use a Representative Sample | p. 27 |
Use Experienced Facilitators | p. 28 |
Observations | p. 28 |
Guidelines for Effective Observation | p. 28 |
Observations Should Be Systematic | p. 29 |
Observers Should Be Knowledgeable | p. 29 |
The Observer's Influence Should Be Minimized | p. 29 |
Observers Should Be Selected Carefully | p. 30 |
Observers Must Be Fully Prepared | p. 30 |
Observation Methods | p. 30 |
Behavior Checklist | p. 30 |
Delayed Report | p. 31 |
Video Recording | p. 31 |
Audio Monitoring | p. 32 |
Computer Monitoring | p. 32 |
Final Thoughts | p. 32 |
Chapter 4 Using Other Data Collection Methods | p. 35 |
Business Performance Monitoring | p. 35 |
Using Current Measures | p. 36 |
Identify Appropriate Measures | p. 36 |
Convert Current Measures to Usable Ones | p. 36 |
Developing New Measures | p. 37 |
Action Planning | p. 38 |
Developing an Action Plan | p. 40 |
Using Action Plans Successfully | p. 42 |
Communicate the Action Plan Requirement Early | p. 42 |
Describe the Action Planning Process at the Beginning of the Program | p. 42 |
Teach the Action Planning Process | p. 42 |
Allow Time to Develop the Plan | p. 43 |
Have the Facilitator Approve Action Plans | p. 43 |
Require Participants to Assign a Monetary Value to Each Improvement | p. 43 |
Ask Participants to Isolate the Effects of the Program | p. 44 |
Ask Participants to Provide a Confidence Level for Estimates | p. 44 |
Require That Action Plans Be Presented to the Group | p. 45 |
Explain the Follow-Up Process | p. 45 |
Collect Action Plans at the Stated Follow-Up Time | p. 46 |
Summarize the Data and Calculate the ROI | p. 46 |
Applying Action Plans | p. 48 |
Identifying Advantages and Disadvantages of Action Plans | p. 51 |
Performance Contracts | p. 51 |
Final Thoughts | p. 54 |
Chapter 5 Measuring Reaction and Planned Action | p. 55 |
Why Measure Reaction and Planned Action? | p. 55 |
Customer Satisfaction | p. 55 |
Immediate Adjustments | p. 56 |
Team Evaluation | p. 56 |
Predictive Capability | p. 56 |
Importance of Other Levels of Evaluation | p. 58 |
Areas of Feedback | p. 58 |
Data Collection Issues | p. 63 |
Timing | p. 63 |
Methods | p. 64 |
Administrative Guidelines | p. 65 |
Uses of Reaction Data | p. 67 |
Final Thoughts | p. 69 |
Chapter 6 Measuring Learning and Confidence | p. 71 |
Why Measure Learning and Confidence? | p. 71 |
The Learning Organization | p. 71 |
Compliance Issues | p. 72 |
Development of Competencies | p. 73 |
Certification | p. 73 |
Consequences of an Unprepared Workforce | p. 73 |
The Role of Learning in Programs | p. 74 |
Measurement Issues | p. 75 |
Challenges | p. 75 |
Program Objectives | p. 75 |
Typical Measures | p. 76 |
Timing | p. 77 |
Data Collection Methods | p. 79 |
Administrative Issues | p. 81 |
Validity and Reliability | p. 81 |
Consistency | p. 82 |
Pilot Testing | p. 83 |
Scoring and Reporting | p. 83 |
Confronting Failure | p. 84 |
Uses of Learning Data | p. 84 |
Final Thoughts | p. 85 |
Chapter 7 Measuring Application and Implementation | p. 87 |
Why Measure Application and Implementation? | p. 87 |
Obtain Essential Information | p. 88 |
Track Program Focus | p. 88 |
Discover Problems and Opportunities | p. 89 |
Reward Effectiveness | p. 90 |
Challenges | p. 90 |
Linking Application with Learning | p. 90 |
Building Data Collection into the Program | p. 90 |
Ensuring a Sufficient Amount of Data | p. 91 |
Addressing Application Needs at the Outset | p. 91 |
Measurement Issues | p. 92 |
Methods | p. 92 |
Objectives | p. 92 |
Areas of Coverage | p. 93 |
Data Sources | p. 93 |
Timing | p. 95 |
Responsibilities | p. 96 |
Data Collection Methods | p. 96 |
Questionnaires | p. 96 |
Progress with Objectives | p. 97 |
Use of Program Materials and Handouts | p. 97 |
Application of Knowledge and Skills | p. 97 |
Changes in Work Activities | p. 104 |
Improvements or Accomplishments | p. 105 |
Definition of the Measure | p. 105 |
Amount of Change | p. 105 |
Unit Value | p. 105 |
Basis for Value | p. 106 |
Total Annual Impact | p. 106 |
Other Factors | p. 106 |
Improvements Linked with the Program | p. 107 |
Confidence Level | p. 107 |
Perception of Investment in the Program | p. 107 |
Link with Output Measures | p. 107 |
Other Benefits | p. 108 |
Barriers | p. 108 |
Enablers | p. 108 |
Management Support | p. 108 |
Other Solutions | p. 109 |
Target Audience Recommendations | p. 109 |
Suggestions for Improvement | p. 109 |
Interviews, Focus Groups, and Observation | p. 110 |
Action Plans | p. 110 |
Barriers to Application | p. 111 |
Uses of Application Data | p. 112 |
Final Thoughts | p. 112 |
Chapter 8 Measuring Impact and Consequences | p. 115 |
Why Measure Business Impact? | p. 115 |
Impact Data Provide Higher-Level Information on Performance | p. 115 |
Impact Data Represent the Business Driver of a Program | p. 116 |
Impact Data Provide Value for Sponsors | p. 117 |
Impact Data Are Easy to Measure | p. 117 |
Effective Impact Measures | p. 117 |
Hard Data Measures | p. 118 |
Soft Data Measures | p. 120 |
Tangible Versus Intangible Measures | p. 121 |
Impact Objectives | p. 122 |
Linking Specific Measures to Programs | p. 123 |
Sources of Impact Data | p. 126 |
Data Collection Methods | p. 127 |
Monitoring Business Performance Data | p. 127 |
Identify Appropriate Measures | p. 128 |
Convert Current Measures to Usable Ones | p. 128 |
Develop New Measures | p. 129 |
Action Plans | p. 129 |
Set Goals and Targets | p. 130 |
Define the Unit of Measure | p. 130 |
Place a Monetary Value on Each Improvement | p. 131 |
Implement the Action Plan | p. 131 |
Document Specific Improvements | p. 132 |
Isolate the Effects of the Program | p. 132 |
Provide a Confidence Level for Estimates | p. 133 |
Collect Action Plans at Specified Time Intervals | p. 133 |
Summarize the Data and Calculate the ROI | p. 133 |
Performance Contracts | p. 134 |
Questionnaires | p. 134 |
Final Thoughts | p. 138 |
Chapter 9 Selecting the Proper Data Collection Method | p. 139 |
Matching Exercise | p. 139 |
Selecting the Appropriate Method for Each Level | p. 143 |
Type of Data | p. 143 |
Investment of Participants' Time | p. 143 |
Investment of Managers' Time | p. 144 |
Cost | p. 144 |
Disruption of Normal Work Activities | p. 144 |
Accuracy | p. 145 |
Built-In Design Possibility | p. 145 |
Utility of an Additional Method | p. 146 |
Cultural Bias of Data Collection Method | p. 146 |
Final Thoughts | p. 146 |
Index | p. 147 |
About the Authors | p. 153 |