Availability
Library | Item Barcode | Call Number | Material Type | Item Category 1 | Status |
---|---|---|---|---|---|
 | 30000010335753 | QA20.C65 S26 2013 | Open Access Book | Book | On Order |
Summary
Assessment is a key driver in mathematics education. This book examines computer aided assessment (CAA) of mathematics, in which computer algebra systems (CAS) are used to establish the mathematical properties of expressions provided by students in response to questions. In order to automate such assessment, the relevant criteria must be encoded, and in articulating precisely the desired criteria the teacher needs to think very carefully about the goals of the task. Hence CAA acts as a vehicle for examining assessment and mathematics education in detail and from a fresh perspective. One example is the natural tendency of busy teachers to set only those questions that can be marked by hand in a straightforward way, so that the constraints of paper-based formats restrict what is asked and why. There are other kinds of questions, such as those with non-unique correct answers, or those for which assessing the relevant properties requires the marker to undertake a significant computation. It is simply not sensible to set such questions to large groups of students when marking by hand, yet they have their place and value in provoking thought and learning. This book, aimed at teachers in both schools and universities, explores how, in certain cases, different question types can be automatically assessed. Case studies of existing systems are included to illustrate this in a concrete and practical way.
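The marking idea the summary describes can be sketched briefly. The following is a minimal illustration, not the book's own system: it uses the sympy library as the CAS, and the function names (`equivalent`, `is_antiderivative`) are hypothetical. It shows both the prototype test (algebraic equivalence with a model answer) and a question with non-unique correct answers (any valid antiderivative is accepted).

```python
# Illustrative sketch of CAS-based marking, using sympy as the CAS.
# Function names and criteria here are assumptions for illustration only.
from sympy import Symbol, simplify, diff
from sympy.parsing.sympy_parser import parse_expr

x = Symbol("x")

def equivalent(student_answer: str, model_answer: str) -> bool:
    """Mark correct when the difference of the two expressions
    simplifies to zero, i.e. they are algebraically equivalent."""
    difference = parse_expr(student_answer) - parse_expr(model_answer)
    return simplify(difference) == 0

def is_antiderivative(student_answer: str, integrand: str) -> bool:
    """A question with non-unique correct answers: accept any F
    whose derivative equals the integrand f."""
    derivative = diff(parse_expr(student_answer), x)
    return simplify(derivative - parse_expr(integrand)) == 0
```

With these criteria, `equivalent("(x+1)**2", "x**2 + 2*x + 1")` holds, and for the integrand `sin(x)*cos(x)` both `sin(x)**2/2` and `-cos(x)**2/2` are accepted, since correct antiderivatives differ only by a constant; this is exactly the kind of property a human marker would otherwise have to compute by hand for each student.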
Author Notes
Chris Sangwin is a Senior Lecturer in the School of Mathematics at the University of Birmingham. From 2000 to 2011 he was seconded half-time to the UK Higher Education Academy "Maths Stats and OR Network" to promote the learning and teaching of university mathematics. In 2006 he was awarded a National Teaching Fellowship.
Table of Contents
List of Figures | p. viii |
1 Introduction | p. 1 |
1.1 Multiple-choice questions | p. 2 |
1.2 Assessment criteria | p. 4 |
1.3 Chapters | p. 7 |
1.4 Acknowledgements | p. 8 |
2 An assessment vignette | p. 9 |
2.1 The student's perspective | p. 9 |
2.2 Assessing answers to simple questions | p. 14 |
2.3 Further integrals | p. 16 |
2.4 Discussion | p. 18 |
3 Learning and assessing mathematics | p. 19 |
3.1 The nature of mathematics | p. 19 |
3.2 Terms used in assessment | p. 21 |
3.3 Purposes of assessment | p. 22 |
3.4 Learning | p. 23 |
3.5 Principles and tensions of assessment design | p. 25 |
3.6 Learning cycles and feedback | p. 33 |
3.7 Conclusion | p. 35 |
4 Mathematical question spaces | p. 37 |
4.1 Why randomly generate questions? | p. 38 |
4.2 Randomly generating an individual question | p. 39 |
4.3 Linking mathematical questions | p. 42 |
4.4 Building up conceptions | p. 44 |
4.5 Types of mathematics question | p. 46 |
4.6 Embedding CAA into general teaching | p. 49 |
4.7 Conclusion | p. 51 |
5 Notation and syntax | p. 53 |
5.1 An episode in the history of mathematical notation | p. 54 |
5.2 The importance of notational conventions | p. 56 |
5.3 Ambiguities and inconsistencies in notation | p. 60 |
5.4 Notation and machines: syntax | p. 61 |
5.5 Other issues | p. 65 |
5.6 The use of the AiM system by students | p. 66 |
5.7 Proof and arguments | p. 67 |
5.8 Equation editors | p. 68 |
5.9 Dynamic interactions | p. 70 |
5.10 Conclusion | p. 71 |
6 Computer algebra systems for CAA | p. 73 |
6.1 The prototype test: equivalence | p. 75 |
6.2 A comparison of mainstream CAS | p. 76 |
6.3 The representation of expressions by CAS | p. 78 |
6.4 Existence of mathematical objects | p. 82 |
6.5 'Simplify' is an ambiguous instruction | p. 86 |
6.6 Equality, equivalence, and sameness | p. 88 |
6.7 Forms of elementary mathematical expression | p. 91 |
6.8 Equations, inequalities, and systems of equations | p. 94 |
6.9 Other mathematical properties we might seek to establish | p. 96 |
6.10 Buggy rules | p. 97 |
6.11 Generating outcomes useful for CAA | p. 99 |
6.12 Side conditions and logic | p. 100 |
6.13 Conclusion | p. 101 |
7 The STACK CAA system | p. 102 |
7.1 Background: the AiM CAA system | p. 102 |
7.2 Design goals for STACK | p. 103 |
7.3 STACK questions | p. 106 |
7.4 The design of STACK's multi-part tasks | p. 107 |
7.5 Interaction elements | p. 111 |
7.6 Assessment | p. 112 |
7.7 Quality control and exchange of questions | p. 113 |
7.8 Extensions and development of the STACK system by Aalto | p. 114 |
7.9 Usage by Aalto | p. 117 |
7.10 Student focus group | p. 121 |
7.11 Conclusion | p. 125 |
8 Software case studies | p. 127 |
8.1 Some early history | p. 127 |
8.2 CALM | p. 129 |
8.3 Pass-IT | p. 132 |
8.4 OpenMark | p. 138 |
8.5 DIAGNOSYS | p. 140 |
8.6 Cognitive tutors | p. 146 |
8.7 Khan Academy | p. 147 |
8.8 Mathwise | p. 148 |
8.9 WeBWorK | p. 150 |
8.10 MathXpert | p. 154 |
8.11 Algebra tutors: Aplusix and T-algebra | p. 157 |
8.12 Conclusion | p. 160 |
9 The future | p. 162 |
9.1 Encoding a complete mathematical argument | p. 162 |
9.2 Assessment of proof | p. 166 |
9.3 Semi-automatic marking | p. 169 |
9.4 Standards and interoperability | p. 170 |
9.5 Conclusion | p. 172 |
Bibliography | p. 173 |
Index | p. 183 |