

Calibration

LINDA BOL¹, DOUGLAS J. HACKER²
¹Educational Foundations and Leadership, Old Dominion University, Norfolk, VA, USA
²Department of Educational Psychology, University of Utah, Salt Lake City, UT, USA

Synonyms
Absolute accuracy; confidence in retrieval; prospective judgment; retrospective judgment; test postdiction; test prediction

Definition
Calibration is the degree to which a person's perception of performance corresponds with his or her actual performance (Keren 1991). The degree of correspondence is determined by comparing a person's judgment of his or her performance against an objectively determined measure of that performance (Hacker et al. 2008). That judgment, which involves self-evaluation, defines calibration as a metacognitive monitoring process.

To illustrate, consider the following example. Before taking an exam, a student might estimate how well he or she will perform on the exam, and then estimate after taking the exam how well he or she did perform. If this student predicted that she would score an 85 but actually scored a 90, she is fairly accurate but a bit underconfident. Alternatively, if a student predicts that he will score a 95 and actually scores a 60, he is grossly inaccurate and overconfident. In the former case, the student's perception of performance corresponds well with actual performance, and she is therefore well calibrated. In the latter case, the student's perception of performance corresponds poorly with actual performance, and he is therefore poorly calibrated.

Although there are various methods of measuring calibration, all of them provide a quantitative assessment of the degree of discrepancy between perceived performance and actual performance (Hacker et al. 2008). The methods can be grouped into two categories: difference scores and calibration curves. Difference scores involve calculating the difference between a person's judged performance and his or her actual performance. Judgments can be made on a percentage-of-likelihood scale or a confidence scale; they can be made at a global level, in which a single judgment covers multiple items, or at the item level and averaged over multiple items; and they can be made before performance (predictions, or prospective judgments) or after it (postdictions, or retrospective judgments). Often, the absolute value of the difference between judgment and performance is taken, in which case values closer to zero indicate greater calibration accuracy, with perfect calibration at zero. If the signed difference is calculated instead, a bias score is produced: negative values are interpreted as underconfidence and positive values as overconfidence. In our example, the first student predicted an 85 and scored a 90, giving a bias score of −5 and indicating slight underconfidence; the second student predicted a 95 and scored a 60, putting the bias score at +35 and indicating large overconfidence.
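To make the arithmetic concrete, here is a minimal Python sketch of the two difference-score measures just described, applied to the two students in the example. The function names and the item-level averaging helper are illustrative choices, not terminology from the entry; judgments and scores are assumed to share the same 0–100 scale.

```python
def bias_score(judged: float, actual: float) -> float:
    """Signed difference: negative values indicate underconfidence,
    positive values indicate overconfidence."""
    return judged - actual

def absolute_accuracy(judged: float, actual: float) -> float:
    """Unsigned discrepancy: values closer to zero indicate greater
    calibration accuracy, with perfect calibration at zero."""
    return abs(judged - actual)

def mean_bias(judgments: list[float], scores: list[float]) -> float:
    """Item-level judgments averaged over multiple items
    (hypothetical helper for the item-level case described above)."""
    return sum(j - s for j, s in zip(judgments, scores)) / len(judgments)

# The two students from the example:
print(bias_score(85, 90))          # -5  -> slight underconfidence
print(bias_score(95, 60))          # +35 -> large overconfidence
print(absolute_accuracy(85, 90))   # 5   -> fairly well calibrated
print(absolute_accuracy(95, 60))   # 35  -> poorly calibrated
```

Note that the absolute measure captures only the size of the miscalibration, while the signed bias score additionally preserves its direction, which is what distinguishes the underconfident student from the overconfident one.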
