Hello -
We are in the process of implementing a resource for our faculty to dig deep into student MCQ exam performance to evaluate the effectiveness of our questions on high stakes exams. We are currently using Classic Quizzes due to the inability to download statistics from New Quizzes in a usable format.
I noticed that the Discrimination Index listed in Canvas quiz statistics does not appear to be accurate - it seems to be displaying the point-biserial correlation instead. Is this an anomaly on my end, or is this true for everyone? I'm confused because the actual calculation for the discrimination index (upper group minus lower group) is listed correctly in the instructor guide, yet that metric doesn't seem to be reported anywhere in the item analysis report.
It would be helpful to know the underlying calculation for each metric so we can understand which of the automatically reported statistics we can and cannot trust.
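For reference, here is how I understand the two metrics, as a quick Python sketch we've been using to spot-check against what Canvas reports. This is my own illustration, not Canvas's actual implementation - the 27% grouping is just the conventional cut, and the data below is made up:

```python
import math

def discrimination_index(scores, correct, group_frac=0.27):
    """Upper-lower discrimination index: proportion answering the item
    correctly in the top-scoring group minus the proportion in the
    bottom-scoring group (groups formed from total exam score)."""
    ranked = sorted(zip(scores, correct), key=lambda t: t[0])
    n = max(1, round(len(ranked) * group_frac))
    p_lower = sum(c for _, c in ranked[:n]) / n
    p_upper = sum(c for _, c in ranked[-n:]) / n
    return p_upper - p_lower

def point_biserial(scores, correct):
    """Point-biserial correlation between the 0/1 item result and the
    total exam score (using the population standard deviation)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / n)
    n_right = sum(correct)
    p = n_right / n
    mean_right = sum(s for s, c in zip(scores, correct) if c) / n_right
    mean_wrong = sum(s for s, c in zip(scores, correct) if not c) / (n - n_right)
    return (mean_right - mean_wrong) / sd * math.sqrt(p * (1 - p))

# Hypothetical data: 10 students' total exam scores and whether each
# answered the item in question correctly
scores  = [95, 90, 88, 80, 75, 70, 65, 60, 55, 40]
correct = [ 1,  1,  1,  1,  0,  1,  0,  0,  0,  0]
print(round(discrimination_index(scores, correct), 3))
print(round(point_biserial(scores, correct), 3))
```

The two values can differ substantially for the same item, which is why I want to confirm which one Canvas is actually displaying under the "Discrimination Index" label.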
Thanks for any advice - we are trying to adhere to assessment best practices to increase the validity of our internal exam scores and their correlation with the certification exam our students will take after graduation.