Hi all!
We are working on some software that shows the mean score for a set of students. Currently, the calculation includes students who haven't yet been graded. I have questioned the logic our developers used, and they have asked me to validate the methodology that should be used.
For example, we have 5 students and only 1 has been graded. On a 1-5 scale, the student received a 5.
The current average is calculated as the sum of the scores (5) divided by the number of students (5), so 5/5 = 1.
My understanding is that it should be the sum of the scores (5) divided by the number of scores (1), so 5/1 = 5.
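To make the two calculations concrete, here is a minimal Python sketch. It assumes ungraded students are represented as `None` (how your software actually stores them may differ):

```python
# Scores for 5 students; None marks a student who hasn't been graded yet
# (an assumed representation for illustration).
scores = [5, None, None, None, None]

# Current approach: ungraded students count as 0, divide by ALL students.
current_mean = sum(s if s is not None else 0 for s in scores) / len(scores)
print(current_mean)   # 5 / 5 = 1.0

# Proposed approach: average only the scores that actually exist.
graded = [s for s in scores if s is not None]
proposed_mean = sum(graded) / len(graded)
print(proposed_mean)  # 5 / 1 = 5.0
```

The second version treats a missing grade as "no data" rather than as a zero, which is the usual convention for missing values (e.g. spreadsheet `AVERAGE` ignores blank cells).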
Is there a "right" way to handle this? If so, can anyone point me to an article that explains this so that I can talk further with our development team?
Thanks!
Dan