Compare/Convert Norm-Referenced Test Scores with Mean & Standard Deviation
A norm-referenced test measures your skill in comparison to other test takers. Since scoring is relative, your result depends not only on how many questions you answer correctly, but also on how many questions the other test takers answer correctly. IQ tests are a classic example. Standardized exams such as the SAT, ACT, GRE, GMAT, LSAT, and GED are norm-referenced tests that schools use to compare applicants. Job interviews, auditions, and even online dating profiles are types of norm-referenced tests, even though you aren't necessarily given a numeric score.
Norm-referenced tests stand in contrast to criterion-referenced and ipsative exams. A criterion-referenced test measures how well you can perform a task against absolute criteria. An ipsative test compares your current performance to your past performance on the same test.
Often it is necessary to compare the test scores of two people who took two different exams. For instance, if Sally scored 168 on the LSAT and Saul scored 121 on a Stanford-Binet IQ test, who is the "smarter" one? If both exams are norm-referenced and their scores follow the same type of distribution, there is a simple formula to compare their scores and convert one to the other.
Suppose the scores of Test A and Test B follow the same probability distribution but with different parameters. For example, they could both have a normal (Gaussian) distribution, or perhaps a uniform distribution.
Let's say Test A has a mean of ma and a standard deviation of sa, and Test B has a mean of mb and a standard deviation of sb. Now suppose Alex earns a score of X on Test A, and Becky earns a score of Y on Test B. To compare Alex's and Becky's scores, we compare the quantities
Alex-Norm = (X - ma)/sa
Becky-Norm = (Y - mb)/sb
Whoever has the higher norm has the better score.
To convert Alex's score on Test A to an equivalent score on Test B, use the formula
AlexTestB = (X - ma)(sb/sa) + mb
To convert Becky's score on Test B to an equivalent score on Test A, use the formula
BeckyTestA = (Y - mb)(sa/sb) + ma
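These formulas amount to z-score standardization: each score is expressed in standard deviations above its own test's mean, and a conversion simply equates the two z-values. A minimal Python sketch (the function names are my own, chosen for illustration):

```python
def norm(score, mean, sd):
    """Normed score: standard deviations above the test's mean (a z-score)."""
    return (score - mean) / sd

def convert(score, mean_from, sd_from, mean_to, sd_to):
    """Convert a score on one test to the equivalent score on another test
    that follows the same type of distribution, by equating normed scores."""
    return (score - mean_from) * (sd_to / sd_from) + mean_to
```

Comparing `norm(X, ma, sa)` with `norm(Y, mb, sb)` reproduces the comparison above, and `convert` implements both conversion formulas; swapping the parameter order converts in the other direction.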
Here are several examples that show how to compare scores of different exams and how to convert scores from one exam to another.
Alex scored 168 on the LSAT (Law School Admission Test) and Becky scored 121 on an official IQ test. Who has the better relative score, what would be Alex's equivalent IQ score, and what would be Becky's equivalent LSAT score?
Since these are norm-referenced exams that both follow a Gaussian distribution, we simply need to look up the mean and standard deviation of each test. Currently, the mean LSAT score is roughly 151 and the standard deviation is about 10. An IQ test has a mean of 100 and a standard deviation of 15. This gives us
Alex-Norm = (168-151)/10 = 1.7
Becky-Norm = (121-100)/15 = 1.4
Therefore, Alex has the better relative score. To predict what Alex would have scored on an IQ test given his LSAT score, we compute
Alex IQ = (168-151)(15/10) + 100 = 125.5
To predict what Becky would have scored on the LSAT given her IQ, we compute
Becky LSAT = (121-100)(10/15) + 151 = 165
Keep in mind these comparisons and conversions are only meaningful and useful if both tests assess the same skills. The LSAT includes reading comprehension, something not on the IQ test. The IQ test has many pattern recognition questions, something not on the LSAT.
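The whole Alex/Becky example can be checked in a few lines of Python (the variable names are illustrative, and the mean/standard-deviation values are the approximate figures quoted above):

```python
lsat_mean, lsat_sd = 151, 10   # approximate current LSAT parameters
iq_mean, iq_sd = 100, 15       # standard IQ scale

# Normed (z) scores: standard deviations above each test's mean.
alex_norm = (168 - lsat_mean) / lsat_sd        # 1.7
becky_norm = (121 - iq_mean) / iq_sd           # 1.4

# Equate the normed scores to convert between scales.
alex_iq = alex_norm * iq_sd + iq_mean          # 125.5
becky_lsat = becky_norm * lsat_sd + lsat_mean  # 165.0
```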
Marq and Zaq both took the QQQ exam, but several years apart. In 2005 the mean score was 201.5 with a standard deviation of 13. In 2011, the mean was 204 with a standard deviation of 15.4. In both years the distribution of scores followed a parabolic distribution.
Marq scored 235 on the QQQ exam in 2005, while Zaq scored 242 in the year 2011. Whose score is better?
To compare their scores we compute their norms as in the previous example. This gives us
Marq-Norm = (235 - 201.5)/13 = 2.577
Zaq-Norm = (242 - 204)/15.4 = 2.468
Therefore, Marq's score was marginally better than Zaq's score. Note that it doesn't matter that the distribution is parabolic, only that it was the same type of distribution in both years.
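The same arithmetic in Python, including one extra step not worked above: converting Marq's 2005 score onto the 2011 scale with the conversion formula, which gives a consistent answer.

```python
mean_05, sd_05 = 201.5, 13    # QQQ exam, 2005
mean_11, sd_11 = 204, 15.4    # QQQ exam, 2011

marq_norm = (235 - mean_05) / sd_05  # ≈ 2.577
zaq_norm = (242 - mean_11) / sd_11   # ≈ 2.468

# Marq's 2005 score expressed on the 2011 scale:
marq_2011 = (235 - mean_05) * (sd_11 / sd_05) + mean_11  # ≈ 243.7, above Zaq's 242
```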
Suzanne took a standardized test in 2001 and scored 345 on a scale of 100 to 400. The mean test score that year was 252 with a standard deviation of 36.5 points. She took the test again in 2007 and scored 344, but the mean that year was 249 with a standard deviation of 34 points. Which year's test score was better, assuming both years the scores followed a Gaussian (normal) distribution?
Suzanne's normed score for 2001 was (345 - 252)/36.5 = 2.55. Her normed score for 2007 was (344 - 249)/34 = 2.79. Therefore, her score on the 2007 exam was comparatively better, even though the numerical score was 1 point lower.
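As a final check, Suzanne's comparison in Python, rounded to two decimals as in the text:

```python
z_2001 = (345 - 252) / 36.5  # ≈ 2.55
z_2007 = (344 - 249) / 34    # ≈ 2.79

# The higher normed score wins, even though the raw 2007 score was lower.
better_year = 2007 if z_2007 > z_2001 else 2001
```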