
Numbers Can Indeed Lie

Updated on February 3, 2014

In her February 2010 CCC article, "Rhetorical Numbers: A Case for Quantitative Writing in the Composition Classroom," composition professor Joanna Wolfe argues that numeracy, the ability to understand and interpret numerical data, is not taught in composition classes as much as it should be. The failure of composition instructors at all ranks to teach numeracy is perhaps understandable (those teaching composition are likely to be among those who "just aren't good at math," as though math were not a skill set that any sufficiently diligent student with a decent teacher could learn), but that does not make it any less lamentable. Students are increasingly inundated with numerical data and numerical "data," so it is imperative that they learn to distinguish good number-based evidence and reasoning from bad.

What follows is a series of comments about the matter, an expansion on part of Wolfe's reasoning for incorporating numeracy instruction into such mainstream classrooms as college composition (which, as Timothy L. Carens notes in his September 2010 College English article "Serpents in the Garden: English Professors in Contemporary Film and Television," can be taken as representative of the college experience as a whole). The comments derive from my own experience in the composition classroom, trying to pass along to my students what I have learned so that they can start off further ahead than I did. More can be said, certainly, but this is at least a continuation of a conversation that very much needs to happen.

Common Abuses

That numbers can be made to misrepresent truth is amply attested; Wolfe herself notes a prevailing cultural awareness in the United States of the malleable nature of numerical data. She also references an old aphorism about three types of falsehoods--among which is statistics. My own experience tells me that statistics are indeed among the chief sites of falsehood: in the divergence of mean, median, and mode; in the fallacious cherry-picking of evidence upon which to base assertions of "truth"; and in the manipulation of the scales in which things are presented. The deceitful framing of data leads to false conclusions being drawn from it, insidiously deceiving while using verifiable, demonstrably correct individual bits of information. It is therefore more dangerous than the outright lie or the subject Frankfurt and Fredal share, and it must be recognized so that it can be shunned.

Table 1: Final Grades in SUBJ 101.001

Student              Final Numerical Grade
Abrams, Dan          99
Beard, Red           77
Carpenter, Joseph    72
Driver, Horace       74
Erre, Derry          77
Fisher, I.S.         75
Gatherer, Hunter     70
Hoar, Grant          80
Ixem, Hal            73
Jameson, Jim         98

Mean, Median, and Mode: Many Problems

Such numbers as mean, median, and mode are often employed to back claims about broad tendencies and to assert representative examples. They are not always equally good at doing so, however, and they are certainly not interchangeable. Nor are they uncomplicated, simple terms. For example, there are several varieties of mean. Most common is the arithmetic mean, the simple average of a set of numbers (add all the numbers and divide the result thereof by the number of numbers added together), but there are also geometric, harmonic, and other means to be considered. Typically, presented data list the arithmetic mean--but it is not a lie to call any of the other means the "mean," although each may well yield a different result. A stated mean is therefore potentially misleading on that score alone.
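By way of illustration, here is a minimal Python sketch (standard library only) that applies three of those means to the grades in Table 1; each is a legitimate "mean," and each yields a different number.

    import statistics

    # Grades from Table 1: Final Grades in SUBJ 101.001
    grades = [99, 77, 72, 74, 77, 75, 70, 80, 73, 98]

    # Three legitimate "means" of the same data, three different results.
    # (geometric_mean requires Python 3.8 or later.)
    print("Arithmetic mean:", statistics.mean(grades))                    # 79.5
    print("Geometric mean: ", round(statistics.geometric_mean(grades), 2))  # a bit below 79.5
    print("Harmonic mean:  ", round(statistics.harmonic_mean(grades), 2))   # lower still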

Even with a stated arithmetic mean, however, there is some potential for misrepresentation. Simply put, the simple average of a set of numbers is not necessarily representative of that set; it can be skewed by outliers within the set. Table 1: Final Grades in SUBJ 101.001 offers an example. The arithmetic mean of the grades of the ten students in the class is 79.5, yet only three students have grades higher than that mean--and two of those are substantially above it, serving as outliers. For the arithmetic mean to be representative of the actual performance of the class, roughly equal numbers of students would have to fall above and below it--and that is not the case. To claim the arithmetic mean is representative in such a case is disingenuous. In such a case, the number lies.
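A short sketch of the same point, again using the grades from Table 1: the two outliers pull the arithmetic mean well above where most of the class actually sits.

    import statistics

    # Grades from Table 1: Final Grades in SUBJ 101.001
    grades = [99, 77, 72, 74, 77, 75, 70, 80, 73, 98]

    mean_all = statistics.mean(grades)            # 79.5
    above = [g for g in grades if g > mean_all]   # only 3 of the 10 grades exceed the mean

    # Setting aside the two outliers (98 and 99), the "average" falls sharply.
    without_outliers = [g for g in grades if g < 90]
    print(mean_all, len(above), statistics.mean(without_outliers))   # 79.5 3 74.75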

More representative of the group is the median, the middle value of the set when its numbers are arranged in order (or the average of the two middle values when the set has an even count). Because it separates the upper half of the values from the lower half, it is more resistant to being skewed by outliers than is the simple mean. Working again from Table 1: Final Grades in SUBJ 101.001, the grades yield a median of 76, and, indeed, five of the reported grades are above that number, with the other five below it. Moreover, as the median sits amid the bulk of the reported grades in the class, it is more likely to be representative of the class as a whole than is the simple mean. But more likely does not necessarily mean is; outliers can still exert some influence, and clustering of values at some distance from the median can also skew it. To report that the median is representative in such cases is to make that number lie as well.
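Continuing the sketch with the Table 1 grades, the median barely moves when the outliers are set aside, while the arithmetic mean shifts by several points; the comparison below is illustrative only.

    import statistics

    grades = [99, 77, 72, 74, 77, 75, 70, 80, 73, 98]   # Table 1 again

    print(statistics.median(grades))                                  # 76.0
    print(sum(g > 76 for g in grades), sum(g < 76 for g in grades))   # 5 5

    # Removing the two outliers barely moves the median (76.0 -> 74.5),
    # while the arithmetic mean falls from 79.5 to 74.75.
    trimmed = [g for g in grades if g < 90]
    print(statistics.median(trimmed), statistics.mean(trimmed))       # 74.5 74.75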

Also to be considered despite not always being present is the mode, the most frequently occurring value in a set of numbers. If a given value is repeated, there is some suggestion that more than random distribution is at work, prompting a search for common factors of influence. A repeating value also suggests itself as a potentially representative value for the set of which it is part, as it asserts itself more strongly than do the other reported values. Again, Table 1: Final Grades in SUBJ 101.001 offers an example; the mode of the reported grades is 77, a number at the high end of the mid-70s and, given the distribution of the grades, potentially representative. But the potential is not certainty; not all sets of values have modes, and repetition does not ensure representation. It could be the case in a class, for example, that two students are high-achieving while the rest refuse to do homework. The two high achievers could both end up with 98 while their classmates range from 65 down; the mode would then not be representative, and to present it as such would be to make it lie.
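The sketch below works both cases: the actual Table 1 grades, whose only repeated value is 77, and a class of the sort just described (the specific grades in the hypothetical class are invented for illustration).

    import statistics

    grades = [99, 77, 72, 74, 77, 75, 70, 80, 73, 98]   # Table 1
    print(statistics.multimode(grades))                  # [77] -- the only repeated grade

    # Hypothetical class: two high achievers at 98, everyone else ranging
    # from 65 down. The mode is 98, yet half the class sits more than
    # thirty points below it.
    hypothetical = [98, 98, 65, 62, 58, 55, 50, 47, 44, 40]
    print(statistics.multimode(hypothetical))             # [98]
    print(statistics.median(hypothetical))                # 56.5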

Table 2: Average Final Scores in SUBJ 101 by Year

Year    Average Final Score (Arithmetic Mean)
1995    75.0
1996    69.1
1997    91.9
1998    76.4
1999    80.7
2000    87.6
2001    92.3
2002    80.2
2003    76.5
2004    87.9
2005    90.7
2006    77.7
2007    90.8
2008    72.4
2009    93.5
2010    78.5
2011    88.5
2012    84.4
2013    89.1
2014    79.5

Cherry-Picking: Better for Pies than Data

Any simulation or representation of a thing will necessarily limit the data with which it works. Doing so is the only way to be able to run a simulation; the data of the world are infinite, so working with all of them is impossible. But not all limitations imposed upon data are necessary--and some of them are misleading without actually stating an untruth. Such limitations amount to cherry-picking: carefully selecting data to prove a predetermined idea rather than looking at the available data and deriving an idea from them. Cherry-picking is another way in which numbers lie.

Looking at Table 2: Average Final Scores in SUBJ 101 by Year can illustrate how. Data for some twenty years of the course are listed. Over that time, there has been a slight increase in overall class performance; the mean grade (problematic, certainly, but useful insofar as it is consistently applied) has gone up by nearly five points from 1995 to 2014, indicating either an increase in student capacity or creeping grade inflation (with the latter being more likely, given the outliers). If only the last five years listed are used (2010-2014), the increase in mean grade is but a single point--not nearly so substantial as that across the listed twenty and probably too small to support any conclusions.

Looking only at the change from 2013 to 2014, however, shows a nearly ten-point drop in observed mean grade--which will necessarily prompt an entirely different interpretation of the presented data than will a more expansive view. With a decline in the observed mean, allegations of grade inflation become much harder to support, and the substantial drop prompts further investigation--or should. The disjunction between the short-term change and the long-term tendency presents another way in which the numbers can be made to lie.

Each of these, though, is a recent example, and large sets of numbers are not always presented in terms of recency. For example, Table 2: Average Final Scores in SUBJ 101 by Year lists a twelve-point increase in observed mean grade from 2008 to 2012, a five-year span. It would be technically accurate to state that "Average scores in SUBJ 101 rose by twelve points over a five-year span," even though the overall tendency has been a much more gradual increase and the average actually drops from 2011 to 2012. Numbers are often presented in such terms as "Over ten years," and the numerical changes rather than the observed values are frequently reported. The data is carefully selected to make particular points; the numbers are made to lie.
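The point can be made concrete with a short sketch over the Table 2 figures; the same data yield four very different stories depending on which window is chosen.

    # Average final scores from Table 2: Average Final Scores in SUBJ 101 by Year
    scores = {
        1995: 75.0, 1996: 69.1, 1997: 91.9, 1998: 76.4, 1999: 80.7,
        2000: 87.6, 2001: 92.3, 2002: 80.2, 2003: 76.5, 2004: 87.9,
        2005: 90.7, 2006: 77.7, 2007: 90.8, 2008: 72.4, 2009: 93.5,
        2010: 78.5, 2011: 88.5, 2012: 84.4, 2013: 89.1, 2014: 79.5,
    }

    # The same table supports wildly different stories depending on the window chosen.
    for start, end in [(1995, 2014), (2010, 2014), (2013, 2014), (2008, 2012)]:
        change = scores[end] - scores[start]
        print(f"{start}-{end}: {change:+.1f} points")
    # 1995-2014: +4.5, 2010-2014: +1.0, 2013-2014: -9.6, 2008-2012: +12.0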

Questions of Scale, and Not How Much You Weigh

One of the more frequent ways in which numbers can be made to lie, and one of the more insidious, is through the manipulation of the scales of their presentation (which Wolfe discusses at some length in her CCC piece). This is particularly true when numbers are expressed as percentages or fractions of other numbers. Something about the two simultaneously stuns people ("They're hard; I can't understand them") and impresses upon them the "truth" of the numbers they express. Unfortunately, the two often lend themselves to the manipulation of data and of people's understanding of it.

For example, Table 2: Average Final Scores in SUBJ 101 by Year notes that the reported mean score in 1998 was 76.4 and that in 1999 was 80.7; between the two years, the mean score increased by 4.3 points. That 4.3 points also figures as an increase of more than five percent between the two years (80.7 being approximately 1.0563 times 76.4); offering the change as a percentage makes it appear greater (since five is more than four) and more authoritative (since percentages, being a bit more complicated math, are "harder" and therefore must be better). Similarly, the change between 1999 and 2000 is just under seven points, but it is an increase of more than 8.5%; again, the percentage looks larger and appears more authoritative than the raw number. And the overall change across the years reported, 4.5 points over twenty years (or a tendency of a less-than-quarter-point annual increase), can be refigured as a six percent gain. In each case, presentation of the data as a percentage makes the change appear to be larger than it, in fact, is; it causes the numbers to lie.
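For those who want to check the arithmetic, a brief sketch using the Table 2 figures reports each change both as raw points and as a percentage.

    # Changes from Table 2, stated both ways.
    pairs = [(76.4, 80.7, "1998 to 1999"),
             (80.7, 87.6, "1999 to 2000"),
             (75.0, 79.5, "1995 to 2014")]

    for old, new, label in pairs:
        points = new - old
        percent = (new - old) / old * 100
        print(f"{label}: +{points:.1f} points, or +{percent:.1f}%")
    # 1998 to 1999: +4.3 points, or +5.6%
    # 1999 to 2000: +6.9 points, or +8.6%
    # 1995 to 2014: +4.5 points, or +6.0%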

Such lies are also frequently couched in terms without reference, so that all there is to see is the scale of change--but without a starting point, the scale itself is meaningless. To say that there is a change of more than 8.5% between 1999 and 2000 without having the numbers themselves available is ambiguous; Table 2: Average Final Scores in SUBJ 101 by Year is present here, but an approximately 8.5% increase could correspond to the values in the table (approximately seven points), or to a change from 90 to 98, from 82 to 89, from 70 to 76, or from 50 to 54. Without a raw number from which to work, a percentage can mean anything, which means it ultimately means nothing, and to suggest otherwise is not entirely honest. Yet it happens often throughout the media, and people are misled thereby.
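A small sketch makes the ambiguity plain: the pairs below (the first drawn from Table 2, the rest the hypothetical pairs just mentioned) all produce an increase in the neighborhood of 8.5%, though the raw gains range from four points to eight.

    # Several changes that all land near "about 8.5%",
    # even though the raw gains differ considerably.
    pairs = [(80.7, 87.6), (90, 98), (82, 89), (70, 76), (50, 54)]

    for old, new in pairs:
        print(f"{old} -> {new}: +{new - old:g} raw, +{(new - old) / old * 100:.1f}%")
    # 80.7 -> 87.6: +6.9 raw, +8.6%
    # 90 -> 98: +8 raw, +8.9%
    # 82 -> 89: +7 raw, +8.5%
    # 70 -> 76: +6 raw, +8.6%
    # 50 -> 54: +4 raw, +8.0%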

What, Then, Is to Be Done?

Knowing that numbers can be manipulated, knowing that they can in effect be made to lie, means that those who are presented with them have to be able to question them and untangle their presentations (i.e., remember classes in mathematics and rhetoric). The numbers are not to be accepted as they are without further inquiry (or without the presenter of the numbers laying them out for inspection, as more reliable studies tend to do). Statistical data such as mean, median, and mode must be reviewed within the contexts of the sets from which they derive. Spans of time over which changes occur need to be vetted; the years discussed need to be specifically noted and the short term studied in the context of the long term. Scaling needs to be reviewed; percentage changes must be contextualized with raw numbers so that an increase from 2 to 4 is not inflated into an unvarnished "doubling" or an "increase to 200% of the previous value." And the words in which all the numbers are presented need to be examined as any other words. Their tone and their connotations, as well as their specific denotations, must be plumbed so that the reader can be certain that this set of numbers, at least, does not lie.
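As a closing illustration, a small helper (the name describe_change is merely illustrative) can report a change in both forms at once, so that neither the raw number nor the percentage stands alone to mislead.

    def describe_change(old, new):
        """Report a change in both raw and percentage terms (illustrative helper)."""
        raw = new - old
        percent = raw / old * 100
        return f"{old} -> {new}: {raw:+g} ({percent:+.0f}%, i.e. {new / old * 100:.0f}% of the original)"

    print(describe_change(2, 4))        # 2 -> 4: +2 (+100%, i.e. 200% of the original)
    print(describe_change(76.4, 80.7))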

© 2014 Folgha

Comments


Folgha (Author), 3 years ago from Stillwater, OK:

      I had the same thought. Hence the article. Many thanks!

Eric Wayne Flynn, 3 years ago from Providence, Rhode Island:

I've always hated the term "numbers never lie"... Everything lies, especially numbers. It's the easiest way to not have real vision in this world.

      EWF