What is a Standard Deviation?
Mean and Deviation
For any set of quantitative data (data consisting of numbers) that follows a normal distribution, the mean and standard deviation can be calculated to give us a picture of how the data is distributed across a large sample.
A standard deviation is a number that measures how far data values spread out from the mean; a fixed, predictable percentage of the data falls within each standard deviation of the mean. I recognize that this definition is not very helpful on its own, so some examples should clear up any confusion.
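To make this concrete, here is a minimal Python sketch using the standard library's `statistics` module and a made-up sample of scores (the values are purely illustrative, not from any real test):

```python
from statistics import mean, stdev

# Hypothetical sample of test scores (made-up values for illustration)
scores = [88, 94, 100, 103, 97, 110, 115, 85, 102, 106]

print(f"mean = {mean(scores):.1f}")                # 100.0
print(f"standard deviation = {stdev(scores):.1f}")  # about 9.4
```

Note that `stdev` computes the sample standard deviation; `statistics.pstdev` would give the population version instead.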
For example, most IQ tests have a mean (average score) of 100 with a standard deviation of 15. This means that a predictable percentage of all scores will fall within one standard deviation above or below the mean: about 34% of all test-takers would be expected to score between 100 and 115 (one S.D. above), and about 34% would be expected to score between 85 and 100 (one S.D. below).
So a standard deviation, by definition, is a distance from the mean within which a certain percentage of the collected data falls. In our example, about 68% of all test-takers would be expected to score between 85 and 115 on an IQ test, that is, within one standard deviation of the mean.
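We can check the one-standard-deviation figure for the IQ example (mean 100, S.D. 15) with Python's standard-library `statistics.NormalDist`:

```python
from statistics import NormalDist

# IQ scores: mean 100, standard deviation 15 (values from the text)
iq = NormalDist(mu=100, sigma=15)

# Fraction of scores between 85 and 115 (one SD either side of the mean)
within_one_sd = iq.cdf(115) - iq.cdf(85)
print(f"Within one SD: {within_one_sd:.1%}")  # about 68.3%
```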
2nd and 3rd Standard Deviations
If we extend another standard deviation (15 more points) in each direction, we obviously encompass more scores. Two standard deviations above and below the mean (2 × 15 = 30 points in each direction) encompass about 95% of all test scores, so we expect 95% of all scores to fall between 70 and 130. About 47.5% of all scores will fall between 100 and 130, and about 47.5% between 70 and 100. Compared to one standard deviation, we are including an additional 13.5% of above-average scores and an additional 13.5% of below-average scores.
If we extend to a third standard deviation, that is, scores between 55 and 145, we expect to include about 99.7% of all scores. This third standard deviation captures almost all scores, adding roughly another 2.35% above the mean and 2.35% below it. The remaining lowest ~0.15% of scores fall below three standard deviations under the mean, and the highest ~0.15% lie beyond three standard deviations above it.
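The same `NormalDist` calculation extends directly to the second and third standard deviations of the IQ example:

```python
from statistics import NormalDist

# IQ example: mean 100, standard deviation 15 (values from the text)
iq = NormalDist(mu=100, sigma=15)

for k in (1, 2, 3):
    lo, hi = 100 - 15 * k, 100 + 15 * k
    fraction = iq.cdf(hi) - iq.cdf(lo)
    print(f"Within {k} SD ({lo} to {hi}): {fraction:.1%}")
```

Running this prints approximately 68.3%, 95.4%, and 99.7%, matching the figures above.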
So while standard deviations vary numerically from one data set to another, they consistently mark off the same percentages of data away from the average: about 68% of the data will always fall within one standard deviation (one above and one below the mean), about 95% within two standard deviations (two above and two below), and about 99.7% within three standard deviations (three above and three below). These fixed percentages are what make the standard deviation a consistent measure of how data is distributed.
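As a sanity check, a quick simulation that draws random IQ-like scores from a normal distribution (using Python's `random` module) reproduces these percentages approximately:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Simulate 100,000 scores from a normal distribution: mean 100, SD 15
scores = [random.gauss(100, 15) for _ in range(100_000)]

for k in (1, 2, 3):
    lo, hi = 100 - 15 * k, 100 + 15 * k
    fraction = sum(lo <= s <= hi for s in scores) / len(scores)
    print(f"Within {k} SD: {fraction:.1%}")
```

With 100,000 samples, the printed fractions land very close to 68%, 95%, and 99.7%, though any single run will differ slightly from the exact theoretical values.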
If this is still unclear, please feel free to post comments and questions.