
Problems Of Floating Point Arithmetic

Updated on February 6, 2014

Suppose we want to calculate the binomial probability:

C(n, k) * p^k * (1 - p)^(n - k)

Here C(n, k) is the number of combinations of selecting k objects from a set of n objects. It is given by the following:

C(n, k) = n! / (k! * (n - k)!)

To calculate the binomial probability, consider a simple algorithm such as the following:

  1. Calculate n!
  2. Calculate k!
  3. Calculate (n - k)!
  4. Calculate p^k
  5. Calculate (1 - p)^(n - k)
  6. Use the numbers calculated in steps 1 through 5 to calculate binomial probability
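
To make the failure concrete, here is a minimal sketch of the naive algorithm in Python (my illustration, not code from the original). Python's own integers never overflow, so to mimic the fixed-size numbers of QBasic or Fortran the factorials are accumulated in floats, which silently become infinity at 171!:

    def float_factorial(n):
        # n! accumulated in a float, as a naive QBasic/Fortran loop would do it
        result = 1.0
        for i in range(2, n + 1):
            result *= i  # silently becomes inf once the product passes ~1.8e308
        return result

    def binomial_probability_naive(n, k, p):
        c = float_factorial(n) / (float_factorial(k) * float_factorial(n - k))
        return c * p**k * (1 - p)**(n - k)

    print(binomial_probability_naive(20, 5, 0.3))    # ~0.1789, still fine
    print(binomial_probability_naive(200, 50, 0.3))  # nan: inf/inf from the factorials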

If you attempt to write a program implementing this algorithm in a programming system such as QBasic or Fortran, you will find that it fails even for small values of n: the factorials overflow almost immediately. To avoid this we can rearrange the terms in our multiplications and divisions, and furthermore do those intermediate computations using integers only. That is, we carry our fractional numbers as a/b, where both a and b are integers. When the calculations are done, we carry out one final floating point division, taking care that at this final juncture we do not cause an underflow or overflow error. The problem of overflow and underflow is without doubt the first problem any serious scientist encounters when she sits down to write her own computer programs. Rearranging the computation avoids it for smaller values of n, but it does not eliminate it; overflow is simply put off until larger values of n.
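
Here is a sketch of that integer rearrangement, assuming p is supplied as a ratio of two integers p_num/p_den (the function name and signature are mine). C(n, k) is built by interleaved multiplication and exact division so intermediate values stay as small as possible, every intermediate result is an integer, and the single floating point division happens last:

    def binomial_probability_exact(n, k, p_num, p_den):
        c = 1
        for i in range(1, k + 1):
            c = c * (n - k + i) // i  # divides exactly at every step; c stays an integer
        # p = p_num/p_den and 1 - p = (p_den - p_num)/p_den, so:
        num = c * p_num**k * (p_den - p_num)**(n - k)
        den = p_den**n
        return num / den  # the one floating point division, done last

    print(binomial_probability_exact(200, 50, 3, 10))  # ~0.0187, where the naive version gave nan

In Python the arbitrary-precision integers make this exact for any n; in QBasic or Fortran the same rearrangement only postpones overflow, which is precisely the point being made here.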

But overflow is not the only problem that troubles programmers of numerical calculations. The reason we do all intermediate calculations in integers is to avoid truncation. In the decimal number system even a simple number such as 1/3 becomes a recurring fraction, and a fraction which is non-recurring in decimal can become recurring in the binary system. Loss of precision is therefore caused not only by overflow but also by our inability to store exactly those floating point values which stand for recurring fractions. Underflow and overflow are not the main problem of floating point arithmetic.

Not Only Overflow But Truncation Too!

The real problem with floating point arithmetic is truncation, which causes loss of precision, especially when truncated numbers are used again and again and the errors add up. It is to solve this problem that we must think of the integer way of calculating.

Even the simplest decimal fractions, ones which are non-recurring when written in decimal, can become recurring fractions when converted to binary. For example, 0.1 is a recurring fraction as a binary number. And some simple fractions, such as 1/3, are recurring in decimal and remain recurring in binary.
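
A quick Python check makes this concrete: 1/10 has an infinitely repeating binary expansion, so the double that stores 0.1 holds only the nearest 53-bit approximation:

    from fractions import Fraction

    print(f"{0.1:.20f}")  # 0.10000000000000000555, not exactly one tenth
    print(Fraction(0.1))  # 3602879701896397/36028797018963968, the value actually stored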

Even when no overflow or underflow has occurred, truncation will invariably occur whenever an intermediate result comes from a division whose quotient is a recurring fraction in binary.

Let us revisit the problem of calculating binomial probability. The formula is as follows:

P(k) = C(n, k) * p^k * (1 - p)^(n - k)

where C(n, k) is the well known number of combinations of selecting k objects out of n objects, k ≤ n.

The integer method described earlier will work until we come to the last step, when we must divide the numerator by the denominator: the resulting value of P(k) is likely to be so small that there can be an underflow error. How do we prevent that from happening?

Preventing Underflow Error

How can we prevent an underflow error in the calculation of P(k)?

When it is time to evaluate the fraction which gives us P(k), we simply multiply the numerator by a power of 2 (or whatever radix the floating point representation of the hardware/software uses), selecting a power large enough to prevent an underflow error. The final result is thus carried as a floating point number which, we know, has been multiplied by a known power of the radix. If we intend to use this method to calculate a sum of P(k)'s (say, in a program which calculates the probability of an upper or lower tail), then we should pick one power of the radix, chosen to work for the first P(k) we calculate, and use it for every term. If the later P(k)'s will gradually increase in value, we use the largest power that works: it beefs up the first P(k) so that its exponent sits at the lower edge, just inside the region which prevents underflow, and as the later P(k)'s increase, the exponent rises too, remaining in the acceptable region.
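
Here is one way that scaling step might look in Python (the function name and the bit-length heuristic are mine, and radix 2 is assumed). Rather than hard-coding the power, the shift is chosen from the bit lengths of numerator and denominator so that the integer quotient carries about 64 significant bits; the result is returned as a pair (m, shift) whose true value is m * 2**-shift:

    def scaled_quotient(num, den, precision_bits=64):
        # num/den ~= m * 2**-shift; pick shift so the quotient has ~precision_bits bits
        shift = max(0, den.bit_length() - num.bit_length() + precision_bits)
        m = float((num << shift) // den)  # the quotient is a modest integer, safe to convert
        return m, shift

    # a probability near 1e-400 would underflow an ordinary double to zero:
    m, s = scaled_quotient(1, 10**400)
    print(m, s)  # m is an ordinary float; the true value is m * 2**-s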

So, as you can see, preventing underflow in an intermediate result is not an unsolvable problem either. But a new problem, that of adding a newly computed P(k) to the sum of the earlier P(k)'s, will become apparent to smarter and more alert minds. What happens when we calculate a series of terms to be summed according to the following recurrence?

S(j) = S(j-1) + P(j)

As the calculation proceeds, S(j-1) becomes large compared to P(j). When two floating point numbers are added, first a common exponent is decided upon, then the mantissas of both numbers are recalculated with respect to that common exponent, and only then are they added. (This is exponent equalization.) Here is a contrived example in radix 10 (decimal):

10^5 * 0.12345 + 10^-5 * 0.98765

Let us take +5 as the common exponent. So we will have to rewrite the second number so that its exponent is also +5:

(10^10 * 10^-5) * (0.98765 / 10^10) = 10^5 * 0.000000000098765

Now that we have a common exponent, we can add the mantissas:

0.12345 + 0.000000000098765 = 0.123450000098765

If our floating point format can store only 5 decimal digits, the "contribution" of the small number to the sum is absolutely zero. And it will remain zero as long as the accumulated sum remains a large number, say in a calculation of the type S(j) = S(j-1) + P(j). Once S(j) becomes one big fat sum, it is a quite futile waste of computational power to keep calculating the P(j)'s. How do we solve this problem?
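
Before turning to the fix, note that the same swamping is easy to reproduce with ordinary Python doubles (radix 2, 53 mantissa bits) in place of our 5-digit decimal toy:

    big = 1.0e16  # lies between 2**53 and 2**54, so its ulp (spacing) is 2.0
    print(big + 0.1 == big)                 # True: 0.1 is below half an ulp and vanishes
    print(sum([big] + [0.1] * 1000) - big)  # 0.0: a thousand additions, no progress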

Preventing Loss of Precision

How can we prevent this loss of precision when adding a large floating point value and a small one? One way is to accumulate the new terms in a new sum: as soon as the provisional S(j) becomes too big, we set it aside and start another accumulator. By and by the accumulated sum in the new S(j) will itself become large enough to be added to the old accumulated sum. The final accumulated sum is then just added to the old sum; if it retains enough significant digits after exponent equalization it will contribute something to the total, otherwise there will be no change.
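
A minimal sketch of that two-accumulator idea (the names and the fold threshold are my choices): small terms collect in a fresh partial sum, which is folded into the running total only once it is large enough to survive exponent equalization:

    def cascaded_sum(terms, fold_threshold=1.0e8):
        total, partial = 0.0, 0.0
        for t in terms:
            partial += t
            if abs(partial) >= fold_threshold:
                total += partial  # partial is now big enough to register in total
                partial = 0.0
        return total + partial    # fold in whatever is left at the end

    print(cascaded_sum([1.0e16] + [0.1] * 1000) - 1.0e16)  # ~100.0, the lost 0.1's recovered

This is, in effect, a crude cascaded summation; compensated schemes such as Kahan summation attack the same loss by carrying the rounding error along explicitly.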

I hope I have made a good case for sticking with the integer way of computing fractions: maintaining a numerator and a denominator all through the computation, until it becomes absolutely necessary to carry out the division of numerator by denominator.

Our next stop should be to develop a framework for floating point computation in this integer way: a systematic approach which can be implemented easily in a portable manner. Hopefully that will be my next hub.
