
# Exact Floating Point Arithmetic

## Or How to Calculate Binomial Probability

Have you ever tried to calculate Binomial Probability?

The binomial distribution is the discrete probability distribution of the number of successes in a sequence of *n* independent yes/no experiments, each of which yields success with probability *p*. Such a success/failure experiment is also called a Bernoulli experiment or Bernoulli trial; when *n* = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

### Probability mass function

If the random variable *K* follows the binomial distribution with parameters *n* and *p*, we write *K* ~ B(*n*, *p*).

The probability of getting exactly *k* successes in *n* trials is given by the probability mass function:

f(k; n, p) = Pr(K = k) = C(n, k) * p^k * (1 - p)^(n - k)

for k = 0, 1, 2, ..., n, where

C(n, k) = n! / (k! * (n - k)!)

is the binomial coefficient (hence the name of the distribution), read "n choose k" and also denoted nCk. The formula can be understood as follows: we want k successes (p^k) and n - k failures ((1 - p)^(n - k)). However, the k successes can occur anywhere among the n trials, and there are C(n, k) different ways of distributing k successes in a sequence of n trials. (Paraphrased from the Wikipedia article on the binomial distribution.)

Suppose we go about calculating C(n, k) by first calculating n!, then k!, then (n - k)!, and then using the following formula:

C(n, k) = n! / (k! * (n - k)!)

where m!, the factorial of m, is well known as the number of permutations of m objects:

m! = 1 * 2 * ... * m (m a positive integer) and 0! = 1

What happens is that, even for small values of n, a computer program using such an algorithm will cause an overflow error.

Try the following QBasic program:

```basic
DECLARE FUNCTION Permutations# (n AS DOUBLE)

CLS

REM To calculate C(n, k)
REM C(n, k) = n! / (k! * (n - k)!)

DIM n AS DOUBLE
DIM k AS DOUBLE

FOR n = 1 TO 100
    PRINT "n="; n
    FOR k = 1 TO n
        ' nPerm, kPerm and Combinations are not declared, so they default
        ' to SINGLE precision and overflow long before a DOUBLE would
        nPerm = Permutations(n)
        kPerm = Permutations(k)
        nMinuskPerm = Permutations(n - k)
        Combinations = nPerm / (kPerm * nMinuskPerm)
    NEXT k
NEXT n

FUNCTION Permutations# (n AS DOUBLE)
    DIM p AS DOUBLE
    DIM i AS DOUBLE
    p = 1#
    FOR i = 1 TO n
        p = p * i
    NEXT i
    Permutations# = p
END FUNCTION
```

You will find that it generates an "Overflow" error when it comes to calculating the permutations for n = 35.
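The same failure can be reproduced in any language. Here is a minimal Python sketch of the naive factorial (the function name is mine); IEEE double precision overflows to infinity near 171!, while QBasic's default SINGLE variables, with their much smaller range, give up around 35!:

```python
def naive_factorial(m):
    """m! computed by repeated floating point multiplication,
    just as the Permutations function above does."""
    p = 1.0
    for i in range(1, m + 1):
        p = p * i
    return p

print(naive_factorial(170))  # about 7.26e306, near the double-precision ceiling
print(naive_factorial(171))  # inf: the running product has overflowed
```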

To calculate binomial probabilities for n > 35 we need to find a way to prevent our floating point numbers from becoming too large. Notice that P(k) itself is small; in fact its calculation may result in an **underflow** error when n becomes large. C(n, k), on the other hand, becomes very large.

First of all, notice that we are computing permutations as part of our calculation of the combinations C(n, k), and that C(n, k) = C(n, n - k). So we need not calculate C(n, k) for k ≥ (n \ 2) + 1: we just save the values computed for smaller k and reuse those numbers when we come to the larger k.

Secondly, assuming k < (n - k), we are going to calculate:

C(n, k) = (n / k) * ((n - 1) / (k - 1)) * ... * ((n - k + 1) / 1)

Actually,

C(n, k) = (n * (n - 1) * ... * (n - k + 1)) / (k * (k - 1) * ... * 1)

but we rearrange the terms in the numerator and denominator to get it as a product of k factors of the type:

(n - (j - 1)) / (k - (j - 1)), where j runs from 1 to k
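Sketched in Python (the function name is mine; QBasic DOUBLEs become Python floats), the rearranged product looks like this:

```python
def binomial_coefficient(n, k):
    """C(n, k) as a product of k factors (n - (j - 1)) / (k - (j - 1)),
    never forming any factorial along the way."""
    k = min(k, n - k)          # use the symmetry C(n, k) = C(n, n - k)
    c = 1.0
    for j in range(1, k + 1):  # j runs from 1 to k
        c = c * (n - (j - 1)) / (k - (j - 1))
    return c

print(binomial_coefficient(100, 50))  # about 1.009e29: far beyond n = 35, no overflow
```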

There are k such factors, and since p is a fraction, multiplying each factor by p prevents the intermediate result from generating an "Overflow" error. In fact, the terms involving p and (1 - p) can be combined as given below:

p^k * (1 - p)^(n - k) = p^k * (1 - p)^n * (1 - p)^(-k) = (p / (1 - p))^k * (1 - p)^n

When p < 0.5, p / (1 - p) is a number smaller than 1, so it helps: the intermediate result from the calculation of the combinations factor becomes large, and multiplying by (p / (1 - p)) reduces it. When p > 0.5, we can instead use k of the n factors of (1 - p)^n in the above formula to help prevent the intermediate number in the computation of the combinations from becoming large.
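A hedged Python sketch of this interleaving (names are mine; it assumes 0 < p < 1, and, as noted later, the (1 - p)^n factor can still underflow for very large n):

```python
def binomial_pmf(k, n, p):
    """P(K = k) computed as (p / (1 - p))^k * (1 - p)^n, with one
    p / (1 - p) factor folded into each of the k combination factors
    so the intermediate result never grows large."""
    q = 1.0 - p
    ratio = p / q              # the (p / (1 - p)) factor
    result = 1.0
    for j in range(1, k + 1):
        result = result * (n - (j - 1)) / (k - (j - 1)) * ratio
    return result * q ** n     # the remaining (1 - p)^n part

print(binomial_pmf(5, 10, 0.5))  # about 0.2461, i.e. 252 / 1024
```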

But this still involves floating point arithmetic, and each floating point operation may increase the error. The reason is that, even when p is a fraction, the calculation of (n - j) / (k - j) will cause some truncation. This is why a new trick needs to be adopted.

## New Trick

The new trick is to put off the floating point operation until the last moment. Terms like (n - j) and (k - j) are all integers, and as such they may have **Common Factors**. We can calculate the **Greatest Common Factor** of these before we multiply them into the intermediate result. Furthermore, we carry the intermediate result not as a floating point number but as a fraction represented by **two** QBasic LONG integers, one for the numerator and another for the denominator. This way we can put off the floating point operation until the last step.
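A minimal Python sketch of this two-integer scheme (Python integers stand in for the QBasic LONGs, and `math.gcd` plays the role of the greatest-common-factor routine; the function name is mine):

```python
from math import gcd

def binomial_as_fraction(n, k):
    """Carry C(n, k) as a (numerator, denominator) pair of integers,
    cancelling the greatest common factor at every step; the division,
    a floating point operation in general, is put off until the end."""
    k = min(k, n - k)
    num, den = 1, 1
    for j in range(1, k + 1):
        a, b = n - (j - 1), k - (j - 1)   # the next pair of integer factors
        g = gcd(a, b)
        num, den = num * (a // g), den * (b // g)
        g = gcd(num, den)                 # keep the fraction fully reduced
        num, den = num // g, den // g
    return num, den

num, den = binomial_as_fraction(52, 5)
print(num, "/", den)   # 2598960 / 1: the denominator cancels away entirely
```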

This philosophy of maintaining the numerator and denominator separately can be taken one step further. Instead of maintaining the numerator and denominator as single QBasic LONG numbers, we can carry each as an array of LONGs, one entry per prime factor together with its power. Then the calculation of the GCF is a matter of examining the two arrays for the presence of common prime factors, and dividing by the GCF is a matter of subtracting the minimum of the powers of each common prime factor. With this representation, the sky is the limit on exact arithmetic.
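A sketch of the prime-factors-array representation in Python (a dictionary of prime → power stands in for the pair of LONG arrays; all names are mine):

```python
from collections import defaultdict

def add_factors(powers, m, sign):
    """Fold the prime factorization of m into the exponent array:
    sign = +1 multiplies m in, sign = -1 divides it out."""
    d = 2
    while d * d <= m:
        while m % d == 0:
            powers[d] += sign
            m //= d
        d += 1
    if m > 1:
        powers[m] += sign

def binomial_exact(n, k):
    """C(n, k) carried as prime powers; multiplying adds exponents and
    dividing subtracts them, so no rounding ever occurs."""
    k = min(k, n - k)
    powers = defaultdict(int)        # prime -> power, may dip negative mid-way
    for j in range(1, k + 1):
        add_factors(powers, n - (j - 1), +1)   # numerator factor
        add_factors(powers, k - (j - 1), -1)   # denominator factor
    result = 1
    for prime, power in powers.items():
        result *= prime ** power     # all powers end up >= 0 for C(n, k)
    return result

print(binomial_exact(30, 15))  # 155117520, computed exactly
```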

The inexact arithmetic of floating point operations in modern computers is rarely seen as a "problem". People who obsess over computational power think about how fast floating point operations can be done, and whether they can be done massively in parallel. But large computers can quickly produce a large number of floating point results, each with a little error, and those errors may add up into large ones. Whether this renders the results of such computations useless needs to be investigated, and shown to be unfounded, before proceeding with inexact "flops only" computation. Otherwise, just stick with exact arithmetic: you can never go wrong.

A mixed strategy is also possible: whenever the memory consumed by the prime-factors-array method exceeds some limit, carry out the flop which would otherwise have been put off until last. Another efficiency measure concerns the calculation of the prime factors array of a number which is about to be multiplied into an intermediate result: a number which has been visited earlier can have its prime factors array looked up from an "internal database", saving the time spent on factorizing it.
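The "internal database" can be as simple as a memo cache in front of the factorizing routine; a Python sketch using `functools.lru_cache` as the lookup table (names are mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def prime_factors(m):
    """Prime factorization of m as a tuple of (prime, power) pairs.
    The cache plays the role of the 'internal database': a number
    visited earlier is looked up rather than factorized again."""
    factors = []
    d = 2
    while d * d <= m:
        power = 0
        while m % d == 0:
            m //= d
            power += 1
        if power:
            factors.append((d, power))
        d += 1
    if m > 1:
        factors.append((m, 1))
    return tuple(factors)

print(prime_factors(75600))  # ((2, 4), (3, 3), (5, 2), (7, 1))
```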

But be aware that P(k) is ultimately a very small number, as mentioned above. The computation of the final value will still be a problem when the time comes to carry out, at last, the floating point operation that we had put off until the end: it may cause an underflow error. If the calculation of P(k) is part of a bigger calculation, then a way must be found to maintain it in memory or in a file. One situation in which P(k) is calculated as part of a larger calculation is when an upper or lower tail probability is computed, or the probability of a region between two values of k.
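One standard way to keep a very small P(k) "in memory" is to store its logarithm and exponentiate only when terms are finally summed. This swaps in log-space arithmetic for the exact-fraction idea above, so it is an alternative rather than this article's method; a sketch using `math.lgamma` (function names are mine):

```python
from math import exp, lgamma, log

def log_binomial_pmf(k, n, p):
    """log P(K = k) for K ~ B(n, p); sums of logarithms stay in range
    even when P(k) itself is smaller than the smallest double."""
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return log_comb + k * log(p) + (n - k) * log(1 - p)

def upper_tail(k0, n, p):
    """P(K >= k0): each term is exponentiated only when it is summed."""
    return sum(exp(log_binomial_pmf(k, n, p)) for k in range(k0, n + 1))

print(log_binomial_pmf(0, 2000, 0.5))  # about -1386.3, though P(0) itself underflows
```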
