
Chapter 30 Number Problems


30.1. Number Problems

30.1.1. Introduction

This module presents a variety of algorithms for mathematical computations on numbers, such as multiplying two numbers or raising a number to a given power. In particular, we are concerned with situations where built-in integer or floating-point operations cannot be used because the values being operated on are too large. Similar concerns arise for operations on polynomials or matrices.

Since we cannot rely on the hardware to process the inputs in a single constant-time operation, we are concerned with how to implement the operation so as to minimize the time cost. This raises the question of how we should apply our normal measures of asymptotic cost in terms of growth rates on input size. First, what is an instance of addition or multiplication? Each value of the operands yields a different problem instance. And what is the input size when multiplying two numbers? If we view the input size as two (since two numbers are input), then any non-constant-time algorithm has a growth rate that is infinitely high compared to the growth of the input. This makes no sense, especially since we know from grade school arithmetic that adding or multiplying numbers does get more difficult as the value of the numbers increases. In fact, we know from the standard grade school algorithms that the cost of addition is linear on the number of digits being added, and multiplication has cost \(n \times m\) when multiplying an \(m\)-digit number by an \(n\)-digit number.

The number of digits in the operands is the key consideration when we are performing a numeric algorithm that is sensitive to input size. The number of digits is simply the log of the value, for a suitable base of the log. Thus, for the purpose of calculating asymptotic growth rates of algorithms, we will consider the "size" of an input value to be the log of that value. Given this view, there are a number of features that relate such operations (a short sketch after the list makes the length measure concrete).

  • Arithmetic operations on large values are not cheap.
  • There is only one instance of value \(n\).
  • There are \(2^k\) instances of length \(k\) or less.
  • The size (length) of value \(n\) is \(\log n\).
  • The cost of a particular algorithm can decrease when \(n\) increases in value (say when going from a value of \(2^k-1\) to \(2^k\) to \(2^k+1\)), but generally increases when \(n\) increases in length.
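
To make this length measure concrete, here is a minimal sketch (our own illustration, not part of the module; the name bit_length is ours) that computes the length of a value in binary digits by repeated halving. Note how the length jumps only when the value crosses a power of two:

#include <stdio.h>

/* Length of n in binary digits: floor(log2 n) + 1 for n > 0.
   (The function name bit_length is ours, chosen for this sketch.) */
int bit_length(unsigned int n) {
  int len = 0;
  while (n > 0) {
    len++;
    n = n / 2;   /* drop one binary digit */
  }
  return len;
}

int main(void) {
  /* 2^k - 1, 2^k, and 2^k + 1 for k = 4: lengths 4, 5, 5 */
  printf("%d %d %d\n", bit_length(15), bit_length(16), bit_length(17));
  return 0;
}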

30.1.2. Exponentiation

We will start our examination of standard numerical algorithms by considering how to perform exponentiation. That is, how do we compute \(m^n\)? We could multiply by \(m\) a total of \(n-1\) times. Can we do better? Yes, there is a simple divide and conquer approach that we can use. We can recognize that, when \(n\) is even, \(m^n = m^{n/2}m^{n/2}\). If \(n\) is odd, then \(m^n = m^{\lfloor n/2\rfloor}m^{\lfloor n/2\rfloor}m\). This leads to the following recursive algorithm:

int Power(int base, int exp) {
  int half, total;
  if (exp == 0) return 1;
  half = Power(base, exp/2);               // one recursive call on half the exponent
  total = half * half;                     // m^n = m^(n/2) * m^(n/2)
  if (exp % 2 == 1) total = total * base;  // extra factor of m when n is odd
  return total;
}
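
As a quick usage check, the following minimal driver (our own addition, assuming the Power routine above appears in the same file) computes \(3^{13}\):

#include <stdio.h>

int Power(int base, int exp);   /* defined above; assumed in the same file */

int main(void) {
  /* Power(3, 13) recurses through exponents 13 -> 6 -> 3 -> 1 -> 0,
     squaring at each level and multiplying in an extra factor of 3
     at the odd exponents. */
  printf("%d\n", Power(3, 13));   /* prints 1594323 */
  return 0;
}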

Function Power has recurrence relation

\[\begin{split}f(n) = \left\{ \begin{array}{ll} 0&n=1\\ f(\lfloor n/2\rfloor) + 1 + n \bmod 2&n>1 \end{array} \right.\end{split}\]

This has solution

\[f(n) = \lfloor \log n\rfloor + \beta(n) - 1\]

where \(\beta\) is the number of 1's in the binary representation of \(n\).
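
For example, with \(n = 13\) (binary 1101) we have \(\lfloor \log 13 \rfloor = 3\) and \(\beta(13) = 3\), so the formula gives \(f(13) = 3 + 3 - 1 = 5\). Unrolling the recurrence confirms this: \(f(13) = f(6) + 2 = f(3) + 3 = f(1) + 5 = 5\).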

How does this cost compare with the problem size? The original problem size is \(\log m + \log n\), and the number of multiplications required is \(\log n\). This is far better (in fact, exponentially better) than performing \(n-1\) multiplications.

30.1.3. Largest Common Factor

We will next present Euclid's algorithm for finding the largest common factor (LCF) for two integers. The LCF is the largest integer that divides both inputs evenly.

First we make this observation: If \(k\) divides \(n\) and \(m\), then \(k\) divides \(n - m\). We know this is true because if \(k\) divides \(n\) then \(n = ak\) for some integer \(a\), and if \(k\) divides \(m\) then \(m = bk\) for some integer \(b\), so \(n - m = (a - b)k\). Thus the common factors of \(n\) and \(m\) are exactly the common factors of \(n - m\) and \(m\), and so \(LCF(n, m) = LCF(n-m, m) = LCF(m, n-m)\).

Now, for any integers \(n\) and \(m\) with \(m > 0\), there exist \(k\) and \(l\) such that

\[\begin{split}n = km + l\ \mbox{where}\ m > l \geq 0.\end{split}\]

From the definition of the \(\bmod\) function, we can derive the fact that

\[n = \lfloor n/m \rfloor m + n \bmod m.\]
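
For instance, with \(n = 13\) and \(m = 5\), we have \(13 = \lfloor 13/5 \rfloor \cdot 5 + 13 \bmod 5 = 2 \cdot 5 + 3\).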

Since the LCF divides both \(n\) and \(m\), and since \(n = km + l\), the LCF must also divide \(l = n - km\). Conversely, any common factor of \(m\) and \(l\) also divides \(n = km + l\). As a consequence, \(LCF(n, m) = LCF(m, l) = LCF(m, n \bmod m)\).

This observation leads to a simple algorithm. We will assume that \(n \geq m\). At each iteration we replace \(n\) with \(m\) and \(m\) with \(n \bmod m\) until we have driven \(m\) to zero:

int LCF(int n, int m) {
  if (m == 0) return n;   // m has been driven to zero; n is the LCF
  return LCF(m, n % m);   // replace n with m, and m with n mod m
}
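
For example, \(LCF(24, 9)\) generates the call sequence \(LCF(24, 9) \rightarrow LCF(9, 6) \rightarrow LCF(6, 3) \rightarrow LCF(3, 0)\), which returns 3.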

To determine how expensive this algorithm is, we need to know how much progress we are making at each step. Note that after two iterations, we have replaced \(n\) with \(n \bmod m\). So the key question becomes: How big is \(n \bmod m\) relative to \(n\)?

\[\begin{split}\begin{eqnarray*} n \geq m &\Rightarrow& n/m \geq 1\\ &\Rightarrow& 2\lfloor n/m\rfloor > n/m\\ &\Rightarrow& m\lfloor n/m\rfloor > n/2\\ &\Rightarrow& n - n/2 > n - m\lfloor n/m\rfloor = n \bmod m\\ &\Rightarrow& n/2 > n \bmod m \end{eqnarray*}\end{split}\]

Thus, function LCF will halve its first parameter in no more than 2 iterations. The total cost is then \(O(\log n)\).
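
To see this bound in practice, here is a small instrumented sketch (the call counter and the sample inputs are our own illustration, not part of the module). For inputs 1071 and 462, only four calls are needed even though 1071 has eleven binary digits:

#include <stdio.h>

static int calls = 0;   /* instrumentation: counts invocations of LCF */

int LCF(int n, int m) {
  calls++;
  if (m == 0) return n;
  return LCF(m, n % m);
}

int main(void) {
  /* The inputs 1071 and 462 are chosen only for illustration. */
  printf("LCF(1071, 462) = %d\n", LCF(1071, 462));  /* prints 21 */
  printf("calls: %d\n", calls);                     /* prints 4 */
  return 0;
}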

