The Transformation Concept

1. The Transformation Concept: Integer Multiplication

Multiplying two large numbers is considerably more difficult than adding them. Recall your grade-school algorithms for adding and multiplying big numbers. Adding two \(n\)-digit numbers is done by simply moving from right to left through both numbers, for a total of \(O(n)\) work. But the cost to multiply two \(n\)-digit numbers directly is \(O(n^2)\), since you essentially need to multiply each digit of one number by each digit of the other.
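
To make the \(O(n^2)\) cost concrete, here is a minimal Python sketch of the grade-school algorithm on lists of digits (least-significant digit first); the function name and digit-list representation are illustrative assumptions, not anything fixed by the text.

    def school_multiply(a, b):
        # a and b are lists of digits, least-significant digit first.
        result = [0] * (len(a) + len(b))
        for i, da in enumerate(a):            # each digit of one number...
            carry = 0
            for j, db in enumerate(b):        # ...times each digit of the other
                total = result[i + j] + da * db + carry
                result[i + j] = total % 10
                carry = total // 10
            result[i + len(b)] += carry
        return result                          # digits of the product, low to high

    # 123 * 45 = 5535
    print(school_multiply([3, 2, 1], [5, 4]))  # [5, 3, 5, 5, 0], i.e. 5535

The nested loop over every pair of digits is exactly where the \(O(n^2)\) cost comes from.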

Recall that one property of logarithms is that \(\log nm = \log n + \log m\). Thus, if taking logarithms and anti-logarithms were cheap, then we could reduce multiplication to addition by taking the log of the two operands, adding, and then taking the anti-log of the sum.

Under normal circumstances, taking logarithms and anti-logarithms is expensive, and so this reduction would not be considered practical. However, this reduction is precisely the basis for the slide rule. The slide rule represents the two numbers as lengths on logarithmic scales, in effect doing the conversion to logarithms automatically. These two lengths are then added together, and the inverse logarithm of the sum is read off another logarithmic scale. The part normally considered expensive (taking logarithms and anti-logarithms) is cheap because it is built into the physical construction of the slide rule. Thus, the entire multiplication process can be done cheaply via a reduction to addition. In the days before electronic calculators, slide rules were routinely used by scientists and engineers to do basic calculations of this nature.

This process for doing multiplication quickly is an example of using a transformation to speed up a problem. We transformed the input values by taking their logarithms, performed a cheap operation (addition) on the transformed values, and then reversed the transformation (with an anti-log) to get the true answer to the original problem.
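
The same transform-add-invert pattern can be sketched numerically in Python, assuming we accept ordinary floating-point log and exp (so the result carries the same kind of small error a slide rule would):

    import math

    def multiply_via_logs(n, m):
        # Transform: take logarithms (expensive in general, "free" on a slide rule).
        log_sum = math.log(n) + math.log(m)    # the cheap operation: addition
        # Reverse the transformation: take the anti-log of the sum.
        return math.exp(log_sum)

    print(multiply_via_logs(7, 6))   # ~42.0, up to floating-point rounding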

2. Polynomial Multiplication

Now consider the problem of multiplying large polynomials. A vector \(\mathbf a\) of \(n\) values can uniquely represent a polynomial of degree \(n-1\), expressed as

\[P_{\mathbf a}(x) = \sum_{i=0}^{n-1} {\mathbf a}_i x^i.\]

Alternatively, a polynomial of degree \(n-1\) can be uniquely represented by a list of its values at \(n\) distinct points. Finding the value of a polynomial at a given point is called evaluation. Finding the coefficients of the polynomial given its values at \(n\) points is called interpolation.
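
For concreteness, here is a small Python sketch of evaluation using Horner's rule; the coefficient list corresponds to \(\mathbf a\) above, ordered from \({\mathbf a}_0\) upward, and the function name is only an illustrative choice.

    def evaluate(a, x):
        # Compute P_a(x) = a[0] + a[1]*x + ... + a[n-1]*x^(n-1)
        # using Horner's rule, which takes O(n) arithmetic operations.
        value = 0
        for coeff in reversed(a):
            value = value * x + coeff
        return value

    # P(x) = 2 + 3x + x^2, so P(4) = 2 + 12 + 16 = 30
    print(evaluate([2, 3, 1], 4))   # 30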


There are many useful engineering problems that involve multiplying two large-degree polynomials. This can be done using a transformation that involves evaluation (normally expensive), interpolation (normally expensive), and vector multiplication (cheap): evaluate both polynomials at a common set of points, multiply the resulting values pointwise, and then interpolate to recover the coefficients of the product. This is similar in concept to how the slide rule uses a transformation to multiply large numbers efficiently. The secret to speeding this process up is in carefully selecting the right values to evaluate and interpolate.
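
Here is a minimal sketch of that evaluate-multiply-interpolate pipeline, assuming NumPy is available and using naively chosen sample points. Because the product of two polynomials of degree \(n-1\) has degree at most \(2n-2\), its coefficients are determined by its values at \(2n-1\) distinct points.

    import numpy as np

    def multiply_by_transform(a, b):
        # a, b: coefficient lists, lowest-degree coefficient first.
        m = len(a) + len(b) - 1                 # number of sample points needed
        xs = np.arange(m)                       # any m distinct points will do
        pa = np.polynomial.polynomial.polyval(xs, a)   # evaluate (the transform)
        pb = np.polynomial.polynomial.polyval(xs, b)
        pc = pa * pb                            # cheap pointwise multiplication
        # Interpolate (reverse the transform): solve V c = pc, where V is the
        # Vandermonde matrix of the sample points.
        V = np.vander(xs, m, increasing=True)
        return np.linalg.solve(V, pc)

    # (1 + 2x)(3 + x) = 3 + 7x + 2x^2
    print(multiply_by_transform([1, 2], [3, 1]))   # approximately [3. 7. 2.]

Written this way the method is no faster than direct coefficient-by-coefficient multiplication, since generic evaluation and the dense interpolation solve each cost at least \(O(n^2)\); the speedup appears only once the sample points are chosen cleverly, which is the subject of the next module.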

Now, let's start thinking about ways to speed this up.
