Summation Techniques¶
Consider the following simple summation:
\[
\sum_{i=1}^{n} i.
\]
An easy proof by induction shows that this summation has the well-known closed form \(n(n+1)/2\). But while induction is a good technique for proving that a proposed closed-form expression is correct, how do we find a candidate closed-form expression to test in the first place? Let us try to think through this problem from first principles, as though we had never seen it before.
A good place to begin analyzing a summation is to give an estimate of its value for a given \(n\). Observe that the biggest term for this summation is \(n\), and there are \(n\) terms being summed up. So the total must be less than \(n^2\). Actually, most terms are much less than \(n\), and the sizes of the terms grow linearly. If we were to draw a picture with bars for the size of the terms, their heights would form a line, and we could enclose them in a box \(n\) units wide and \(n\) units high. It is easy to see from this that a closer estimate for the summation is about \(n^2/2\). Having this estimate in hand helps us when trying to determine an exact closed-form solution, because we will hopefully recognize if our proposed solution is badly wrong.
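To make the estimate concrete, here is one way to check it numerically, sketched in Python (any language would do):

    # Compare sum_{i=1}^{n} i against the rough estimate n^2 / 2.
    for n in (10, 100, 1000):
        total = sum(range(1, n + 1))
        estimate = n * n / 2
        print(n, total, estimate, total / estimate)

The ratio in the last column approaches 1 as \(n\) grows, which is what we expect if \(n^2/2\) has the right order of growth.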
Let us now consider some ways that we might hit upon an exact equation for the closed form solution to this summation. One particularly clever approach that we can take is to observe that we can “pair up” the first and last terms, the second and (\(n-1\))th terms, and so on. Each pair sums to \(n+1\). The number of pairs is \(n/2\). Thus, the solution is \(n(n+1)/2\). This is pretty, and there is no doubt about it being correct. The problem is that it is not a useful technique for solving many other summations.
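For an even value of \(n\), the pairing argument is easy to check mechanically, as in this small Python sketch:

    # Pair the first and last terms, the second and (n-1)th terms, and so on.
    n = 10                                    # an even example value
    pairs = [(i, n + 1 - i) for i in range(1, n // 2 + 1)]
    assert all(a + b == n + 1 for a, b in pairs)
    print(pairs)                              # (1, 10), (2, 9), ..., (5, 6)
    print(len(pairs) * (n + 1))               # n/2 pairs summing to n+1 each: 55
    print(sum(range(1, n + 1)))               # the summation itself: 55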
Now let us try to come up with something a bit more general. We already recognized that, because the largest term is \(n\) and there are \(n\) terms, the summation is less than \(n^2\). If we are lucky, the closed-form solution is a polynomial. Using that as a working assumption, we can invoke a technique called guess-and-test: because the summation is bounded above by \(n^2\), we guess that the closed form is a polynomial of degree two, that is, of the form \(c_1 n^2 + c_2 n + c_3\) for some constants \(c_1\), \(c_2\), and \(c_3\). If this guess is correct, then we can plug in the answers to small cases of the summation to solve for the coefficients. For this example, substituting 0, 1, and 2 for \(n\) leads to three simultaneous equations. Because the summation when \(n=0\) is just 0, \(c_3\) must be 0. For \(n=1\) and \(n=2\) we get the two equations
\[
c_1 + c_2 = 1,
\]
\[
4 c_1 + 2 c_2 = 3,
\]
which in turn yield \(c_1 = 1/2\) and \(c_2 = 1/2\). Thus, if the closed-form solution for the summation is a polynomial, then it can only be
\[
\frac{1}{2} n^2 + \frac{1}{2} n,
\]
which is more commonly written
\[
\frac{n(n+1)}{2}.
\]
At this point, we still must do the “test” part of the guess-and-test approach. We can use an induction proof to verify whether our candidate closed-form solution is correct. In this case it is indeed correct, as shown by Example 3. The induction proof is necessary because our initial assumption that the solution is a simple polynomial could be wrong. For example, it might have been that the true solution includes a logarithmic term, such as \(c_1 n^2 + c_2 n \log n\). The process shown here is essentially fitting a curve to a fixed number of points. Because there is always an \(n\)-degree polynomial that fits \(n+1\) points, we have not done enough work to be sure that we know the true equation without the induction proof.
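The guess-and-test computation itself can also be carried out mechanically. The following Python sketch uses exact rational arithmetic from the standard fractions module to solve the two equations and then checks the candidate against many values of \(n\) (the induction proof is still what settles the matter for all \(n\)):

    from fractions import Fraction

    def f(n):
        # The summation sum_{i=1}^{n} i, computed directly.
        return sum(range(1, n + 1))

    # Guess f(n) = c1*n^2 + c2*n (c3 = 0 because f(0) = 0).
    # n = 1 and n = 2 give:  c1 + c2 = 1  and  4*c1 + 2*c2 = 3.
    c1 = Fraction(3 - 2 * 1, 4 - 2)   # subtract twice the first equation from the second
    c2 = 1 - c1
    print(c1, c2)                     # 1/2 1/2

    # The "test" step: the candidate agrees with the summation for many n.
    assert all(c1 * n * n + c2 * n == f(n) for n in range(200))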
Guess-and-test is useful whenever the solution is a polynomial expression. In particular, similar reasoning can be used to solve for \(\sum_{i=1}^n i^2\), or more generally \(\sum_{i=1}^n i^c\) for \(c\) any positive integer. Why is this not a universal approach to solving summations? Because many summations do not have a polynomial as their closed form solution.
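As a sketch of how the same idea extends, the snippet below fits a cubic to the first few values of \(\sum_{i=1}^n i^2\); it assumes the third-party sympy library is available for solving the simultaneous equations exactly:

    import sympy as sp

    n = sp.symbols('n')
    c = sp.symbols('c1 c2 c3 c4')
    guess = c[0] * n**3 + c[1] * n**2 + c[2] * n + c[3]

    # Plug in n = 0, 1, 2, 3 to get four simultaneous equations
    # relating the guessed coefficients to the actual partial sums.
    eqs = [sp.Eq(guess.subs(n, k), sum(i * i for i in range(1, k + 1)))
           for k in range(4)]
    sol = sp.solve(eqs, c)
    print(sp.factor(guess.subs(sol)))   # n*(n + 1)*(2*n + 1)/6

An induction proof is still needed afterwards, exactly as in the quadratic case.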
A more general approach is based on the subtract-and-guess or divide-and-guess strategies. One form of subtract-and-guess is known as the shifting method. The shifting method subtracts the summation from a variation on the summation. The variation selected for the subtraction should be one that makes most of the terms cancel out. To solve sum \(f\), we pick a known function \(g\) and find a pattern in terms of \(f(n) - g(n)\) or \(f(n)/g(n)\).
Example 1
Find the closed form solution for \(\sum_{i=1}^n i\) using the divide-and-guess approach. We will try two example functions to illustrate the divide-and-guess method: dividing by \(n\) and dividing by \(f(n-1)\). Our goal is to find patterns that we can use to guess a closed-form expression as our candidate for testing with an induction proof. To aid us in finding such patterns, we can construct a table showing the first few values of each function, and the result of dividing one by the other, as follows.

\(n\): 1, 2, 3, 4, 5, 6, 7, 8
\(f(n)\): 1, 3, 6, 10, 15, 21, 28, 36
\(f(n)/n\): 1, 3/2, 2, 5/2, 3, 7/2, 4, 9/2
\(f(n-1)\): 0, 1, 3, 6, 10, 15, 21, 28
\(f(n)/f(n-1)\): undefined, 3, 2, 5/3, 3/2, 7/5, 4/3, 9/7
Dividing by both \(n\) and \(f(n-1)\) happens to give us useful patterns to work with: \(\frac{f(n)}{n} = \frac{n+1}{2}\), and \(\frac{f(n)}{f(n-1)} = \frac{n+1}{n-1}\). Of course, many other guesses for the function \(g\) do not work. For example, \(f(n) - n = f(n-1)\) and \(f(n) - f(n-1) = n\) are both just restatements of the recurrence \(f(n) = f(n-1) + n\), which by itself does not reveal the closed form solution to this summation. Finding the right combination of equations can be like finding a needle in a haystack.
In our first example, we can see directly what the closed-form solution should be. Since \(\frac{f(n)}{n} = \frac{n+1}{2}\), obviously \(f(n) = n(n+1)/2\).
Dividing \(f(n)\) by \(f(n-1)\) does not give so obvious a result, but it provides another useful illustration. Starting from \(\frac{f(n)}{f(n-1)} = \frac{n+1}{n-1}\) and using the fact that \(f(n-1) = f(n) - n\), we get
\[
(n-1) f(n) = (n+1) f(n-1) = (n+1)\bigl(f(n) - n\bigr),
\]
so
\[
2 f(n) = n(n+1), \qquad f(n) = \frac{n(n+1)}{2}.
\]
Once again, we still do not have a proof that \(f(n) = n(n+1)/2\). Why? Because we did not prove that \(f(n)/n = (n+1)/2\) nor that \(f(n)/f(n-1) = (n+1)/(n-1)\). We merely hypothesized patterns from looking at a few terms. Fortunately, it is easy to check our hypothesis with induction.
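Tables like the one above are easy to generate programmatically when hunting for a workable function \(g\). Here is one possible sketch in Python, using exact fractions:

    from fractions import Fraction

    def f(n):
        # f(n) = sum_{i=1}^{n} i
        return sum(range(1, n + 1))

    # Tabulate f(n), f(n)/n, and f(n)/f(n-1) for small n.
    for n in range(1, 9):
        ratio_n = Fraction(f(n), n)
        ratio_prev = Fraction(f(n), f(n - 1)) if n > 1 else None
        print(n, f(n), ratio_n, ratio_prev)
    # f(n)/n runs 1, 3/2, 2, 5/2, ...        i.e. (n+1)/2
    # f(n)/f(n-1) runs 3, 2, 5/3, 3/2, ...   i.e. (n+1)/(n-1)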
Example 2
Solve the summation
\[
f(n) = \sum_{i=1}^{n} \frac{1}{2^i}.
\]
We will begin by writing out a table listing the first few values of the summation, to see if we can detect a pattern.

\(n\): 1, 2, 3, 4, 5, 6
\(f(n)\): 1/2, 3/4, 7/8, 15/16, 31/32, 63/64
\(1 - f(n)\): 1/2, 1/4, 1/8, 1/16, 1/32, 1/64
By direct inspection of the second line of the table, we might recognize the pattern \(f(n) = \frac{2^n-1}{2^n}\). A simple induction proof can then prove that this always holds true. Alternatively, suppose we had not noticed the pattern for the form of \(f(n)\). We might still observe that \(f(n)\) appears to be approaching an asymptote at 1. In that case, we might look at the difference between \(f(n)\) and the expected asymptote. This result is shown in the last line of the table, which has a clear pattern: the \(i\)th entry is \(1/2^i\). From this we can easily deduce the guess \(f(n) = 1 - \frac{1}{2^n}\). Again, a simple induction proof will verify the guess.
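Computing the partial sums with exact fractions reproduces the table and lets us test the guess quickly; here is one possible sketch in Python:

    from fractions import Fraction

    def f(n):
        # f(n) = sum_{i=1}^{n} 1 / 2^i
        return sum(Fraction(1, 2 ** i) for i in range(1, n + 1))

    # The table rows: f(n) and its distance from the apparent asymptote 1.
    for n in range(1, 7):
        print(n, f(n), 1 - f(n))      # 1/2 1/2,  3/4 1/4,  7/8 1/8, ...

    # Check the guessed closed form f(n) = 1 - 1/2^n for many values of n.
    assert all(f(n) == 1 - Fraction(1, 2 ** n) for n in range(1, 50))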
Example 3
Solve the summation
\[
f(n) = \sum_{i=1}^{n} r^i.
\]
This is called a geometric series. Our goal is to find some function \(g(n)\) such that subtracting it from \(f(n)\) leaves us with an easily manipulated equation. Because the difference between consecutive terms of the summation is a factor of \(r\), we can shift terms if we multiply the entire expression by \(r\):
\[
r f(n) = r \sum_{i=1}^{n} r^i = \sum_{i=1}^{n} r^{i+1} = \sum_{i=2}^{n+1} r^i.
\]
We can now subtract one equation from the other, as follows:
\[
r f(n) - f(n) = \sum_{i=2}^{n+1} r^i - \sum_{i=1}^{n} r^i.
\]
The result leaves only the end terms:
\[
r f(n) - f(n) = r^{n+1} - r.
\]
Thus, we get the result
\[
f(n) = \frac{r^{n+1} - r}{r - 1}
\]
where \(r \neq 1\).
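As a quick sanity check on this closed form (with the summation taken to run from \(i = 1\) to \(n\), as above), the following Python sketch verifies it with exact fractions for several ratios \(r \neq 1\):

    from fractions import Fraction

    def geom(r, n):
        # f(n) = sum_{i=1}^{n} r^i
        return sum(r ** i for i in range(1, n + 1))

    # Verify f(n) = (r^{n+1} - r) / (r - 1) for several ratios r != 1.
    for r in (Fraction(2), Fraction(3), Fraction(1, 2), Fraction(-3, 4)):
        for n in range(15):
            assert geom(r, n) == (r ** (n + 1) - r) / (r - 1)
    print("geometric series closed form checked")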
Example 4
For our second example of the shifting method, we solve
\[
f(n) = \sum_{i=1}^{n} i 2^i.
\]
We can achieve our goal if we multiply by two:
\[
2 f(n) = 2 \sum_{i=1}^{n} i 2^i = \sum_{i=1}^{n} i 2^{i+1}.
\]
The \(i\)th term of \(2 f(n)\) is \(i \cdot 2^{i+1}\), while the \((i+1)\)th term of \(f(n)\) is \((i+1) \cdot 2^{i+1}\). Subtracting one expression from the other yields the summation of \(2^i\) and a few non-canceled terms:
\[
2 f(n) - f(n) = f(n) = \sum_{i=1}^{n} i 2^{i+1} - \sum_{i=1}^{n} i 2^i.
\]
Shift \(i\)'s value in the second summation, substituting \((i+1)\) for \(i\):
\[
f(n) = \sum_{i=1}^{n} i 2^{i+1} - \sum_{i=0}^{n-1} (i+1) 2^{i+1}.
\]
Break the second summation into two parts:
\[
f(n) = \sum_{i=1}^{n} i 2^{i+1} - \sum_{i=0}^{n-1} i 2^{i+1} - \sum_{i=0}^{n-1} 2^{i+1}.
\]
Cancel like terms:
\[
f(n) = n 2^{n+1} - \sum_{i=0}^{n-1} 2^{i+1}.
\]
Again shift \(i\)'s value in the summation, substituting \(i\) for \((i+1)\):
\[
f(n) = n 2^{n+1} - \sum_{i=1}^{n} 2^i.
\]
Replace the new summation with a solution that we already know:
\[
f(n) = n 2^{n+1} - \bigl(2^{n+1} - 2\bigr).
\]
Finally, reorganize the equation:
\[
f(n) = (n-1) 2^{n+1} + 2.
\]
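As with the earlier examples, the final closed form is easy to test numerically before writing out an induction proof; here is one possible check in Python:

    def f(n):
        # f(n) = sum_{i=1}^{n} i * 2^i
        return sum(i * 2 ** i for i in range(1, n + 1))

    # Check f(n) = (n - 1) * 2^{n+1} + 2 for many values of n.
    assert all(f(n) == (n - 1) * 2 ** (n + 1) + 2 for n in range(30))
    print(f(5), (5 - 1) * 2 ** 6 + 2)         # both print 258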
Here is another example.