3.7. Mathematical Proof Techniques¶
Solving any problem has two distinct parts: the investigation and the argument. Students are too used to seeing only the argument in their textbooks and lectures. But to be successful in school (and in life after school), one needs to be good at both, and to understand the differences between these two phases of the process. To solve the problem, you must investigate successfully. That means engaging the problem and working through it until you find a solution. Then, to give the answer to your client (whether that “client” is your instructor reading your answers on a homework assignment or exam, or your boss reading a written report), you need to be able to make the argument in a way that gets the solution across clearly and succinctly. The argument phase involves good technical writing skills: the ability to make a clear, logical argument.
Being conversant with standard proof techniques can help you in this process. Knowing how to write a good proof helps in many ways. First, it clarifies your thought process, which in turn clarifies your explanations. Second, if you use one of the standard proof structures, such as proof by contradiction or an induction proof, then both you and your reader are working from a shared understanding of that structure. That makes it easier for your reader to follow your proof, because the reader need not decode the structure of your argument from scratch.
This section briefly introduces three commonly used proof techniques:
Deduction, or direct proof;
Proof by contradiction; and
Proof by mathematical induction.
3.7.1.1. Direct Proof¶
In general, a direct proof is just a “logical explanation”: an argument made simply in terms of logic, sometimes referred to as an argument by deduction.
Many direct proofs are written in English with words such as “if … then”. In this case, logic notation such as \(P \Rightarrow Q\) can often help express the proof. Even if we don’t wish to use symbolic logic notation, we can still take advantage of fundamental theorems of logic to structure our arguments. For example, if we want to prove that \(P\) and \(Q\) are equivalent, we can first prove \(P \Rightarrow Q\) and then prove \(Q \Rightarrow P\).
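For instance (a standard illustration, not one of this section’s numbered examples), here is a short direct proof that the square of an even integer is even. Suppose \(n\) is even. Then \(n = 2k\) for some integer \(k\), so \(n^2 = 4k^2 = 2(2k^2)\). Since \(2k^2\) is an integer, \(n^2\) is even. Each statement follows logically from the one before it, which is the essence of a deductive argument.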
In some domains, proofs are essentially a series of state changes from a start state to an end state. Formal predicate logic can be viewed in this way, with the various “rules of logic” being used to make the changes from one formula or combining a couple of formulas to make a new formula on the route to the destination. Symbolic manipulations to solve integration problems in introductory calculus classes are similar in spirit, as are high school geometry proofs.
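As a small illustration of this view (not drawn from the text’s numbered examples), suppose we are given \(P\), \(P \Rightarrow Q\), and \(Q \Rightarrow R\). Applying modus ponens to \(P\) and \(P \Rightarrow Q\) produces the new formula \(Q\); applying it again to \(Q\) and \(Q \Rightarrow R\) produces \(R\). Each application of a rule of logic is one state change on the route from the starting formulas to the destination.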
3.7.1.2. Proof by Contradiction¶
The simplest way to disprove a theorem or statement is to find a counter-example to the theorem. Unfortunately, no number of examples supporting a theorem is sufficient to prove that the theorem is correct. However, there is an approach that is vaguely similar to disproving by counter-example, called proof by contradiction. To prove a theorem by contradiction, we first assume that the theorem is false. We then find a logical contradiction stemming from this assumption. If the logic used to find the contradiction is correct, then the only way to resolve the contradiction is to recognize that the assumption that the theorem is false must be incorrect. That is, we conclude that the theorem must be true.
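A classic illustration (again, not one of this section’s numbered examples) is the proof that \(\sqrt{2}\) is irrational. Assume to the contrary that \(\sqrt{2} = a/b\) for integers \(a\) and \(b\) with no common factor. Then \(a^2 = 2b^2\), so \(a^2\) is even, which forces \(a\) to be even, say \(a = 2c\). Substituting gives \(2b^2 = 4c^2\), so \(b^2 = 2c^2\) and \(b\) is even as well. But then \(a\) and \(b\) share the factor 2, contradicting the assumption that they have no common factor. The only way to resolve the contradiction is to conclude that \(\sqrt{2}\) cannot be written as a ratio of integers.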
A related proof technique is proving the contrapositive. We can prove that \(P \Rightarrow Q\) by proving \((\mathrm{not}\ Q) \Rightarrow (\mathrm{not}\ P)\). This technique works because the truth tables for the two statements are identical.
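For example (an illustration, not one of the text’s numbered examples), to prove that if \(n^2\) is odd then \(n\) is odd, it is easier to prove the contrapositive: if \(n\) is even, say \(n = 2k\), then \(n^2 = 2(2k^2)\) is even. Since the contrapositive holds, the original implication holds as well.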
3.7.1.3. Proof by Mathematical Induction¶
Mathematical induction can be used to prove a wide variety of theorems. Induction also provides a useful way to think about algorithm design, because it encourages you to think about solving a problem by building up from simple subproblems. Induction can help to prove that a recursive function produces the correct result. Understanding recursion is a big step toward understanding induction, and vice versa, since they work by essentially the same process.
Within the context of algorithm analysis, one of the most important uses for mathematical induction is as a method to test a hypothesis. When seeking a closed-form solution for a summation or recurrence, we might first guess or otherwise acquire evidence that a particular formula is the correct solution. If the formula is indeed correct, it is often an easy matter to prove that fact with an induction proof.
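For example (an illustrative case, not one of the text’s numbered examples), computing the first few terms of the recurrence \(T(1) = 1\), \(T(n) = 2T(n-1) + 1\) gives 1, 3, 7, 15, which suggests the guess \(T(n) = 2^n - 1\). That guess then becomes a hypothesis to be confirmed (or refuted) by an induction proof of the kind described next.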
Let Thrm be a theorem to prove, and express Thrm in terms of a positive integer parameter \(n\). Mathematical induction states that Thrm is true for any value of parameter \(n\) (for \(n \geq c\), where \(c\) is some constant) if the following two conditions are true:
1. Base Case: Thrm holds for \(n = c\), and
2. Induction Step: If Thrm holds for \(n - 1\), then Thrm holds for \(n\).
Proving the base case is usually easy, typically requiring that some small value such as 1 be substituted for \(n\) in the theorem and applying simple algebra or logic as necessary to verify the theorem. Proving the induction step is sometimes easy, and sometimes difficult. An alternative formulation of the induction step is known as strong induction. The induction step for strong induction is:
2a. Induction Step: If Thrm holds for all \(k\) such that \(c \leq k < n\), then Thrm holds for \(n\).
Proving either variant of the induction step (in conjunction with verifying the base case) yields a satisfactory proof by mathematical induction.
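As an illustration of when the strong form is needed (an example not drawn from this section), consider proving that every integer \(n \geq 2\) can be written as a product of one or more primes. If \(n\) is prime, we are done. Otherwise \(n = ab\) with \(2 \leq a, b < n\). The ordinary induction hypothesis about \(n - 1\) tells us nothing about \(a\) or \(b\), but the strong induction hypothesis applies to both, so each is a product of primes, and therefore so is \(n\).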
The two conditions that make up the induction proof combine to demonstrate that Thrm holds for \(n = c + 1\) as an extension of the fact that Thrm holds for \(n = c\). This fact, combined again with condition (2) or (2a), indicates that Thrm also holds for \(n = c + 2\), and so on. Thus, Thrm holds for all values of \(n \geq c\) once the two conditions have been proved.
What makes mathematical induction so powerful (and so mystifying to most people at first) is that we can take advantage of the assumption that Thrm holds for all values less than \(n\) as a tool to help us prove that Thrm holds for \(n\). This is known as the induction hypothesis. Having this assumption to work with makes the induction step easier to prove than tackling the original theorem itself. Being able to rely on the induction hypothesis provides extra information that we can bring to bear on the problem.
Recursion and induction have many similarities. Both are anchored on one or more base cases. A recursive function relies on the ability to call itself to get the answer for smaller instances of the problem. Likewise, induction proofs rely on the truth of the induction hypothesis to prove the theorem. The induction hypothesis does not come out of thin air. It is true if and only if the theorem itself is true, and therefore is reliable within the proof context. Using the induction hypothesis to do work is exactly the same as using a recursive call to do work.
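To make the next paragraph concrete, here is a sketch of the standard summation proof it discusses, stated on the assumption that \(\mathbf{S}(n)\) denotes \(\sum_{i=1}^{n} i\) and that the theorem in question is \(\mathbf{S}(n) = n(n+1)/2\). Base case: \(\mathbf{S}(1) = 1 = 1(2)/2\), so the theorem holds for \(n = 1\). Induction step: assume \(\mathbf{S}(n-1) = (n-1)n/2\). Then \(\mathbf{S}(n) = \mathbf{S}(n-1) + n = (n-1)n/2 + n = (n^2 + n)/2 = n(n+1)/2\), which is the theorem for \(n\).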
Note carefully what took place in this example. First we cast \(\mathbf{S}(n)\) in terms of a smaller occurrence of the problem: \(\mathbf{S}(n) = \mathbf{S}(n-1) + n\). This is important because once \(\mathbf{S}(n-1)\) comes into the picture, we can use the induction hypothesis to replace \(\mathbf{S}(n-1)\) with \((n-1)(n)/2\). From here, it is simple algebra to prove that \(\mathbf{S}(n-1) + n\) equals the right-hand side of the original theorem.
We can compare the induction proof of Example 3.7.3 with the direct proof in Example 3.7.1. Different people might think one is easier to understand than the other, but certainly the writer of the direct proof version had to discover an insight unique to that problem that might not be helpful or relevant when proving other summations.
Our next example of mathematical induction proves a theorem from geometry. It also illustrates a standard technique of induction proof where we take \(n\) objects and remove some object to use the induction hypothesis.
Compare the proof in Example 3.7.8 with that in Example 3.7.6. For Example 3.7.6, we took a collection of stamps of size \(n-1\) (which, by the induction hypothesis, must have the desired property) and from that “built” a collection of size \(n\) that has the desired property. We therefore proved the existence of some collection of stamps of size \(n\) with the desired property.
For Example 3.7.8 we must prove that any collection of \(n\) lines has the desired property. Thus, our strategy is to take an arbitrary collection of \(n\) lines, and “reduce” it so that we have a set of lines that must have the desired property because it matches the induction hypothesis. From there, we merely need to show that reversing the original reduction process preserves the desired property. Since we controlled the reduction process, we control the reversal of this reduction.
In contrast, consider what is required if we attempt to “build” from a set of lines of size \(n-1\) to one of size \(n\). We would have great difficulty justifying that all possible collections of \(n\) lines are covered by our building process. By reducing from an arbitrary collection of \(n\) lines to something less, we avoid this problem.
Another advantage to thinking in terms of “reducing from \(n\)” rather than “building up from \(n-1\)” is that reducing is more like what we do when we write a recursive function. In recursion, we would naturally compute some function of \(n\) by calling the function (recursively) on \(n-1\) and then using the result to compute the value for \(n\).
This section’s final example shows how induction can be used to prove that a recursive function produces the correct result.
We can use a similar process to prove many recursive programs correct. The general form is to show that the base cases perform correctly, and then to use the induction hypothesis to show that the recursive step also produces the correct result. Prior to this, we must prove that the function always terminates, which might also be done using an induction proof.
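As a minimal sketch of this process (written here in Python; the function name and details are illustrative and not taken from the textbook), consider a recursive function that computes the sum \(1 + 2 + \cdots + n\):

def rsum(n):
    # Base case: the empty sum is 0 (this also handles n <= 0 defensively).
    if n <= 0:
        return 0
    # Recursive step: rely on the result for the smaller instance n - 1.
    return rsum(n - 1) + n

The correctness argument mirrors an induction proof. Base case: rsum(0) returns 0, the value of the empty sum. Induction step: assume rsum(n - 1) correctly returns \(\sum_{i=1}^{n-1} i\); then rsum(n) returns that value plus \(n\), which is \(\sum_{i=1}^{n} i\). Termination holds because each recursive call strictly decreases a non-negative argument until the base case is reached.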