# 4.8. Calculating Program Running Time

## 4.8.1. Calculating Program Running Time

This module discusses the analysis of several simple code fragments. We will make use of the simplifying rules for algorithm analysis:

If \(f(n)\) is in \(O(g(n))\) and \(g(n)\) is in \(O(h(n))\), then \(f(n)\) is in \(O(h(n))\).

If \(f(n)\) is in \(O(k g(n))\) for any constant \(k > 0\), then \(f(n)\) is in \(O(g(n))\).

If \(f_1(n)\) is in \(O(g_1(n))\) and \(f_2(n)\) is in \(O(g_2(n))\), then \(f_1(n) + f_2(n)\) is in \(O(\max(g_1(n), g_2(n)))\).

If \(f_1(n)\) is in \(O(g_1(n))\) and \(f_2(n)\) is in \(O(g_2(n))\), then \(f_1(n) f_2(n)\) is in \(O(g_1(n) g_2(n))\).
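As a concrete illustration of the last two rules, consider a hypothetical fragment (my own example, not from the text) containing a single loop followed by a pair of nested loops:

```python
def fragment_cost(n):
    """Count the basic operations executed by a hypothetical fragment.

    The first loop runs n times, so it is in O(n).  The nested pair runs
    n * n times; by the product rule, O(n) * O(n) = O(n^2).  By the sum
    rule, the whole fragment is in O(max(n, n^2)) = O(n^2).
    """
    ops = 0
    for i in range(n):        # single loop: n operations
        ops += 1
    for i in range(n):        # nested loops: n * n operations
        for j in range(n):
            ops += 1
    return ops                # n + n^2 total, which is in O(n^2)
```

Calling `fragment_cost(100)` returns \(100 + 100^2 = 10100\); the quadratic term dominates as \(n\) grows, exactly as the sum rule predicts.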

What about other control statements? `While` loops are analyzed in a manner similar to `for` loops. The cost of an `if` statement in the worst case is the greater of the costs for the `then` and `else` clauses. This is also true for the average case, assuming that the size of \(n\) does not affect the probability of executing one of the clauses (which is usually, but not necessarily, true). For `switch` statements, the worst-case cost is that of the most expensive branch. For subroutine calls, simply add the cost of executing the subroutine.
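The rule for `if` statements can be sketched with a small hypothetical function whose two clauses have different costs:

```python
def process(arr, flag):
    """Worst-case analysis example for an if statement.

    The then clause scans the whole array, costing Theta(n); the else
    clause does constant work, costing Theta(1).  In the worst case the
    if statement costs the greater of the two: Theta(n).
    """
    if flag:
        return sum(arr)   # then clause: Theta(n)
    else:
        return arr[0]     # else clause: Theta(1)
```
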

There are rare situations in which the probability of executing the various branches of an `if` or `switch` statement is a function of the input size. For example, for input of size \(n\), the `then` clause of an `if` statement might be executed with probability \(1/n\). An example would be an `if` statement that executes the `then` clause only for the smallest of \(n\) values. To perform an average-case analysis for such programs, we cannot simply count the cost of the `if` statement as being the cost of the more expensive branch. In such situations, the technique of amortized analysis can come to the rescue.
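A sketch of such a branch (my own example, echoing the "smallest of \(n\) values" case): updating a running minimum while scanning an array. The `then` clause fires only when a new minimum is seen; for a random permutation this happens with probability \(1/i\) at position \(i\), so the expected total number of updates is the harmonic number \(H_n = \Theta(\log n)\), far below the worst case of \(n-1\).

```python
def find_min(arr):
    """Return the minimum of arr, counting how often the then clause fires."""
    updates = 0
    smallest = arr[0]
    for x in arr[1:]:
        if x < smallest:      # then clause: executes only for a new minimum
            smallest = x
            updates += 1
    return smallest, updates
```

For a sorted ascending array the update never fires; for a sorted descending array it fires every time, which is why a naive "cost of the more expensive branch" analysis overestimates the average case.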

Determining the execution time of a recursive subroutine can be difficult. The running time for a recursive subroutine is typically best expressed by a recurrence relation. For example, the recursive factorial function calls itself with a value one less than its input value. The result of this recursive call is then multiplied by the input value, which takes constant time. Thus, the cost of the factorial function, if we wish to measure cost in terms of the number of multiplication operations, is one more than the number of multiplications made by the recursive call on the smaller input. Because the base case does no multiplications, its cost is zero. Thus, the running time for this function can be expressed as

\[
T(n) = \begin{cases} 0, & n \leq 1,\\ T(n-1) + 1, & n > 1. \end{cases}
\]

The closed-form solution for this recurrence relation is \(\Theta(n)\).
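This counting argument can be checked directly with a small sketch of recursive factorial that returns the number of multiplications alongside the value:

```python
def fact(n):
    """Recursive factorial; returns (value, multiplications performed)."""
    if n <= 1:                     # base case: no multiplications, cost 0
        return 1, 0
    value, mults = fact(n - 1)     # recursive call performs T(n-1) multiplications
    return n * value, mults + 1    # one more multiplication: T(n) = T(n-1) + 1
```

`fact(5)` performs 4 multiplications; in general `fact(n)` performs \(n - 1\), matching the \(\Theta(n)\) closed form.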

### 4.8.1.1. Case Study: Two Search Algorithms

The final example of algorithm analysis for this section will compare two algorithms for performing search in an array. Earlier, we determined that the running time for sequential search on an array where the search value \(K\) is equally likely to appear in any location is \(\Theta(n)\) in both the average and worst cases. We would like to compare this running time to that required to perform a binary search on an array whose values are stored in order from lowest to highest.

### 4.8.1.2. Binary Search Practice Exercise

### 4.8.1.3. Analyzing Binary Search

Function `binarySearch` is designed to find the (single) occurrence of \(K\) and return its position. A special value is returned if \(K\) does not appear in the array. This algorithm can be modified to implement variations such as returning the position of the first occurrence of \(K\) in the array if multiple occurrences are allowed, and returning the position of the greatest value less than \(K\) when \(K\) is not in the array.
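The `binarySearch` function itself appears elsewhere in the module; a minimal sketch of the standard algorithm in Python, using `-1` as the special value, would be:

```python
def binary_search(arr, K):
    """Return the position of K in sorted list arr, or -1 if K is absent."""
    low, high = 0, len(arr) - 1
    while low <= high:            # stop when the search range is empty
        mid = (low + high) // 2   # midpoint of the current range
        if arr[mid] < K:
            low = mid + 1         # K can only be in the upper half
        elif arr[mid] > K:
            high = mid - 1        # K can only be in the lower half
        else:
            return mid            # found K
    return -1                     # special value: K not in array
```

Each iteration of the loop eliminates half of the remaining range, which is where the \(\Theta(\log n)\) worst-case cost comes from.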

Comparing sequential search to binary search, we see that as \(n\) grows, the \(\Theta(n)\) running time for sequential search in the average and worst cases quickly becomes much greater than the \(\Theta(\log n)\) running time for binary search. Taken in isolation, binary search appears to be much more efficient than sequential search. This is despite the fact that the constant factor for binary search is greater than that for sequential search, because the calculation for the next search position in binary search is more expensive than just incrementing the current position, as sequential search does.
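The gap can be seen directly by counting worst-case key comparisons (a rough sketch; exact counts depend on implementation details such as how the midpoint is chosen):

```python
def sequential_comparisons(n):
    """Worst-case key comparisons for sequential search on n elements: n."""
    return n

def binary_comparisons(n):
    """Worst-case probes for binary search on n elements: about log2(n) + 1.

    Simulated by always recursing into the larger (right) half, which is
    the longest possible search path when mid = (low + high) // 2.
    """
    count = 0
    low, high = 0, n - 1
    while low <= high:
        count += 1
        mid = (low + high) // 2
        low = mid + 1            # take the larger half every time
    return count
```

For \(n = 16\), sequential search may need 16 comparisons while binary search needs at most 5; for \(n\) in the millions the ratio is dramatic, even after accounting for binary search's larger constant factor.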

Note however that the running time for sequential search will be roughly the same regardless of whether or not the array values are stored in order. In contrast, binary search requires that the array values be ordered from lowest to highest. Depending on the context in which binary search is to be used, this requirement for a sorted array could be detrimental to the running time of a complete program, because maintaining the values in sorted order requires a greater cost when inserting new elements into the array. This is an example of a tradeoff between the advantage of binary search during search and the disadvantage related to maintaining a sorted array. Only in the context of the complete problem to be solved can we know whether the advantage outweighs the disadvantage.