
CS3 Data Structures & Algorithms

Chapter 4 Algorithm Analysis


4.8. Calculating Program Running Time

4.8.1. Calculating Program Running Time

This module discusses the analysis of several simple code fragments. We will make use of the algorithm analysis simplifying rules:

  1. If \(f(n)\) is in \(O(g(n))\) and \(g(n)\) is in \(O(h(n))\), then \(f(n)\) is in \(O(h(n))\).

  2. If \(f(n)\) is in \(O(k g(n))\) for any constant \(k > 0\), then \(f(n)\) is in \(O(g(n))\).

  3. If \(f_1(n)\) is in \(O(g_1(n))\) and \(f_2(n)\) is in \(O(g_2(n))\), then \(f_1(n) + f_2(n)\) is in \(O(\max(g_1(n), g_2(n)))\).

  4. If \(f_1(n)\) is in \(O(g_1(n))\) and \(f_2(n)\) is in \(O(g_2(n))\), then \(f_1(n) f_2(n)\) is in \(O(g_1(n) g_2(n))\).
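To see these rules in action, consider a fragment with two nested loops. The following Java sketch is illustrative (the fragment and the name pairSum are not from the module): each loop runs \(n\) times and the body takes constant time, so by Rule (4) the total cost is in \(O(n \cdot n) = O(n^2)\).

    // Hypothetical fragment: nested loops multiply their costs (Rule 4).
    static long pairSum(int[] a) {
        long sum = 0;
        for (int i = 0; i < a.length; i++) {      // outer loop: n iterations
            for (int j = 0; j < a.length; j++) {  // inner loop: n iterations each
                sum += (long) a[i] * a[j];        // constant-time body
            }
        }
        return sum;  // total cost: Theta(n^2)
    }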

What about other control statements? While loops are analyzed in a manner similar to for loops. The cost of an if statement in the worst case is the greater of the costs for the then and else clauses. This is also true for the average case, assuming that the size of \(n\) does not affect the probability of executing one of the clauses (which is usually, but not necessarily, true). For switch statements, the worst-case cost is that of the most expensive branch. For subroutine calls, simply add the cost of executing the subroutine.
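For example, here is a hypothetical Java sketch (not taken from the module) of an if statement whose clauses have different costs; the worst-case cost of the whole statement is that of the more expensive branch.

    // The then clause costs Theta(n^2) and the else clause costs Theta(n),
    // so the worst-case cost of the if statement is Theta(n^2).
    static int branchCost(int[] a, boolean expensive) {
        int count = 0;
        if (expensive) {
            for (int i = 0; i < a.length; i++)        // then clause:
                for (int j = 0; j < a.length; j++)    //   Theta(n^2)
                    if (a[i] == a[j]) count++;
        } else {
            for (int i = 0; i < a.length; i++)        // else clause: Theta(n)
                if (a[i] == 0) count++;
        }
        return count;
    }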

There are rare situations in which the probability for executing the various branches of an if or switch statement are functions of the input size. For example, for input of size \(n\), the then clause of an if statement might be executed with probability \(1/n\). An example would be an if statement that executes the then clause only for the smallest of \(n\) values. To perform an average-case analysis for such programs, we cannot simply count the cost of the if statement as being the cost of the more expensive branch. In such situations, the technique of amortized analysis can come to the rescue.
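The minimum-finding example mentioned above might look like the following Java sketch (a hypothetical rendering; the name findMin is chosen for illustration). On a randomly ordered array, the then clause runs only when a new minimum appears, so its probability shrinks as the loop proceeds.

    // Assumes a is nonempty. On random input, a[i] is the smallest value
    // seen so far with probability about 1/(i+1), so charging the full
    // cost of the branch on every iteration overstates the average case.
    static int findMin(int[] a) {
        int min = a[0];
        for (int i = 1; i < a.length; i++) {
            if (a[i] < min) {  // branch probability depends on i, not a constant
                min = a[i];
            }
        }
        return min;
    }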

Determining the execution time of a recursive subroutine can be difficult. The running time for a recursive subroutine is typically best expressed by a recurrence relation. For example, the recursive factorial function calls itself with a value one less than its input value. The result of this recursive call is then multiplied by the input value, which takes constant time. Thus, the cost of the factorial function, if we wish to measure cost in terms of the number of multiplication operations, is one more than the number of multiplications made by the recursive call on the smaller input. Because the base case does no multiplications, its cost is zero. Thus, the running time for this function can be expressed as

\[T(n) = T(n-1) + 1 \ \mbox{for}\ n>1;\ \ T(1) = 0.\]

The closed-form solution for this recurrence relation is \(\Theta(n)\).
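For reference, here is a direct Java rendering of the factorial function described above, with the cost measured by counting multiplications:

    // Recursive factorial: T(n) = T(n-1) + 1 multiplications, T(1) = 0,
    // so the running time is Theta(n).
    static long fact(int n) {
        if (n <= 1) {
            return 1;            // base case: no multiplications
        }
        return n * fact(n - 1);  // one multiplication plus the recursive cost
    }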

4.8.1.1. Case Study: Two Search Algorithms

The final example of algorithm analysis for this section will compare two algorithms for performing search in an array. Earlier, we determined that the running time for sequential search on an array where the search value \(K\) is equally likely to appear in any location is \(\Theta(n)\) in both the average and worst cases. We would like to compare this running time to that required to perform a binary search on an array whose values are stored in order from lowest to highest. Here is a visualization of the binary search method.
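For concreteness, here is a standard rendering of binary search in Java (a minimal sketch of the usual method; the module's own version may differ in details):

    // Return the position of K in sorted array a, or -1 if K is absent.
    static int binarySearch(int[] a, int K) {
        int low = 0;
        int high = a.length - 1;
        while (low <= high) {                  // stop when the range is empty
            int mid = low + (high - low) / 2;  // midpoint, written to avoid overflow
            if (a[mid] < K) {
                low = mid + 1;                 // K can only be in the upper half
            } else if (a[mid] > K) {
                high = mid - 1;                // K can only be in the lower half
            } else {
                return mid;                    // found K
            }
        }
        return -1;
    }

Each iteration halves the remaining search range, so binary search runs in \(\Theta(\log n)\) time in the worst case, a substantial improvement over the \(\Theta(n)\) cost of sequential search.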


4.8.1.2. Binary Search Practice Exercise

4.8.2. Summary Exercise

