2.4. Growth Rates Review
2.4.1. Growth Rates Review
Two functions of n have different growth rates if, as n goes to infinity, their ratio either goes to infinity or goes to zero.
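One standard way to state this formally (a worked restatement of the sentence above, not new material):

```latex
% f and g have different growth rates exactly when their ratio
% degenerates in the limit: it either vanishes or blows up.
\lim_{n \to \infty} \frac{f(n)}{g(n)} = 0
\qquad \text{or} \qquad
\lim_{n \to \infty} \frac{f(n)}{g(n)} = \infty
```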

Figure 2.4.1: Two views of a graph illustrating the growth rates for six equations. The bottom view shows in detail the lower-left portion of the top view. The horizontal axis represents input size. The vertical axis can represent time, space, or any other measure of cost.
Where would the line for the equation $(1.618)^n$ go on this graph?
Exact equations relating program operations to running time require machine-dependent constants. Sometimes, the equation for exact running time is complicated to compute. Usually, we are satisfied with knowing an approximate growth rate. Specifically, we don't generally care about the constants in the equations. Given two algorithms with growth rates $c_1 n$ and $c_2 2^n$, do we need to know the values of $c_1$ and $c_2$?
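As a quick sketch of why the constants don't matter here (the values $c_1 = 1000$ and $c_2 = 1$ are arbitrary choices for illustration):

```python
# Compare c1*n (linear) against c2*2^n (exponential) for arbitrary
# constants; the exponential term dominates regardless of c1 and c2.
c1, c2 = 1000, 1  # arbitrary illustrative constants

for n in range(1, 25):
    linear = c1 * n
    exponential = c2 * 2**n
    if exponential > linear:
        print(f"2^n overtakes 1000*n at n = {n}")  # prints n = 14
        break
```

Making $c_1$ even larger only delays the crossover slightly; it never prevents it.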
Here are some useful observations.
Since $n^2$ grows faster than $n$ (the sketch after this list checks a few of these pairs numerically):
$2^{n^2}$ grows faster than $2^n$. (Take the antilog of both sides.)
$n^4$ grows faster than $n^2$. (Square both sides.)
$n$ grows faster than $\sqrt{n}$. ($n = (\sqrt{n})^2$; replace $n$ with $\sqrt{n}$.)
$\log n^2 = 2 \log n$ grows no slower than $\log n$. (Take the log of both sides. Log "flattens" growth rates.)
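A minimal numerical sanity check of the first few pairs (illustrative only; sampling a few points cannot prove a limit):

```python
import math

# Each pair (f, g, label) should show f(n)/g(n) increasing with n
# if f truly grows faster than g.
pairs = [
    (lambda n: 2**(n*n), lambda n: 2**n,         "2^(n^2) vs 2^n"),
    (lambda n: n**4,     lambda n: n**2,         "n^4 vs n^2"),
    (lambda n: n,        lambda n: math.sqrt(n), "n vs sqrt(n)"),
]

for f, g, label in pairs:
    ratios = [f(n) / g(n) for n in (4, 8, 16)]
    print(label, ["%.3g" % r for r in ratios])  # ratios keep growing
```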
Since $n!$ grows faster than $2^n$:
$(n!)!$ grows faster than $(2^n)!$. (Apply factorial to both sides.)
$2^{n!}$ grows faster than $2^{2^n}$. (Take the antilog of both sides.)
$(n!)^2$ grows faster than $(2^n)^2 = 2^{2n}$. (Square both sides.)
$\sqrt{n!}$ grows faster than $\sqrt{2^n}$. (Take the square root of both sides.)
$\log n!$ grows no slower than $\log 2^n = n$. (Take the log of both sides. Actually, it grows faster, since $\log n! = \Theta(n \log n)$; see the bounds after this list.)
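The $\Theta(n \log n)$ claim for $\log n!$ follows from elementary bounds (a standard argument that does not need Stirling's full formula):

```latex
% Upper bound: each of the n factors of n! is at most n.
\log n! = \sum_{i=1}^{n} \log i \;\le\; n \log n
% Lower bound: the top n/2 factors are each at least n/2.
\log n! \;\ge\; \sum_{i=\lceil n/2 \rceil}^{n} \log i \;\ge\; \frac{n}{2} \log \frac{n}{2}
% Both bounds are Theta(n log n), so log n! = Theta(n log n).
```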
If $f$ grows faster than $g$, then must $\sqrt{f}$ grow faster than $\sqrt{g}$? Yes.
Must $\log f$ grow faster than $\log g$? No. $\log n \approx \log n^2$ to within a constant factor, that is, the growth rate is the same! Again, the log operator "flattens" relative growth rates.
$\log n$ is related to $n$ in exactly the same way that $n$ is related to $2^n$:
$2^{\log n} = n$.
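Both facts can be spot-checked directly; a minimal sketch (assuming base-2 logs, as elsewhere in this section):

```python
import math

for n in (16, 1024, 2**20):
    # log "flattens": log(n^2) is exactly twice log(n), a constant factor.
    print(math.log2(n**2) / math.log2(n))   # always 2.0
    # 2^(log n) recovers n, so log n relates to n as n relates to 2^n.
    print(2 ** math.log2(n))                # n (up to float rounding)
```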
2.4.1.1. Asymptotic Notation
The notation "$f \in O(n^2)$" is preferred to "$f = O(n^2)$". While $n \in O(n^2)$ and $n^2 \in O(n^2)$, $O(n) \neq O(n^2)$.
Note that big-Oh does not say how good an algorithm is, only how bad it can be.
If algorithm $A \in O(n)$ and algorithm $B \in O(n^2)$, is $A$ better than $B$? Perhaps… but perhaps the problem is that we don't know much about the analysis of these algorithms. Perhaps a better analysis will show that $A = \Theta(n)$ while $B = \Theta(\log n)$.
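For reference, the standard definitions behind the notation used above (the usual textbook formulation):

```latex
% f is in O(g): g is an upper bound on f, up to a constant, eventually.
f \in O(g) \iff \exists\, c > 0,\ n_0 :\ f(n) \le c \cdot g(n)
\ \text{for all } n \ge n_0
% f is in Theta(g): f and g bound each other, so the analysis is tight.
f \in \Theta(g) \iff f \in O(g) \ \text{and}\ g \in O(f)
```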
Some log notation: $\log n^2\ (= 2 \log n)$ vs. $\log^2 n\ (= (\log n)^2)$ vs. $\log \log n$. With logs base 2:
$\log 16^2 = 2 \log 16 = 8$.
$\log^2 16 = 4^2 = 16$.
$\log \log 16 = \log 4 = 2$.
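The same three quantities, computed directly as a trivial check (base-2 logs assumed):

```python
import math

n = 16
log = math.log2  # this section's logs are base 2

print(log(n**2))      # log(n^2) = 2*log(n)     -> 8.0
print(log(n) ** 2)    # log^2(n) = (log n)^2    -> 16.0
print(log(log(n)))    # log log n               -> 2.0
```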
Order Notation has practical limits. Consider this statement: Resource requirements for Algorithm A grow slower than resource requirements for Algorithm B. Is A better than B? There are some potential problems with claiming so.
How big must the input be before the claim becomes true?
Some growth rate differences are trivial. For example: $\Theta(\log^2 n)$ vs. $\Theta(n^{1/10})$. If $n$ is $10^{12}\ (\approx 2^{40})$, then $\log^2 n \approx 1600$ while $n^{1/10} \approx 16$, even though $n^{1/10}$ grows faster than $\log^2 n$. $n$ must be enormous (like $2^{150}$) before $n^{1/10}$ becomes bigger than $\log^2 n$.
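A quick empirical look at where the crossover actually happens (sample points chosen arbitrarily):

```python
# Compare log^2(n) against n^(1/10) at a few sizes; despite its faster
# asymptotic growth, n^(1/10) stays smaller until n is astronomically large.
for k in (40, 100, 140, 150):            # n = 2^k
    log_sq = k ** 2                      # log2(2^k) = k, so log^2 n = k^2
    tenth_root = 2 ** (k / 10)           # (2^k)^(1/10)
    print(f"n=2^{k}: log^2 n = {log_sq}, n^(1/10) = {tenth_root:.0f}")
# the crossover happens only around n ~ 2^144
```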
It is not always practical to reduce an algorithm's growth rate. "Practical" here means that the constant factors may become much higher when we shave off a minor asymptotic term. Shaving a factor of $n$ reduces cost by a factor of a million for an input size of a million. Shaving a factor of $\log \log n$ saves only a factor of 4-5. So if changing the algorithm removes a factor of $\log \log n$ at the expense of a constant factor of 10, then the new algorithm (while asymptotically better) won't realize its advantage until $n$ is so big that it would never be used in any real situation.
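To make the trade-off concrete, here is a hypothetical cost model (the functions old_cost and new_cost below are invented stand-ins, not algorithms from this section):

```python
import math

# Hypothetical costs: the "improved" algorithm drops a loglog(n) factor
# but pays a 10x constant; it wins only once loglog2(n) > 10,
# i.e. log2(n) > 1024, i.e. n > 2^1024 -- far beyond any real input.
def old_cost(n):
    return n * math.log2(math.log2(n))

def new_cost(n):
    return 10 * n

for n in (2**10, 2**32, 2**64):
    print(n, old_cost(n) < new_cost(n))  # old algorithm still wins: True
```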
This leads to the concept of the practicality window. In general, (1) we have limited time to solve a problem, and (2) inputs can only get so big before the computer chokes (or at least, users are only interested in running problems of a certain size). So while one algorithm might be asymptotically better than another, perhaps this is not true within the range of inputs that the user would actually require.
Fortunately, algorithm growth rates are USUALLY well behaved, so that Order Notation gives practical guidance. "Practical" is the key word. We use asymptotics because they provide a simple model that usually mirrors reality, and this simplifies our thinking.