What is the difference between lower/upper bounds and best-/worst-case scenarios? Explain!
Lower bound: a value that is less than or equal to every element of a set of data. Upper bound: a value that is greater than or equal to every element of a set of data.
In computer science, the worst-case complexity (usually denoted in asymptotic notation) measures the resources (e.g. running time, memory) that an algorithm requires given an input of arbitrary size (commonly denoted as n or N). It gives an upper bound on the resources required by the algorithm.
In the case of running time, the worst-case time-complexity indicates the longest running time performed by an algorithm given any input of size n, and thus guarantees that the algorithm will finish in the indicated period of time. The order of growth (e.g. linear, logarithmic) of the worst-case complexity is commonly used to compare the efficiency of two algorithms.
The worst-case complexity of an algorithm should be contrasted with its average-case complexity, which is an average measure of the amount of resources the algorithm uses on a random input.
When analyzing algorithms it makes little sense to consider the best-case scenario as it is very often trivial and not very informative.
You can convince yourself that almost every algorithm can be adapted to have a best-case complexity of O(n), where n is the size of the input, by simply running a preliminary check that verifies whether the input instance belongs to some class of instances for which the solution is trivial.
Just to give a concrete example: the best case of every sorting algorithm can be made O(n) if you just check whether the input sequence of numbers is already sorted.
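The preliminary-check trick described above can be sketched in Python. The wrapper below is a hypothetical illustration: an O(n) scan for sortedness in front of any real sorting routine (here Python's built-in `sorted` stands in for it).

```python
def sort_with_best_case_check(a):
    """Sort a, but with a forced O(n) best case on already-sorted input."""
    # Preliminary O(n) scan: if the input is already sorted, we are done
    # after a single pass, so the best-case running time is O(n).
    if all(a[i] <= a[i + 1] for i in range(len(a) - 1)):
        return list(a)
    # Otherwise fall back to the real sorting work (O(n log n) here).
    return sorted(a)
```

Any sorting algorithm could be substituted for `sorted` without changing the best-case argument.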
The focus is often on the worst-case complexity. Once you decide that you want to compare algorithms with respect to their worst-case complexity, it also makes sense to ask how quickly a problem can be solved.
It is usually impossible to give a sharp bound on the time needed to solve a problem; therefore one seeks upper and lower bounds.
An upper bound of O(f(n)) tells you that there is some algorithm that solves the problem in O(f(n)) worst-case time.
A lower bound of Ω(g(n)) tells you that no conceivable algorithm can take o(g(n)) time to solve the problem.
Just to be clear: a lower bound on the time needed to solve a problem is expressed as a function of the input size n and is the smallest amount of work that is necessary to solve all instances of size n. Intuitively (not a formal definition) you can think of it as min_{A ∈ A} max_{I ∈ I_n} T(A, I), where A is the set of all possible algorithms that solve your problem, I_n is the set of all instances of size n, and T(A, I) is the time needed by algorithm A ∈ A to solve instance I ∈ I_n.
What you seem to have in mind instead is min_{A ∈ A} min_{I ∈ I_n} T(A, I).
A lower bound for a problem is useful to establish how "hard" that problem is to solve (problems that require more time to solve are harder; this would make little sense if we looked only at the easiest instances instead), and to measure how far an algorithm is from being optimal. For example, Merge Sort solves the sorting problem in the optimal amount of time because its running time asymptotically matches the Ω(n log n) lower bound for the sorting problem (in the comparison-based model).
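For reference, here is a minimal Merge Sort, the O(n log n) comparison-based algorithm mentioned above; this is a textbook sketch, not tuned for performance.

```python
def merge_sort(a):
    """Sort a list using Merge Sort: O(n log n) time in every case."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    # Recursively sort both halves, then merge them in linear time.
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # At most one of the two slices below is non-empty.
    return merged + left[i:] + right[j:]
```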
For a function f(n), g(n) is an upper bound (big O) if, for "big enough n", f(n) <= c*g(n) for some constant c. [g dominates f]
g(n) is a lower bound (big Omega) if, for "big enough n", f(n) >= c*g(n) for some constant c. [f dominates g]
If g(n) is both an upper bound and a lower bound of f(n) [with different c's], we say g(n) is a tight bound for f(n) [big Theta].
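These definitions can be checked numerically for a small worked example. Take f(n) = 3n + 10 and g(n) = n (both functions and the constants are assumptions chosen for illustration):

```python
# Worked example for the definitions above: f(n) = 3n + 10, g(n) = n.
f = lambda n: 3 * n + 10
g = lambda n: n

# Upper bound (big O): f(n) <= 4 * g(n) holds once n >= 10,
# since 3n + 10 <= 4n is equivalent to n >= 10.
upper_ok = all(f(n) <= 4 * g(n) for n in range(10, 1000))

# Lower bound (big Omega): f(n) >= 3 * g(n) holds for all n >= 1.
lower_ok = all(f(n) >= 3 * g(n) for n in range(1, 1000))

# Both hold with different constants, so g(n) = n is a tight
# (big Theta) bound for f(n) = 3n + 10.
print(upper_ok, lower_ok)
```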
Example of using an upper bound instead of a tight one: sometimes it is hard to find a tight bound, as for the recursive Fibonacci algorithm, so we settle for an easy upper bound of O(2^n).
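The algorithm in question is the naive doubly recursive Fibonacci:

```python
def fib(n):
    """Naive recursive Fibonacci.

    Each call spawns up to two recursive calls, so the recursion tree has
    at most 2^n nodes: an easy O(2^n) upper bound, though not a tight one
    (the tight bound involves the golden ratio, roughly O(1.618^n)).
    """
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```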
How does this relate to the worst/average/best cases? (as requested in the comments):
The worst case / average case (or any other case) determines which complexity function we analyze, but each of big O, big Omega, and big Theta can be applied to any of these cases.
For example, a HashTable insert is Theta(1) in the average case and Theta(n) in the worst case. It is also O(n) for average-case insertion (that bound is not tight), and Omega(1) for worst-case insertion.
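To make the hash-table example concrete, here is a toy chaining hash table (a hypothetical sketch; the class name and bucket count are assumptions, not a real library API). With a good hash function chains stay short on average (Theta(1) insert); if every key lands in one bucket, insert must scan a chain of length n (Theta(n)).

```python
class ChainedTable:
    """Toy hash table using separate chaining, for illustrating cases."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def insert(self, key, value):
        chain = self.buckets[hash(key) % len(self.buckets)]
        # Worst case: all keys collide and this scan touches n entries -> Theta(n).
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)  # overwrite existing key
                return
        # Average case: chains are short (constant expected length) -> Theta(1).
        chain.append((key, value))
```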