[[Category:Encyclopedia]]'''Complexity'''. In computer science the notation O(''f''(''n'')) is used to denote an upper bound on the asymptotic behavior of an algorithm, usually its complexity in terms of execution time or memory consumption (space). Here ''f'' is a function of ''n'', often a power or a logarithm, and ''n'' describes the size of the problem. The meaning of O(''f''(''n'')) is that the complexity grows with ''n'' at most as fast as ''f''(''n'') does.
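
Formally, a cost function ''T''(''n'') is O(''f''(''n'')) when there exist constants ''c'' > 0 and ''n''<sub>0</sub> such that

:<math>T(n) \le c \cdot f(n) \quad \text{for all} \quad n \ge n_0,</math>

i.e. beyond some problem size, ''f'' bounds the growth of ''T'' up to a constant factor, which absorbs machine- and implementation-dependent details.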


The notation can also be used to describe a computational problem. In that case it characterizes the problem's complexity through the best known algorithm that solves it.
Examples: searching an unordered container of ''n'' elements has complexity O(''n''). Binary search in an ordered container with random element access is O(log ''n''). The term ''random access'' means that the cost of accessing any element is constant, i.e. O(1).
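
A minimal Python sketch (the function and variable names are only illustrative) contrasting the O(''n'') linear scan with the O(log ''n'') binary search:

<lang python>def linear_search(items, target):
    """O(n): in the worst case every element is inspected."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

def binary_search(sorted_items, target):
    """O(log n): each step halves the remaining interval.
    Relies on O(1) random access to sorted_items[mid]."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [3, 14, 15, 92, 65, 35]
print(linear_search(data, 92))            # 3: found by scanning the unordered list
print(binary_search(sorted(data), 92))    # 5: position in the sorted copy</lang>

Doubling the length of the list roughly doubles the work done by the linear scan, but adds only one more step to the binary search.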


Here are some typical complexity classes listed from 'slowest' to 'fastest' (that is, slower algorithms have Big-O's near the top):
*O(e<sup>n</sup>) ('exponential')
*O(n<sup>k</sup>) for some fixed ''k'' ('polynomial')
*O(n<sup>3</sup>) ('cubic')
*O(n<sup>2</sup>) ('quadratic')
*O(n*log(n)) (fastest possible time for a comparison sort)
*O(n) ('linear')
*O(log(n)) ('logarithmic')
*O(1) ('constant')
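
To get a feel for how far apart these classes are, a small, illustrative Python sketch can tabulate a few representative growth functions from the list above (with 2<sup>n</sup> standing in for the exponential class):

<lang python>import math

# Print approximate values of a few growth functions so the
# ordering of the classes above becomes concrete.
print(f"{'n':>6} {'log n':>8} {'n log n':>10} {'n^2':>12} {'2^n':>12}")
for n in (10, 100, 1000):
    print(f"{n:>6} {math.log2(n):>8.1f} {n * math.log2(n):>10.0f} "
          f"{n ** 2:>12} {2 ** n:>12.3e}")</lang>

Even at ''n'' = 100 the exponential entry dwarfs the quadratic one, which is why the classes near the top of the list are usually acceptable only for small inputs.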


See also [http://en.wikipedia.org/wiki/Big_O_notation Big O notation]
