What are Asymptotic Notations
These are mathematical notations used for the asymptotic analysis of algorithms. The term 'asymptotic' describes the behavior of an expression as a variable tends to infinity; in short, asymptotic analysis describes the limiting behavior of a function. Using asymptotic notations, we analyze and express the complexity of an algorithm and its performance. There are three main asymptotic notations through which we can analyze the complexities of algorithms:
- Big O Notation (O): It represents the upper bound of the runtime of an algorithm. Big O Notation's role is to calculate the longest time an algorithm can take for its execution, i.e., it is used for calculating the worst-case time complexity of an algorithm.
- Omega Notation (Ω): It represents the lower bound of the runtime of an algorithm. It is used for calculating the best time an algorithm can take to complete its execution, i.e., it is used for measuring the best-case time complexity of an algorithm.
- Theta Notation (Θ): It combines the characteristics of both Big O and Omega notations, as it represents both the lower and the upper bound of an algorithm's runtime, giving a tight bound.
These three asymptotic notations are the most widely used. The functions they bound are usually described by their growth rates, such as constant, linear, logarithmic, quadratic, and cubic.
Big O Notation
The Big O notation is used to indicate an upper bound on the runtime of an algorithm, providing a measure of the algorithm's worst-case time complexity. It can also bound the memory needed to run an algorithm on an input of a given size.
Mathematically,
Let f(n) and g(n) be functions mapping the positive integers to non-negative real values, with g(n) positive for all sufficiently large n. We write
f(n) = O(g(n)) as n tends to infinity (n → ∞)
if there exist a positive constant c and a threshold n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
Since the limit n → ∞ is understood from context, it is usually not written explicitly, and the relation is simply stated as:
f(n) = O(g(n))
Therefore, Big O notation describes the behavior of f only for large values of n.
Properties of Big O Notation
Certain essential properties of Big O Notation are discussed below:
- Constant Multiplication: If f(n) = c·g(n), where c is a nonzero constant, then O(f(n)) = O(g(n)).
- Summation Function: If f(n) = f1(n) + f2(n) + ... + fm(n), then O(f(n)) = O(max(f1(n), f2(n), ..., fm(n))).
- Polynomial Function: If f(n) = a0 + a1·n + a2·n² + ... + am·n^m, with am ≠ 0, then O(f(n)) = O(n^m).
- Logarithmic Function: If f(n) = log_a(n) and g(n) = log_b(n), then O(f(n)) = O(g(n)).
In the context of Big O notation, all logarithmic functions exhibit equivalent growth patterns.
How does Big O Notation analyze the runtime of an algorithm
When evaluating the efficiency of an algorithm, we traditionally assess and contrast the worst-case time complexities of the algorithm. The complexity denoted by O(1), recognized as Constant Running Time, represents the quickest possible runtime for an algorithm where the execution time remains consistent regardless of input size variations. Despite being the optimal runtime scenario for an algorithm, achieving constant running time is quite uncommon. This rarity is attributed to the fact that an algorithm's runtime is inherently influenced by the magnitude of the input size, denoted as n.
For example:
The runtime of an algorithm is tied to its input size n. The values below illustrate how the common growth functions compare for two sample values of n:
- n = 10: log(10) = 1; n = 10; n log(n) = 10; n² = 100; 2ⁿ = 1024; n! = 3628800
- n = 20: log(20) ≈ 1.301; n = 20; n log(n) ≈ 26.02; n² = 400; 2ⁿ = 1048576; n! ≈ 2.432902 × 10¹⁸
In this way, we can compare the runtime performance of algorithms. Here are the worst-case runtime complexities of some common algorithms:
- For Linear Search, the runtime complexity is O(n).
- For binary search, the runtime complexity is O(log n).
- For Bubble Sort, Selection Sort, Insertion Sort, and Bucket Sort (worst case), the runtime complexity is O(n²).
- For Exponential algorithms such as Tower of Hanoi, the runtime complexity is O(c^n).
- For Heap Sort and Merge Sort, the runtime complexity is O(n log n).
How does Big O notation analyze space complexity
It is crucial to evaluate the runtime and space complexity of an algorithm. This analysis provides insights into the time taken for algorithm execution and the memory utilized. Therefore, assessing the space complexity involves comparing the worst-case performance in terms of memory usage.
To assess the space complexity of an algorithm, it is essential to perform the following two tasks:
Task 1: Implement the program for the specific algorithm.
Task 2: Determine the size of the input n, since the amount of memory each individual item occupies depends on it.
Both of these tasks must be completed before the space complexity of an algorithm can be calculated.
Examples of Algorithms
Below we have mentioned some algorithmic examples with their space complexities:
- For Linear Search, Bubble Sort, Selection Sort, Heap Sort, Insertion Sort, and Binary Search (iterative), the space complexity is O(1).
- For Radix Sort, the space complexity is O(n + k).
- For Quick Sort, the space complexity is O(n) in the worst case.
- For Merge Sort, the space complexity is O(n).
Example of Big O Notation in C
We have coded the selection sort algorithm in C language below and determined the worst-case time complexity (Big O notation) of this algorithm:
/* Selection sort: repeatedly select the minimum of the unsorted
   suffix a[i..n-1] and swap it into position i. */
for (int i = 0; i < n; i++)
{
    int min = i;
    /* Find the index of the smallest element in a[i+1..n-1];
       starting at i+1 skips the redundant self-comparison. */
    for (int j = i + 1; j < n; j++)
    {
        if (array[j] < array[min])
            min = j;
    }
    /* Swap the minimum into position i. */
    int temp = array[i];
    array[i] = array[min];
    array[min] = temp;
}
In order to analyze the algorithm:
- The outer for loop runs while i < n, so it executes O(n) times.
- The inner for loop also runs up to n times (j < n), so its order is O(n) as well.
- On average the inner loop performs about n/2 iterations, but the constant factor 1/2 is ignored, so the order remains O(n).
- Multiplying the orders of the inner and outer loops gives the runtime complexity O(n²).
You have the option to incorporate alternative algorithms in the C programming language, analyze them, and establish their complexities using a comparable approach.