Time Complexity

In the previous section, we learned about Algorithm Complexity. Now, we will learn about Time Complexity.

โฑ๏ธ What is Time Complexity?

We have already discussed that Time Complexity measures the amount of time an algorithm takes to complete a task based on the size of the input data.

So what does this mean? For a small input dataset, the algorithm may look fast enough, but as the dataset grows larger, the algorithm may become slower and slower.

For example: searching for a contact in a phone book with 10 contacts is almost instant. Searching in a phone book with 10,000 contacts takes noticeably more time, and with 1,000,000 contacts it becomes slower still.

🧮 How to calculate Time Complexity?

To figure out time complexity, just ask: how many times does the work repeat as the input size 'n' grows?

  1. Spot loops: Look for loops, nested loops, and recursive calls.
  2. Count repeats: Estimate how many times they run for input size 'n' (like n, n², log n).
  3. Combine parts: Add times for sequential steps, multiply for nested ones. Like Do A then B → add (n + n). A loop inside a loop → multiply (n × n).
  4. Ignore small terms and constants: Focus on the biggest growth part (n² + n → n², 3n → n).
  5. Express in Big O notation: Write the result as O(1), O(n), O(n log n), O(nยฒ), etc.

Quick examples:

  • No loops, just a few arithmetic operations (addition, subtraction, multiplication, division) or a fixed number of steps → O(1)
  • One loop over n → O(n)
  • Two nested loops over n → O(n²)
  • Halving each time (like binary search) → O(log n)
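
Putting these rules together, here is a small hypothetical method (not taken from any library) that runs one loop followed by a nested loop. Adding the two parts gives n + n², and keeping only the biggest growth term leaves O(n²):

public void combinedExample(int[] arr) {
    int n = arr.length;

    // Part A: a single loop over n elements -> about n steps
    for (int i = 0; i < n; i++) {
        System.out.println("Element: " + arr[i]);
    }

    // Part B: a loop inside a loop -> about n * n steps
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            System.out.println("Pair: " + arr[i] + ", " + arr[j]);
        }
    }
    // Total: n + n^2 steps, which simplifies to O(n^2)
}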

Let's see some common time complexities in detail!

โš™๏ธ O(1) time complexity

O(1) time complexity is also known as constant time complexity. This means that the algorithm takes the same amount of time to execute no matter how large the input size is.

📌 What statements come under O(1) time complexity?

Here are some examples of operations that typically have O(1) time complexity:

  • accessing a specific element in an array by index
  • updating a specific element in an array by index
  • adding or removing an element from the end of a list (like ArrayList in Java)

Here is an example of O(1) time complexity:

public void updateAtIndex(int[] arr, int index, int newValue) {
    arr[index] = newValue;
    System.out.println("Updated element at index " + index + ": " + arr[index]);
}

As you can see in the above example:

  1. We have a method updateAtIndex that takes an array, an index, and a new value as input.
  2. It just updates the element at the specified index with the new value and prints it.
  3. There are no loops or recursive calls, just a couple of operations regardless of the size of the input array.

So the time complexity of this algorithm is O(1).
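
To make this concrete, here is a hypothetical call with two arrays of very different sizes; the method does the same single assignment and print in both cases:

int[] small = {10, 20, 30};
int[] large = new int[1_000_000];
updateAtIndex(small, 1, 99);   // one assignment, one print
updateAtIndex(large, 500, 99); // still one assignment, one print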

📈 O(n) time complexity

O(n) time complexity is also known as linear time complexity. In this case, the time the algorithm takes to execute grows in direct proportion to the size of the input data. This means that if the input size doubles, the time taken by the algorithm also roughly doubles.

Here is an example of O(n) time complexity:

public void printAllElements(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        System.out.println("Element at index " + i + ": " + arr[i]);
    }
}

As you can see in the above example:

  1. We have a method printAllElements that takes an array as input.
  2. Then it uses a loop to go through each element of the array and print it.
  3. The loop runs n times where n is the size of the input array.

So the time complexity of this algorithm is O(n).

If we give n = 5, the loop will run 5 times. And if we give n = 10, the loop will run 10 times. So as the input size increases, the time taken by the algorithm also increases linearly.

📉 O(n^2) time complexity

O(n^2) time complexity is also known as quadratic time complexity. In this case, the time taken by the algorithm is proportional to the square of the size of the input data. This means that if the input size doubles, the time taken by the algorithm roughly quadruples.

Here is an example of O(n^2) time complexity:

public void printAllPairs(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        for (int j = 0; j < arr.length; j++) {
            System.out.println("Pair: (" + arr[i] + ", " + arr[j] + ")");
        }
    }
}

As you can see in the above example:

  1. We have a method printAllPairs that takes an array as input.
  2. Then it uses a nested loop to print all possible pairs of elements in the array.
  3. The outer loop runs n times and for each iteration of the outer loop, the inner loop also runs n times. So the total number of iterations is n * n which is n^2.

So the time complexity of this algorithm is O(n^2).

If we give n = 5, the total number of iterations will be 5 × 5 = 25. And if we give n = 10, the total number of iterations will be 10 × 10 = 100. So as the input size increases, the time taken by the algorithm increases quadratically.

๐Ÿ” O(log n) time complexity

O(log n) time complexity is also known as logarithmic time complexity. In this case, the time taken by the algorithm increases only logarithmically as the size of the input data increases.

Here is an example of O(log n) time complexity:

public int binarySearch(int[] arr, int target) {
    int left = 0;
    int right = arr.length - 1;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == target) {
            return mid; // Target found
        } else if (arr[mid] < target) {
            left = mid + 1; // Search in the right half
        } else {
            right = mid - 1; // Search in the left half
        }
    }
    return -1; // Target not found
}

As you can see in the above example:

  1. We have a method binarySearch that takes a sorted array and a target value as input.
  2. It uses a while loop to repeatedly divide the search interval in half until the target value is found or the interval is empty.
  3. Each time the loop runs, it effectively halves the size of the input data being considered.

So the time complexity of this algorithm is O(log n).
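
For example, here is a hypothetical call on a small sorted array; the loop needs only three passes to find the target, and even with 1,000,000 elements it would run at most about 20 times:

int[] sorted = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
int index = binarySearch(sorted, 23);
// Pass 1: mid = 4, arr[4] = 16 < 23 -> search indices 5..9
// Pass 2: mid = 7, arr[7] = 56 > 23 -> search indices 5..6
// Pass 3: mid = 5, arr[5] = 23      -> target found, returns 5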

๐Ÿ“ Calculation of O(log n) time complexity:

- Pass 1: n elements
- Pass 2: n/2 elements
- Pass 3: n/4 elements
- Pass 4: n/8 elements
- After k halvings: n/(2^k) elements

So when we reach the end of the loop, with only one element left to check, we have n/(2^k) = 1
=> n = 2^k
Taking log on both sides:
=> log n = log(2^k)
=> log n = k * log 2
=> k = log n / log 2
Since log 2 is a constant, we can ignore it in Big O notation.
=> k = log n

So the time complexity of the binary search algorithm is O(log n).

🧭 Rules to identify O(log n) time complexity

  1. Look for algorithms that reduce the problem size by a constant factor (like halving) with each step. For example, in the binary search algorithm, the search space is halved with each iteration of the loop. It can be any constant factor, not just halving. For instance, if an algorithm reduces the problem size by one-third each time, it still has O(log n) time complexity (see the sketch after this list).

  2. Check if the algorithm involves dividing the input data into smaller sub-problems.
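
As a quick illustration of rule 1, here is a hypothetical loop that shrinks the remaining work to a third on every pass; it runs only about log n times, so it is O(log n):

public int countShrinkingSteps(int n) {
    int steps = 0;
    // The remaining work is divided by 3 on every pass,
    // so the loop runs roughly log n times (base 3) -> O(log n)
    for (int remaining = n; remaining > 1; remaining = remaining / 3) {
        steps++;
    }
    return steps;
}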

Don't worry

Don't worry if you don't understand it right now. We will practice more examples in the upcoming sections.

🪜 O(n log n) time complexity

O(n log n) time complexity is also known as linearithmic time complexity. It is a combination of the O(n) and O(log n) time complexities. This means that the algorithm performs a logarithmic number of operations for each element in the input data.

Here is an example of O(n log n) time complexity:

public void mergeSort(int[] arr, int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        // Recursively sort the first half
        mergeSort(arr, left, mid);
        // Recursively sort the second half
        mergeSort(arr, mid + 1, right);
        // Merge the sorted halves
        merge(arr, left, mid, right);
    }
}

As you can see in the above example:

  1. We have a method mergeSort that takes an array and the left and right indices as input.
  2. It uses a recursive approach to divide the array into smaller sub-arrays until each sub-array has one element.
  3. The merging process combines the sorted sub-arrays back together.
  4. Because the array is halved at each step, there are O(log n) levels of division, and at each level the merging process does O(n) work since every element needs to be processed.

So, multiplying the two, the time complexity of this algorithm is O(n log n).
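
Note that mergeSort above calls a merge helper that is not shown. A minimal sketch of that merging step (assuming the standard approach of copying the two already-sorted halves into temporary arrays with java.util.Arrays.copyOfRange) could look like this:

private void merge(int[] arr, int left, int mid, int right) {
    // Copy the two already-sorted halves into temporary arrays
    int[] leftHalf = java.util.Arrays.copyOfRange(arr, left, mid + 1);
    int[] rightHalf = java.util.Arrays.copyOfRange(arr, mid + 1, right + 1);

    int i = 0, j = 0, k = left;
    // Repeatedly take the smaller of the two front elements
    while (i < leftHalf.length && j < rightHalf.length) {
        if (leftHalf[i] <= rightHalf[j]) {
            arr[k++] = leftHalf[i++];
        } else {
            arr[k++] = rightHalf[j++];
        }
    }
    // Copy over whatever remains in either half
    while (i < leftHalf.length) {
        arr[k++] = leftHalf[i++];
    }
    while (j < rightHalf.length) {
        arr[k++] = rightHalf[j++];
    }
}

Each call to merge touches every element in its range once, which is where the O(n) work per level comes from.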

📊 Time Complexity Table in increasing order

Here is a table that summarizes the common time complexities in increasing order:

Time Complexity | Description
O(1)            | Constant Time
O(log n)        | Logarithmic Time
O(n)            | Linear Time
O(n log n)      | Linearithmic Time
O(n^2)          | Quadratic Time
O(2^n)          | Exponential Time
O(n!)           | Factorial Time

As you go from the top of the table to the bottom, the time complexity grows faster, which means the algorithm becomes slower and slower as the input size increases.

✅ Conclusion

In this section, we learned about Time Complexity in detail. We learned what time complexity is, how to calculate it, and explored the common time complexities O(1), O(n), O(n^2), O(log n), and O(n log n) with examples.
