# Merge Sort Made Easy! (Code Meditation)

Merge sort's best-, worst-, and average-case time complexity is O(n log n).

• Can a comparison-based sorting algorithm do better than this? No.
• So why wait? Let's learn it quickly and accurately.
• We will also look at the shortcomings of merge sort later on.

I started out by reading through the available online resources and found the different approaches presented in various articles slightly confusing. I am writing this post in the hope of making this concept easier for other learners to understand.

# The Gist

The idea is simple: split the array at a midpoint (which I will show you how to calculate) and call the function on itself, splitting each sub-array further until you reach an array with a single item. This is called the base case. Once you reach the base case, you merge the single-element arrays from the bottom of the recursion tree back up to the top, using an algorithm called the merge algorithm (which I will also explain). Why wait? Let's jump right into the code, and I'll do my best to explain every line step by step.

# Code:
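The steps above can be sketched in Java as follows. This is a minimal illustration under my own naming (`mergeSort`, `merge`); the midpoint is computed as `lo + (hi - lo) / 2`, which avoids integer overflow on very large arrays.

```java
import java.util.Arrays;

public class MergeSort {

    // Recursively sort arr[lo..hi] (both ends inclusive).
    static void mergeSort(int[] arr, int lo, int hi) {
        if (lo >= hi) return;             // base case: zero or one element
        int mid = lo + (hi - lo) / 2;     // midpoint, overflow-safe
        mergeSort(arr, lo, mid);          // sort the left half
        mergeSort(arr, mid + 1, hi);      // sort the right half
        merge(arr, lo, mid, hi);          // merge the two sorted halves
    }

    // Merge the sorted runs arr[lo..mid] and arr[mid+1..hi] back into arr.
    static void merge(int[] arr, int lo, int mid, int hi) {
        int[] left = Arrays.copyOfRange(arr, lo, mid + 1);
        int[] right = Arrays.copyOfRange(arr, mid + 1, hi + 1);
        int i = 0, j = 0, k = lo;
        // Repeatedly take the smaller head element of the two runs.
        while (i < left.length && j < right.length) {
            arr[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        // Copy over whatever remains in either run.
        while (i < left.length)  arr[k++] = left[i++];
        while (j < right.length) arr[k++] = right[j++];
    }

    public static void main(String[] args) {
        int[] data = {38, 27, 43, 3, 9, 82, 10};
        mergeSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [3, 9, 10, 27, 38, 43, 82]
    }
}
```

Note that `merge` copies the two halves into temporary arrays before writing the merged result back; this copying is where merge sort's O(n) extra memory comes from.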

All this information may be too much to take in at once, right? What worked for me was to familiarise myself with the above code, sit in a calm, quiet place, close my eyes, and retrace the logic step by step, using the code as a reference. After that, I'm sure you will be able to write it smoothly in a single go. I call this Code Meditation :P

# Interesting facts about Merge Sort

• Merge sort is a divide-and-conquer algorithm: we split the array and recursively work on the individual subparts.
• Merge sort needs O(n) extra memory to copy elements as it sorts. You might have noticed this when we created the left and right sub-arrays at every step. This is probably its greatest drawback: it is an out-of-place sorting algorithm whose additional memory requirement grows with the dataset.
• Merge sort works well on any type of data set, irrespective of its size, since its O(n log n) running time is guaranteed regardless of the input order.

# Complexity Analysis

One way to solve recurrences is to draw a recursion tree where each node in the tree represents a subproblem and the value at each node represents the amount of work spent at each subproblem.

The root node represents the original problem. In a recursion tree, every node that is not a leaf has children, one for each of the subproblems it splits into. For a recurrence of the form T(n) = aT(n/b) + f(n), a is the number of subproblems and n/b is the size of each one. To figure out how much work is spent at a subproblem, first find its size with the help of b, then substitute that size into f(n); the resulting value is the amount of work spent at that subproblem.

Long story short, a node with problem size x has a children, each contributing f(x/b) work. The work at a leaf is T(1), since at that point the original problem can no longer be divided. Note that this means each leaf contributes O(1) work.

Once we have our tree, the total runtime can be calculated by summing up the work contributed by all of the nodes. We can do this by summing up the work at each level of the tree, then summing up the levels of the tree.
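For merge sort specifically we split into a = 2 halves of size n/2 (so b = 2), and the merge step is linear, f(n) = cn for some constant c. Summing the tree level by level, as described above, gives the familiar bound (a sketch, with c standing in for the per-element merge cost):

```latex
T(n) = 2\,T(n/2) + cn, \qquad T(1) = c.
% Level i of the tree holds 2^i nodes, each doing c\,n/2^i work,
% so every level contributes 2^i \cdot c\,n/2^i = cn in total.
% There are \log_2 n + 1 levels, hence:
T(n) = \sum_{i=0}^{\log_2 n} cn = cn\,(\log_2 n + 1) = O(n \log n).
```

Because the split is always exactly in half, every branch of the tree has the same depth, which is why best, worst, and average cases all come out to O(n log n).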

`java.util.Arrays` uses quicksort (dual-pivot quicksort in recent versions) for primitive types such as `int`, and a modified merge sort for objects that implement `Comparable` or are sorted with a `Comparator`. Why the difference? Why not pick one and use it for all cases? Robert Sedgewick suggests that