
Hierarchical Clustering Algorithm

One of the major considerations in using the K-means algorithm is deciding the value of K beforehand. The hierarchical clustering algorithm does not have this restriction.

The output of the hierarchical clustering algorithm is quite different from that of the K-means algorithm as well. It results in an inverted tree-shaped structure called a dendrogram. An example of a dendrogram is shown below.

Let’s see how hierarchical clustering works.

In the K-means algorithm, you divided the data into clusters in the very first step and then refined those clusters in the subsequent steps to arrive at the optimal grouping. In hierarchical clustering, the data is not partitioned into a particular cluster in a single step. Instead, a series of merges or splits takes place, which may run from n clusters that each contain a single object up to a single cluster containing all objects (the bottom-up, or agglomerative, approach), or vice versa (the top-down, or divisive, approach).

This is very helpful since you don’t have to specify the number of clusters beforehand.
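As a quick illustration, here is a minimal sketch of this idea using scipy's hierarchical clustering routines (the sample data is made up purely for demonstration, and this assumes scipy and matplotlib are installed). Notice that, unlike K-means, no number of clusters is passed anywhere; the algorithm builds the full merge tree and plots it as a dendrogram.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical 2-D data: three loose groups of points
rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(10, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(10, 2)),
    rng.normal(loc=[0, 5], scale=0.5, size=(10, 2)),
])

# Build the full merge tree; note that no value of K is specified
Z = linkage(X, method='ward')

# Plot the resulting dendrogram
dendrogram(Z)
plt.xlabel('Data point index')
plt.ylabel('Merge distance')
plt.show()
```

The 'ward' linkage used here is just one of several ways to measure the distance between clusters; 'single', 'complete' and 'average' are common alternatives.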

Given a set of N items to be clustered, the steps in (agglomerative) hierarchical clustering are:

  • Calculate the NxN distance (similarity) matrix, which contains the distance of each data point from every other data point.
  • Each item is first assigned to its own cluster, i.e. N clusters are formed.
  • The two clusters that are closest to each other are merged to form a single cluster.
  • The same steps of computing the distances and merging the closest clusters are repeated till all the points become part of a single cluster.

Thus, what you have at the end is the dendrogram, which shows you which data points are grouped together into which cluster, and at what distance each merge took place. You will learn more about interpreting the dendrogram in the next segment.
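To make these steps concrete, below is a from-scratch sketch of the same procedure using single linkage (where the distance between two clusters is the distance between their two closest members). The function name and the toy data are our own, chosen only for illustration; the loop is written for clarity rather than efficiency.

```python
import numpy as np

def hierarchical_clustering(X):
    """Naive agglomerative clustering with single linkage."""
    n = len(X)
    # Step 1: the N x N distance matrix between all pairs of points
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Step 2: every point starts out in its own cluster
    clusters = [[i] for i in range(n)]
    merges = []
    # Steps 3 and 4: repeatedly merge the two closest clusters
    while len(clusters) > 1:
        best = (np.inf, None, None)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single-linkage distance between clusters a and b
                d = min(dist[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((clusters[a], clusters[b], d))
        clusters[a] = clusters[a] + clusters[b]  # merge b into a ...
        del clusters[b]                          # ... and drop b
    return merges  # the merge history encodes the dendrogram

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
for left, right, d in hierarchical_clustering(X):
    print(f"merged {left} and {right} at distance {d:.2f}")
```

Each entry of the returned merge history records which two clusters were merged and at what distance, which is exactly the information the dendrogram visualises.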

