
The working of PCA

Until now, you’ve learnt the two building blocks of PCA: basis and variance. In the following video, we will use both of these concepts to help you understand the objective that PCA aims to achieve.

The steps of PCA, as summarised in the above video, are as follows:

  • Find n new features – Choose a different, non-standard set of n basis vectors. These basis vectors are essentially the directions of maximum variance and are called Principal Components (PCs).
  • Express the original dataset using these new features.
  • Transform the dataset from the original basis to this PCA basis.
  • Perform dimensionality reduction – Keep only a certain number k (where k < n) of the PCs to represent the data. Remove the PCs that have less variance (i.e., explain less information) than the others, as shown in the sketch after this list.
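The following is a minimal NumPy sketch of these four steps, using the eigendecomposition of the covariance matrix mentioned later in this section. The toy dataset and the choice of k = 2 are purely illustrative.

```python
import numpy as np

# Hypothetical toy dataset: 6 samples, n = 3 original features
X = np.array([[2.5, 2.4, 0.5],
              [0.5, 0.7, 1.9],
              [2.2, 2.9, 0.4],
              [1.9, 2.2, 0.8],
              [3.1, 3.0, 0.2],
              [2.3, 2.7, 0.6]])

# Step 1: centre the data and find the n new basis vectors (the PCs)
# via eigendecomposition of the covariance matrix
X_centred = X - X.mean(axis=0)
cov = np.cov(X_centred, rowvar=False)     # n x n covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh: cov is symmetric

# Sort the PCs by decreasing variance (eigenvalue)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Steps 2–3: express the dataset in the PCA basis
X_pca = X_centred @ eigvecs

# Step 4: keep only the top k PCs (here k = 2 < n = 3)
k = 2
X_reduced = X_pca[:, :k]
print(X_reduced.shape)   # (6, 2)
```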

PCA’s role in the ML pipeline is almost solely that of a dimensionality reduction tool. You choose the number of PCs that together explain a variance threshold you have set, and then use only that many columns to represent the original dataset. This reduced dataset is then passed on to the ML pipeline for the prediction algorithms that follow. PCA can significantly improve model performance and also helps us visualise higher-dimensional datasets.
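Below is a short sketch of how this looks in practice with scikit-learn. The synthetic dataset and the 95% variance threshold are illustrative assumptions; scikit-learn’s PCA interprets a fractional n_components as “keep just enough PCs to explain that share of the variance”.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real dataset: 200 samples, 20 features
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# PCA(n_components=0.95) keeps just enough PCs to explain 95% of the
# variance; the downstream model then sees only those columns
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),
    LogisticRegression()
)
model.fit(X, y)

pca = model.named_steps['pca']
print(pca.n_components_, "PCs kept out of 20")
```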

Additional Reading

As mentioned in the video, you can take a look at the Algorithm of PCA optional session to understand in detail how PCA finds the new basis vectors using the eigendecomposition of the covariance matrix.
