In the previous segments, you learnt the concepts of basis and change of basis. Now, you might wonder what role they play in PCA. Let’s understand how a change of basis plays an important role in dimensionality reduction.

You understood intuitively in the above lecture how a change of basis is the fundamental concept behind PCA. To summarise:

- PCA finds new basis vectors for us. These new basis vectors are also known as Principal Components.
- We represent the data using these new Principal Components by performing the change of basis calculations.
- After doing the change of basis, we can perform dimensionality reduction. In fact, PCA finds new basis vectors in such a way that it becomes easier for us to discard a few of the features.
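The three steps above can be sketched in NumPy. The data points below are illustrative assumptions (chosen so that they all lie along one direction), and the basis vectors are the ones used later in this section:

```python
import numpy as np

# Illustrative 2-D points that all lie along the direction (0.8944, 0.4472)
X = np.array([[2.0, 1.0],
              [4.0, 2.0],
              [6.0, 3.0]])

# Step 1: the new basis vectors (principal components), one per row
B = np.array([[ 0.8944, 0.4472],
              [-0.4472, 0.8944]])

# Step 2: change of basis -- express every point in the new coordinates
X_new = X @ B.T

# Step 3: the second coordinate is ~0 for every point, so we can keep
# only the first column and reduce the data from 2-D to 1-D
X_reduced = X_new[:, :1]
```

Note how the dimensionality reduction step becomes trivial once the data is expressed in the new basis: one of the two features carries essentially no information and can be discarded.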

In the next video, let’s take a look at a numerical example to drive home this concept.

## Calculations for the demonstration

In the above video, you took a look at a change of basis demonstration for the roadmap example. Here are the relevant calculations that were done to obtain the new dataset from the original dataset.

We started with the following dataset:

Visually, the dataset can be represented as follows:

And PCA gave us the following basis vectors as the new principal components:

$$\begin{bmatrix} 0.8944 \\ 0.4472 \end{bmatrix} \text{ and } \begin{bmatrix} -0.4472 \\ 0.8944 \end{bmatrix}$$

These basis vectors are shown as follows:

The basis vector matrix would come out to be:

$$\begin{bmatrix} 0.8944 & -0.4472 \\ 0.4472 & 0.8944 \end{bmatrix}$$

The next step would be to compute the change of basis matrix **M**, which in this case will be the inverse of the matrix above since we’re moving from the standard basis to a non-standard basis.

$$M = \begin{bmatrix} 0.8944 & -0.4472 \\ 0.4472 & 0.8944 \end{bmatrix}^{-1} = \begin{bmatrix} 0.8944 & 0.4472 \\ -0.4472 & 0.8944 \end{bmatrix}$$

(**Note** – You can verify this calculation in Python using the **np.linalg.inv()** function.)
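As a quick check, here is a minimal sketch of that verification, using the matrix values from the example above:

```python
import numpy as np

# Basis vector matrix with the principal components as its columns
B = np.array([[0.8944, -0.4472],
              [0.4472,  0.8944]])

# Change of basis matrix: the inverse of B
M = np.linalg.inv(B)
print(np.round(M, 4))

# Because the principal components are orthonormal, the inverse is
# (up to rounding) the same as the transpose of B
print(np.allclose(M, B.T, atol=1e-3))
```

The second check illustrates a useful property: for an orthonormal basis, inverting the basis matrix reduces to taking its transpose.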

Now, all we have to do is multiply each point in the dataset by the matrix **M** calculated above to obtain its representation in the new set of basis vectors.

For P1, the original representation is given as (2, 1) or $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$

and therefore the new representation is **M** multiplied by the above vector, or:

$$\begin{bmatrix} 0.8944 & 0.4472 \\ -0.4472 & 0.8944 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 2.24 \\ 0 \end{bmatrix}$$
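This single matrix-vector multiplication can be reproduced in NumPy, using the same values as above:

```python
import numpy as np

# Change of basis matrix M from the worked example
M = np.array([[ 0.8944, 0.4472],
              [-0.4472, 0.8944]])

# P1 in the standard basis
p1 = np.array([2.0, 1.0])

# P1 in the principal-component basis:
# first coordinate comes out to be ~2.24, second ~0
p1_new = M @ p1
print(np.round(p1_new, 2))
```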

Similarly, when you perform the above calculation for all the points, you get the following representation for the dataset:

This can be represented visually as below:

**Additional Notes**

- As explained in the video, we haven’t yet discussed how Principal Components are found numerically. We’ll be learning that in the next session.
