In the previous sessions, we learned about linear regression. If you recall, the equation of the line that fits the data is given as:

**y(p) = β0 + β1x**

Where β0 is the intercept of the fitted line and β1 is the coefficient for the independent variable x.
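This equation can be sketched directly in code. The values of β0 and β1 below are hypothetical, chosen only to illustrate how a prediction y(p) is computed from them:

```python
# Sketch of the fitted-line equation y(p) = β0 + β1*x.
# beta0 and beta1 are hypothetical values, not results from any real fit.
beta0 = 2.0   # intercept of the fitted line
beta1 = 0.5   # coefficient of the independent variable x

def predict(x):
    """Predicted value y(p) for a given x."""
    return beta0 + beta1 * x

print(predict(4.0))  # 2.0 + 0.5 * 4.0 = 4.0
```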

Now the question is: how do we find them? In this lecture, we will learn how LinearRegression computes β0 and β1. Let's listen to Prof. Dinesh and understand it in detail.
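To make the goal concrete, here is a hedged sketch of one way the betas can be recovered for a single feature: the ordinary least squares closed form, computed with NumPy. This is an alternative to the Gradient Descent approach the lecture goes on to discuss, and the data here is a hypothetical, noise-free example generated from y = 3 + 2x:

```python
import numpy as np

# Hypothetical data lying exactly on the line y = 3 + 2x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3.0 + 2.0 * x

# Closed-form OLS for one feature:
# beta1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)², beta0 = ȳ − beta1 * x̄
beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta0 = y.mean() - beta1 * x.mean()

print(beta0, beta1)  # recovers 3.0 and 2.0 on this noise-free data
```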

In the previous lecture, we understood that we need to find the optimal betas of the line that best fits the data, and one way of finding them is to use optimisation methods such as Gradient Descent.

But even before Gradient Descent, we need to understand the cost function for Linear Regression. Let's get to know the Cost Function in detail in the next lecture.

To summarise, in this section we understood that to find the optimum betas, we need to minimise the cost function over all data points, which is given as:

**J(θ0, θ1) = ∑ᵢ₌₁ᴺ (yi − yi(p))²**
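The cost function above can be evaluated directly: for a candidate pair (θ0, θ1), predict yi(p) for every point and sum the squared residuals. The data and candidate lines below are hypothetical, used only to show that a better-fitting line yields a lower cost:

```python
import numpy as np

# Hypothetical data lying exactly on the line y = 2x
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

def cost(theta0, theta1):
    """J(θ0, θ1) = Σ over all N points of (yi − yi(p))²."""
    y_pred = theta0 + theta1 * x       # yi(p) for every data point
    return np.sum((y - y_pred) ** 2)   # sum of squared residuals

print(cost(0.0, 2.0))  # perfect fit  -> 0.0
print(cost(0.0, 1.0))  # worse fit    -> 1 + 4 + 9 = 14.0
```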

The method for finding the optimal betas (or thetas) is known as Gradient Descent. Let's move to the next section and learn more about it.
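As a minimal preview of what the next section covers, Gradient Descent repeatedly nudges θ0 and θ1 in the direction that reduces J. The learning rate, iteration count, and data below are hypothetical choices for illustration, not values from the lecture:

```python
import numpy as np

# Hypothetical data lying exactly on the line y = 2x
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

theta0, theta1 = 0.0, 0.0   # start from an arbitrary guess
lr = 0.02                   # hypothetical learning rate

for _ in range(5000):
    y_pred = theta0 + theta1 * x
    # Partial derivatives of J = Σ (yi − yi(p))² w.r.t. θ0 and θ1
    grad0 = -2.0 * np.sum(y - y_pred)
    grad1 = -2.0 * np.sum((y - y_pred) * x)
    theta0 -= lr * grad0
    theta1 -= lr * grad1

print(round(theta0, 3), round(theta1, 3))  # approaches 0.0 and 2.0
```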