ML | Linear Regression

Linear regression is a machine learning algorithm based on supervised learning. It performs a regression task: modelling a target prediction value from independent variables. It is mostly used to find the relationship between variables and for forecasting. Regression models differ in the number of independent variables they use and in the type of relationship they assume between the dependent and independent variables.

Linear regression predicts the value of a dependent variable (y) from an independent variable (x). The approach therefore assumes that x (the input) and y (the output) are linearly related, which is how the term "linear regression" was coined.

For example, if X represents a person's work experience and Y represents their salary, the regression line is the line that best fits the data.

Hypothesis Function of Linear Regression:

y = θ₁ + θ₂ · x

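The hypothesis above can be sketched directly in code; the function name and the example parameter values below are chosen purely for illustration.

```python
# A minimal sketch of the linear-regression hypothesis y = θ1 + θ2·x,
# where θ1 is the intercept and θ2 is the coefficient of x.
def hypothesis(x, theta1, theta2):
    """Predict y for input x under the linear hypothesis."""
    return theta1 + theta2 * x

# With intercept 1.0 and slope 2.0, an input of x = 3 predicts y = 7.0.
print(hypothesis(3, 1.0, 2.0))  # 7.0
```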
While training the model, we are given:

x: input training data (univariate, i.e. one input variable)

y: labels to the data (supervised learning)


During training, the model fits the best line to predict the value of y for a given value of x. The model obtains the best regression-fit line by finding the best θ₁ and θ₂ values.

θ₁: intercept

θ₂: coefficient of x


Once the best θ₁ and θ₂ values are found, we get the best-fit line. So when we finally use our model for prediction, it will predict the value of y for the input value of x.
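For a single input variable, the best θ₁ and θ₂ can also be computed directly with the ordinary least-squares closed form (slope = covariance(x, y) / variance(x)); this is an alternative to the gradient-descent update described later, and the function name here is illustrative.

```python
# A sketch of fitting theta1 (intercept) and theta2 (slope) with the
# ordinary least-squares closed form for one input variable.
def fit_line(xs, ys):
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    theta2 = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
              / sum((x - x_mean) ** 2 for x in xs))
    theta1 = y_mean - theta2 * x_mean
    return theta1, theta2

# Points lying exactly on y = 2x + 1 recover theta1 = 1.0, theta2 = 2.0.
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # (1.0, 2.0)
```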


How can the θ₁ and θ₂ values be updated to obtain the best-fit line?

Cost Function (J):

By achieving the best-fit regression line, the model aims to predict y values such that the error difference between the predicted value and the true value is minimal. It is therefore crucial to update the θ₁ and θ₂ values to reach the values that minimise the error between the predicted y value (pred) and the true y value (y).

J(θ₁, θ₂) = √( (1/n) · Σᵢ (predᵢ − yᵢ)² )

The cost function (J) of linear regression is the Root Mean Squared Error (RMSE) between the predicted y value (pred) and the true y value (y).
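The RMSE cost can be sketched as follows, assuming pred holds the model's predictions and y the true values:

```python
import math

# RMSE cost J: the square root of the mean of the squared errors
# between predicted values and true values.
def cost(pred, y):
    n = len(y)
    return math.sqrt(sum((p, t) in [(0, 0)] or (p - t) ** 2 for p, t in zip(pred, y)) / n) if False else \
        math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, y)) / n)

# Each prediction is off by exactly 1, so the RMSE is 1.0.
print(cost([2.0, 4.0], [1.0, 3.0]))  # 1.0
```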

Gradient Descent: To update θ₁ and θ₂ and minimise the cost function (minimising the RMSE value), thereby achieving the best-fit line, the model uses Gradient Descent. The idea is to start with random θ₁ and θ₂ values and repeatedly update them until the minimum cost is reached.
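A sketch of that update loop, using the gradient of the mean squared error for simplicity (minimising MSE also minimises RMSE); the learning rate and step count here are illustrative assumptions:

```python
# Gradient descent for simple linear regression: start from initial
# theta values and repeatedly step opposite the gradient of the
# mean squared error.
def gradient_descent(xs, ys, lr=0.01, steps=5000):
    theta1, theta2 = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        preds = [theta1 + theta2 * x for x in xs]
        errors = [p - y for p, y in zip(preds, ys)]
        # Partial derivatives of the mean squared error
        grad1 = (2 / n) * sum(errors)
        grad2 = (2 / n) * sum(e * x for e, x in zip(errors, xs))
        theta1 -= lr * grad1
        theta2 -= lr * grad2
    return theta1, theta2

# Data on y = 2x + 1 converges toward theta1 ≈ 1.0, theta2 ≈ 2.0.
t1, t2 = gradient_descent([0, 1, 2, 3], [1, 3, 5, 7])
print(round(t1, 2), round(t2, 2))
```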

#Artificial intelligence #Machine Learning #Python #Data Science
