Activation Function In Neural Network

An artificial neuron computes the weighted sum of its inputs and adds a bias; this quantity is called the net input, as shown below.


Mathematically,

\text{net input} = \sum (\text{weight} \times \text{input}) + \text{bias}

The net input can take any value between -inf and +inf. On its own, the neuron has no way of bounding this value, so it cannot decide whether it should fire. This is why the activation function is a crucial component of any artificial neural network: it determines whether a neuron should be activated and, in doing so, bounds the value of the net input.
The activation function is a non-linear transformation that we apply to the net input before passing it to the next layer of neurons or converting it to the output.
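
As a quick illustration, here is a minimal sketch of the net input computation for a single neuron, assuming NumPy and made-up example values for the inputs, weights, and bias.

    import numpy as np

    # Example values for a single neuron (illustrative only).
    inputs = np.array([0.5, -1.2, 3.0])
    weights = np.array([0.4, 0.7, -0.2])
    bias = 0.1

    # net input = sum(weight * input) + bias; on its own it is unbounded.
    net_input = np.dot(weights, inputs) + bias
    print(net_input)  # about -1.14

The activation function is then applied to this value before it is passed on to the next layer.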

Types Of Activation Functions

Deep Learning employs several distinct types of activation functions. The following are a few of them:

1.Step Function:

One of the most basic types of activation functions is the step function. In this case, we assume a threshold value, and the neuron is triggered if the value of net input, say y, is greater than the threshold.
Mathematically,

f(x) = 1, \text{ if } x \geq 0
f(x) = 0, \text{ if } x < 0

Given below is the graphical representation of the step function.
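
In code, the step function can be sketched as follows. This is a rough illustration, assuming NumPy and a threshold of 0, matching the definition above.

    import numpy as np

    # Step activation with threshold 0: outputs 1 for x >= 0, otherwise 0.
    def step(x):
        return np.where(x >= 0, 1, 0)

    print(step(np.array([-2.0, 0.0, 3.5])))  # [0 1 1]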

2.Sigmoid Function

The sigmoid function is a popular activation method. It is defined as follows:

f(x) = \frac{1}{1+e^{-x}}

Graphically,


This is a smooth, continuously differentiable function. The fact that it is non-linear gives it a significant advantage over step and linear functions: if a network contains many neurons with sigmoid activations, the overall output is also non-linear. The function has an S shape and its output ranges from 0 to 1.
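
A minimal sketch of the sigmoid in code, assuming NumPy for the exponential:

    import numpy as np

    # Sigmoid activation: smooth, S-shaped, output always between 0 and 1.
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    print(sigmoid(np.array([-5.0, 0.0, 5.0])))  # approx [0.0067 0.5 0.9933]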

3.ReLU

The Rectified Linear Unit (ReLU) is currently the most widely used activation function. It is defined as follows:

 f(x) = \max(0, x)

Graphically,


The key benefit of ReLU over other activation functions is that it does not activate all of the neurons at the same time. What does this mean? If the input to the ReLU function is negative, it is converted to zero and the neuron is not activated.
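
Here is a minimal sketch of ReLU, again assuming NumPy:

    import numpy as np

    # ReLU activation: negative inputs become 0, positive inputs pass through unchanged.
    def relu(x):
        return np.maximum(0, x)

    print(relu(np.array([-3.0, 0.0, 2.5])))  # [0.  0.  2.5]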

4.Leaky ReLU

The Leaky ReLU function is an improved variant of the ReLU function. Instead of defining the function as 0 for x less than 0, we define it as a small linear component of x. It can be summed up as follows:

f(x) = ax, \text{ if } x < 0
f(x) = x, \text{ if } x \geq 0

where a is a small constant (commonly 0.01).

Graphically,

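A minimal sketch in NumPy, assuming a slope of a = 0.01 for negative inputs:

    import numpy as np

    # Leaky ReLU: a small slope a for negative inputs instead of a hard zero.
    def leaky_relu(x, a=0.01):
        return np.where(x >= 0, x, a * x)

    print(leaky_relu(np.array([-3.0, 0.0, 2.5])))  # [-0.03  0.    2.5 ]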

InsideAIML is a platform where you can learn AI-related content through a variety of courses.
