
Tanh Activation Function

The Tanh activation function is very similar to the sigmoid/logistic activation function, and even has the same S-shape, the difference being an output range of -1 to 1. In Tanh, the larger the input (more positive), the closer the output value will be to 1.0, whereas the smaller the input (more negative), the closer the output will be to -1.0. Have a look at the gradient of the tanh activation function to understand its limitations. As you can see, it also faces the problem of vanishing gradients, similar to the sigmoid activation function. Plus, the gradient of the tanh function is ...
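As a rough, minimal sketch (not taken from the original post, and assuming NumPy is available), the following shows the tanh activation and its derivative, illustrating how the gradient approaches zero for large-magnitude inputs:

import numpy as np

def tanh(x):
    # Hyperbolic tangent activation: squashes inputs into the range (-1, 1)
    return np.tanh(x)

def tanh_gradient(x):
    # Derivative of tanh: 1 - tanh(x)^2, which shrinks toward 0 for large |x|
    return 1.0 - np.tanh(x) ** 2

inputs = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(tanh(inputs))           # outputs approach -1.0 and 1.0 at the extremes
print(tanh_gradient(inputs))  # gradients near 0 at the extremes (vanishing gradients)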

Rectified Linear Units (ReLU) in Deep Learning

The Rectified Linear Unit is the most commonly used activation function in deep learning models. The function returns 0 if it receives any negative input, but for any positive value x it returns that value back. So it can be written as f(x) = max(0, x). Graphically, it looks like this. It's surprising that such a simple function (one composed of two linear pieces) can allow your model to account for non-linearities and interactions so well. But the ReLU activation function works great in most applications, and it is very widely used as a result. Why It Works: Introducing Interactions and Non-linearities. Activation functions serve two primary purposes: 1) Help a model account for interaction effects. What's an interactive effect? It's when one variable A affects a prediction differently depending on the value of B...
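As a similar minimal sketch (again assuming NumPy, and not code from the original post), ReLU can be expressed directly from its definition f(x) = max(0, x):

import numpy as np

def relu(x):
    # Rectified Linear Unit: returns 0 for any negative input, x otherwise
    return np.maximum(0, x)

inputs = np.array([-3.0, -0.5, 0.0, 2.0, 7.0])
print(relu(inputs))  # [0.  0.  0.  2.  7.]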