I was learning about activation functions in neural networks. The problem with a simple linear function is described below:
A = cx, whose derivative with respect to x is c. That means the gradient has no relationship with x: it is a constant gradient, and the descent proceeds along a constant gradient. If there is an error in prediction, the changes made by backpropagation are constant and do not depend on the change in input delta(x)!
The derivative of the ReLU function is also a constant. My question is: "How can we backpropagate with the ReLU function when its derivative is a constant and does not depend on the change in input?"
Linear functions are not used as activation functions in neural networks.
The ReLU activation function is defined as
f(x) = max(0,x)
It is not linear and its derivative is not constant. The derivative is 0 for negative x and 1 for positive x.
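As a minimal MATLAB-style sketch (matching the code further down this thread), both the function and its derivative can be applied element-wise:
relu  = @(x) max(0, x);       % 0 for x < 0, x for x >= 0
drelu = @(x) double(x > 0);   % 0 for x < 0, 1 for x > 0 (the value at x = 0 is chosen as 0 here)
relu([-2 -0.5 0 1 3])         % -> [0 0 0 1 3]
drelu([-2 -0.5 0 1 3])        % -> [0 0 0 1 1]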
I am about to implement backpropagation on a neural network that uses ReLU.
In a previous project of mine, I did it on a network that was using the sigmoid activation function, but now I'm a little bit confused, since ReLU doesn't have a derivative.
Here's an image showing how weight5 contributes to the total error.
In this example, d(out)/d(net) = a*(1 - a) if I use the sigmoid function.
What should I write instead of "a*(1 - a)" to make the backpropagation work?
since ReLU doesn't have a derivative.
No, ReLU does have a derivative. I assume you are using the ReLU function f(x) = max(0,x). It means that if x <= 0 then f(x) = 0, else f(x) = x. In the first case, when x < 0, the derivative of f(x) with respect to x is f'(x) = 0. In the second case, it is clearly f'(x) = 1.
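So, as a rough sketch of the answer to the question above, the term that replaces a*(1 - a) is the ReLU derivative evaluated at the node's weighted input (here called net, a name assumed just for illustration):
% sigmoid: d(out)/d(net) = a * (1 - a)
% ReLU:    d(out)/d(net) = 1 if net > 0, 0 otherwise
dout_dnet = double(net > 0);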
The ReLU derivative is 1 for x >= 0 and 0 for x < 0.
The ReLU derivative can be implemented with NumPy's np.heaviside step function, e.g. np.heaviside(x, 1). The second parameter defines the return value when x = 0, so passing 1 means the derivative is 1 at x = 0.
Coming at this from a maths standpoint, it should be one half at the discontinuity.
It's quite common to average things like that, since Fourier series behave in that manner.
I'm writing a neural network but I have trouble training it using backpropagation, so I suspect there is a bug/mathematical mistake somewhere in my code. I've spent hours reading different literature on how the equations of backpropagation should look, but I'm a bit confused by it, since different books say different things, or at least use wildly confusing and contradictory notation. So I was hoping that someone who knows with 100% certainty how it works could clear it up for me.
There are two steps in the backpropagation that confuse me. Let's assume for simplicity that I only have a three-layer feed-forward net, so we have connections between input-hidden and hidden-output. I call the weighted sum that reaches a node z, and the same value after it has passed through the node's activation function a.
Apparently I'm not allowed to embed an image with the equations that my question concerns, so I will have to link it like this: https://i.stack.imgur.com/CvyyK.gif
Now. During backpropagation, when calculating the error in the nodes of the output layer, is it:
[Eq. 1] Delta_output = (output - target) * f'(a_output), where f' denotes the derivative of the activation function
Or is it
[Eq. 2] Delta_output = (output - target) * f'(z_output)
And during the error calculation of the nodes in the hidden layer, same thing, is it:
[Eq. 3] Delta_hidden = f'(a_h) * sum(w_h * Delta_output)
Or is it
[Eq. 4] Delta_hidden = f'(z_h) * sum(w_h * Delta_output)
So the question is basically; when running a node's value through the derivative version of the activation function during backpropagation, should the value be expressed as it was before or after it passed the activation function (z or a)?
Is the first or the second equation in the image correct and similarly is the third or fourth equation in the image correct?
Thanks.
You have to compute the derivatives with the values before they have passed through the activation function. So the answer is "z".
Some activation functions simplify the computation of the derivative, like tanh:
a = tanh(z)
derivative of tanh(z) with respect to z = 1.0 - tanh(z) * tanh(z) = 1.0 - a * a
This simplification can lead to the confusion you were talking about, but here is another activation function where no such confusion is possible:
a = sin(z)
derivative of sin(z) with respect to z = cos(z)
You can find a list of activation functions and their derivatives on wikipedia: activation function.
Some networks don't have an activation function on the output nodes, so the derivative is 1.0 and delta_output = output - target (or delta_output = target - output, depending on whether you add or subtract the weight change).
If you are using an activation function on the output nodes, then you'll have to give targets that are within the range of the activation function, e.g. [-1, 1] for tanh(z).
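A minimal MATLAB-style sketch of the recommended form (Eq. 2 and Eq. 4), assuming tanh as the activation, column vectors z_output and z_h for the pre-activations, and a hidden-to-output weight matrix w_h of size n_hidden-by-n_output (all names assumed for illustration):
dact = @(z) 1 - tanh(z).^2;                          % derivative evaluated at z, not at a
delta_output = (output - target) .* dact(z_output);  % [Eq. 2]
delta_hidden = dact(z_h) .* (w_h * delta_output);    % [Eq. 4], summing over the output nodes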
Can anybody tell me how the above-mentioned algorithm works? I get it theoretically, but I need a demonstration. A simple 2x2 matrix will do. I hope you can help me on this. This is in relation to a Deep Learning course I am taking.
The ReLU is simply a function:
f(x)=max(0,x)
So if you have a matrix:
X = [[0, 3], [-2, 7]] ----> f(X) = [[0, 3], [0, 7]]
During the backward pass, the gradient will flow only through the positive elements of the input matrix. The gradient mask obtained for the previous example would be:
X = [[0, 3], [-2, 7]] ----> df(X)/dX = [[0, 1], [0, 1]]
Since the derivative of ReLU is not defined for x = 0, another valid mask could have been:
df(X)/dX = [[c, 1], [0, 1]] with c between 0 and 1.
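A quick MATLAB-style sketch of this example:
X = [0 3; -2 7];
f    = max(0, X);       % forward pass:  [0 3; 0 7]
mask = double(X > 0);   % gradient mask: [0 1; 0 1]  (0 chosen at X == 0)
% during the backward pass, an incoming gradient dL/df is simply masked:
% dL/dX = mask .* dLdf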
The ReLU formula is simply f(x) = max(0,x), so it basically sets all negative numbers in the convolution matrix to zero.
Here you have a good explanation of the RELU and Convolutional Neural Nets.
I would like to use MATLAB's fminsearch function to search for the best hyperparameters of my SVM classifier with a weighted RBF kernel. fminsearch uses the Nelder-Mead simplex method.
Let's say I have the following hyperparameters: C, gamma, w1, ..., wn, where the wi are the weights of the kernel.
Additionally, I have the constraint that sum(wi) = 1, i.e. all weights must sum up to one.
Is there a way to use Nelder-Mead with this equality constraint? I know that there is the fminsearchbnd function for MATLAB, but I think it can only handle bound (inequality) constraints.
Edit: I'm using an SVM classifier and the weights are used in a weighted RBF kernel (one weight for each feature). The parameters to estimate are thus C, gamma and the weights. The cost function is the accuracy.
Can you substitute out one of the w(i)? That is, replace e.g. w1 by 1 - w2 - w3 - ... (and drop the constraint). Otherwise, have a look at fmincon, which allows explicit constraints. In addition, you may need 0 <= w(i) <= 1.
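A rough MATLAB sketch of the substitution idea; svm_cv_accuracy is a hypothetical placeholder for your cross-validated accuracy function, and n stands for the number of kernel weights:
% optimize over p = [C, gamma, w2, ..., wn]; recover w1 = 1 - sum(w2..wn)
obj = @(p) -svm_cv_accuracy(p(1), p(2), [1 - sum(p(3:end)), p(3:end)]);  % negate: fminsearch minimizes
p0  = [1, 1, ones(1, n-1) / n];    % some starting point
p   = fminsearch(obj, p0);
C = p(1); gamma = p(2); w = [1 - sum(p(3:end)), p(3:end)];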
I'm trying to implement gradient calculation for neural networks using backpropagation.
I cannot get it to work with cross entropy error and rectified linear unit (ReLU) as activation.
I managed to get my implementation working for squared error with sigmoid, tanh and ReLU activation functions. Cross entropy (CE) error with sigmoid activation gradient is computed correctly. However, when I change the activation to ReLU, it fails. (I'm skipping tanh for CE as it returns values in the (-1, 1) range.)
Is it because of the behavior of the log function at values close to 0 (which ReLUs return approx. 50% of the time for normalized inputs)?
I tried to mitigate that problem with:
log(max(y,eps))
but it only helped to bring the error and gradients back to real numbers; they are still different from the numerical gradient.
I verify the results using numerical gradient:
num_grad = (f(W+epsilon) - f(W-epsilon)) / (2*epsilon)
The following MATLAB code presents a simplified and condensed backpropagation implementation used in my experiments:
function [f, df] = backprop(W, X, Y)
% W - weights
% X - input values
% Y - target values
act_type='relu'; % possible values: sigmoid / tanh / relu
error_type = 'CE'; % possible values: SE / CE
N=size(X,1); n_inp=size(X,2); n_hid=100; n_out=size(Y,2);
w1=reshape(W(1:n_hid*(n_inp+1)),n_hid,n_inp+1);
w2=reshape(W(n_hid*(n_inp+1)+1:end),n_out, n_hid+1);
% feedforward
X=[X ones(N,1)];
z2=X*w1'; a2=act(z2,act_type); a2=[a2 ones(N,1)];
z3=a2*w2'; y=act(z3,act_type);
if strcmp(error_type, 'CE') % cross entropy error - logistic cost function
f=-sum(sum( Y.*log(max(y,eps))+(1-Y).*log(max(1-y,eps)) ));
else % squared error
f=0.5*sum(sum((y-Y).^2));
end
% backprop
if strcmp(error_type, 'CE') % cross entropy error
d3=y-Y;
else % squared error
d3=(y-Y).*dact(z3,act_type);
end
df2=d3'*a2;
d2=d3*w2(:,1:end-1).*dact(z2,act_type);
df1=d2'*X;
df=[df1(:);df2(:)];
end
function f=act(z,type) % activation function
switch type
case 'sigmoid'
f=1./(1+exp(-z));
case 'tanh'
f=tanh(z);
case 'relu'
f=max(0,z);
end
end
function df=dact(z,type) % derivative of activation function
switch type
case 'sigmoid'
df=act(z,type).*(1-act(z,type));
case 'tanh'
df=1-act(z,type).^2;
case 'relu'
df=double(z>0);
end
end
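For reference, a minimal sketch of the numerical gradient check described above, calling backprop element by element (epsilon is a small step; W, X, Y as in the code):
epsilon = 1e-4;
[~, df] = backprop(W, X, Y);            % analytical gradient
num_grad = zeros(size(W));
for i = 1:numel(W)
    Wp = W; Wp(i) = Wp(i) + epsilon;
    Wm = W; Wm(i) = Wm(i) - epsilon;
    num_grad(i) = (backprop(Wp, X, Y) - backprop(Wm, X, Y)) / (2*epsilon);
end
max(abs(num_grad - df))                 % should be close to 0 if the gradients match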
Edit
After another round of experiments, I found out that using a softmax for the last layer:
y=bsxfun(@rdivide, exp(z3), sum(exp(z3),2));
and softmax cost function:
f=-sum(sum(Y.*log(y)));
makes the implementation work for all activation functions, including ReLU.
This leads me to the conclusion that it is the logistic cost function (binary classifier) that does not work with ReLU:
f=-sum(sum( Y.*log(max(y,eps))+(1-Y).*log(max(1-y,eps)) ));
However, I still cannot figure out where the problem lies.
Each squashing function (sigmoid, tanh, softmax) in the output layer corresponds to a different cost function.
So it makes sense that a ReLU in the output layer does not match the cross-entropy cost function.
I would try a simple squared error cost function to test a ReLU output layer.
The true power of ReLU is in the hidden layers of a deep net, since it does not suffer from the vanishing gradient problem.
If you use gradient descent, you need the derivative of the activation function to be used later in the backpropagation step. Are you sure about df=double(z>0)? For the logistic and tanh functions it seems to be right.
Further, are you sure about d3=y-Y? I would say this holds when you use the logistic function, but not for ReLU (the derivative is not the same and therefore will not lead to that simple equation).
You could use the softplus function, which is a smooth version of ReLU and whose derivative is well known (the logistic function).
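A small MATLAB-style sketch of that suggestion, which could be added as another case in the act/dact functions above:
softplus  = @(z) log(1 + exp(z));      % smooth approximation of ReLU
dsoftplus = @(z) 1 ./ (1 + exp(-z));   % its derivative: the logistic (sigmoid) function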
I think the flaw lies in comparing with the numerically computed derivatives. In your derivative-of-activation function (dact), you define the derivative of ReLU at 0 to be 0, whereas numerically computing the derivative at x = 0 gives
(ReLU(x+epsilon) - ReLU(x-epsilon)) / (2*epsilon) at x = 0, which is 0.5. Therefore, defining the derivative of ReLU at x = 0 to be 0.5 will solve the problem.
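As a sketch, that convention can be expressed as a single element-wise expression (in the same style as the dact function above):
drelu = @(z) double(z > 0) + 0.5 * (z == 0);   % 0 for z < 0, 1 for z > 0, 0.5 at z = 0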
I thought I'd share the experience I had with a similar problem. I too designed my multi-class classifier ANN so that all hidden layers use ReLU as the non-linear activation function and the output layer uses the softmax function.
My problem was related to some degree to the numerical precision of the programming language/platform I was using. In my case I noticed that if I used "plain" ReLU, not only does it kill the gradient, but the platform also produced softmax output vectors like the following (this is just an example sample):
[ 1.5068230536681645e-35
  2.520367499064734e-18
  3.2572859518007807e-22
  1
  5.020155103452967e-32
  1.7620297760773188e-18
  5.216008990667109e-18
  1.320937038894421e-20
  2.7854159049317976e-17
  1.8091246170996508e-35 ]
Notice the values of most of the elements are close to 0, but most importantly notice the 1 value in the output.
I used a different cross-entropy error function than the one you used. Instead of calculating log(max(1-y, eps)) I stuck to the basic log(1-y). So given the output vector above, when I calculated log(1-y) I got -Inf as a result of the cross-entropy, which obviously killed the algorithm.
I imagine that if your eps is not high enough for log(max(1-y, eps)) -> log(max(0, eps)) to avoid an extremely negative log output, you might end up in a similar pickle to mine.
My solution to this problem was to use Leaky ReLU. Once I started using it, I could carry on using the multi-class cross-entropy, as opposed to the softmax cost function you decided to try.
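For reference, a minimal MATLAB-style sketch of Leaky ReLU and its derivative; the slope 0.01 for negative inputs is just a common choice, not something stated in the post:
alpha  = 0.01;                                   % slope for negative inputs (assumed value)
lrelu  = @(z) max(z, 0) + alpha * min(z, 0);     % z for z > 0, alpha*z for z <= 0
dlrelu = @(z) double(z > 0) + alpha * (z <= 0);  % 1 for z > 0, alpha for z <= 0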