Error function in Artificial Neural Network trained using backpropagation - neural-network

In various literature I keep seeing references to an
error function
but I'm not quite sure what it means. I am using the sigmoid function for activation. Does the error function mean the following equations:
differential = actualOutput * (1 - actualOutput)
error = (expectedOutput - actualOutput) * differential
or is the following:
error = expectedOutput-actualOutput
?

The error function is the function you try to minimize. What you have listed above is a set of error functions and the derivatives of some of them. It can be confusing when the literature uses the same term for the error function and for its derivative. Just remember that we wish to minimize the error in our network, and the function that helps us achieve this is the error function.
The error function
The most common error function is half the sum of the squares of the differences between all output_desired and output_actual values:
E = 1/2 * sum((output_desired - output_actual)^2)
The derivative of the error function for the output layer
dE/doutput_actual = -(output_desired - output_actual)
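To make this concrete, here is a small Python sketch of the sum-of-squares error and its derivative (the function names are my own illustration, not from the thread):

```python
def squared_error(desired, actual):
    # E = 1/2 * sum((desired - actual)^2)
    return 0.5 * sum((d - a) ** 2 for d, a in zip(desired, actual))

def squared_error_derivative(desired, actual):
    # dE/d(actual) = -(desired - actual), element-wise
    return [-(d - a) for d, a in zip(desired, actual)]
```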

Related

Different loss functions for backpropagation

I came across some different error calculation functions for backpropagation:
Squared error function from http://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
or a nice explanation of the derivation of the BP loss function:
Error = Output(i) * (1 - Output(i)) * (Target(i) - Output(i))
Now I'm wondering how many more there are, and what difference the choice makes for training?
Also, since I understand that the second example uses the derivative of the activation function used by the layer, does the first one also do this in some way? And would this be true for any loss function (if there are more)?
Finally, how to know which one to use, and when?
This was a very broad question, but I can shed some light on the error / cost function part.
Cost functions
There are many different cost functions that can be applied when working with neural networks; none of them is specific to neural networks. The most common cost functions in NNs are probably the Mean Squared Error (MSE) and the Cross Entropy cost function. The latter is often the most appropriate when working with logistic or softmax output layers. The MSE cost function, on the other hand, is convenient since it does not require the output values to be in the range [0, 1].
The different cost functions exhibit different convergence properties and have their own pros and cons. You'll have to read up on those that are interesting to you.
List of cost functions
Danielle Ensign has compiled a short, nice list of cost functions over at CrossValidated.
Sidenote
You have confused the derivative of the squared error function. The equation you've defined as the derivative of the error function is actually the derivative of the error function times the derivative of your output layer's activation function. This multiplication calculates the delta of the output layer.
The squared error function and its derivative are defined as:
E = 1/2 * (target - output)^2
dE/doutput = (output - target)
While the sigmoid activation function and its derivative are defined as:
sigmoid(net) = 1 / (1 + exp(-net))
sigmoid'(net) = sigmoid(net) * (1 - sigmoid(net))
The delta of the output layer is defined as:
delta = dE/doutput * sigmoid'(net) = (output - target) * output * (1 - output)
And this chain-rule structure, the error derivative times the activation derivative, is true for all cost functions.
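As a small illustration of my own (not from the thread), the delta can be assembled from exactly those two pieces in Python:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def output_delta(target, net):
    # neuron output for pre-activation `net`
    output = sigmoid(net)
    error_derivative = output - target             # d(1/2*(target-output)^2)/d(output)
    activation_derivative = output * (1 - output)  # sigmoid'(net)
    return error_derivative * activation_derivative
```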

Multi-variable Fitness Function error using Optimization Tool

I have the following fitness function:
function f = objfun(x,t)
f = x.*(t-x);
end
When I try to use this code as a fitness function with MATLAB's Optimization Tool and the Genetic Algorithm (ga) solver, I get the following error:
Error running optimization. Not enough input arguments.
I know the function has only 2 variables and I'm passing it those few variables, so I have no idea why I am getting this error.
Can someone please help me fix this?
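For context: this error typically appears because the solver calls the fitness function with a single argument (the candidate solution), so the extra parameter t is never supplied. The usual fix in any environment is to bind the fixed parameter first and hand the solver a one-argument function. A Python sketch of that pattern (illustrative only, not MATLAB syntax):

```python
def objfun(x, t):
    # fitness function with an extra fixed parameter t
    return x * (t - x)

def bind_t(t):
    # return a single-argument function a solver can call as f(x)
    return lambda x: objfun(x, t)

f = bind_t(10.0)  # f now only needs x
```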
I have never worked in MATLAB because I heard that it is slow (see for instance this thread: Performance Tradeoff - When is MATLAB better/slower than C/C++).
For genetic algorithms you need the highest speed possible, because for complex problems you need very large populations...
I suggest using C/C++. Here is a very light C implementation of a Genetic Algorithm that I've made for solving the function optimization problem: http://create-technology.blogspot.ro/2015/03/a-genetic-algorithm-for-solving.html

Embedded MATLAB function : Block Error

I made a design in Simulink to implement a PID controller using an Embedded MATLAB Function block.
My function is:
function [u,integral,previous_error] = fcn(Kp,Td,Ti,error,previous_error1,integral1)
dt = 1;
Ki= Kp/Ti;
Kd=Kp*Td;
integral = integral1 + error*dt; % integral term
derivative = (error-previous_error1)/dt; % derivative term
u = Kp*error+Ki*integral+Kd*derivative; % action of control
previous_error=error;
%integral=integral;
end
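For reference, the same discrete PID update can be sketched outside Simulink. This Python version (my own illustration, not part of the question) keeps the integral and previous error as explicit state, which is what the extra inputs and outputs of the MATLAB Function block are doing:

```python
def pid_step(Kp, Td, Ti, error, previous_error, integral, dt=1.0):
    Ki = Kp / Ti  # integral gain
    Kd = Kp * Td  # derivative gain
    integral = integral + error * dt            # integral term
    derivative = (error - previous_error) / dt  # derivative term
    u = Kp * error + Ki * integral + Kd * derivative  # control action
    # caller feeds `integral` and the returned error back in next step
    return u, integral, error
```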
This is how my model looks (a part of the entire model):
I am getting the following error :
Simulink cannot solve the algebraic loop containing 'pid_block1/MATLAB Function' at time 2.2250754336053813E-8 using the TrustRegion-based algorithm due to one of the following reasons: the model is ill-defined i.e., the system equations do not have a solution; or the nonlinear equation solver failed to converge due to numerical issues.
To rule out solver convergence as the cause of this error, either
a) switch to LineSearch-based algorithm using
set_param('pid_block1','AlgebraicLoopSolver','LineSearch')
b) reducing the ode45 solver RelTol parameter so that the solver takes smaller time steps.
If the error persists in spite of the above changes, then the model is likely ill-defined and requires modification.
Any idea why I am getting it?
Should I use global variables for integral and previous_error here?
Thanks in advance.
Erm... unless there is a specific reason why you need it in this form, I would strongly recommend replacing your MATLAB Function block with standard Simulink blocks such as:
Gain blocks for Kp, KI, and KD
Sum blocks for all addition and subtraction
Derivative block for derivative
Integrator block for integration
etc....
I've found that algebraic loop problems are really hard to get rid of and are usually best avoided altogether. The method I suggest above can be used for almost any controller type and has worked quite nicely for me in the past.
If the issue is neatness, you can always create your own "PID controller" subsystem or library part.
Let me know if you need some more detail or a diagram on how you might do this.

is the Netlab's function mlperr calculating the mean squared error?

I wonder if mlperr from the Netlab package calculates the mean squared error.
The documentation states that it depends on the output units' activation function. How does that make sense? Shouldn't it be independent of that?
I also tried reading the source code of mlperr and I didn't see any sign that it implements an MSE error function.
Any Netlab expert here that can offer some insights? Thanks! :)
This function evaluates the multilayer perceptron according to its output activation. It assumes the most common pairing of output activation and error function, so:
for linear output it returns the MSE error
0.5*sum(sum((y - t).^2))
for logistic output it returns the cross entropy error
-sum(sum(t.*log(y) + (1 - t).*log(1 - y)))
for softmax output it returns the corresponding energy error
-sum(sum(t.*log(y)))
The whole source can be seen here.
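A Python sketch of my own that mirrors the three MATLAB expressions above (y and t are lists of output/target rows; the function name and `output` flag are my invention, not Netlab's API):

```python
import math

def mlp_error(y, t, output="linear"):
    pairs = [(yi, ti) for yr, tr in zip(y, t) for yi, ti in zip(yr, tr)]
    if output == "linear":    # sum-of-squares error
        return 0.5 * sum((yi - ti) ** 2 for yi, ti in pairs)
    if output == "logistic":  # cross-entropy error
        return -sum(ti * math.log(yi) + (1 - ti) * math.log(1 - yi)
                    for yi, ti in pairs)
    if output == "softmax":   # corresponding "energy" error
        return -sum(ti * math.log(yi) for yi, ti in pairs)
    raise ValueError(output)
```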

Simulate ARMA process in matlab

I am trying to simulate a time series process using Matlab. For example, let's see the following example:
http://www.mathworks.com/help/econ/arima.print.html
When I run the following code
model = arima(1,0,1);
[fit,VarCov] = estimate(model,Y,'print',false);
I get the following error:
??? Undefined function or method 'arima' for input arguments of type 'double'.
Does MATLAB contain functions for ARMA models? Can I calculate different quantities, like the autocorrelation or autocovariance at different lags? Or estimate ARMA parameters?
You need to make sure that you have a license and the code for the Econometrics Toolbox before using that function, since arima is part of that toolbox. MATLAB has annoying license requirements for both the main software and its various toolboxes :(
But you may be able to find some public code that does the same thing. Check out http://www.mathtools.net/
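If all you need is to simulate the process itself (rather than fit it), that takes only a few lines in any language. A pure-Python sketch of my own for an ARMA(1,1) process, following the standard recursion y[t] = c + phi*y[t-1] + e[t] + theta*e[t-1]:

```python
import random

def simulate_arma11(n, phi, theta, c=0.0, sigma=1.0, seed=0):
    # simulate y[t] = c + phi*y[t-1] + e[t] + theta*e[t-1],
    # with e[t] drawn i.i.d. from N(0, sigma^2)
    rng = random.Random(seed)
    y, prev_y, prev_e = [], 0.0, 0.0
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        yt = c + phi * prev_y + e + theta * prev_e
        y.append(yt)
        prev_y, prev_e = yt, e
    return y
```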