Genetic algorithms: fitness function not working properly - neural-network

I have a binary dataset of size (m x n), with m instances and n features and m >> n, and a binary target variable (class attribute). I want to do feature selection using a genetic algorithm. I decided to use 0/1 bit strings in the GA, where 0 means a feature is not selected and 1 means it is selected. I generated K random bit strings, so each of the K bit strings represents a possible selection of features. To build a fitness function, I train a neural network on each of these K feature sets (models) and then, based on the accuracy on a separate validation set, I compute this fitness for each model:
fitness=tradeoffk*Valacc+(1-tradeoffk)*(ones(no_of_models,1)*n-featSel)/maxFeat;
This fitness function is a tradeoff between the number of features used for training (featSel) and the validation accuracy (Valacc) reported by the neural network, weighted by tradeoffk. I tried different values of tradeoffk such as 0.5, 0.2 and 0.8.
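To make the evaluation step concrete, here is a rough sketch of how one generation's fitness might be computed; the variable names (popBits, Xtrain, ytrain, Xval, yval) and the use of patternnet from the Neural Network Toolbox are my assumptions, not the original code:
% Sketch only: evaluate the fitness of each of the K chromosomes (rows of popBits).
K = size(popBits, 1);
n = size(popBits, 2);
Valacc  = zeros(K, 1);
featSel = zeros(K, 1);
for k = 1:K
    cols = popBits(k, :) == 1;                        % features selected by this chromosome
    featSel(k) = sum(cols);
    net = patternnet(10);                             % small placeholder architecture
    net.trainParam.showWindow = false;
    net = train(net, Xtrain(:, cols)', full(ind2vec(ytrain' + 1)));  % one-hot 0/1 targets
    [~, predIdx] = max(net(Xval(:, cols)'), [], 1);
    Valacc(k) = mean((predIdx' - 1) == yval);         % validation accuracy
end
fitness = tradeoffk*Valacc + (1-tradeoffk)*(n - featSel)/maxFeat;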
I ran 10 iterations of the GA. Each iteration ran for 20 generations, and I checked how the fitness grows. However, there is no significant change in the fitness. In a GA, the fitness is generally expected to grow and then stabilize, but here it grows only marginally.
For instance, here is sample output from one of these iterations:
gen=001 avgFitness=0.808 maxFitness=0.918
gen=002 avgFitness=0.808 maxFitness=0.918
gen=003 avgFitness=0.815 maxFitness=0.918
gen=004 avgFitness=0.815 maxFitness=0.918
gen=005 avgFitness=0.817 maxFitness=0.918
gen=006 avgFitness=0.818 maxFitness=0.918
gen=007 avgFitness=0.818 maxFitness=0.918
gen=008 avgFitness=0.819 maxFitness=0.918
gen=009 avgFitness=0.819 maxFitness=0.918
gen=010 avgFitness=0.819 maxFitness=0.918
gen=011 avgFitness=0.819 maxFitness=0.918
gen=012 avgFitness=0.819 maxFitness=0.918
gen=013 avgFitness=0.819 maxFitness=0.918
gen=014 avgFitness=0.819 maxFitness=0.918
gen=015 avgFitness=0.819 maxFitness=0.918
gen=016 avgFitness=0.819 maxFitness=0.918
gen=017 avgFitness=0.819 maxFitness=0.918
Also, the neural network takes a lot of time to train (> 2 hours for 20 generations).
Could anyone give further suggestions on where this is possibly going wrong?

You could use linear discriminant analysis (LDA) for your validation model instead of a neural network. It is much quicker to train, but of course it cannot represent non-linear relationships. Have you tried genetic programming? It has feature selection built in, as it builds a model and selects features at the same time. You could give HeuristicLab a try, which has a quite powerful genetic programming implementation that also includes classification.
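If you go the LDA route, the inner fitness evaluation could be swapped out roughly like this (a sketch assuming the Statistics Toolbox's classify and the same hypothetical variable names as above):
% Sketch: LDA-based fitness for one chromosome 'bits', replacing the NN training.
% ('diaglinear' may help if the binary features make the pooled covariance singular.)
cols    = bits == 1;                                % selected features
pred    = classify(Xval(:, cols), Xtrain(:, cols), ytrain, 'diaglinear');
valAcc  = mean(pred == yval);
featSel = sum(cols);
fitness = tradeoffk*valAcc + (1-tradeoffk)*(n - featSel)/maxFeat;
This only swaps the expensive network training for a closed-form classifier; the rest of the GA machinery stays the same.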

Related

Function approximation by ANN

So I have something like this,
y = l3*(sin(theta1)*cos(theta2)*cos(theta3) + cos(theta1)*sin(theta2)*cos(theta3) - sin(theta1)*sin(theta2)*sin(theta3) + cos(theta1)*cos(theta2)*sin(theta3)) + l2*(sin(theta1)*cos(theta2) + cos(theta1)*sin(theta2)) + l1*sin(theta1) + l0;
and something similar for x, where the thetai are angles from specified intervals and the li are some coefficients. The task is to approximate the inverse of the equation, so you set x and y and the result is the corresponding thetas. So I randomly generate thetas from the specified intervals and compute x and y. Then I normalize x and y to <-1,1> and the thetas to <0,1>. I use this data as the training set in such a way that the inputs of the network are the normalized x and y, and the outputs are the normalized thetas.
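For reference, a minimal sketch of the data generation and scaling described above (the angle ranges are taken from the edit below, and l0..l3 are placeholder coefficients):
% Sketch: generate random joint angles, compute y from the given formula, and rescale.
N = 2500;
l0 = 0; l1 = 1; l2 = 1; l3 = 1;                          % placeholder coefficients
theta1 = (   0 + 180*rand(N,1)) * pi/180;                % <0, 180> degrees -> radians
theta2 = (-130 + 260*rand(N,1)) * pi/180;                % <-130, 130>
theta3 = (-150 + 300*rand(N,1)) * pi/180;                % <-150, 150>
y = l3*(sin(theta1).*cos(theta2).*cos(theta3) + cos(theta1).*sin(theta2).*cos(theta3) ...
      - sin(theta1).*sin(theta2).*sin(theta3) + cos(theta1).*cos(theta2).*sin(theta3)) ...
    + l2*(sin(theta1).*cos(theta2) + cos(theta1).*sin(theta2)) + l1*sin(theta1) + l0;
% x would be computed analogously from its own (similar) formula, not shown in the question.
yN  = 2*(y - min(y))/(max(y) - min(y)) - 1;              % network input scaled to <-1, 1>
t1N = (theta1 - min(theta1))/(max(theta1) - min(theta1));% target scaled to <0, 1>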
I trained the network and tried different configurations, but the absolute error of the network was still around 24.9% after a whole night of training. That is so much that I don't know what to do.
Bigger training set?
Bigger network?
Experiment with learning rate?
Longer training?
Technical info
Error backpropagation was used as the training algorithm. Neurons have a sigmoid activation function and the units are biased. I tried the topologies [2 50 3] and [2 100 50 3]; the training set has length 1000 and training ran for 1000 cycles (in one cycle I go through the whole dataset). The learning rate is 0.2.
The approximation error was computed as
sum of abs(desired_output - reached_output) / dataset_length.
The optimizer used is stochastic gradient descent.
The loss function is
1/2 * (desired - reached)^2
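In MATLAB terms, those two quantities would look roughly like this (desired and reached are assumed to be matrices with one row per sample):
% The reported error and the per-element squared loss, mirroring the formulas above.
absErr = sum(sum(abs(desired - reached))) / size(desired, 1);   % divided by dataset length
loss   = 0.5 * (desired - reached).^2;                          % squared-error loss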
The network was implemented with my own MATLAB template for NNs. I know that is a weak point, but I'm fairly sure the template is correct (it has successfully solved the XOR problem, approximated differential equations, and approximated a state regulator). I show the template because this information may be useful.
Neuron class
Network class
EDIT:
I used 2500 unique data points within the theta ranges.
theta1<0, 180>, theta2<-130, 130>, theta3<-150, 150>
I also experimented with a larger dataset, but the accuracy doesn't improve.

Stochastic Gradient Descent for Logistic Regression always returns a cost of Inf and weight vector never gets any closer

I am trying to implement a logistic regression solver in MATLAB, and I am finding the weights by stochastic gradient descent. I am running into a problem where my data seems to produce an infinite cost, and no matter what happens it never goes down.
Both of these seem perfectly fine; I can't imagine why my cost function would ALWAYS return an infinite value.
Here is my training data, where the first column is the class (either 1 or 0) and the next seven columns are the features I am trying to regress on.
Your gradient has the wrong sign:
gradient = learningRate .* (trueClass(m) - predictedClass) .* transpose([1.0 features(m,:)])
It should be:
gradient = learningRate .* (predictedClass - trueClass(m)) .* transpose([1.0 features(m,:)])
See Andrew Ng's notes for details.
The gradient of the per-sample cost with respect to the j-th parameter is (h(x) - y) * x_j, where h(x) is the logistic function, y is the true label, and x is the feature vector.
Otherwise, when you subtract the wrongly signed gradient, you are doing gradient ascent. I believe that's why you eventually get an infinite cost: it is a dead loop and you never get out of it.
The update rule should still be:
weightVector = weightVector - gradient
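Put together, one SGD pass with the corrected sign looks roughly like this (the sigmoid helper and loop are my sketch; the variable names follow the snippets above):
% Sketch: one SGD epoch over the training set, using the corrected gradient sign.
sigmoid = @(z) 1.0 ./ (1.0 + exp(-z));
for m = 1:size(features, 1)
    x = transpose([1.0 features(m, :)]);                  % prepend the bias term
    predictedClass = sigmoid(weightVector' * x);          % h(x)
    gradient = learningRate .* (predictedClass - trueClass(m)) .* x;
    weightVector = weightVector - gradient;               % descent step
end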

Backpropagation for rectified linear unit activation with cross entropy error

I'm trying to implement gradient calculation for neural networks using backpropagation.
I cannot get it to work with cross entropy error and rectified linear unit (ReLU) as activation.
I managed to get my implementation working for squared error with sigmoid, tanh and ReLU activation functions. The cross entropy (CE) error gradient with sigmoid activation is computed correctly. However, when I change the activation to ReLU, it fails. (I'm skipping tanh for CE as it returns values in the (-1,1) range.)
Is it because of the behavior of the log function at values close to 0 (which is what ReLUs return approximately 50% of the time for normalized inputs)?
I tried to mitigate that problem with:
log(max(y,eps))
but it only helped to bring the error and gradients back to real numbers - they are still different from the numerical gradient.
I verify the results using numerical gradient:
num_grad = (f(W+epsilon) - f(W-epsilon)) / (2*epsilon)
The following MATLAB code presents a simplified and condensed backpropagation implementation used in my experiments:
function [f, df] = backprop(W, X, Y)
% W - weights
% X - input values
% Y - target values
act_type='relu'; % possible values: sigmoid / tanh / relu
error_type = 'CE'; % possible values: SE / CE
N=size(X,1); n_inp=size(X,2); n_hid=100; n_out=size(Y,2);
w1=reshape(W(1:n_hid*(n_inp+1)),n_hid,n_inp+1);
w2=reshape(W(n_hid*(n_inp+1)+1:end),n_out, n_hid+1);
% feedforward
X=[X ones(N,1)];
z2=X*w1'; a2=act(z2,act_type); a2=[a2 ones(N,1)];
z3=a2*w2'; y=act(z3,act_type);
if strcmp(error_type, 'CE') % cross entropy error - logistic cost function
f=-sum(sum( Y.*log(max(y,eps))+(1-Y).*log(max(1-y,eps)) ));
else % squared error
f=0.5*sum(sum((y-Y).^2));
end
% backprop
if strcmp(error_type, 'CE') % cross entropy error
d3=y-Y;
else % squared error
d3=(y-Y).*dact(z3,act_type);
end
df2=d3'*a2;
d2=d3*w2(:,1:end-1).*dact(z2,act_type);
df1=d2'*X;
df=[df1(:);df2(:)];
end
function f=act(z,type) % activation function
switch type
case 'sigmoid'
f=1./(1+exp(-z));
case 'tanh'
f=tanh(z);
case 'relu'
f=max(0,z);
end
end
function df=dact(z,type) % derivative of activation function
switch type
case 'sigmoid'
df=act(z,type).*(1-act(z,type));
case 'tanh'
df=1-act(z,type).^2;
case 'relu'
df=double(z>0);
end
end
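As a side note, the numerical check mentioned above (num_grad) can be made concrete like this (checkGrad is my sketch, not part of the original code):
function num_grad = checkGrad(W, X, Y)
% Central-difference check of the analytic gradient returned by backprop.
epsilon = 1e-6;
num_grad = zeros(size(W));
for i = 1:numel(W)
    Wp = W; Wp(i) = Wp(i) + epsilon;
    Wm = W; Wm(i) = Wm(i) - epsilon;
    num_grad(i) = (backprop(Wp, X, Y) - backprop(Wm, X, Y)) / (2*epsilon);
end
end
% Compare against the analytic gradient, e.g.: [~, df] = backprop(W, X, Y); max(abs(df - checkGrad(W, X, Y)))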
Edit
After another round of experiments, I found out that using a softmax for the last layer:
y=bsxfun(@rdivide, exp(z3), sum(exp(z3),2));
and softmax cost function:
f=-sum(sum(Y.*log(y)));
makes the implementation work for all activation functions, including ReLU.
This leads me to the conclusion that it is the logistic cost function (binary classifier) that does not work with ReLU:
f=-sum(sum( Y.*log(max(y,eps))+(1-Y).*log(max(1-y,eps)) ));
However, I still cannot figure out where the problem lies.
Each squashing function (sigmoid, tanh, and softmax in the output layer) corresponds to a different cost function.
It then makes sense that a ReLU (in the output layer) does not match the cross-entropy cost function.
I would try a simple squared error cost function to test a ReLU output layer.
The true power of the ReLU is in the hidden layers of a deep net, since it does not suffer from the vanishing gradient problem.
If you use gradient descent you need the derivative of the activation function to use later in the backpropagation step. Are you sure about 'df=double(z>0)'? For the logistic and tanh it seems to be right.
Further, are you sure about 'd3=y-Y'? I would say this is true when you use the logistic function, but not for the ReLU (the derivative is not the same and therefore will not lead to that simple equation).
You could use the softplus function, which is a smooth version of the ReLU and whose derivative is well known (the logistic function).
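For reference, a softplus pair that would slot into the act/dact helpers above might look like this (a sketch; the function names are mine):
% Softplus and its derivative (the logistic function), as drop-in anonymous functions.
softplus  = @(z) log(1 + exp(z));        % smooth version of max(0, z)
dsoftplus = @(z) 1 ./ (1 + exp(-z));     % derivative of softplus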
I think the flaw lies in comparing with the numerically computed derivatives. In your derivative-of-activation function, you define the derivative of ReLU at 0 to be 0, whereas computing the derivative numerically at x=0,
(ReLU(x+epsilon) - ReLU(x-epsilon)) / (2*epsilon), gives 0.5. Therefore, defining the derivative of ReLU at x=0 to be 0.5 will solve the problem.
I thought I'd share the experience I had with a similar problem. I too designed my multi-class classifier ANN so that all hidden layers use ReLU as the non-linear activation function and the output layer uses the softmax function.
My problem was related to some degree to the numerical precision of the programming language/platform I was using. In my case I noticed that if I used "plain" ReLU, it not only kills the gradient, but the programming language I used produced the following softmax output vectors (this is just an example sample):
⎡1.5068230536681645e-35⎤
⎢ 2.520367499064734e-18⎥
⎢3.2572859518007807e-22⎥
⎢ 1⎥
⎢ 5.020155103452967e-32⎥
⎢1.7620297760773188e-18⎥
⎢ 5.216008990667109e-18⎥
⎢ 1.320937038894421e-20⎥
⎢2.7854159049317976e-17⎥
⎣1.8091246170996508e-35⎦
Notice that the values of most of the elements are close to 0, but most importantly notice the 1 value in the output.
I used a different cross-entropy error function from the one you used. Instead of calculating log(max(1-y, eps)) I stuck to the basic log(1-y). So given the output vector above, when I calculated log(1-y) I got -Inf as the cross-entropy result, which obviously killed the algorithm.
I imagine that if your eps is not reasonably large, so that log(max(1-y, eps)) -> log(max(0, eps)) still yields a hugely negative value, you might be in a similar pickle to me.
My solution to this problem was to use leaky ReLU. Once I started using it, I could carry on using the multi-class cross-entropy, as opposed to the softmax cost function you decided to try.
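A leaky ReLU pair in the same style as the question's act/dact helpers could look like this (the 0.01 slope is an assumed, commonly used value):
% Leaky ReLU and its derivative as anonymous functions; the 0.01 slope is an assumption.
leakyRelu  = @(z) max(0.01*z, z);                        % small negative slope instead of a hard zero
dleakyRelu = @(z) double(z > 0) + 0.01*double(z <= 0);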

Regularization in Feed-forward Neural Network

I have just gone through some of the online open course lectures by Andrew Ng on Coursera. At the end of the lectures on neural networks, he explained regularization, but I am afraid I missed something. With regularization, the value of the cost function is calculated as follows:
J(theta) = -1/m * jValMain + lambda/(2*m) * jValReg
jValMain is a set of sums over y and the output of the NN. The second component, jValReg, applies the regularization and looks something like this:
jValReg = sum( sum( sum( Theta(l)(i)(j)^2 ) ) )
Theta is the set of weights, m is the number of elements/cases in the database, and then there is lambda. What is lambda? Is it a scalar, a vector, or a matrix? How do we apply regularization via lambda? Does lambda regulate a particular ith, jth weight in the lth layer, or does it regulate all weights by one number? It somehow confuses me. If anyone is familiar with this concept, I will be grateful for any help.
Cheers!
lambda is the regularization parameter in your estimation. Think of it as a means to control the bias in your estimate. It is a scalar and is often used to prevent overfitting of the data. Here are a few lines taken from the notes of the Coursera assignments:
... the value of lambda can significantly affect the results of regularized polynomial regression on the training and cross validation set. In particular, a model without regularization (lambda = 0) fits the training set well, but does not generalize. Conversely, a model with too much regularization (lambda = 100) does not fit the training set and testing set well. A good choice of lambda (e.g., lambda = 1) can provide a good fit to the data.
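To make the single-scalar role of lambda concrete, here is a rough sketch of the regularized cost for a one-hidden-layer network in the Coursera-style layout (a3, Theta1, Theta2, Y, m and lambda are assumed names; the bias columns are excluded from the penalty):
% Sketch: lambda is one scalar applied to the squared magnitudes of all non-bias weights.
% a3 is the network output (m x K), Y holds the targets (m x K).
jValMain = sum(sum( Y.*log(a3) + (1-Y).*log(1-a3) ));
jValReg  = sum(sum(Theta1(:, 2:end).^2)) + sum(sum(Theta2(:, 2:end).^2));
J = -1/m * jValMain + lambda/(2*m) * jValReg;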

MATLAB fminunc() not completing for large datasets. Works for smaller ones

I am performing logistic regression in MATLAB with L2 regularization on text data. My program works well for small datasets. For larger sets, it keeps running indefinitely.
I have seen the potentially duplicate question (matlab fminunc not quitting (running indefinitely)). In that question, the cost for initial theta was NaN and there was an error printed in the console. For my implementation, I am getting a real valued cost and there is no error even with verbose parameters being passed to fminunc(). Hence I believe this question might not be a duplicate.
I need help in scaling it to larger sets. The size of the training data I am currently working on is roughly 10k x 12k (10k text files cumulatively containing 12k words). Thus, I have m = 10k training examples and n = 12k features.
My cost function is defined as follows:
function [J gradient] = costFunction(X, y, lambda, theta)
[m n] = size(X);
g = inline('1.0 ./ (1.0 + exp(-z))');
h = g(X*theta);
J =(1/m)*sum(-y.*log(h) - (1-y).*log(1-h))+ (lambda/(2*m))*norm(theta(2:end))^2;
gradient(1) = (1/m)*sum((h-y) .* X(:,1));
for i = 2:n
gradient(i) = (1/m)*sum((h-y) .* X(:,i)) + (lambda/m)*theta(i); % plus sign: matches the +lambda/(2m)*||theta||^2 term in J
end
end
I am performing optimization using MATLAB's fminunc() function. The parameters I pass to fminunc() are:
options = optimset('LargeScale', 'on', 'GradObj', 'on', 'MaxIter', MAX_ITR);
theta0 = zeros(n, 1);
[optTheta, functionVal, exitFlag] = fminunc(@(t) costFunction(X, y, lambda, t), theta0, options);
I am running this code on a machine with these specifications:
Macbook Pro i7 2.8GHz / 8GB RAM / MATLAB R2011b
The cost function seems to behave correctly. For initial theta, I get acceptable values of J and gradient.
K>> theta0 = zeros(n, 1);
K>> [j g] = costFunction(X, y, lambda, theta0);
K>> j
j =
0.6931
K>> max(g)
ans =
0.4082
K>> min(g)
ans =
-2.7021e-05
The program takes incredibly long to run. I started profiling with MAX_ITR = 1 for fminunc(). With a single iteration, the program did not complete execution even after a couple of hours had elapsed. My questions are:
Am I doing something wrong mathematically?
Should I use any other optimizer instead of fminunc()? With LargeScale=on, fminunc() uses trust-region algorithms.
Is this problem cluster-scale and should not be run on a single machine?
Any other general tips will be appreciated. Thanks!
This helped solve the problem: I was able to get this working by setting the LargeScale flag to 'off' in fminunc(). From what I gather, LargeScale = 'on' uses trust-region algorithms, while keeping it 'off' uses quasi-Newton methods. Using quasi-Newton methods and passing the gradient worked a lot faster for this particular problem and gave very nice results.
Here is my advice:
First, set the MATLAB option to show debug output during the run, or simply print the cost inside your cost function, so you can monitor the iteration count and the error.
And second, which is very important:
Your problem is ill-posed, or rather underdetermined. You have a 12k-dimensional feature space and provide only 10k examples, which means that for an unconstrained optimization the answer is -Inf. To give a quick example of why this is, your problem is like:
Minimize x+y+z given that x+y-z = 2. The feature space has dimension 3, but the spanned vector space is 1-D. I suggest using PCA or CCA to reduce the dimensionality of the text files, retaining up to 99% of their variation. This will probably give you a feature space of roughly 100-200 dimensions.
PS: Just to point out that the problem is very far from cluster-scale requirements, which usually means 1M+ data points, and that fminunc is not at all overkill. LIBSVM has nothing to do with it, because fminunc is just an optimizer, while LIBSVM is a classifier. To be clear, LIBSVM uses something similar to fminunc, just with a different objective function.
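As a rough illustration of the PCA suggestion above (a plain-SVD sketch rather than a toolbox call; X is the m x n data matrix and the 99% threshold follows the answer):
% Sketch: project the data onto the principal components covering 99% of the variance.
Xc = bsxfun(@minus, X, mean(X, 1));            % center each feature
[~, S, V] = svd(Xc, 'econ');                   % for very large X, svds on a sparse X may be preferable
varExplained = cumsum(diag(S).^2) / sum(diag(S).^2);
k = find(varExplained >= 0.99, 1);             % number of components to keep
Xreduced = Xc * V(:, 1:k);                     % m x k reduced feature matrix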
Here is what I suspect the issue to be, based on my experience with this type of problem: you are using a dense representation for X instead of a sparse one. You are also seeing the typical effect in text classification that the number of terms grows roughly linearly with the number of samples. Effectively, the cost of the matrix multiplication X*theta goes up quadratically with the number of samples.
By contrast, a good sparse matrix representation only iterates over the non-zero elements to do a matrix multiplication, which tends to be roughly constant per document if the documents are of roughly constant length, causing a linear instead of quadratic slowdown in the number of samples.
I'm not a MATLAB guru, but I know it has a sparse matrix package, so try to use that.
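In MATLAB that is essentially a one-line change before the optimization (a sketch; X is the dense document-term matrix from the question):
% Convert the dense document-term matrix to MATLAB's built-in sparse storage.
Xs = sparse(X);                            % keeps only the non-zero counts
h  = 1.0 ./ (1.0 + exp(-(Xs*theta)));      % X*theta now only touches non-zero entries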