I use Q-learning with a neural network as the function approximator. After several training iterations, the weights take values in the range from 0 to 10. Can the weights take such values, or does this indicate badly chosen network parameters?
Weights can take those values. Especially when you backpropagate over a large number of iterations, the connections that need to be 'heavy' get 'heavier'.
There are plenty of examples of trained neural networks with weights larger than 1.
Also, there is no such thing as a hard limit on weight values. [Figure omitted: weight legend.]
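To see this for yourself, here is a minimal sketch (not from the original answer; it uses the toolbox demo data set simplefit_dataset and an arbitrary hidden layer size) that trains a small shallow network and inspects the learned weights. Values well outside [0, 1] are perfectly normal:

[x, t] = simplefit_dataset;          % demo inputs/targets shipped with the toolbox
net = feedforwardnet(5);             % 5 hidden neurons (arbitrary choice)
net = train(net, x, t);
inputWeights = net.IW{1,1};          % input-to-hidden weights
layerWeights = net.LW{2,1};          % hidden-to-output weights
max(abs([inputWeights(:); layerWeights(:)]))   % typically well above 1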
Suppose you have two convolutional neural networks implemented in MATLAB, each composed of these layers:
imageInputLayer
convolution2dLayer
maxPooling2dLayer
reluLayer
softmaxLayer
fullyConnectedLayer
classificationLayer
Both networks have exactly the same architecture.
I apply the same training method to both networks, with the same hyperparameters.
Both networks have exactly the same weights in their corresponding layers.
That is, each network is a replica of the other.
Both networks are trained using exactly the same training set and validation set, without shuffling.
I am wondering:
Will the scores (training error and validation error) and trained weights be different for both?
Does it depend upon the method for training?
In short: yes to both, because the initial weights are usually initialized with random numbers.
A tad less short: a neural network is simply an algorithm. If no noise (i.e. randomness) is introduced in any function along the way, the two networks will end up being exactly the same.
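To make that concrete, here is a minimal sketch (hypothetical layer sizes and options, and it assumes image data XTrain/YTrain already exist) of how fixing the random seed and disabling shuffling in MATLAB removes the two usual sources of randomness; with either of them left in place, the two runs diverge:

layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(3, 8)
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

options = trainingOptions('sgdm', ...
    'MaxEpochs', 5, ...
    'Shuffle', 'never', ...          % identical mini-batch order in both runs
    'InitialLearnRate', 0.01);

rng(0);                              % same seed -> same initial weights
net1 = trainNetwork(XTrain, YTrain, layers, options);
rng(0);
net2 = trainNetwork(XTrain, YTrain, layers, options);
% With the seed fixed and shuffling disabled (and deterministic CPU execution),
% net1 and net2 end up with identical weights and identical training curves.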
I am looking at (two-layer) feed-forward neural networks in MATLAB, and I am investigating the parameters that can minimise the classification error.
A Google search suggests that these are some of them:
Number of neurons in the hidden layer
Learning Rate
Momentum
Training type
Epoch
Minimum Error
Any other suggestions?
I've varied the number of hidden neurons in Matlab, varying it from 1 to 10. I found that the classification error is close to 0% with 1 hidden neuron and then grows very slightly as the number of neurons increases. My question is: shouldn't a larger number of hidden neurons guarantee an equal or better answer, i.e. why might the classification error go up with more hidden neurons?
Also, how might I vary the Learning Rate, Momentum, Training type, Epoch and Minimum Error in Matlab?
Many thanks
Since you are considering a simple two-layer feed-forward network and have already pointed out 6 different things to consider for reducing classification error, I just want to add one more: the amount of training data. If you train a neural network with more data, it will work better. Note that training with a large amount of data is key to getting good results from neural networks, especially from deep neural networks.
Why does the classification error go up with more hidden neurons?
The answer is simple: your model has over-fitted the training data, which results in poor performance on unseen data. Note that increasing the number of neurons in the hidden layers tends to decrease the training error but increase the test error.
In the following figure, see what happens with increased hidden layer size!
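Since the figure is not reproduced here, a rough sketch of how one might generate the same comparison in MATLAB (assuming inputs x and targets t are already loaded, using the toolbox's default 70/15/15 random data division, and using mean squared error as a stand-in for classification error):

hiddenSizes = 1:2:25;
trainErr = zeros(size(hiddenSizes));
testErr  = zeros(size(hiddenSizes));
for k = 1:numel(hiddenSizes)
    net = feedforwardnet(hiddenSizes(k));
    net.trainParam.showWindow = false;     % suppress the training GUI in the loop
    [net, tr] = train(net, x, t);          % default dividerand: 70/15/15 split
    y = net(x);
    trainErr(k) = perform(net, t(tr.trainInd), y(tr.trainInd));
    testErr(k)  = perform(net, t(tr.testInd),  y(tr.testInd));
end
plot(hiddenSizes, trainErr, '-o', hiddenSizes, testErr, '-s');
legend('training error', 'test error');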
How may I vary the Learning Rate, Momentum, Training type, Epoch and Minimum Error in Matlab?
I expect you have already seen feedforwardnet in MATLAB. You just need to set the second argument of feedforwardnet(hiddenSizes,trainFcn), which is trainFcn, the training function.
For example, if you want to use gradient descent with momentum and adaptive learning rate backpropagation, use traingdx as the training function. You can also use traingda if you want gradient descent with adaptive learning rate backpropagation (without momentum).
You can change all the required training parameters as you want. For example, if you want to use traingda, you just need to follow these two steps:
Set net.trainFcn to 'traingda'. This sets net.trainParam to traingda's default parameters.
Set the net.trainParam properties to the desired values.
Example
net = feedforwardnet(3,'traingda');   % 3 hidden neurons, adaptive-learning-rate training
net.trainParam.lr = 0.05;             % learning rate
net.trainParam.epochs = 2000;         % maximum number of epochs
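The example stops before the actual training call; a hedged continuation (using the toolbox's bundled simplefit_dataset as stand-in data, since the original does not specify any) might look like this:

[x, t] = simplefit_dataset;       % demo inputs/targets shipped with the toolbox
[net, tr] = train(net, x, t);     % trains with the traingda parameters set above
y = net(x);
perform(net, t, y)                % mean squared error after training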
Please see the documentation for traingda (gradient descent with adaptive learning rate backpropagation) and traingdx (gradient descent with momentum and adaptive learning rate backpropagation).
I am training a neural network with one sigmoid hidden layer and a linear output layer. The network simply approximates a cosine function. The weights are initialized according to Nguyen-Widrow initialization and the biases are initialized to 1. I am using MATLAB as the platform.
Running the network a number of times without changing any parameters, I get results (mean squared error) ranging from 0.5 down to 0.5e-6. I cannot understand how the results can vary that much; I'd imagine there would at least be a narrower and more consistent window of errors.
What could be causing such a big variance?
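For reference, a minimal sketch of a comparable setup (hypothetical hidden layer size and sample grid, not the asker's exact script) in which the run-to-run variance comes purely from the random Nguyen-Widrow initialization drawn at each run:

x = -pi:0.05:pi;
t = cos(x);
mseRun = zeros(1, 5);
for run = 1:5
    net = feedforwardnet(10);            % one sigmoid hidden layer, linear output (defaults)
    net.trainParam.showWindow = false;
    [net, tr] = train(net, x, t);        % fresh random initial weights each run
    mseRun(run) = perform(net, t, net(x));
end
mseRun                                   % can easily span several orders of magnitude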
I have a data set of size N. How can I determine whether the network has been adequately trained on this data set or not?
Training would go on forever if the data I feed in is random, so I need a maximum number of iterations after which the neural network can be considered trained, to avoid iterating indefinitely.
What is the maximum number of iterations after which I can consider the neural network trained?
You will need to define a confidence level that you are ready to accept. Please read this article for further information: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=00478409
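As a practical note, MATLAB's shallow-network training functions stop on whichever limit is hit first; a hedged sketch of the relevant stopping criteria (illustrative values, and it assumes inputs x and targets t already exist):

net = feedforwardnet(10);
net.trainParam.epochs   = 1000;   % hard cap on training iterations
net.trainParam.goal     = 1e-5;   % stop early if the training error reaches this
net.trainParam.max_fail = 6;      % stop if validation error worsens 6 checks in a row
net.trainParam.time     = 300;    % optional wall-clock limit in seconds
[net, tr] = train(net, x, t);
tr.stop                           % reports which criterion ended training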
I am having some issues with a neural network. I am using a nonlinear activation function for the hidden layer and a linear function for the output layer. Adding more neurons to the hidden layer should increase the capacity of the network and make it fit the training data better, i.e. give a lower error on the training data.
However, I am seeing a different phenomenon: adding more neurons decreases the accuracy of the neural network even on the training set.
Here is the graph of the mean absolute error with increasing number of neurons. The accuracy on the training data is decreasing. What could be the cause of this?
Could it be that nntool, which I am using in MATLAB, splits the data randomly into training, test, and validation sets to check generalization, instead of using cross-validation?
Also, I see lots of negative output values as I add neurons, while my targets are supposed to be positive. Could that be another issue?
I am not able to explain the behaviour of the network here. Any suggestions? Here is a link to my data, consisting of the covariates and targets:
https://www.dropbox.com/s/0wcj2y6x6jd2vzm/data.mat
I am unfamiliar with nntool, but I would suspect that your problem is related to the selection of your initial weights. Poor initial weight selection can lead to very slow convergence, or failure to converge at all.
For instance, notice that as the number of neurons in the hidden layer increases, the number of inputs to each neuron in the output layer also increases (one for each hidden unit). Say you are using a logistic sigmoid in your hidden layer (whose output is always positive) and pick your initial weights from a uniform distribution over a fixed interval. Then as the number of hidden units increases, the total input to each neuron in the output layer also increases, because there are more incoming connections. With a very large number of hidden units, your initial solution may become very large and result in poor convergence.
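A small sketch of the effect (hypothetical sizes; the 1/sqrt(fan-in) scaling shown is one common remedy, not necessarily what nntool does):

nHidden = 200;                                        % many hidden units
h = 1 ./ (1 + exp(-randn(nHidden, 1)));               % sigmoid activations, all positive
wFixed  = rand(1, nHidden) - 0.5;                     % uniform on a fixed interval
wScaled = (rand(1, nHidden) - 0.5) / sqrt(nHidden);   % scaled by fan-in
netInputFixed  = wFixed  * h    % grows with nHidden
netInputScaled = wScaled * h    % stays roughly O(1)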
Of course, how this all behaves depends on your activation functions, the distribution of the data, and how it is normalized. I would recommend looking at Efficient BackProp by Yann LeCun for some excellent advice on normalizing your data and selecting initial weights and activation functions.
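As an aside, a hedged example of the kind of input normalization that paper recommends, using the toolbox's mapminmax (the variable names x and xNew are placeholders for your own data):

% Assumes x is an (nFeatures x nSamples) input matrix.
[xNorm, ps] = mapminmax(x, -1, 1);           % scale each feature into [-1, 1]
% ... train on xNorm ...
xNewNorm = mapminmax('apply', xNew, ps);     % apply the same scaling to new data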