Neural Networks: exact high-level training algorithm

I am trying to get my very first neural network to work. I designed it so that I can freely choose the number of layers and the number of nodes per layer. I had a hard time implementing backpropagation, but I think I have done it recursively, even if it is not as performant as it could be. I am using the sigmoid as the activation for all nodes (even the input nodes and the output node).
My network has a single output node in the output layer that should predict a binary variable (zero or one).
My question is: how exactly should I train my network? I noticed that when I use the following algorithm:
for i in [1:100000]
    feed the same record to the neural network
    perform a forward pass
    compute the error for this record with the current weights, using the squared difference as the loss function
    update the weights using backpropagation
it converges to the correct result (the output node value converges to zero when the record is labeled zero, and to one when the record is labeled one). But when I feed a different record to the network at each iteration, the network completely diverges.
Suppose that I would like to work with a mini-batch of N records. This means that I have to make N forward passes, giving one of the N records as input each time, compute the error for each, and take the average over the N records. But then, when I want to use the average error in the backpropagation algorithm, which input record should I use? As far as I know, the input layer is also used to compute the gradient of the weights between it and the first hidden layer. Should I then use the last of the N records as input? Or the first one? Does it even matter? I am a bit confused here and I found nothing on the internet to answer this particular question.
Best regards.
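For reference, the standard resolution is that you do not pick any single record to pair with the averaged error: each of the N backward passes uses its own record's activations, the resulting per-record gradients are accumulated, and the weights are updated once with the average. A minimal numpy sketch of one such mini-batch step, assuming a one-hidden-layer sigmoid network without bias terms (all names and the learning rate are illustrative):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minibatch_step(W1, W2, X, y, lr=0.5):
    # X: (N, n_in) mini-batch of records, y: (N,) binary targets
    N = X.shape[0]
    gW1 = np.zeros_like(W1)
    gW2 = np.zeros_like(W2)
    for x_i, y_i in zip(X, y):           # one forward+backward pass per record
        h = sigmoid(W1 @ x_i)            # hidden activations for THIS record
        o = sigmoid(W2 @ h)              # network output
        d_o = (o - y_i) * o * (1 - o)    # output delta (squared-error loss)
        d_h = (W2.T @ d_o) * h * (1 - h) # hidden delta
        gW2 += np.outer(d_o, h)          # gradients use this record's activations
        gW1 += np.outer(d_h, x_i)        # ...and this record's inputs
    W1 -= lr * gW1 / N                   # a single update with the averaged gradient
    W2 -= lr * gW2 / N
    return W1, W2

The divergence you saw when feeding a different record at every iteration is often a sign of a learning rate that is too high for pure per-record updates; averaging the gradient over a mini-batch, as above, smooths this out.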

Related

sigmoid - back propagation neural network

I'm trying to create a sample neural network that can be used for credit scoring. Since this is a complicated structure for me, I'm trying to start small first.
I created a network using back propagation: an input layer (2 nodes), 1 hidden layer (2 nodes + 1 bias), and an output layer (1 node), which uses the sigmoid as the activation function for all layers. I'm trying to test it first on a^2 + b^2 = c^2, which means my inputs would be a and b, and the target output would be c.
My problem is that my input and target output values are real numbers which can range over (-∞, +∞). So when I'm passing these values to my network, my error function would be something like (target - network output). Would that be correct or accurate? In the sense that I'm getting the difference between the network output (which ranges from 0 to 1) and the target output (which can be a large number).
I've read that the solution would be to normalise first, but I'm not really sure how to do this. Should I normalise both the input and target output values before feeding them to the network? Which normalisation function is best to use, since I have read about several different methods? After getting the optimised weights and using them to test some data, I'm getting an output value between 0 and 1 because of the sigmoid function. Should I revert the computed values to their un-normalised/original form? Or should I only normalise the target output and not the input values? This has had me stuck for weeks, as I'm not getting the desired outcome and I'm not sure how to incorporate the normalisation idea into my training and testing.
Thank you very much!!
So, to answer your questions:
The sigmoid function squashes its input to the interval (0, 1). It is usually useful in classification tasks because you can interpret its output as the probability of a certain class. Your network performs a regression task (you need to approximate a real-valued function), so it's better to set a linear function as the activation from your last hidden layer (in your case also the first one :) ).
I would also advise you not to use the sigmoid function as the activation in your hidden layers. It's much better to use tanh or ReLU nonlinearities. A detailed explanation (as well as some useful tips if you want to keep the sigmoid as your activation) can be found here.
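For quick reference, the three activations discussed above, as plain numpy functions (nothing here is specific to any library):

import numpy as np

def sigmoid(z):          # squashes to (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):             # squashes to (-1, 1), zero-centred
    return np.tanh(z)

def relu(z):             # max(0, z); does not saturate for positive inputs
    return np.maximum(0.0, z)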
It's also important to understand that the architecture of your network is not suitable for the task you are trying to solve. You can learn a little bit about what different networks can learn here.
Regarding normalisation: the main reason why you should normalise your data is to avoid giving any spurious prior knowledge to your network. Consider two variables: age and income. The first varies from, e.g., 5 to 90. The second varies from, e.g., 1,000 to 100,000. The mean absolute value is much bigger for income than for age, so due to the linear transformations in your model, the ANN treats income as more important at the beginning of training (because of the random initialisation). Now suppose you are trying to solve a task where you need to classify whether a given person has grey hair :) Is income truly the more important variable for this task?
There are a lot of rules of thumb on how you should normalise your input data. One is to squash all inputs to the [0, 1] interval. Another is to transform every variable to have mean = 0 and sd = 1. I usually use the second method when the distribution of a given variable is similar to the normal distribution, and the first in other cases.
As for the output, it is usually also useful to normalise it when you are solving a regression task (especially in the multiple-regression case), but it is not as crucial as for the inputs.
You should remember to keep the parameters needed to restore the original scale of your inputs and outputs. You should also remember to compute them only on the training set, and then apply them to the training, test, and validation sets alike.
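A minimal numpy sketch of both recipes, computing the statistics on the training set only and reusing them on every other set (all names and the toy data are illustrative):

import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(50, 10, size=(100, 2))   # toy stand-in for real features
X_test = rng.normal(50, 10, size=(20, 2))

# Method 1: squash to [0, 1], with min/max taken from the training set only
x_min, x_max = X_train.min(axis=0), X_train.max(axis=0)
X_train_mm = (X_train - x_min) / (x_max - x_min)
X_test_mm = (X_test - x_min) / (x_max - x_min)   # same parameters reused

# Method 2: standardise to mean = 0, sd = 1, again fitted on training data
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
X_train_std = (X_train - mu) / sd
X_test_std = (X_test - mu) / sd

# Keeping mu and sd around also lets you restore a normalised target
# prediction to its original scale: y_original = y_normalised * sd_y + mu_y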

Batch training in SOM?

I am trying to implement a general SOM with batch training, and I have a doubt regarding the formula for batch training.
I have read about it at the following links:
http://cs-www.cs.yale.edu/c2/images/uploads/HR15.pdf
https://notendur.hi.is//~benedikt/Courses/Mia_report2.pdf
I noticed that the weight updates are assigned rather than added at the end of an epoch - wouldn't that overwrite all of the network's previous values? And since the update formula does not include the previous weights of the nodes, how does it even work?
When I was implementing it, a lot of the nodes in the network became NaN, because the neighbourhood value became zero for many nodes as the neighbourhood shrank towards the end of training, and the update formula then resulted in a division by zero.
Can someone explain the batch algorithm properly? I DID google it, and I saw a lot of "improving batch" or "speeding up batch" papers, but nothing about plain batch Kohonen directly. And among the ones that did explain it, the formula was the same as above, and that doesn't work.
The update rule of the batch SOM that you have seen is the correct one.
The basic idea behind this algorithm is to train your SOM on the whole training dataset at once, so that at each iteration the weights of your neurons represent the mean of the closest inputs.
The information from the previous weights enters through the BMU (Best Matching Unit) assignments, which are computed with the old weights.
As you said, some neuron weights produce NaN due to division by zero.
To overcome this problem, you can use a neighbourhood function that is always greater than zero (for example, a Gaussian function).
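A minimal numpy sketch of one batch epoch along those lines (the map size, toy data, and neighbourhood width are illustrative):

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))                  # toy data: 200 inputs, 3 features
grid = np.array([(r, c) for r in range(5) for c in range(5)])  # 5x5 map
W = rng.random((25, 3))                   # one weight vector per map node
sigma = 1.5                               # neighbourhood width (decays over epochs)

# 1. find each input's BMU with the current weights
dists = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)   # (200, 25)
bmu = dists.argmin(axis=1)

# 2. Gaussian neighbourhood between every node and every BMU: never exactly 0,
#    so the denominator below cannot vanish (the fix discussed above)
grid_d2 = ((grid[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)  # (25, 25)
h = np.exp(-grid_d2 / (2 * sigma ** 2))   # h[j, k]: influence of BMU k on node j

# 3. batch update: each weight is ASSIGNED the neighbourhood-weighted mean of
#    the inputs; the old weights only enter through the BMU assignments above
num = h[:, bmu] @ X                       # (25, 3)
den = h[:, bmu].sum(axis=1, keepdims=True)
W = num / den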

Artificial neural network presented with unclassified inputs

I am trying to classify portions of time-series data using a feed-forward neural network with 20 neurons in a single hidden layer and 3 outputs corresponding to the 3 events I would like to be able to recognize. There are many other things that I could classify in the data (obviously), but I don't really care about them for the time being. Network creation and training were performed using MATLAB's Neural Network Toolbox for pattern recognition, as this is a classification problem.
To do this, I sequentially populate a moving window and then feed the window into the neural network. The issue I have is that I am obviously not able to train on every possible shape the time series can take. Because of this, I typically get windows filled with data that look very different from the windows I used for training, but I still get outputs near 1.
Essentially, the 3 things I trained the ANN with are windows from 20 different data sets corresponding to: steady state; a curve that starts with a negative slope and levels off to 0 slope (essentially the left half of a parabola that opens upwards); and a curve with 0 slope that quickly declines (the right half of a parabola that opens downwards).
Am I incorrect in thinking that if I input data that doesn't correspond to any of the items I trained the ANN with, it should output values near 0 for all outputs?
Or is it likely because these three shapes basically cover all the bases (steady state, increasing, and decreasing, despite large differences in slope), and therefore something is always classified?
I guess I just need a nudge in the right direction.
Neural network output values
A neural network gives no guarantees about its output values for inputs that were not presented during the training period.
In particular, a neural network will not consistently output 0 for untrained input values.
A solution is to simply present the network, during training, with an array of such input values whose target output is 0 on all outputs.
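A minimal sketch of that idea in numpy terms (the window length, class layout, and stand-in data are all illustrative; the training itself happens with whatever trainer you already use):

import numpy as np

rng = np.random.default_rng(0)

# 3 "real" classes of windows plus explicit negative examples (illustrative)
pos_windows = rng.random((60, 20))               # stand-ins for the 3 trained shapes
pos_targets = np.repeat(np.eye(3), 20, axis=0)   # one-hot targets, 20 per class

neg_windows = rng.random((20, 20))               # shapes matching NONE of the classes
neg_targets = np.zeros((20, 3))                  # target 0 on ALL outputs

X = np.vstack([pos_windows, neg_windows])
T = np.vstack([pos_targets, neg_targets])
# train on (X, T) as before; the network now has explicit evidence that some
# inputs should drive all three outputs towards 0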

How to use created "net" neural network object for prediction?

I used ntstool to create a NAR (nonlinear autoregressive) net object by training on a 1x1247 input vector (the daily stock price for 6 years).
I have finished all the steps and saved the resulting net object to workspace.
Now I am clueless about how to use this object to predict y(t) for, say, t = 2000 (I trained the model on t = 1:1247).
In some other threads, people recommended using the sim(net, t) function - however, this gives me the same result for any value of t (the same happens with the net(t) function).
I am not familiar with the specific neural net commands, but I think you are approaching this problem in the wrong way. Typically you want to model the evolution in time. You do this by specifying a certain window, say 3 months.
What you are training now is a single input vector, which has no information about evolution in time. The reason you always get the same prediction is because you only used a single point for training (even though it is 1247 dimensional, it is still 1 point).
You probably want to make input vectors of this nature (for simplicity, assume you are working with months):
[month1 month2; month2 month3; month3 month4]
This example contains 2 training points with the evolution of 3 months. Note that they overlap.
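In code, building such overlapping windows from a 1-D series might look like this (a sketch; the window length and names are illustrative, and the same indexing carries over to MATLAB):

import numpy as np

prices = np.arange(1, 1248, dtype=float)    # stand-in for the 1x1247 series

window = 3                                  # use 3 past values per example
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]                         # target: the NEXT value

# X[0] = [p1, p2, p3] -> y[0] = p4; X[1] = [p2, p3, p4] -> y[1] = p5; ...
# train on (X, y); to predict beyond t = 1247, feed the last window and
# append each prediction to form the next window (iterated forecasting)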
Use the Network
After the network is trained and validated, the network object can be used to calculate the network response to any input. For example, if you want to find the network response to the fifth input vector in the housing data set, you can use the following:
a = net(houseInputs(:,5))
a =
34.3922
If you try this command, your output might be different, depending on the state of your random number generator when the network was initialized. Below, the network object is called to calculate the outputs for a concurrent set of all the input vectors in the housing data set. This is the batch mode form of simulation, in which all the input vectors are placed in one matrix. This is much more efficient than presenting the vectors one at a time.
a = net(houseInputs);
Each time a neural network is trained, it can arrive at a different solution due to different initial weight and bias values and different divisions of the data into training, validation, and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, retrain several times.
There are several other techniques for improving upon initial solutions if higher accuracy is desired. For more information, see Improve Neural Network Generalization and Avoid Overfitting.

Artificial Neural Networks: Choosing initial neurons

How is the initial structure (the neurons and the connections between them) chosen? My book only states that we give the connections random weights in the beginning, before we train the network.
I think that we would add neurons during training, like this:
Start with a completely empty network
The first value I generate during training will not yet exist in the network
Add a neuron to correspond to this value, with a random weight
What you are after is a self-organizing ANN. Usually, the way the connections are organized is man-made: the developer designs a model that they think will have sufficient power to perform the necessary computation. You can of course start with a random selection of nodes with random connections, but the evolution of such a network will probably take a lot longer than training a standard two- or three-layer network.
So, yes, you are right in that you would use a similar approach when doing a self-organizing network. Keep track of two genetic algorithms, one for the structure and one for the weights (or combine the two in some devious way), and evolve as you please.
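To give a flavour of the weights-only half of that idea, here is a minimal sketch of evolving a fixed-architecture network's weights by selection and Gaussian mutation (the architecture, fitness function, and every hyperparameter are illustrative):

import numpy as np

rng = np.random.default_rng(0)

X = rng.random((50, 2))                    # toy inputs
y = (X.sum(axis=1) > 1.0).astype(float)    # toy binary targets

def predict(w, X):
    W1 = w[:8].reshape(2, 4)               # fixed 2-4-1 architecture
    W2 = w[8:12].reshape(4, 1)
    h = np.tanh(X @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid output
    return out.ravel()

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)   # negative MSE: higher is better

pop = rng.normal(0, 1, size=(30, 12))      # 30 random weight vectors
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[scores.argsort()[-10:]]  # keep the 10 fittest (elitism)
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, 12))
    pop = np.vstack([parents, children])   # survivors + mutated offspring

best = pop[np.array([fitness(w) for w in pop]).argmax()]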
I do not believe the question is about self-organising or GA-evolved ANNs. It sounds more like it is about the most common ANN: a perceptron (single- or multi-layer), in which case the structure of the network (the number of layers and the size of each layer) must be hand-chosen at the beginning. A simple initial rule of thumb for initialising the weights is to pick uniformly random values between -1.0 and 1.0.
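In numpy terms, that rule of thumb is one line per weight matrix (the layer sizes here are illustrative):

import numpy as np

rng = np.random.default_rng()
n_in, n_hidden, n_out = 4, 8, 1                  # illustrative layer sizes
W1 = rng.uniform(-1.0, 1.0, (n_hidden, n_in))    # input -> hidden weights
W2 = rng.uniform(-1.0, 1.0, (n_out, n_hidden))   # hidden -> output weights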