Standardising Training Set in Backpropagation - neural-network

If I standardise the training data before training the neural network, what do I do after training to show the final modelled results and the expected results: do I de-standardise the training data and feed it back into the network? Or do I feed the standardised training data back in and de-standardise the network's outputs and the expected results afterwards?

You never de-standardise the input data. The network (or any other machine learning model) will not understand data that is in a different scale/space from the one used during training.
If you also scaled the output (target) values, then you obviously have to scale them back in order to obtain results in the original units.
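As a rough sketch of that workflow in MATLAB (mapminmax, fitnet and the variable names x, t, xNew are my own illustrative choices, not something from the question; mapstd would work the same way):
[xn, xSettings] = mapminmax(x);                 % standardise training inputs
[tn, tSettings] = mapminmax(t);                 % standardise training targets
net = fitnet(10);                               % any network; 10 hidden units as an example
net = train(net, xn, tn);
xNewN = mapminmax('apply', xNew, xSettings);    % new inputs get the SAME scaling as training
yn = net(xNewN);                                % outputs are still in the scaled space
y = mapminmax('reverse', yn, tSettings);        % de-standardise only the outputs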

Related

Loading a dataset in parts for training a neural network

This is my first post so please ask me if something is not clear.
I am currently working on training a neural network on a custom dataset that I have created. This dataset consists of 1000 folders, each containing 81 images (512x512 px), which are going to be loaded, processed and used as input. My issue is that my computer cannot handle such a large dataset at once, and I have to find a way to use the whole dataset.
The neural network that I am working on can be found here https://github.com/chshin10/epinet.
On the EPINET_train.py file you can see the data generator that is being used.
The neural network uses the RMSProp optimizer.
What I did to deal with this issue was to split the data into 2 folders, one for training and one for testing, with an 80%-20% split. Then I load 10% of the data from each folder in order to train the neural network (the data was not chosen randomly). I train the neural network for 100 epochs, then load the next set of data, until all of the sets have been used for training. Then I repeat the procedure.
After 3 iterations it seems to me that the loss function is not being minimised any further for each set of data. Is this approach used in similar scenarios? Is there something I can do better?

Problems with outputs in neural networks (in MATLAB's neural networks toolbox)

I trained my artificial neural network (ANN) in MATLAB with 652,500 data points, and in another blind test (652,100 data points, completely new input data sets) the output is excellent (as I want). But the problem occurs when I feed in a very small amount of data (for example, below 50 data points). The output is quite unexpected, and I have checked it many times.
To be more precise, the training phase used 10% of the data for training, 45% for validation and 45% for testing. The training is quite successful, and for a large amount of new input data it works very well. The problem is that when very limited data (compared to the number of training data points) are fed into the neural network, it produces quite unrealistic output, beyond the range it was trained on.
Why is this so? Could anyone shed some light on this, please?
Please also mention whether there are any strict (hard and fast) rules on training and final testing data points, for example: what percentage of the training data should or must appear in the new input data sets. I guess the problem is that my network overestimates or underestimates the output because it receives very little data compared to the training phase.
Your problem is over-fitting of the dataset during training. Data division is a very important task when training a neural network. As a general rule, the training set should be around 70-80% of the data, and the validation and test sets should each be around 10-15%. For instance:
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
Imagine a student in a class. TrainRatio is the share of the materials/lectures that the student should learn. ValRatio is the percentage of the materials to be examined in a mid-term examination, and TestRatio is the percentage of the materials to be examined in the final examination. So, if the student does not have enough material for learning, they cannot succeed in the mid-term and final examinations. Is that clear? A neural network learns/trains just like such a student. So, your network runs into over-fitting problems.
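For context, here is a minimal sketch of where these ratios sit in a typical toolbox workflow (fitnet, the hidden-layer size and the x/t variable names are illustrative assumptions):
net = fitnet(10);                        % example network with 10 hidden neurons
net.divideParam.trainRatio = 70/100;     % 70% of samples used to update weights
net.divideParam.valRatio   = 15/100;     % 15% used for validation (early stopping)
net.divideParam.testRatio  = 15/100;     % 15% held out for final testing
[net, tr] = train(net, x, t);            % tr records which samples landed in each split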

How to use created "net" neural network object for prediction?

I used ntstool to create a NAR (nonlinear autoregressive) net object, training it on a 1x1247 input vector (daily stock prices over 6 years).
I have finished all the steps and saved the resulting net object to the workspace.
Now I am clueless about how to use this object to predict y(t) for, say, t = 2000 (I trained the model for t = 1:1247).
In some other threads, people recommended using the sim(net, t) function; however, this gives me the same result for any value of t (the same happens with the net(t) function).
I am not familiar with the specific neural net commands, but I think you are approaching this problem in the wrong way. Typically you want to model the evolution in time, and you do this by specifying a certain window, say 3 months.
What you are training on now is a single input vector, which carries no information about evolution in time. The reason you always get the same prediction is that you only used a single point for training (even though it is 1247-dimensional, it is still one point).
You probably want to make input vectors of this nature (for simplicity, assume you are working with months):
[month1 month2 month3; month2 month3 month4]
This example contains 2 training points, each covering an evolution of 3 months. Note that they overlap.
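As a rough illustration of that idea (this is a plain sliding-window sketch using fitnet rather than the ntstool/NAR workflow; prices, the window length and the hidden-layer size are placeholders):
prices = rand(1, 1247);                      % placeholder for the real daily prices
window = 3;                                  % e.g. predict from the previous 3 samples
n = numel(prices) - window;
X = zeros(window, n);
T = zeros(1, n);
for k = 1:n
    X(:, k) = prices(k:k+window-1)';         % each column is one overlapping window
    T(k) = prices(k+window);                 % target is the next sample in the series
end
net = fitnet(10);
net = train(net, X, T);
yNext = net(prices(end-window+1:end)');      % one-step-ahead prediction from the last window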
Use the Network
After the network is trained and validated, the network object can be used to calculate the network response to any input. For example, if you want to find the network response to the fifth input vector in the housing data set, you can use the following:
a = net(houseInputs(:,5))
a =
34.3922
If you try this command, your output might be different, depending on the state of your random number generator when the network was initialized. Below, the network object is called to calculate the outputs for a concurrent set of all the input vectors in the housing data set. This is the batch mode form of simulation, in which all the input vectors are placed in one matrix. This is much more efficient than presenting the vectors one at a time.
a = net(houseInputs);
Each time a neural network is trained it can result in a different solution, due to different initial weight and bias values and different divisions of the data into training, validation, and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, retrain it several times.
There are several other techniques for improving upon initial solutions if higher accuracy is desired. For more information, see Improve Neural Network Generalization and Avoid Overfitting.
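A rough sketch of that retraining advice (x, t, the network size and the number of restarts are assumptions for illustration):
bestPerf = Inf;
for k = 1:5
    candidate = fitnet(10);                  % fresh random initial weights each run
    [candidate, tr] = train(candidate, x, t);
    p = perform(candidate, t(:, tr.testInd), candidate(x(:, tr.testInd)));   % test-set error
    if p < bestPerf                          % keep the best of the retrained nets
        bestPerf = p;
        net = candidate;
    end
end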

Continuously train MATLAB ANN, i.e. online training?

I would like to ask for ideas on what options there are for training a MATLAB ANN (artificial neural network) continuously, i.e. without a pre-prepared training set. The idea is to have an "online" data stream: when the network is first created it is completely untrained, but as samples flow in the ANN is trained and converges.
The ANN will be used to classify a set of values, and the implementation should visualise how the training of the ANN improves as samples flow through the system, i.e. each sample is used for training and is then also evaluated by the ANN, and the response is visualised.
The effect I expect is that for the very first samples the response of the ANN will be more or less random, but as the training progresses the accuracy improves.
Any ideas are most welcome.
Regards, Ola
In MATLAB you can use the adapt function instead of train. You can do this incrementally (change weights every time you get a new piece of information) or you can do it every N-samples, batch-style.
This document gives an in-depth run-down on the different styles of training from the perspective of a time-series problem.
I'd really think about what you're trying to do here, because adaptive learning strategies can be difficult. I found that they like to flail all over compared to their batch counterparts. This was especially true in my case where I work with very noisy signals.
Are you sure that you need adaptive learning? You can't periodically re-train your NN? Or build one that generalizes well enough?
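A minimal sketch of the adapt-based approach (linearlayer is just the simplest adaptable network; xStream and tStream are hypothetical names for the incoming samples and their labels):
net = linearlayer(0, 0.01);                           % linear network, learning rate 0.01
net = configure(net, xStream(:, 1), tStream(:, 1));   % size the net from the first sample
for k = 1:size(xStream, 2)
    [net, y, e] = adapt(net, xStream(:, k), tStream(:, k));   % one incremental update
    % y is the response before this update and e the error; both can be plotted live
end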

Matlab neural network testing

I have created a neural network and the performance is good. By using nprtool, we are able to test the network with input data and target data. Here is my question: what is the purpose of testing a neural network when the target data is provided? Shouldn't testing be done without target data, so that we can see how well the trained neural network performs when no targets are given? Hope someone will respond to this, thanks =)
I'm not familiar with nprtool, but I suspect it would give the input data to your neural network, and then compare your NN's output data with the target data (and compute some kind of success rate based on that).
So your NN will never see the target data, it's just used to measure the performance.
It's like the "teacher's edition" of the exercise books in school. The student (i.e. the NN) doesn't have the solutions, but her/his answers will be compared against them by the teacher (i.e. nprtool). (Okay, the teacher probably/hopefully knows the subject, but you get the idea.)
The "target" data t is the desired y of y=net(x) used as example to train the network.
What nprtool do is to divide the training set into three groups: the training set, the validation set and the test set.
The first one is used to actually update the network.
The second one is used to determine the performances of the net (note: this set is NOT used in any way to update the network): as the NN "learns" the error (as difference between the t and net(x)) over the validation set decreases. The trend will eventually stop or even reverse: this phenomena is called "overfitting", which means the NN is now chasing the training set, "memorizing" it at the cost of the ability to generalize (meaning: to perform well with unseen data). So the purpose of this validation set is to determine when to stop the training before the NN starts overfitting. This should answer your question.
Finally third set is for external testing, to leave you a set of data untouched by the training procedure.
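As a small illustration of those three splits (assuming x/t data and the training record tr returned by train, rather than the script nprtool generates):
[net, tr] = train(net, x, t);                                        % tr stores the three index sets
valErr  = perform(net, t(:, tr.valInd),  net(x(:, tr.valInd)));      % drives early stopping
testErr = perform(net, t(:, tr.testInd), net(x(:, tr.testInd)));     % untouched, external check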
Even though the total data set [training, validation and testing] is an input to the training algorithm, the testing data is in no way used to design (i.e., train and validate) the net:
total = design + test
design = train + validate
The training data is used to estimate weights and biases
The validation data is used to monitor the design performance on nontraining data. REGARDLESS OF THE PERFORMANCE ON TRAINING DATA, if validation performance degrades continuously for 6 (default) epochs, training is terminated (VALIDATION STOPPING).
This mitigates the dreaded phenomenon of OVERTRAINING AN OVERFIT NET where performance on nontraining data degrades even if the training set performance is improving.
An overfit net has more unknown weights and biases than training equations, thereby allowing an infinite number of solutions. A simple example of overfitting with two unknowns but only one equation:
KNOWN: a, b, c
FIND: unique x1 and x2
USING: a * x1 + b * x2 = c
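A tiny numeric illustration of that point (the values of a, b and c are arbitrary):
a = 2; b = 3; c = 12;            % one equation, two unknowns
x1 = -5:5;                       % pick any values whatsoever for x1
x2 = (c - a*x1) / b;             % every pair (x1, x2) satisfies a*x1 + b*x2 = c exactly
disp([x1; x2]);                  % infinitely many solutions, so no unique answer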
Hope this helps.
Greg