I have developed a time-series prediction engine based on an implementation of an Elman network. Everything works fine when I use it synchronously (that is, presenting the samples to the input layer, doing the training, then reading the predictions directly from the output neurons). However, when I save the network (weights and biases) at the end of a training session in order to decouple the training and running phases, what should I do with the context neuron values? Should I save them from the last epoch/sample, or should I re-initialize them to 0 before presenting a new sample to obtain a prediction? I have actually tried both, but I never get the same results as in the synchronous approach.
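To make the question concrete, here is a minimal sketch of the state I am asking about (Python, with illustrative names; biases omitted for brevity). The context units are part of the network's state just like the weights, and the question is whether the snapshot below should include them or whether context should be zeroed before inference.

import numpy as np

class ElmanCell:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_ctx = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.context = np.zeros(n_hidden)  # context units

    def step(self, x):
        # The hidden activation depends on the input AND the previous context.
        h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h  # copy hidden -> context for the next step
        return self.W_out @ h

    def save(self, path):
        # Option A: persist the context alongside the weights, so inference
        # resumes exactly where training left off. Option B would drop it
        # and reset self.context to zeros before the first prediction.
        np.savez(path, W_in=self.W_in, W_ctx=self.W_ctx,
                 W_out=self.W_out, context=self.context)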
This is my first post, so please ask me if something is not clear.
I am currently working on training a neural network on a custom dataset that I have created. The dataset consists of 1000 folders, each containing 81 images (512x512 px) that are loaded, processed, and used as input. My issue is that my computer cannot hold such a large dataset in memory, so I have to find a way to train on the whole dataset anyway.
The neural network that I am working on can be found here: https://github.com/chshin10/epinet.
In the EPINET_train.py file you can see the data generator that is being used.
The neural network uses the RMSProp optimizer.
What I did to deal with this issue was to split the data into two folders, one for training and one for testing, with an 80%-20% split. Then I load 10% of the data from each folder to train the neural network (the data was not chosen randomly). I train the neural network for 100 epochs and then load the next set of data, until all of the sets have been used for training. Then I repeat the whole procedure.
After 3 iterations it seems to me that the loss is no longer decreasing for each new set of data. Is this approach used in similar scenarios? Is there something I can do better?
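For reference, a minimal sketch of a generator that streams samples from disk instead of loading chunks up front, assuming a Keras-style Sequence (the class name, the .npy storage, and the placeholder targets are all illustrative, not EPINET's actual generator):

import os
import numpy as np
from tensorflow.keras.utils import Sequence

class FolderSequence(Sequence):
    # Yields one batch at a time from disk, so the whole dataset never
    # has to sit in memory and no manual chunk-and-retrain loop is needed.
    def __init__(self, folders, batch_size=2):
        self.folders = list(folders)  # one sample per folder of 81 images
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.folders) / self.batch_size))

    def _load_sample(self, folder):
        # Illustrative loader: stack a folder's 81 images into one array;
        # swap in the real image reading and preprocessing here.
        files = sorted(os.listdir(folder))
        imgs = [np.load(os.path.join(folder, f)) for f in files]
        return np.stack(imgs, axis=-1)

    def __getitem__(self, idx):
        batch = self.folders[idx * self.batch_size:(idx + 1) * self.batch_size]
        x = np.stack([self._load_sample(f) for f in batch])
        y = np.zeros(len(batch))  # placeholder targets
        return x, y

With such a generator, model.fit(FolderSequence(train_folders), epochs=100) would see every folder in each epoch, which may behave better than training 100 epochs on each 10% chunk in turn.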
Can someone tell me what to do to improve my neural network for a classification task, given the training and validation error shown in the picture? I tried stopping the training earlier so that the validation error is smaller, but it's still too high. I get a validation accuracy of 62.45%, but that's too low. The dataset consists of images that show objects somewhere in the image (not centered). If I use the same network with the same number of images, but with the shown objects always centered on the principal point, it works much better, with a validation accuracy of 95%.
One can check the following things while implementing a neural net:
Dataset Issues:
i) Check whether the input data you are feeding the network makes sense and whether there is too much noise in it.
ii) Try passing random input and see if the error persists. If it does, then it's time to make changes to your net.
iii) Check that the input data has appropriate labels.
iv) If the input data is not shuffled and is passed in a specific label order, learning is negatively affected. So data and labels must be shuffled together.
v) Reduce the batch size and make sure a batch doesn't contain only one label.
vi) Too much data augmentation is not good: it has a regularizing effect and, combined with other forms of regularization (L2 on weights, dropout, etc.), can cause the net to underfit.
vii) Data must be pre-processed according to the requirements of the task. For example, if you are training the network for face classification, then face images with little or no background should be passed to the network for learning.
Implementation Issues:
i) Check your loss function and weight initialization, and use gradient checking to make sure backpropagation works correctly.
ii) Visualize the biases, activations, and weights of each layer with the help of a visualization library like TensorBoard.
iii) Try a dynamic learning rate, where the learning rate changes over a designed schedule of epochs (see the sketch after this list).
iv) Increase the network size by adding more layers or more neurons, as it might not have enough capacity to capture the features of its target.
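As a sketch of item iii), a Keras-style learning-rate schedule (Keras is used only for illustration; the halving-every-20-epochs schedule is an arbitrary example):

from tensorflow.keras.callbacks import LearningRateScheduler

# Halve the learning rate every 20 epochs.
def schedule(epoch, lr):
    return lr * 0.5 if epoch > 0 and epoch % 20 == 0 else lr

lr_callback = LearningRateScheduler(schedule, verbose=1)
# model.fit(x_train, y_train, epochs=100, callbacks=[lr_callback])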
If I standardise the training data before I train the neural network, do I then, after training, de-standardise the training data and feed it back into the neural network to show the final modelled results against the expected results? Or do I feed the standardised training data back in and de-standardise the final results and expected results afterwards?
You never de-standardise input data. The network (or any other machine learning model) won't understand data that is in a different scale/space from the one used during training.
If you did, however, scale the output values, then obviously you have to scale them back in order to obtain "unscaled" results.
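A minimal sketch of that workflow, using sklearn's StandardScaler for illustration (the dummy arrays and the prediction stand-in are placeholders):

import numpy as np
from sklearn.preprocessing import StandardScaler

x_scaler, y_scaler = StandardScaler(), StandardScaler()

X = np.random.rand(100, 3)  # dummy training inputs
y = np.random.rand(100, 1)  # dummy training targets

X_std = x_scaler.fit_transform(X)  # the network only ever sees X_std
y_std = y_scaler.fit_transform(y)  # ...and is trained against y_std

# ... train the model on (X_std, y_std) ...

y_pred_std = y_std  # stand-in for model.predict(X_std)
y_pred = y_scaler.inverse_transform(y_pred_std)  # de-standardise outputs only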
I used ntstool to create a NAR (nonlinear autoregressive) net object by training on a 1x1247 input vector (daily stock prices over 6 years).
I have finished all the steps and saved the resulting net object to the workspace.
Now I am clueless about how to use this object to predict y(t) for, say, t = 2000 (I trained the model for t = 1:1247).
In some other threads, people recommended using the sim(net, t) function; however, this gives me the same result for any value of t (the same goes for the net(t) function).
I am not familiar with the specific neural net commands, but I think you are approaching this problem in the wrong way. Typically you want to model the evolution in time. You do this by specifying a certain window, say 3 months.
What you are training on now is a single input vector, which has no information about evolution in time. The reason you always get the same prediction is that you only used a single point for training (even though it is 1247-dimensional, it is still 1 point).
You probably want to make input vectors of this nature (for simplicity, assume you are working with months):
[month1 month2 month3; month2 month3 month4]
This example contains 2 training points with the evolution of 3 months. Note that they overlap.
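Building those overlapping windows from a 1-D series could look like this (a sketch in Python/numpy for illustration; the window length and the toy series are placeholders):

import numpy as np

# Slide a window of fixed length over the series, one step at a time.
def make_windows(series, window):
    return np.stack([series[i:i + window]
                     for i in range(len(series) - window + 1)])

prices = np.arange(1, 5)        # stands in for month1..month4
print(make_windows(prices, 3))  # [[1 2 3]
                                #  [2 3 4]]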
Use the Network
After the network is trained and validated, the network object can be used to calculate the network response to any input. For example, if you want to find the network response to the fifth input vector in the building data set, you can use the following:
a = net(houseInputs(:,5))
a =
34.3922
If you try this command, your output might be different, depending on the state of your random number generator when the network was initialized. Below, the network object is called to calculate the outputs for a concurrent set of all the input vectors in the housing data set. This is the batch mode form of simulation, in which all the input vectors are placed in one matrix. This is much more efficient than presenting the vectors one at a time.
a = net(houseInputs);
Each time a neural network is trained, it can result in a different solution due to different initial weight and bias values and different divisions of the data into training, validation, and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, retrain several times.
There are several other techniques for improving upon initial solutions if higher accuracy is desired. For more information, see Improve Neural Network Generalization and Avoid Overfitting.
I have created a neural network and the performance is good. By using nprtool, we are allowed to test the network with input data and target data. Here is my question: what is the purpose of testing a neural network with target data provided? Shouldn't testing be done without target data, so that we can see how well the trained neural network performs when no target data is given? Hope someone will respond to this, thanks =)
I'm not familiar with nprtool, but I suspect it would give the input data to your neural network, and then compare your NN's output data with the target data (and compute some kind of success rate based on that).
So your NN never sees the target data; it is just used to measure the performance.
It's like the "teacher's edition" of the exercise books in school. The student (i.e. the NN) doesn't have the solutions, but her/his answers will be compared against them by the teacher (i.e. nprtool). (Okay, the teacher probably/hopefully knows the subject, but you get the idea.)
The "target" data t is the desired y of y=net(x) used as example to train the network.
What nprtool do is to divide the training set into three groups: the training set, the validation set and the test set.
The first one is used to actually update the network.
The second one is used to determine the performance of the net (note: this set is NOT used in any way to update the network). As the NN "learns", the error (the difference between t and net(x)) over the validation set decreases. The trend will eventually stop or even reverse: this phenomenon is called "overfitting", which means the NN is now chasing the training set, "memorizing" it at the cost of the ability to generalize (meaning: to perform well on unseen data). So the purpose of the validation set is to determine when to stop the training before the NN starts overfitting. This should answer your question.
Finally, the third set is for external testing, to leave you a set of data untouched by the training procedure.
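A minimal sketch of that three-way division (Python for illustration; the 70/15/15 ratios are an assumption mirroring nprtool's usual defaults):

import numpy as np

n = 100                         # number of samples
idx = np.random.permutation(n)  # shuffle before splitting
n_train, n_val = int(0.70 * n), int(0.15 * n)

train_idx = idx[:n_train]               # used to update the network
val_idx = idx[n_train:n_train + n_val]  # monitors overfitting
test_idx = idx[n_train + n_val:]        # untouched, external test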
Even though the total data set [training, validation, and testing] is an input to the training algorithm, the testing data is in no way used to design (i.e., train and validate) the net:
total = design + test
design = train + validate
The training data is used to estimate the weights and biases.
The validation data is used to monitor the design performance on nontraining data. REGARDLESS OF THE PERFORMANCE ON TRAINING DATA, if validation performance degrades continuously for 6 (default) epochs, training is terminated (VALIDATION STOPPING).
This mitigates the dreaded phenomenon of OVERTRAINING AN OVERFIT NET where performance on nontraining data degrades even if the training set performance is improving.
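A sketch of that stopping rule (Python for illustration; the two routines are hypothetical stand-ins for the real training and validation code):

import random

def train_one_epoch():
    pass  # stand-in: update weights on the training set

def validation_loss():
    return random.random()  # stand-in: error on the validation set

best, bad_epochs, PATIENCE = float("inf"), 0, 6
for epoch in range(1000):
    train_one_epoch()
    loss = validation_loss()
    if loss < best:
        best, bad_epochs = loss, 0  # validation improved: keep going
    else:
        bad_epochs += 1  # validation degraded
    if bad_epochs >= PATIENCE:
        print("validation stopping at epoch", epoch)
        break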
An overfit net has more unknown weights and biases than training equations, thereby allowing an infinite number of solutions. A simple example of overfitting with two unknowns but only one equation:
KNOWN: a, b, c
FIND: unique x1 and x2
USING: a * x1 + b * x2 = c
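For instance, with a = 1, b = 1, c = 2, both (x1, x2) = (0, 2) and (x1, x2) = (1, 1) satisfy x1 + x2 = 2: one equation cannot pin down two unknowns, just as too few training equations cannot pin down the weights of an overparameterized net.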
Hope this helps.
Greg