Zeroing LSTM input

I have a trained LSTM network with 128 channels and 500 data points in each channel. After training and testing, I am interested in testing the model again with the same test data, but with specific channels zeroed. Would a huge drop in F1-score show that the most important features are in those channels, or am I doing something wrong?
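For illustration, a minimal sketch of that channel-ablation test, assuming a Keras-style model, test inputs X_test of shape (n_samples, 500, 128) and integer labels y_test (model, X_test and y_test are placeholders for your own objects):

import numpy as np
from sklearn.metrics import f1_score

def f1_with_channels_zeroed(model, X_test, y_test, channels):
    # Zero the given channel indices in a copy of the test data, then re-score.
    X_ablated = X_test.copy()
    X_ablated[:, :, channels] = 0.0
    y_pred = np.argmax(model.predict(X_ablated), axis=-1)
    return f1_score(y_test, y_pred, average="macro")

baseline = f1_with_channels_zeroed(model, X_test, y_test, [])  # nothing zeroed
for ch in range(X_test.shape[-1]):
    drop = baseline - f1_with_channels_zeroed(model, X_test, y_test, [ch])
    print(f"channel {ch:3d}: F1 drop {drop:+.4f}")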

Related

Loading a dataset in parts for training a neural network

This is my first post so please ask me if something is not clear.
I am currently working on training a neural network on a custom dataset that I have created. This dataset consists of 1000 folders, each containing 81 images (512x512 px) that are loaded, processed and used as input. My issue is that my computer cannot handle such a large dataset in memory, so I have to find a way to use the whole dataset.
The neural network that I am working on can be found here https://github.com/chshin10/epinet.
On the EPINET_train.py file you can see the data generator that is being used.
The neural network uses the RMSProp optimizer.
What I did to deal with this issue is split the data into two folders, one for training and one for testing, with an 80%-20% split. Then I load 10% of the data from each folder in order to train the neural network (the data was not chosen randomly). I train the neural network for 100 epochs, then I load the next set of data, until all of the sets have been used for training. Then I repeat the procedure.
After 3 iterations it seems to me that the loss is no longer decreasing for each set of data. Is this approach used in similar scenarios? Is there something I can do better?
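One common alternative is to stream folders from disk one batch at a time, so that every epoch sees the entire training set instead of spending 100 epochs on each 10% chunk. A rough sketch with a Keras Sequence (load_folder is a hypothetical helper standing in for your own loading/processing code):

import os
import numpy as np
from tensorflow.keras.utils import Sequence

class FolderSequence(Sequence):
    # Holds only batch_size folders in memory at any time.
    def __init__(self, root, folders, batch_size=4):
        self.root = root
        self.folders = folders
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.folders) / self.batch_size))

    def __getitem__(self, idx):
        names = self.folders[idx * self.batch_size:(idx + 1) * self.batch_size]
        # load_folder (hypothetical) returns one (input, target) pair per folder
        pairs = [load_folder(os.path.join(self.root, n)) for n in names]
        xs, ys = zip(*pairs)
        return np.stack(xs), np.stack(ys)

# model.fit(FolderSequence("train", folder_names), epochs=100)

This way the optimizer averages its updates over the full dataset every epoch, rather than minimizing the loss chunk by chunk.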

What is the consequence of not normalizing test data after training a convolutional neural network

To the best of my knowledge, normalization mainly facilitates the training (optimization) phase. If I train a network with normalized data sets but later feed it unnormalized test data, what will be the theoretical consequence? E.g., will the test accuracy be bad?
I ran such an experiment on CIFAR-10 and found that the network still gives good results on the unnormalized test data.
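For reference, the usual convention is to normalize the test data with statistics computed on the training set only. A minimal sketch, assuming X_train and X_test are NumPy arrays (placeholder names):

import numpy as np

mean = X_train.mean(axis=0)           # statistics from the TRAINING set only
std = X_train.std(axis=0) + 1e-8      # epsilon avoids division by zero

X_train_norm = (X_train - mean) / std
X_test_norm = (X_test - mean) / std   # reuse the training statistics at test time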

What is the usefulness of the mean file with AlexNet neural network?

When using an AlexNet neural network, be it with Caffe or CNTK, it needs a mean file as input. What is this mean file for? How does it affect the training? How is it generated, only from the training samples?
Mean subtraction removes the DC component from images. It has the geometric interpretation of centering the cloud of data around the origin along every dimension. It reduces the correlation between images, which improves training. From my experience I can say that it improves the training accuracy significantly. It is computed from the training data; computing the mean from the testing data makes no sense.
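A minimal sketch of the idea, assuming train_images and test_images are arrays of shape (N, H, W, C) (placeholder names; Caffe stores this same per-pixel mean in its .binaryproto mean file):

import numpy as np

# Per-pixel mean over the training images only, shape (H, W, C)
mean_image = train_images.mean(axis=0)

train_centered = train_images - mean_image
test_centered = test_images - mean_image   # subtract the SAME training mean at test time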

Problems with outputs in neural networks (in MATLAB's neural networks toolbox)

I trained my artificial neural network (ANN) in MATLAB with 652,500 data points, and in another blind test (652,100 data points - completely new input data sets) the output is excellent (as I want). But the problem occurs when I feed in a very small amount of data (for example, below 50 data points). The output is quite unexpected, and I checked it many times.
To be more precise, the training phase uses 10% of the data for training, 45% for validation and 45% for testing. The training is quite successful, and for a large amount of new input data it works very well. The problem is that when very limited data (compared to the number of training data points) are fed into the neural network, it produces quite unrealistic output, beyond the range it was trained on.
Why is this so? Could anyone shed some light on this, please?
Also, please mention: are there any strict (hard and fast) rules on training and final testing data points? For example, what percentage of the training data should be / must be represented in the new input data sets? I guess the problem is that my network overestimates or underestimates the output because it receives a very small percentage of data compared to the training phase.
Your problem is overfitting of the dataset during training. Data division is a very important task when training a neural network. In general, the training set should be 70-80% of the data, and the test and validation sets should each be around 10-15%. For instance:
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
Imagine a student in a class. trainRatio is the material/lectures that the student has to learn. valRatio is the percentage of the material examined in the mid-term examination, and testRatio is the percentage of the material examined in the final examination. So, if there is not enough material for training, the student cannot succeed in the mid-term and final examinations. Is it clear? A neural network learns just like such a student. So, your network is facing an overfitting problem.

Matlab neural network testing

I have created a neural network and the performance is good. By using nprtool, we are allowed to test the network with input data and target data. Here is my question: what is the purpose of testing a neural network with target data provided? Shouldn't testing be done without target data, so that we can know how well the trained neural network performs when no target data is given? Hope someone will respond to this, thanks =)
I'm not familiar with nprtool, but I suspect it gives the input data to your neural network and then compares your NN's output with the target data (computing some kind of success rate based on that).
So your NN never sees the target data; it's just used to measure the performance.
It's like the "teacher's edition" of the exercise books in school. The student (i.e. the NN) doesn't have the solutions, but her/his answers will be compared against them by the teacher (i.e. nprtool). (Okay, the teacher probably/hopefully knows the subject, but you get the idea.)
The "target" data t is the desired y of y=net(x) used as example to train the network.
What nprtool do is to divide the training set into three groups: the training set, the validation set and the test set.
The first one is used to actually update the network.
The second one is used to determine the performance of the net (note: this set is NOT used in any way to update the network): as the NN "learns", the error (the difference between t and net(x)) over the validation set decreases. The trend will eventually stop or even reverse: this phenomenon is called "overfitting", which means the NN is now chasing the training set, "memorizing" it at the cost of the ability to generalize (meaning: to perform well on unseen data). So the purpose of this validation set is to determine when to stop the training before the NN starts overfitting. This should answer your question.
Finally, the third set is for external testing, to leave you a set of data untouched by the training procedure.
Even though the total data set [training, validation and testing] is an input to the training algorithm, the testing data is in no way used to design (i.e., train and validate) the net:
total = design + test
design = train + validate
The training data is used to estimate weights and biases.
The validation data is used to monitor the design's performance on nontraining data. REGARDLESS OF THE PERFORMANCE ON TRAINING DATA, if validation performance degrades continuously for 6 (default) epochs, training is terminated (VALIDATION STOPPING).
This mitigates the dreaded phenomenon of OVERTRAINING AN OVERFIT NET, where performance on nontraining data degrades even while the training set performance is improving.
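The stopping rule itself is simple. A framework-neutral sketch of the logic in Python (train_one_epoch and validation_error are hypothetical callables standing in for your own training and evaluation steps; the patience of 6 mirrors the default of 6 validation failures mentioned above):

def train_with_validation_stopping(train_one_epoch, validation_error,
                                   max_epochs=1000, patience=6):
    best = float("inf")
    failures = 0
    for epoch in range(max_epochs):
        train_one_epoch()            # update weights on the training set
        err = validation_error()     # error measured on the validation set only
        if err < best:
            best = err
            failures = 0
        else:
            failures += 1
            if failures >= patience:
                break                # validation stopping
    return best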
An overfit net has more unknown weights and biases than training equations, thereby allowing an infinite number of solutions. A simple example of overfitting with two unknowns but only one equation:
KNOWN: a, b, c
FIND: unique x1 and x2
USING: a * x1 + b * x2 = c
For instance, with a = b = 1 and c = 2, both (x1, x2) = (0, 2) and (1, 1) satisfy the equation, so there is no unique solution.
Hope this helps.
Greg