Choose training and test set for MLP and Hopfield network - neural-network

I have a question regarding the choice of the training and the test set for a Multilayer Perceptron (MLP) and a Hopfield network.
For example, assume that we got 100 patterns of the digits 0-9 given in a bitmap format. 10 of them are perfect digits while the other 90 are distorted. Which of these patterns will be used for the training set and which for the test set? The goal is to classify the digits.
I suppose for the Hopfield network the perfect digits will be used as the training set, but what about the MLP? One approach I thought of was to take for example 70 of the distorted digits and use them as the training set along with the corresponding perfect digits as their intended targets. Is this approach correct?

Disclaimer: I have not worked with Hopfield networks before, so I will trust your statements about them; it should not matter much for the answer anyway.
I am also assuming, as you state, that the goal is to classify the digits.
As for a proper split: aside from the fact that such a small amount of training data is generally not enough to get decent results from an MLP (even for a simple task such as digit classification), in most real-world scenarios you will not be able to "pre-label" your training data by quality. You should therefore always assume that the data you are processing is inherently noisy. A good illustration of this is data augmentation, which is frequently used to enrich a training corpus. Data augmentation can consist of changes as simple as
added noise
minor rotations
horizontal/vertical flipping (the latter only makes limited sense for digits, though)
and can still improve your accuracy, which shows that visual quality and quantity of training data are two very different things. Of course, it is not per se true that quantity alone will solve your problem (although research indicates that using very large amounts of data is at least a good idea). A small augmentation sketch is shown below.
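As an illustration, here is a minimal sketch of such augmentations on bitmap digits using NumPy and SciPy; the array shapes, noise level, and rotation range are assumptions for the example, not taken from the question.

```python
import numpy as np
from scipy.ndimage import rotate

def augment(image, rng):
    """Return a randomly perturbed copy of a 2D grayscale digit bitmap."""
    out = image.astype(float)
    # add a small amount of Gaussian noise
    out = out + rng.normal(0.0, 0.05, size=out.shape)
    # rotate by a few degrees, keeping the original shape
    out = rotate(out, angle=rng.uniform(-10, 10), reshape=False, mode="nearest")
    # horizontal flip half of the time (of limited use for digits, as noted above)
    if rng.random() < 0.5:
        out = np.fliplr(out)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
digit = np.zeros((28, 28))          # placeholder bitmap
extra_sample = augment(digit, rng)  # one augmented training example
```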
Further, what you judge to be a good representation might look very different from the network's perspective (although for digits it might be rather easy to tell). A decent strategy is therefore to simply perform random sampling for your training/test split.
Something I like to do when preprocessing a dataset is to check, once the split is done, whether every class is roughly evenly represented in each split, so that you do not overfit to a skewed class distribution.
Similarly, I would argue that having clean/high-quality images of digits in both your training and test set makes the most sense, since you want to be able to recognize both a high-quality number and a sloppily written digit, and then test (with your test set) whether you actually can. A sketch of such a stratified random split is shown below.
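A minimal sketch of a random, class-balanced split, assuming the 100 bitmaps are stacked in an array X with integer labels y (scikit-learn and the placeholder data are assumptions, not part of the question):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X: (100, 28*28) flattened bitmaps, y: (100,) digit labels 0-9 (placeholder data)
X = np.random.rand(100, 28 * 28)
y = np.repeat(np.arange(10), 10)

# stratify=y keeps every digit roughly evenly represented in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# quick sanity check that the class distribution survived the split
print(np.bincount(y_train), np.bincount(y_test))
```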

Related

Is running more epochs really a direct cause of overfitting?

I've seen comments in online articles/tutorials and Stack Overflow questions suggesting that increasing the number of epochs can result in overfitting. But my intuition tells me that there should be no direct relationship at all between the number of epochs and overfitting. So I'm looking for an answer that explains whether I'm right or wrong (or something in between).
Here's my reasoning though. To overfit, you need to have enough free parameters (I think this is called "capacity" in neural networks) in your model to generate a function which can replicate the sample data points. If you don't have enough free parameters, you'll never overfit. You might just underfit.
So really, if you don't have too many free parameters, you could run infinite epochs and never overfit. If you have too many free parameters, then yes, the more epochs you have the more likely it is that you get to a place where you're overfitting. But that's just because running more epochs revealed the root cause: too many free parameters. The real loss function doesn't care about how many epochs you run. It existed the moment you defined your model structure, before you ever even tried to do gradient descent on it.
In fact, I'd venture as far as to say: assuming you have the computational resources and time, you should always aim to run as many epochs as possible, because that will tell you whether your model is prone to overfitting. Your best model will be the one that provides great training and validation accuracy, no matter how many epochs you run it for.
EDIT
While reading more into this, I realise I forgot to take into account that you can arbitrarily vary the sample size as well. Given a fixed model, a smaller sample size is more prone to being overfit. And then that kind of makes me doubt my intuition above. Still happy to get an answer though!
Your intuition to me seems completely correct.
But here is the caveat. The whole purpose of deep models is that they are "deep" (duh!!), so the number of features, and with it the number of parameters, grows very quickly as you grow your network.
Here is an example comparing a deep model with a simpler model:
Assume you have a 10-variable data set. With a crazy amount of feature engineering, you might be able to extract 50 features out of it. If you then run a traditional model (say a logistic regression), you will have 50 parameters (capacity, in your words, or degrees of freedom) to train.
But if you use even a very simple deep model with layer 1: 10 units, layer 2: 10 units, layer 3: 5 units, layer 4: 2 units, you end up with 10*10 + 10*10 + 10*5 + 5*2 = 260 weights to train (not even counting biases).
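A tiny sketch of that count (weights only, biases ignored; the layer widths are the ones from the example above):

```python
# layer widths: 10 inputs followed by the four layers above
widths = [10, 10, 10, 5, 2]

# each layer contributes (inputs to that layer) * (units in that layer) weights
n_weights = sum(w_in * w_out for w_in, w_out in zip(widths[:-1], widths[1:]))
print(n_weights)  # 10*10 + 10*10 + 10*5 + 5*2 = 260
```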
Therefore, when we train a neural net for a long time, we usually end up with a memorized version of our data set (this gets worse if our data set is small and easy to memorize).
But as you also mentioned, there is no intrinsic reason why a higher number of epochs should result in overfitting. Early stopping is usually a very good way to avoid this; just set the patience to 5-10 epochs.
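A minimal sketch of that kind of early stopping in Keras (the framework choice and the monitored metric are assumptions; the answer above does not name a library):

```python
from tensorflow import keras

early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss",          # watch the validation loss
    patience=7,                  # stop after 7 epochs without improvement
    restore_best_weights=True,   # roll back to the best epoch seen
)

# model = keras.Sequential([...])  # your compiled model
# model.fit(X_train, y_train,
#           validation_data=(X_val, y_val),
#           epochs=1000,           # a large cap; early stopping ends training sooner
#           callbacks=[early_stop])
```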
If the number of trainable parameters is small with respect to the size of your training set (and your training set is reasonably diverse), then running over the same data multiple times will not matter that much, since you will be learning general features of your problem rather than just memorizing the training data set. The problem arises when the number of parameters is comparable to (or bigger than) the size of your training set; this is basically the same problem as with any machine learning technique that uses too many features. It is quite common if you use large, densely connected layers. To combat this overfitting problem there are lots of regularization techniques (dropout, L1 regularization, constraining certain connections to be zero or tied, as in CNNs).
The problem is that you might still be left with too many trainable parameters. A simple way to regularize even further is to use a small learning rate (i.e. don't learn too much from any particular example, lest you memorize it) combined with monitoring across epochs: if the gap between training and validation accuracy starts to grow, you are starting to overfit your model. You can then use that gap to stop your training. This is a version of what is known as early stopping (stopping before you reach the minimum of your training loss).
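As a sketch of combining those ideas, here is a small Keras model with dropout and a deliberately low learning rate; the layer sizes, dropout rate, and learning rate are illustrative assumptions, not prescriptions from the answer above.

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    keras.layers.Dropout(0.5),              # randomly drop half of the units during training
    keras.layers.Dense(10, activation="softmax"),
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # small learning rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# train with a validation split and watch the train/validation gap, e.g.:
# history = model.fit(X_train, y_train, validation_split=0.2, epochs=50)
```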

How to interpret the discriminator's loss and the generator's loss in Generative Adversarial Nets?

I am reading people's implementation of DCGAN, especially this one in tensorflow.
In that implementation, the author plots the losses of the discriminator and of the generator, shown below (the images come from https://github.com/carpedm20/DCGAN-tensorflow):
Neither the discriminator's loss nor the generator's seems to follow any pattern, unlike ordinary neural networks, whose loss decreases as training iterations increase. How should I interpret the losses when training GANs?
Unfortunately, as you've said, for GANs the losses are very non-intuitive. Mostly this comes down to the fact that the generator and the discriminator compete against each other, so an improvement in one means a higher loss for the other, until that other learns better from the loss it receives, which in turn throws off its competitor, and so on.
Now, one thing that should happen often enough (depending on your data and initialisation) is that both the discriminator and generator losses converge to some stable values, like this:
(it's ok for the loss to bounce around a bit - that's just evidence of the model trying to improve itself)
This loss convergence would normally signify that the GAN has found some optimum, where it can't improve further, which should also mean it has learned well enough. (Also note that the loss values themselves usually aren't very informative.)
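For reference, here is a minimal sketch of the standard non-saturating GAN losses that such curves typically plot, written with tf.keras; whether the linked repository uses exactly this formulation is an assumption.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # real samples should be scored as 1, generated samples as 0
    real_loss = bce(tf.ones_like(real_logits), real_logits)
    fake_loss = bce(tf.zeros_like(fake_logits), fake_logits)
    return real_loss + fake_loss

def generator_loss(fake_logits):
    # the generator wants its samples to be scored as 1 ("real")
    return bce(tf.ones_like(fake_logits), fake_logits)
```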
Here are a few side notes that I hope will be of help:
if the losses haven't converged very well, it doesn't necessarily mean that the model hasn't learned anything - check the generated examples; sometimes they come out good enough. Alternatively, you can try changing the learning rate and other parameters.
if the model converged well, still check the generated examples - sometimes the generator finds one or a few examples that the discriminator can't distinguish from genuine data. The trouble is that it then always outputs these few, never creating anything new; this is called mode collapse. Usually introducing some diversity into your data helps.
as vanilla GANs are rather unstable, I'd suggest using some version of the DCGAN models, since they contain features like convolutional layers and batch normalisation that are supposed to help with the stability of convergence. (The picture above is a result of a DCGAN rather than a vanilla GAN.)
This is common sense, but still: as with most neural net architectures, tweaking the model, i.e. changing its parameters and/or architecture to fit your particular needs/data, can either improve the model or break it.

Neural Network Retraining

I am coding a simple Neural Network, but I have thought of one issue that is bothering me.
This NN is for finding categories in the input. To better understand this, say the categories are "the numbers" (0,1,2...9).
To implement this, the output layer has 10 nodes. Say I train this NN with several input-output pairs and save the learned weights somewhere. As the learning process takes quite a lot of time, I then take a break, come back fresh the next day, and restart learning with new input-output pairs. So far so good.
But what happens if, at that point, I decide that I want to recognize hexadecimals (0,1,...,9,A,B,...,E,F), i.e. the number of categories increases?
I suspect that would imply changing the structure of the NN and therefore I should retrain the NN from scratch.
Is this so?
Any comment, advice, or shared experience will be greatly appreciated.
EDIT: This question has been marked as a duplicate. I read the other question and, although it is similar, my question is more concrete. While the other question speaks in generalities and its answer is also quite general, mine is very concrete, since I use an example:
If I train an NN to recognize decimal digits and later decide to add data so it can recognize hexadecimal digits, is this possible? How? Do I have to retrain the whole NN? In other words, does the structure of the NN need to stay fixed, with either 10 or 16 outputs, from the beginning?
I would very much appreciate a concrete answer to this. Thanks.
A few considerations
Your training set and testing set should have the same distribution
Unless you have some way of specifying sample weights, as some algorithms allow, you should avoid training on biased data at all costs. This is true for machine learning in general, not only for neural networks.
Resuming training from a previous session is equivalent to using good initial values
Technically, you're just using the previous network as the initial value instead of a random one. You should keep training on the whole dataset as usual, to avoid a biased network.
Short Answer
Yes, you should always retrain your network, if by retraining you mean running a training routine over the full dataset.
If by retraining you just mean running a really long training pass, that isn't really your choice anyway: you must always train the network until the training error and test error (or cross-validated error) converge. If you reuse the previously trained network, that will probably happen faster.
This is true no matter what kind of change you make: the network architecture, the dataset, both (your example), or some other parameter.
Of course, if you change the network architecture, reusing the previous network takes a bit of extra work. You can reuse the learned parameters for the nodes that were kept and randomly initialize the parameters for the new nodes, as sketched below.
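Here is a minimal sketch of that idea with plain NumPy weight matrices: the old output layer has 10 units, the new one 16, and the weights for the 6 new classes are initialized randomly (all names and shapes are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_old, n_new = 64, 10, 16      # hidden units, old classes (0-9), new classes (0-F)

# previously learned output-layer parameters (placeholders here)
W_old = rng.normal(0, 0.1, size=(n_hidden, n_old))
b_old = np.zeros(n_old)

# new output layer: copy the learned columns, randomly initialize the 6 new ones
W_new = rng.normal(0, 0.1, size=(n_hidden, n_new))
b_new = np.zeros(n_new)
W_new[:, :n_old] = W_old
b_new[:n_old] = b_old

# ...then continue training on the full (hexadecimal) dataset, not only the new classes
```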

Neural Network OCR - using probability to determine touching symbols?

I currently use a neural network for OCR; it produces an output symbol and a probability for it. I also have an algorithm to split touching characters.
I expected to use this probability to decide when to apply the splitting.
But now I cannot do this, because my network sometimes assigns a higher probability to touching characters than to normal characters.
I also cannot make sense of what happens after splitting - sometimes a normal symbol is split into two other symbols that are both recognized with a higher probability than the initial symbol.
So I need to decide what to do. The question is:
can a neural network, at least in theory, provide reliable probabilities for OCR in this sense?
If it is possible, what should I try? Should I post-process the current output, train the network more, or choose another network?
Any kind of help or suggestion will be greatly appreciated.
Your approach is good and should eventually work given enough training data and given that you remove enough bugs from your preprocessing, splitting, training, etc.
Make sure that you split the characters in the training set (prior to training) in exactly the same way that you split them at test time.
But note that machine learning produces algorithms that are correct only within some accuracy, so you will always find instances that fail. The question is how good your overall test performance is (e.g. % of correct digits), and how to increase it to the level that your application requires.
Can a neural network, at least in theory, provide reliable probabilities for OCR in this sense?
Yes.
If it is possible, what should I try? Should I post-process the current output, train the network more, or choose another network?
All of the above, until it works! Training set size is one of the key factors, and as you grow your training set you can also grow your network to improve accuracy.
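As a sketch of how the probabilities could drive the splitting decision once they are reasonably calibrated, the split could be accepted only when it clearly beats the unsplit reading. The classifier interface, the margin, and the helper names below are hypothetical assumptions, not part of the original question.

```python
def best_reading(classify, segment_image, split_candidates, margin=0.10):
    """classify(img) -> (symbol, probability); split_candidates -> list of (left_img, right_img)."""
    symbol, p_single = classify(segment_image)
    best = ([symbol], p_single)
    for left, right in split_candidates:
        (sym_l, p_l), (sym_r, p_r) = classify(left), classify(right)
        p_split = p_l * p_r  # joint confidence of the two-character reading
        # only prefer the split if it beats the single reading by a clear margin
        if p_split > best[1] + margin:
            best = ([sym_l, sym_r], p_split)
    return best
```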

Machine learning - training step

When you're using Haar-like features for your training data for an Adaboost algorithm, how do you build your data sets? Do you literally have to find thousands of positive and negative samples? There must be a more efficient way of doing this...
I'm trying to analyze images in matlab (not faces) and am relatively new to image processing.
Yes, you do need many positive and negative samples for training. This is especially true for AdaBoost, which works by repeatedly reweighting (or resampling) the training set. How many samples are enough is hard to say, but generally the more the better, because that increases the chance of your training set being representative.
Also, it seems to me that your quest for efficiency is misplaced. Training is done ahead of time, presumably off-line. It is the efficiency of classifying unknown instances after the training is done, that people usually worry about.
Undoubtedly, more data and more information give better results, so you should include as much information as possible. However, one thing you need to be careful about is the ratio of the positive set to the negative set. For logistic regression, the ratio should not go beyond 1:5; for AdaBoost I'm not really sure of the effect, but the result will certainly change with the ratio (I have tried this before).
Yes, we need many positive and negative samples for training, but collecting that data is very tedious. You can make it easier by taking videos instead of pictures and using ffmpeg to convert those videos into frames. That makes building the training set much easier.
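A minimal sketch of that frame extraction, calling ffmpeg from Python (the file names and frame rate are placeholder assumptions):

```python
import os
import subprocess

os.makedirs("frames", exist_ok=True)
# extract 5 frames per second from a training video into numbered PNG images
subprocess.run(
    ["ffmpeg", "-i", "training_video.mp4", "-vf", "fps=5", "frames/frame_%05d.png"],
    check=True,
)
```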
The main reason to have roughly equal numbers of positive and negative samples is to avoid bias. Sometimes you might get high accuracy while the classifier completely fails on one category. To evaluate such cases, precision and recall are more useful than accuracy.
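A small sketch of why precision and recall expose that failure while accuracy hides it, using scikit-learn (the library choice and the toy labels are assumptions):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# toy example: 9 negatives, 1 positive; the classifier predicts "negative" for everything
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

print(accuracy_score(y_true, y_pred))                     # 0.9 - looks good
print(precision_score(y_true, y_pred, zero_division=0))   # 0.0 - reveals the failure
print(recall_score(y_true, y_pred))                       # 0.0 - no positives found
```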