How to compensate if I can't do a large batch size in neural network

I am trying to run an action recognition code from GitHub. The original code used a batch size of 128 with 4 GPUs. I only have two GPUs, so I cannot match their batch size. Is there any way I can compensate for this difference in batch size? I saw somewhere that iter_size might compensate for it according to the formula effective_batchsize = batch_size * iter_size * n_gpu. What is iter_size in this formula?
I am using PyTorch, not Caffe.

In PyTorch, when you perform the backward step (calling loss.backward() or similar), the gradients are accumulated in place. This means that if you call loss.backward() multiple times, the previously calculated gradients are not replaced; instead, the new gradients get added on to the previous ones. That is why, when using PyTorch, it is usually necessary to explicitly zero the gradients between minibatches (by calling optimiser.zero_grad() or similar).
If your batch size is limited, you can simulate a larger batch size by breaking a large batch up into smaller pieces, and only calling optimiser.step() to update the model parameters after all the pieces have been processed.
For example, suppose you are only able to do batches of size 64, but you wish to simulate a batch size of 128. If the original training loop looks like:
optimiser.zero_grad()
loss = model(batch_data) # batch_data is a batch of size 128
loss.backward()
optimiser.step()
then you could change this to:
optimiser.zero_grad()
smaller_batches = batch_data[:64], batch_data[64:128]
for batch in smaller_batches:
    loss = model(batch) / 2
    loss.backward()
optimiser.step()
and the updates to the model parameters would be the same in each case (apart from some small numerical error). Note that you have to rescale the loss (here dividing by 2, the number of pieces) to make the update the same.
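More generally, a minimal sketch of this gradient-accumulation pattern for an arbitrary number of pieces might look like the following (accumulation_steps and dataloader are illustrative names, not from the original code; as above, model is assumed to return the loss directly):
accumulation_steps = 2               # e.g. two pieces of 64 to simulate a batch of 128
optimiser.zero_grad()
for step, batch_data in enumerate(dataloader):
    loss = model(batch_data) / accumulation_steps  # rescale so the accumulated update matches
    loss.backward()                                # gradients are added into .grad in place
    if (step + 1) % accumulation_steps == 0:
        optimiser.step()                           # one parameter update per simulated large batch
        optimiser.zero_grad()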

The important factor is not so much the batch size as the number of epochs you train. Can you double the batch size on each of your two GPUs, giving you the same overall batch size as the 4-GPU setup? If so, that will compensate directly for the problem. If not, double the number of iterations, so you're training for the same number of epochs. The model will quickly overcome the effects of the early-batch bias.
However, if you are comfortable digging into the training code, myrtlecat gave you an answer that will eliminate the batch-size difference quite nicely.

Related

yolov4.cfg: increasing subdivisions parameter consequences

I'm trying to train a custom dataset using the Darknet framework and YOLOv4. I built up my own dataset, but I get an Out of memory message in Google Colab. It also said "try to change subdivisions to 64" or something like that.
I've searched around for the meaning of the main .cfg parameters such as batch, subdivisions, etc., and I understand that increasing the subdivisions number means splitting into smaller "pictures" before processing, thus avoiding the fatal "CUDA out of memory" error. And indeed switching to 64 worked well. Now I couldn't find anywhere the answer to the ultimate question: are the final weight file and accuracy "crippled" by doing this? More specifically, what are the consequences for the final result? If we put aside the training time (which would surely increase since there are more subdivisions to train), how will the accuracy be affected?
In other words: if we use exactly the same dataset and train using 8 subdivisions, then do the same using 64 subdivisions, will the best_weight file be the same? And will the object detection success % be the same or worse?
Thank you.
First, read the comments.
Suppose you have 100 batches, with:
batch size = 64
subdivisions = 8
Darknet will divide each batch into 64 / 8 = 8 mini-batches (of 8 images each).
It then loads and works on those smaller parts one by one in memory; because of low RAM/GPU memory capacity you can change the subdivisions parameter according to your memory capacity.
You can also reduce the batch size itself, so it takes up less space in memory.
It does nothing to the dataset images.
It just splits a large batch, which can't be loaded into memory at once, into smaller pieces.
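As an illustrative sketch (plain Python, not Darknet code; the numbers are the ones from the example above), subdivisions is essentially the same accumulate-then-update idea as in the first answer:
batch = 64                               # images contributing to one weight update
subdivisions = 8                         # how many pieces each batch is split into
mini_batch = batch // subdivisions       # 64 / 8 = 8 images held in GPU memory at once

# Conceptually, per weight update the framework processes `subdivisions`
# mini-batches of `mini_batch` images each, accumulates their gradients,
# and then applies a single update computed from all 64 images.
print(mini_batch)                        # -> 8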

How to train a neural network with Q-Learning

I just implemented Q-Learning without neural networks, but I am stuck at implementing it with neural networks.
I will give you a pseudo code showing how my Q-Learning is implemented:
train(int iterations):
    buffer = empty buffer
    for i = 0 while i < iterations:
        move = null
        if random(0,1) > threshold:
            move = random_move()
        else:
            move = network_calculate_move()
        input_to_network = game.getInput()
        output_of_network = network.calculate(input_to_network)
        game.makeMove(move)
        reward = game.getReward()
        maximum_next_q_value = max(network.calculate(game.getInput()))
        if reward is 1 or -1: //either lost or won
            output_of_network[move] = reward
        else:
            output_of_network[move] = reward + discount_factor * maximum_next_q_value
        buffer.add(input_to_network, output_of_network)
        if buffer is full:
            buffer.remove_oldest()
        train_network()

train_network(buffer b):
    batch = b.extract_random_batch(batch_size)
    for each input, output in batch:
        network.train(input, output, learning_rate) //one forward/backward pass
My problem right now is that this code works for a buffer size of less than 200.
For any buffer over 200, my code does not work anymore so I've got a few questions:
Is this implementation correct? (In theory)
How big should the batch size be compared to the buffer size
How would one usually train the network? For how long? Until a specific MSE of the whole batch is reached?
Is this implementation correct? (In theory)
Yes, your pseudocode does have the right approach.
How big should the batch size be compared to the buffer size
Algorithmically speaking, using larger batches in stochastic gradient descent allows you to reduce the variance of your stochastic gradient updates (by taking the average of the gradients in the batch), and this in turn allows you to take bigger step-sizes, which means the optimization algorithm will make progress faster.
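(Concretely, if the per-example gradients have variance σ², the averaged gradient over a batch of B independently drawn examples has variance of roughly σ²/B, which is why bigger batches give smoother updates.)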
The experience replay buffer stores a fixed number of recent memories, and as new ones come in, old ones are removed. When the time comes to train, we simply draw a uniform batch of random memories from the buffer, and train our network with them.
While related, there is no standard value for batch size vs. buffer size. Experimenting with these hyperparameters is one of the joys of deep reinforcement learning.
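As a minimal sketch of the replay-buffer idea described above (illustrative Python only; `network` is assumed to expose the same train(input, output, learning_rate) method as in your pseudocode, and the numeric values are hypothetical):
import random
from collections import deque

buffer = deque(maxlen=10000)      # hypothetical buffer size; oldest memories drop off automatically
batch_size = 32                   # hypothetical batch size
learning_rate = 0.001             # hypothetical learning rate

def remember(input_to_network, output_of_network):
    buffer.append((input_to_network, output_of_network))

def train_network(network):
    if len(buffer) < batch_size:
        return                    # wait until enough memories are stored
    batch = random.sample(buffer, batch_size)            # uniform random draw from the buffer
    for net_input, target in batch:
        network.train(net_input, target, learning_rate)  # one forward/backward pass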
How would one usually train the network? For how long? Until a
specific MSE of the whole batch is reached?
Networks are usually trained until they "converge," which means that there are repeatedly no meaningful changes in the learned Q-values between episodes.

Set Batch Size *and* Number of Training Iterations for a neural network?

I am using the KNIME Doc2Vec Learner node to build a Word Embedding. I know how Doc2Vec works. In KNIME I have the option to set the parameters
Batch Size: The number of words to use for each batch.
Number of Epochs: The number of epochs to train.
Number of Training Iterations: The number of updates done for each batch.
From Neural Networks I know that (lazily copied from https://stats.stackexchange.com/questions/153531/what-is-batch-size-in-neural-network):
one epoch = one forward pass and one backward pass of all the training examples
batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.
number of iterations = number of passes, each pass using [batch size] number of examples. To be clear, one pass = one forward pass + one backward pass (we do not count the forward pass and backward pass as two different passes).
As far as I understand it makes little sense to set batch size and iterations, because one is determined by the other (given the data size, which is given by the circumstances). So why can I change both parameters?
This is not necessarily the case. You can also train "half epochs". For example, in Google's inceptionV3 pretrained script, you usually set the number of iterations and the batch size at the same time. This can lead to "partial epochs", which can be fine.
If it is a good idea or not to train half epochs may depend on your data. There is a thread about this but not a concluding answer.
I am not familiar with KNIME Doc2Vec, so I am not sure if the meaning is somewhat different there. But from the definitions you gave, setting batch size + iterations seems fine. Setting the number of epochs as well could cause conflicts, though, leading to situations where the numbers don't add up to reasonable combinations.
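As a quick worked example of how the three quantities relate (all numbers here are made up):
num_examples = 10000                     # hypothetical size of the training set
batch_size = 64
iterations = 100                         # number of batches actually processed

examples_seen = batch_size * iterations           # 6400 examples
epochs_trained = examples_seen / num_examples     # 0.64 -> a "partial epoch"
iterations_per_epoch = num_examples / batch_size  # 156.25 -> ~157 batches for one full epoch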

how to choose batch size in caffe

I understand that bigger batch size gives more accurate results from here. But I'm not sure which batch size is "good enough". I guess bigger batch sizes will always be better but it seems like at a certain point you will only get a slight improvement in accuracy for every increase in batch size. Is there a heuristic or a rule of thumb on finding the optimal batch size?
Currently, I have 40000 training examples and 10000 test examples. My batch size is the default, which is 256 for training and 50 for testing. I am using an NVIDIA GTX 1080, which has 8 GB of memory.
Test-time batch size does not affect accuracy; you should set it to be the largest you can fit into memory so that the validation step takes less time.
As for train-time batch size, you are right that larger batches yield more stable training. However, having larger batches will slow training significantly. Moreover, you will have fewer backprop updates per epoch. So you do not want the batch size to be too large. Using the default values is usually a good strategy.
See my master's thesis, page 59, for some of the reasons why to choose a bigger or smaller batch size. You want to look at:
epochs until convergence
time per epoch: higher is better
resulting model quality: lower is better (in my experiments)
A batch size of 32 was good for my datasets / models / training algorithm.

What is the meaning of "drop" and "sgd" while training custom ner model using spacy?

I am training a custom ner model to identify organization name in addresses.
My training loop looks like this:
for itn in range(100):
    random.shuffle(TRAIN_DATA)
    losses = {}
    batches = minibatch(TRAIN_DATA, size=compounding(15., 32., 1.001))
    for batch in batches:
        texts, annotations = zip(*batch)
        nlp.update(texts, annotations, sgd=optimizer,
                   drop=0.25, losses=losses)
    print('Losses', losses)
Can someone explain the parameters "drop", "sgd", and "size", and give some ideas as to how I should change these values so that my model performs better?
You can find details and tips in the spaCy documentation:
https://spacy.io/usage/training#tips-batch-size:
The trick of increasing the batch size is starting to become quite popular ... In training the various spaCy models, we haven’t found much advantage from decaying the learning rate – but starting with a low batch size has definitely helped
batch_size = compounding(1, max_batch_size, 1.001)
This will set the batch size to start at 1, and increase each batch until it reaches a maximum size.
https://spacy.io/usage/training#tips-dropout:
For small datasets, it’s useful to set a high dropout rate at first, and decay it down towards a more reasonable value. This helps avoid the network immediately overfitting, while still encouraging it to learn some of the more interesting things in your data. spaCy comes with a decaying utility function to facilitate this. You might try setting:
dropout = decaying(0.6, 0.2, 1e-4)
https://spacy.io/usage/training#annotations:
sgd: An optimizer, i.e. a callable to update the model’s weights. If not set, spaCy will create a new one and save it for further use.
The drop, sgd and size are some of the parameters you can customize to optimize your training.
drop is used to change the value of dropout.
size is used to change the size of the batch.
sgd is used to change various hyperparameters such as learning rate, Adam beta1 and beta2 parameters, gradient clipping and L2 regularisation.
I consider the sgd to be a very important argument to experiment with.
To help you, I wrote a short blog post showing how to customize any spaCy parameters from your Python interpreter (e.g. a Jupyter notebook). No command line interface required.
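For example, here is a minimal sketch combining the documentation tips quoted above (assuming spaCy v2, and that nlp and TRAIN_DATA are set up as in your snippet):
import random
from spacy.util import minibatch, compounding, decaying

# decaying() returns a generator: dropout starts high (0.6) and decays towards 0.2,
# as suggested in the docs quoted above. Call next() on it for every update.
dropout = decaying(0.6, 0.2, 1e-4)

optimizer = nlp.begin_training()          # the object passed as `sgd`

for itn in range(100):
    random.shuffle(TRAIN_DATA)
    losses = {}
    # Batch size starts at 1 and compounds up to 32, per the batch-size tip above.
    batches = minibatch(TRAIN_DATA, size=compounding(1., 32., 1.001))
    for batch in batches:
        texts, annotations = zip(*batch)
        nlp.update(texts, annotations, sgd=optimizer,
                   drop=next(dropout), losses=losses)
    print('Losses', losses)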