yolov4.cfg : increasing subdivisions parameter consequences - darknet

I'm trying to train a custom dataset using the Darknet framework and YOLOv4. I built my own dataset, but I get an out-of-memory message in Google Colab. It also said something like "try to change subdivisions to 64".
I've searched around for the meaning of the main .cfg parameters such as batch, subdivisions, etc., and I understand that increasing the subdivisions number means splitting into smaller "pictures" before processing, thus avoiding the fatal "CUDA out of memory". And indeed, switching to 64 worked well. What I couldn't find anywhere is the answer to the ultimate question: are the final weight file and the accuracy "crippled" by doing this? More specifically, what are the consequences for the final result? If we set aside the training time (which would surely increase, since there are more subdivisions to process), what will the accuracy be?
In other words: if we use exactly the same dataset and train using 8 subdivisions, then do the same using 64 subdivisions, will the best_weight file be the same? And will the object detection success rate be the same or worse?
Thank you.

First, read the comments.
Suppose you have 100 batches.
batch size = 64
subdivisions = 8
Darknet will divide each batch: 64 / 8 => 8 images per mini-batch.
It then loads and processes these 8 smaller parts one by one in RAM, so if you have low RAM capacity you can change the parameter according to your RAM capacity.
You can also reduce the batch size, so it takes up less space in RAM.
It does nothing to the dataset's images.
It just splits a batch that is too large to load into RAM at once into smaller pieces.
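To make the arithmetic concrete, here is a small Python sketch (my own illustration; only the cfg keys batch and subdivisions come from yolov4.cfg, the rest is assumed for the example):

# How the batch/subdivisions arithmetic works out for the numbers above.
batch = 64                            # cfg: batch=64 -- images per weight update
subdivisions = 8                      # cfg: subdivisions=8
mini_batch = batch // subdivisions    # 8 images have to fit in GPU memory at once

# Raising subdivisions to 64 keeps the same 64-image batch per update,
# but shrinks the per-step load to 64 // 64 = 1 image.
print(mini_batch, batch // 64)

In other words, the batch the optimizer sees stays the same size; only how it is fed through memory changes, which is the point the answer above makes.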

Related

scikit-learn: Hierarchical Agglomerative Clustering performance with increasing dataset

scikit-learn==0.21.2
The Hierarchical Agglomerative Clustering algorithm's response time increases dramatically as the dataset grows.
My dataset is textual. Each document is 7-10 words long.
I'm using the following code to perform the clustering:
from sklearn.cluster import AgglomerativeClustering

hac_model = AgglomerativeClustering(affinity='cosine',
                                    linkage='complete',
                                    compute_full_tree=True,
                                    connectivity=None, memory=None,
                                    n_clusters=None,
                                    distance_threshold=0.7)
cluster_matrix = hac_model.fit_predict(matrix)
where matrices of the following sizes take:
5000x1500: 17 seconds
10000x2000: 113 seconds
13000x2418: 228 seconds
I can't control 5000, 10000, 15000, as that is the size of the input, or the feature set size (i.e. 1500, 2000, 2418), since I am using a BOW model (TF-IDF).
I end up using all the unique words (after removing stopwords) as my feature list; this list grows as the input size increases.
So, two questions:
How do I avoid an increase in feature set size irrespective of the growth of the input dataset?
Is there a way I can improve the performance of the algorithm without compromising on quality?
Standard AGNES hierarchical clustering is O(n³+n²d) in complexity, so the number of instances is much more of a problem than the number of features.
There are approaches that typically run in O(n²d), although the worst case remains the same, so they will be much faster than this. With these you'll usually run into memory limits first... Unfortunately, this isn't implemented in sklearn as far as I know, so you'll have to use other clustering tools - or write the algorithm yourself.
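As one concrete (and purely illustrative) example of what "other clustering tools" could look like, SciPy's hierarchical clustering accepts roughly the same cosine / complete-linkage / 0.7-threshold setup; note it still builds the O(n²) pairwise-distance matrix, so memory remains the limit:

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Stand-in for the TF-IDF matrix from the question (shape is assumed).
matrix = np.random.rand(5000, 1500)

# Complete linkage on cosine distances, mirroring the sklearn call above.
Z = linkage(matrix, method='complete', metric='cosine')

# Cut the dendrogram at the same distance threshold of 0.7.
labels = fcluster(Z, t=0.7, criterion='distance')
print(labels.max(), "clusters")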

How to compensate if I can't do a large batch size in a neural network

I am trying to run an action recognition code from GitHub. The original code used a batch size of 128 with 4 GPUs. I only have two GPUs, so I cannot match their batch size. Is there any way I can compensate for this difference in batch size? I saw somewhere that iter_size might compensate, according to the formula effective_batchsize = batch_size * iter_size * n_gpu. What is iter_size in this formula?
I am using PyTorch, not Caffe.
In PyTorch, when you perform the backward step (calling loss.backward() or similar) the gradients are accumulated in place. This means that if you call loss.backward() multiple times, the previously calculated gradients are not replaced; instead, the new gradients get added to the previous ones. That is why, when using PyTorch, it is usually necessary to explicitly zero the gradients between minibatches (by calling optimiser.zero_grad() or similar).
If your batch size is limited, you can simulate a larger batch size by breaking a large batch up into smaller pieces, and only calling optimiser.step() to update the model parameters after all the pieces have been processed.
For example, suppose you are only able to do batches of size 64, but you wish to simulate a batch size of 128. If the original training loop looks like:
optimiser.zero_grad()
loss = model(batch_data) # batch_data is a batch of size 128
loss.backward()
optimiser.step()
then you could change this to:
optimiser.zero_grad()
smaller_batches = batch_data[:64], batch_data[64:128]
for batch in smaller_batches:
    loss = model(batch) / 2
    loss.backward()
optimiser.step()
and the updates to the model parameters would be the same in each case (apart maybe from some small numerical error). Note that you have to rescale the loss to make the update the same.
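For reference, here is the same trick written a bit more generally as a self-contained toy (the tiny model, the SGD optimiser, and the accum_steps name are all stand-ins I made up for the sketch, not part of the original code):

import torch

model = torch.nn.Linear(10, 1)                  # toy stand-in for the real model
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
batch_data = torch.randn(128, 10)               # the "large" batch of 128 samples

accum_steps = 2                                 # 2 x 64 = effective batch of 128
optimiser.zero_grad()
for chunk in batch_data.chunk(accum_steps):     # split the big batch along dim 0
    loss = model(chunk).mean() / accum_steps    # rescale so the summed gradients match
    loss.backward()                             # gradients accumulate in place
optimiser.step()                                # one update for the whole effective batch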
The important concept is not so much the batch size as the number of epochs you train. Can you double the batch size, giving you the same effective batch size as the original cluster? If so, that will compensate directly for the problem. If not, double the number of iterations, so you're training for the same number of epochs. The model will quickly overcome the effects of the early-batch bias.
However, if you are comfortable digging into the training code, myrtlecat gave you an answer that will eliminate the batch-size difference quite nicely.

Set Batch Size *and* Number of Training Iterations for a neural network?

I am using the KNIME Doc2Vec Learner node to build a Word Embedding. I know how Doc2Vec works. In KNIME I have the option to set the parameters
Batch Size: The number of words to use for each batch.
Number of Epochs: The number of epochs to train.
Number of Training Iterations: The number of updates done for each batch.
From Neural Networks I know that (lazily copied from https://stats.stackexchange.com/questions/153531/what-is-batch-size-in-neural-network):
one epoch = one forward pass and one backward pass of all the training examples
batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.
number of iterations = number of passes, each pass using [batch size] number of examples. To be clear, one pass = one forward pass + one backward pass (we do not count the forward pass and backward pass as two different passes).
As far as I understand it makes little sense to set batch size and iterations, because one is determined by the other (given the data size, which is given by the circumstances). So why can I change both parameters?
This is not necessarily the case. You can also train "half epochs". For example, in Google's inceptionV3 pretrained script, you usually set the number of iterations and the batch size at the same time. This can lead to "partial epochs", which can be fine.
If it is a good idea or not to train half epochs may depend on your data. There is a thread about this but not a concluding answer.
I am not familiar with KNIME Doc2Vec, so I am not sure whether the meaning is somewhat different there. But from the definitions you gave, setting batch size + iterations seems fine. Also setting the number of epochs could cause conflicts, though, leading to situations where the numbers don't add up to reasonable combinations.
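A quick back-of-the-envelope calculation (with made-up numbers) shows how fixing batch size and iterations can produce such a partial epoch:

dataset_size = 10_000
batch_size = 64
iterations = 100                           # number of updates requested

examples_seen = batch_size * iterations    # 6,400 examples processed
epochs_trained = examples_seen / dataset_size
print(epochs_trained)                      # 0.64 -> less than one full epoch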

how to choose batch size in caffe

I understand from here that a bigger batch size gives more accurate results. But I'm not sure which batch size is "good enough". I guess bigger batch sizes will always be better, but it seems like at a certain point you will only get a slight improvement in accuracy for every increase in batch size. Is there a heuristic or a rule of thumb for finding the optimal batch size?
Currently, I have 40000 training samples and 10000 test samples. My batch size is the default, which is 256 for training and 50 for testing. I am using an NVIDIA GTX 1080, which has 8 GB of memory.
Test-time batch size does not affect accuracy; you should set it to the largest you can fit into memory so that the validation step takes less time.
As for train-time batch size, you are right that larger batches yield more stable training. However, having larger batches will slow training significantly. Moreover, you will have fewer backprop updates per epoch. So you do not want the batch size to be too large. Using the default values is usually a good strategy.
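For the concrete numbers in the question (40000 training and 10000 test samples with the default batch sizes), here is a rough calculation of what that means per epoch (my own illustration):

train_size, test_size = 40_000, 10_000
train_batch, test_batch = 256, 50                  # the defaults mentioned above

updates_per_epoch = train_size // train_batch      # 156 weight updates per epoch
test_batches_per_pass = test_size // test_batch    # 200 test batches to cover the test set
print(updates_per_epoch, test_batches_per_pass)

# Doubling the training batch size would halve the updates per epoch,
# which is the trade-off described in the answer above.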
See my master's thesis, page 59, for some of the reasons to choose a bigger or smaller batch size. You want to look at:
epochs until convergence
time per epoch: a higher batch size is faster
resulting model quality: a lower batch size was better (in my experiments)
A batch size of 32 was good for my datasets / models / training algorithm.

What is the largest amount of memory MATLAB can allocate?

This may seem like a simple question to some, but I haven't been able to find a direct answer online.
How much memory does MATLAB need for a single double value (from my understanding it is its default data type), and, taking that into account, what would be the largest amount of memory it could allocate on a PC (with more than enough RAM)? Are there any restrictions in that regard?
I ask because at my faculty we're thinking about porting some programs written in C to MATLAB, and we are concerned about such issues.
'For floating-point numbers, MATLAB uses 4 or 8 bytes for single and double types.' <- quote from here. The memory model of MATLAB is quite flexible. Five years ago I was inverting 1,000,000-by-1,000,000 matrices on a cluster using the add-on package Star-P, which I believe was recently acquired by Microsoft.
As long as you're on a 64-bit box, you can address 2^64 bytes of memory <- MATLAB is simply limited by the physical limitations of your machine, though, as noted above, there are solutions that create a shared memory pool across a cluster of computers within a single MATLAB environment.
See here and look up the maximum size for your machine's specs. For Windows XP 32-bit, the maximum total workspace size is about 1700 MB and the largest matrix size is about 1200 MB.
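To put the 8-bytes-per-double figure in perspective, here is a quick calculation (written in Python purely for illustration) of what a dense double matrix of the size mentioned above would need, versus what a 64-bit address space allows:

rows = cols = 1_000_000                        # the 1,000,000 x 1,000,000 matrix mentioned above
bytes_per_double = 8                           # MATLAB stores a double in 8 bytes
total_bytes = rows * cols * bytes_per_double

print(total_bytes / 2**40, "TiB needed")       # ~7.3 TiB for the full dense matrix
print(2**64 / 2**60, "EiB addressable")        # 16 EiB on a 64-bit machine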