I need to train an Inception model on more than 400,000 images.
I know I can't load them all into memory, since the dataset is too big.
So I will probably have to train batch by batch, loading each batch from disk, rather than keeping a whole epoch in memory.
But won't that be very slow?
Do you know if there is a better way of doing it?
I also want to apply different random transformations to my images during training.
I looked at the ImageDataGenerator class, but it doesn't fit the images I have.
So, is there a way to do that without the generator?
Thanks!
You can use the fit_generator method (https://keras.io/models/model/#fit_generator) of the model. Batches are still loaded from disk, but this is done in parallel with training and has less overhead. You can write your own generator to apply whatever transformations you want (https://wiki.python.org/moin/Generators).
If you need faster access to the data, you can take a look at HDF5: storing the images in an HDF5 file gives your program faster indexing and loading. (http://www.h5py.org/)
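As a rough illustration, a hand-written generator for fit_generator could look something like the sketch below. The samples list, batch size, target size and the horizontal-flip transform are all assumptions made for the example, not part of your setup:

import numpy as np
from keras.preprocessing.image import load_img, img_to_array

def batch_generator(samples, batch_size=32, target_size=(299, 299)):
    # samples: hypothetical list of (filepath, label) pairs built from your data
    while True:  # fit_generator expects an endless generator
        np.random.shuffle(samples)
        for start in range(0, len(samples), batch_size):
            images, labels = [], []
            for path, label in samples[start:start + batch_size]:
                img = img_to_array(load_img(path, target_size=target_size)) / 255.0
                if np.random.rand() < 0.5:  # example of a random transformation
                    img = np.fliplr(img)
                images.append(img)
                labels.append(label)
            yield np.stack(images), np.array(labels)

# model.fit_generator(batch_generator(samples), steps_per_epoch=len(samples) // 32)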
I implemented an image classification network to classify a dataset of 100 classes, using AlexNet as a pretrained model and changing the final output layers.
I noticed that when I was loading my data like
trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=False)
, I was getting around 2-3% accuracy on the validation dataset for about 10 epochs, but when I simply changed to shuffle=True and retrained the network, the accuracy jumped to 70% in the first epoch itself.
I was wondering whether this happened because, in the first case, the network was shown examples of just one class for many consecutive batches, causing it to generalize poorly during training, or whether there is some other reason behind it.
Either way, I did not expect it to have such a drastic impact.
P.S.: All the code and parameters were exactly the same for both cases, except for the shuffle option.
Yes, it totally can affect the result! Shuffling the order of the data we use to fit the classifier is important, because it ensures that the batches do not look alike from one epoch to the next.
Checking the DataLoader documentation, it says:
"shuffle (bool, optional) – set to True to have the data reshuffled at every epoch"
In any case, it will make the model more robust and help it avoid over/underfitting.
In your case, this dramatic increase in accuracy (I am guessing without knowing your dataset) is probably due to how the dataset is "organised": for example, each category may end up in its own batches, so that every batch within an epoch contains a single category, which leads to very poor accuracy when you test.
PyTorch did many great things, and one of them is the DataLoader class.
The DataLoader class takes the dataset (data), sets the batch_size (how many samples to load per batch), and draws its sampler from a list of classes:
DistributedSampler
SequentialSampler
RandomSampler
SubsetRandomSampler
WeightedRandomSampler
BatchSampler
The key difference between the samplers is how they implement the __iter__() method.
In the case of SequentialSampler it looks like this:
def __iter__(self):
    return iter(range(len(self.data_source)))
This returns an iterator over the indices of data_source, in order.
When you set shuffle=True, the DataLoader does not use the SequentialSampler, but the RandomSampler instead.
And that is what can improve the learning process.
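To make that concrete, here is a small sketch (train_data stands for whatever Dataset you already built; the batch size is illustrative): passing shuffle=True is effectively the same as handing the loader a RandomSampler yourself.

from torch.utils.data import DataLoader, RandomSampler, SequentialSampler

sequential_loader = DataLoader(train_data, batch_size=32,
                               sampler=SequentialSampler(train_data))  # same as shuffle=False
random_loader     = DataLoader(train_data, batch_size=32,
                               sampler=RandomSampler(train_data))      # same effect as shuffle=True
shuffled_loader   = DataLoader(train_data, batch_size=32, shuffle=True)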
I am trying to do classification with spark-mllib, specifically using RandomForestModel.
I have taken a look at this example from Spark (RandomForestClassificationExample.scala), but I need a somewhat expanded approach.
I need to be able to train a model and save it for future use, but also to be able to load it and train further, e.g. extend the dataset and train again.
I completely understand the need to export and import a model for future usage.
Unfortunately, training "further" isn't possible with Spark, nor would it make sense. The recommendation is therefore to retrain the model on the data used to train the first model plus the new data.
The values learned in your first training run (e.g. features, intercept, coefficients, etc.) don't make much sense anymore once you add more data.
I hope that this answers your question.
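For the export/import part, a rough PySpark sketch is shown below. The data path, app name and forest parameters are placeholders borrowed from the Spark examples, not taken from your code (which is in Scala, where the equivalent calls are model.save(sc, path) and RandomForestModel.load(sc, path)):

from pyspark import SparkContext
from pyspark.mllib.tree import RandomForest, RandomForestModel
from pyspark.mllib.util import MLUtils

sc = SparkContext(appName="rf-save-load")  # placeholder app name

# Train once; the LIBSVM file and parameters are just illustrative.
data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")
model = RandomForest.trainClassifier(data, numClasses=2,
                                     categoricalFeaturesInfo={}, numTrees=10)

# Export the fitted model, and import it later for scoring only (not for further training).
model.save(sc, "target/myRandomForestModel")
same_model = RandomForestModel.load(sc, "target/myRandomForestModel")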
You may need to look for some reinforcement learning technique instead of Random Forest if you want to use the old model and retrain it with new data.
As far as I know, deeplearning4j implements deep reinforcement learning algorithms on top of Spark (and Hadoop).
If you only need to save a JavaRDD<Object>, you can do (in Java):
model.saveAsObjectFile(path)
Values will be written out using Java serialization. Then, to read your data back, you do:
JavaRDD<Object> model = jsc.objectFile(pathOfYourModel);
Be careful: object files are not available in Python. But you can use saveAsPickleFile() to write your model and pickleFile() to read it back.
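A minimal sketch of that Python variant, assuming an existing SparkContext sc and with the path and RDD contents as placeholders:

rdd = sc.parallelize([("weights", [0.1, 0.2, 0.3])])  # stands in for whatever you need to persist
rdd.saveAsPickleFile("target/model_objects")
restored = sc.pickleFile("target/model_objects")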
I am pretty new to TensorFlow; I used to use Theano for deep learning development. I notice a difference between the two in where input data can be stored.
Theano supports shared variables for storing input data in GPU memory, which reduces data transfer between CPU and GPU.
In TensorFlow, we need to feed data into a placeholder, and the data can come from CPU memory or from files.
My question is: is it possible to store input data in GPU memory with TensorFlow, or does it already do that in some magic way?
Thanks.
If your data fits on the GPU, you can load it into a constant on GPU from e.g. a numpy array:
with tf.device('/gpu:0'):
    tensorflow_dataset = tf.constant(numpy_dataset)
One way to extract minibatches would be to slice that tensor at each step using tf.slice, instead of feeding it:
batch = tf.slice(tensorflow_dataset, [index, 0], [batch_size, -1])
There are many possible variations around that theme, including using queues to prefetch the data to GPU dynamically.
It is possible, as has been indicated, but make sure that it is actually useful before devoting too much effort to it. At least at present, not every operation has GPU support, and the list of operations without such support includes some common batching and shuffling operations. There may be no advantage to putting your data on GPU if the first stage of processing is to move it to CPU.
Before trying to refactor code to use on-GPU storage, try at least one of the following:
1) Start your session with device placement logging to log which ops are executed on which devices:
config = tf.ConfigProto(log_device_placement=True)
sess = tf.Session(config=config)
2) Try to manually place your graph on GPU by putting its definition in a with tf.device('/gpu:0'): block. This will throw exceptions if ops are not GPU-supported.
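A minimal TF 1.x sketch combining both checks; the matmul graph is just a toy example:

import tensorflow as tf

with tf.device('/gpu:0'):
    # An op without a GPU kernel placed here would raise an error at run time
    # (as long as soft placement stays off).
    x = tf.random_normal([1024, 1024])
    y = tf.matmul(x, x)

config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(y)  # the placement log shows which device each op actually landed on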
Using Matlab, I am going to generate several data files and store them in HDF5 format as 20x1500xN, where N is an integer that can vary, but is typically around 2300. Each file will have 4 different data sets with equal structure. Thus, I will quickly run into a storage problem. My two questions:
Is there any reason not to keep the 4 data sets separate, and instead just save them as a single 4x20x1500xN array? I would prefer keeping them separate, since they are different signal modalities, but if there is any computational/compression advantage to merging them, I will join them.
Using Matlab's built-in compression, I set deflate=9 (and DataType=single). However, I have now realized that using deflate multiplies my computation time by 5. I realize this could have something to do with my ChunkSize, which I simply set to 20x1500x5 without any reasoning behind it. Is there a strategic way to trade off deflation (compression ratio) against compression time?
Thank you.
1- Splitting or merging? It won't make a difference in the compression procedure, since it is performed in blocks.
2- Your choice of chunk shape does indeed seem poor. The chunk size determines the shape and size of each block that is compressed independently. The problem is that each of your chunks is 600 kB, which is much larger than the L2 cache, so your CPU is likely twiddling its thumbs waiting for data to come in. Depending on the nature of your data and on your most common access pattern (reading the whole array at once, random reads, sequential reads...), you may want to target the L1 or L2 cache size, or something in between. Here are some experiments done with a Python library that may serve you as a guide.
Once you have selected your chunk size (how many bytes each compression block will have), you have to choose a chunk shape. I'd recommend the shape that most closely matches your reading pattern if you are doing partial reads, or filling along the fastest axis first if you want to read the whole array at once. In your case, this will be something like 1x1500x10, I think (the second axis being the fastest, the last one the second fastest, and the first the slowest; correct me if I am mistaken).
Lastly, keep in mind that the details depend heavily on the specific machine you run this on: the CPU, the quality and load of the hard drive or SSD, the speed of the RAM... so the fine tuning will always require some experimentation.
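To make the suggestion concrete, here is roughly what that layout looks like in h5py (the Python HDF5 library; the file name, dataset name and N are placeholders, and the same chunking and deflate settings can be passed to Matlab's h5create via its ChunkSize and Deflate options):

import h5py
import numpy as np

N = 2300                                              # placeholder; N varies per file
data = np.random.rand(20, 1500, N).astype('float32')  # stands in for one signal modality

with h5py.File('signals.h5', 'w') as f:
    # 1 x 1500 x 10 chunks of single precision are about 60 kB each, a much more
    # cache-friendly size, with gzip/deflate level 9 as in the Matlab setup.
    f.create_dataset('modality_1', data=data,
                     chunks=(1, 1500, 10),
                     compression='gzip', compression_opts=9)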
I'm working with a reasonably sized net (1 convolutional layer, 2 fully connected layers). Every time I save variables using tf.train.Saver, the .ckpt files take half a gigabyte of disk space each (512 MB, to be exact). Is this normal? I have a Caffe net with the same architecture that requires only a 7 MB .caffemodel file. Is there a particular reason why TensorFlow saves such large files?
Many thanks.
Hard to tell how large your net is from what you've described -- the number of connections between two fully connected layers scales with the product of their sizes, so perhaps your net is quite large depending on how big those layers are.
If you'd like to save space in the checkpoint files, you could replace this line:
saver = tf.train.Saver()
with the following:
saver = tf.train.Saver(tf.trainable_variables())
By default, tf.train.Saver() saves all variables in your graph -- including the variables created by your optimizer to accumulate gradient information. Telling it to save only trainable variables means it will save only the weights and biases of your network and discard the accumulated optimizer state. Your checkpoints will probably be a lot smaller, with the tradeoff that you may experience slower training for the first few batches after resuming, while the optimizer re-accumulates gradient information. It doesn't take long at all to get back up to speed, in my experience, so personally I think the tradeoff is worth it for the smaller checkpoints.
Maybe you can try (in Tensorflow 1.0):
saver.save(sess, filename, write_meta_graph=False)
which doesn't save the MetaGraph information.
See:
https://www.tensorflow.org/versions/master/api_docs/python/tf/train/Saver
https://www.tensorflow.org/programmers_guide/meta_graph
Typically you only save tf.global_variables() (which is shorthand for tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES), i.e. the collection of global variables). This collection is meant to include the variables that are necessary for restoring the state of the model: things like the current moving averages for batch normalization, the global step, the state of the optimizer(s) and, of course, the tf.GraphKeys.TRAINABLE_VARIABLES collection. Variables of a more temporary nature, such as the gradients, are collected in LOCAL_VARIABLES; it is usually not necessary to store them, and they can take up a lot of disk space.
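As a small sketch of that (the checkpoint path and sess are placeholders for your own session and location):

import tensorflow as tf  # TF 1.x

saver = tf.train.Saver(var_list=tf.global_variables())  # skips LOCAL_VARIABLES such as gradients
# saver.save(sess, 'checkpoints/model.ckpt', write_meta_graph=False)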