What are the memory limitations for Mallet training files?

Mallet converts training cases to binary format using the command import-file, e.g.
bin/mallet import-file --input cases.txt --output cases.mallet
How is this binary ".mallet" file then used? Is it streamed, or is the whole file loaded into memory? If it is all loaded, then this places a limit on the number of training cases based on available memory.
Is it possible to characterize the size of the .mallet file based on the size of the input cases file or the number of input cases?

Related

Resampling, scaling, and truncating a MATLAB data file

I need to apply the conditions below to the data file in MATLAB. Can you help me with how to do that?
Data file: 2 x 460800
Conditions:
1.) The scaling defined in the PhysioNet .info files for each PhysioNet data file was applied to each raw data file.
2.) The data were resampled to a common rate of 128 hertz.
3.) Each data file, consisting of two ECG recordings, was separated into two separate data records.
4.) The data length was truncated to a common length of 65536 samples.
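A minimal sketch of these four steps in Python with NumPy/SciPy (the file names, scale/offset values, and original sampling rate below are placeholders; take the real ones from each record's PhysioNet .info file):
import numpy as np
from scipy.signal import resample_poly

# Placeholder values -- read gain/offset and the original rate from each .info file
fs_orig, fs_target = 250, 128
gain, offset = 200.0, 0.0
target_len = 65536

raw = np.loadtxt('record.txt')        # shape (2, 460800): two ECG channels
scaled = (raw - offset) / gain        # 1) apply the .info scaling

# 2) resample each channel to 128 Hz (polyphase resampling along the sample axis)
resampled = resample_poly(scaled, fs_target, fs_orig, axis=1)

# 3) split the two ECG recordings into separate records
# 4) truncate each record to a common length of 65536 samples
for ch in range(resampled.shape[0]):
    np.savetxt(f'record_ch{ch + 1}.txt', resampled[ch, :target_len])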

How to get the dataset size of a Caffe net in python?

I am looking at the Python example for LeNet and see that the number of iterations needed to run over the entire MNIST test dataset is hard-coded. Can this value be obtained programmatically instead of being hard-coded? How can I get, in Python, the number of samples in the dataset that a network points to?
You can use the lmdb library to access the LMDB database directly:
import lmdb
db = lmdb.open('/path/to/lmdb_folder')  # needs the lmdb Python package
num_examples = int(db.stat()['entries'])
Should do the trick for you.
It seems that you mixed up iterations and the number of samples in one question. In the provided example we can see only the number of iterations, i.e. how many times the training phase will be repeated. There is no direct relationship between the number of iterations (a network training parameter) and the number of samples in the dataset (the network input).
Some more detailed explanation:
EDIT: Caffe will load a total of (batch size x iterations) samples for training or testing, but there is no relation between the number of loaded samples and the actual database size: it will start reading from the beginning again after reaching the database's last record. In other words, the database in Caffe acts like a circular buffer.
The mentioned example points to this configuration. We can see that it expects LMDB input and sets the batch size to 64 (some more info about batches and BLOBs) for the training phase and 100 for the testing phase. We really make no assumption about the input dataset size, i.e. the number of samples in the dataset: the batch size is only the processing chunk size, and iterations is how many batches Caffe will take. It won't stop after reaching the end of the database.
In other words, the network itself (i.e. the protobuf config files) doesn't point to any number of samples in the database, only to the dataset name and format and the desired number of samples per batch. As far as I know, there is currently no way to determine the database size through Caffe itself.
Thus, if you want to load the entire dataset for testing, your only option is to first determine the number of samples in mnist_test_lmdb or mnist_train_lmdb manually, and then specify corresponding values for the batch size and the number of iterations.
You have some options for this:
Look at the ./examples/mnist/create_mnist.sh console output - it prints the number of samples while converting from the initial format (I believe that you followed this tutorial);
follow @Shai's advice (read the LMDB file directly).
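Combining these pieces, here is a minimal sketch (Python, assuming the lmdb package and the MNIST example's LMDB path) that derives the number of test iterations from the database size instead of hard-coding it:
import math
import lmdb

def lmdb_num_entries(path):
    # Return the number of records stored in an LMDB database
    db = lmdb.open(path, readonly=True, lock=False)
    try:
        return int(db.stat()['entries'])
    finally:
        db.close()

test_batch_size = 100  # must match the batch_size of the test data layer
num_samples = lmdb_num_entries('examples/mnist/mnist_test_lmdb')

# Enough iterations to sweep the whole dataset exactly once; with more
# iterations Caffe would simply wrap around and reread from the start.
test_iterations = math.ceil(num_samples / test_batch_size)
print(num_samples, test_iterations)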

Partition a large-scale HDF5 dataset into sub-files

I have a pretty large HDF5 dataset of size [1 12672 1 228020], following the format [height width channel N]. The file occupies about 22 GB on disk.
I want to partition this file into smaller parts, say 2 GB files.
I have tried h5repart, but it does not work well, because I'm not able to display the partitioned files in MATLAB using h5disp('...').
One solution would be for you to use the 'chunk' capability of the HDF5 format.
Using the MATLAB low-level HDF5 functions you should be able to read the chunks you require.
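For illustration, here is a sketch of the same idea in Python with h5py (the dataset name '/data' is an assumption; adapt it to your file). It reads slices along the last (N) dimension, i.e. hyperslabs, and writes each slice to its own sub-file, which is essentially what the MATLAB low-level functions (an H5S hyperslab selection plus H5D.read) would do:
import h5py

src_path = 'big_dataset.h5'   # the ~22 GB source file (hypothetical name)
dset_name = '/data'           # assumed dataset name, shape (1, 12672, 1, 228020)
samples_per_part = 20000      # roughly 2 GB per part for double-precision data

with h5py.File(src_path, 'r') as src:
    dset = src[dset_name]
    n_total = dset.shape[-1]  # N, the last dimension
    for part, start in enumerate(range(0, n_total, samples_per_part)):
        stop = min(start + samples_per_part, n_total)
        with h5py.File(f'part_{part:03d}.h5', 'w') as out:
            # Slicing reads only this hyperslab into memory, not the whole file
            out.create_dataset(dset_name, data=dset[..., start:stop],
                               chunks=True, compression='gzip')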

MATLAB .mat file saving

I have identical code in MATLAB and identical data that was analyzed on two different computers. Both run Windows 7 64-bit, and both MATLAB installations are R2014a. After the code finishes its run, I save the variables using the save command, and it outputs a .mat file.
Is it possible to have two very different sizes for these files, like one being 170 MB and the other being 2.4 GB? This seems absurd, because when I check the variables in MATLAB they add up to maybe 1.5 GB at most. What can be the reason for this?
Does saving to a .mat file compress the variables (still with the regular .mat extension)? I think it does, because when I check the individual variables they add up to around 1.5 GB.
So why would one computer output a small file, while the other produces one so huge?
The .mat format in recent versions is HDF5-based, which includes gzip compression. Probably on one PC the default .mat format has been changed to an old version that does not support compression. Try saving with the version specified explicitly; then both PCs should produce the same file size.
I found the reason for this based on the following Stack Overflow thread: MATLAB: Differences between .mat versions
Apparently one of the computers was using the -v7 format, which produces much smaller files; -v7.3 inflates the files significantly. This is ironic in my opinion, since -v7.3 is the format that enables saving variables larger than 2 GB, which means they become much, much larger when saved to a .mat file.
Anyway, this link is very useful.
Update:
I implemented the serialization mentioned in the above link, and it increased the file size. In my case the best option will be to use the -v7 format, since it provides the smallest file size and is also able to save the structures and cell arrays that I use a lot.
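To see the compression effect outside MATLAB, here is a small sketch with SciPy (scipy.io.savemat writes the pre-7.3 MAT format; do_compression=True roughly corresponds to the compressed -v7 style, and SciPy cannot write -v7.3 at all). The array contents are made up purely for the size comparison:
import os
import numpy as np
from scipy.io import savemat

# Made-up, highly compressible data purely to illustrate the size difference
data = {'results': np.zeros((2000, 2000))}

savemat('uncompressed.mat', data, do_compression=False)  # roughly -v6 style
savemat('compressed.mat', data, do_compression=True)     # roughly -v7 style

for name in ('uncompressed.mat', 'compressed.mat'):
    print(name, os.path.getsize(name), 'bytes')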

Get the integer representation of .SPH audio files

I am trying to train a neural network using audio files that are originally in .SPH format. I need to get integers that represent the amplitude of the sound waves for the neural net, so I used sox to convert the files to .wav format by calling sox infile.SPH outfile.wav remix 1-2 (remix converts the 2 channels into 1), and then tried to use
[y, Fs, nbits, opts] = wavread('outfile.wav') in MATLAB to get the integer representation.
However, MATLAB threw: Data compression format (CCITT mu-law) is not supported.
So I used sox infile.SPH -b 16 -e signed-integer -c 1 outfile.wav
which I think puts the wave file in a linear format instead of mu-law. But now MATLAB threw another error: Invalid Wave File. Reason: Cannot open file.
My audio files are 8000 Hz mu-law, single or dual channel, and all 8-bit, I think (8-bit for the single-channel ones for sure).
Is there a way to get the integer representation out of the audio files using MATLAB or any other program? Either mu-law or linear is fine, unless one would be better for neural net training. Preferably 8-bit, since the source files are 8-bit.
I don't really understand .SPH. For the uncompressed ones (ignoring headers), do the files store amplitudes (I guess they have to, somehow)? Can I extract numbers out of those files directly without bothering with .wav? Are the signals stored sequentially, so that it would make sense to split the audio files?
I am new to audio processing in general, so any pointers would be appreciated!
You need to clearly identify the main task: feeding the neural net with vectors or matrices. So the first step is to work on the audio files (outside MATLAB!) in order to obtain .wav files. The second step is setting up and training the neural net in MATLAB.
I would try to decompress the .SPH files, then convert them into .wav (for example, see the instructions here and here).
Finally, using sox from a command/terminal window is better than using it from the MATLAB console.
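Once sox has produced a 16-bit signed PCM .wav (the second sox command in the question), the integer samples can also be read outside MATLAB; here is a small sketch with SciPy, using the file name from the question:
from scipy.io import wavfile

# Read the PCM .wav produced by: sox infile.SPH -b 16 -e signed-integer -c 1 outfile.wav
fs, samples = wavfile.read('outfile.wav')  # samples is an int16 NumPy array
print(fs, samples.dtype, samples.shape)

# The integers are the amplitudes; scale to [-1, 1] floats if the
# neural net expects normalized input.
normalized = samples / 32768.0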