More efficient way to search a large matrix in MATLAB?

I have code that does what I want, but it is too slow: I have a very large mat file (33 gigabytes) containing a matrix that I need to search for particular values and extract them.
The file that I'm searching has the following structure:
reporter sector partner year ave
USA all CAN 2007 0.060026126
USA all CAN 2011 0.0637898418
...
This goes on for millions of rows. I want to extract the last (5th) column value for particular reporter and partner values (sector and year are fixed). In actuality there are more fixed values that I have taken out for the sake of simplicity but this might slow down my code even more. The country_codes and partner values need to vary and are looped for that reason.
The crucial part of my code is the following:
for i = 1:length(country_codes)
    for g = 1:length(partner)
        matrix(i,g) = big_file(...
            ismember(GTAP_data(:,1), country_codes(i)) & ... % reporter
            ismember(GTAP_data(:,2), 'all')            & ... % sector
            ismember(GTAP_data(:,3), partner(g))       & ... % partner
            ismember([GTAP_data{:,4}]', 2011), ...           % year
            5); % ave column
    end
end
In other words, the code goes through the million rows and finds just the right value by applying ismember with logical & on everything.
Is there a faster way to do this than using ismember? Can someone assist?

So what I see is that you build one big table out of the data in different files.
It seems your values are text-based, which takes up more memory: "USA" already takes up three bytes. If you have fewer than 255 countries to consider, you could store each one as a single byte in uint8 format.
If you can store all columns as values between 0 and 255, you can build a uint8 matrix that can be indexed very quickly.
As an example:
%demo
GTAP_regions={'USA','NL','USA','USA','NL','GB','NL','USA','Korea Republic of','GB','NL','USA','Korea Republic of'};
S=whos('GTAP_regions');
S.bytes
GTAP_regions requires 1580 bytes. Now we convert it.
GTAP_regions_list = GTAP_regions(1);
GTAP_regions_uint = uint8(1);
for ct = 2:length(GTAP_regions)
    I = ismember(GTAP_regions_list, GTAP_regions(ct));
    if ~any(I)
        GTAP_regions_list(end+1) = GTAP_regions(ct);                  % new country: add it to the lookup list
        GTAP_regions_uint(end+1) = uint8(length(GTAP_regions_list));  % and store its new index
    else
        GTAP_regions_uint(end+1) = uint8(find(I));                    % known country: store its existing index
    end
end
S=whos('GTAP_regions_list');
S.bytes
S=whos('GTAP_regions_uint');
S.bytes
GTAP_regions_uint is what we use for the actual indexing; it is now only 13 bytes (one uint8 per entry) and will be very fast to analyse.
GTAP_regions_list is what we use to find which index value belongs to which country; it is only 496 bytes.
You can also do this for sector, partner and year, depending on the range of years. If there are no more than 255 different years it will work; otherwise you could store the year as uint16 and have 65535 possible options.
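To show where the speed-up comes from, here is a rough sketch of what the original lookup could look like once every column is stored as such integer codes. The variable names (GTAP_sector_list, GTAP_sector_uint, GTAP_year_uint, GTAP_reporter_uint, GTAP_partner_uint, country_codes_uint, partner_uint, ave) are illustrative, not from the original code, and the fixed filters are applied only once outside the loop:
% Hedged sketch: text columns replaced by integer code vectors built as above,
% and the 5th column stored as a plain numeric vector ave.
sector_all = uint8(find(strcmp(GTAP_sector_list, 'all')));
year_2011  = uint16(2011);
fixed_rows = GTAP_sector_uint == sector_all & GTAP_year_uint == year_2011; % applied once
for i = 1:length(country_codes_uint)
    for g = 1:length(partner_uint)
        hit = fixed_rows & GTAP_reporter_uint == country_codes_uint(i) ...
                         & GTAP_partner_uint  == partner_uint(g);
        matrix(i,g) = ave(hit); % assumes exactly one matching row
    end
end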

Related

MATLAB Loop Programming

I've been stuck on a MATLAB coding problem where I needed to create market weights for many stocks from a large data file with multiple days and portfolios.
I received help from an expert the other day using 'nested loops'; it worked, but I don't understand what he has done in the final line. I was wondering if anyone could shed some light and provide an explanation of that last line of code.
xp = x;                   % where x = market value
dates = unique(x(:,1));   % finds the unique dates in the data set (dates are column 1)
for i = 1:size(dates,1)   % (creates an empty matrix to fill the data in)
    for j = 5:size(xp,2)
        xp(xp(:,1)==dates(i),j) = xp(xp(:,1)==dates(i),j)./sum(xp(xp(:,1)==dates(i),j)); % help???
    end
end
Any comments are much appreciated!
To understand the code, you have to understand the colon operator, logical indexing and the difference between / and ./. If any of these is unclear, please look it up in the documentation.
The following code does the same, but is easier to read because I separated each step into a single line:
dates=unique(x(:,1));
%iterates over all dates. size(dates,1) returns the number of dates
for i=1:size(dates,1)
    %iterates over the fifth to the last column, which contain the data that will be normalised.
    for j=5:size(xp,2)
        %mdate is a logical vector, which is used to select the rows with the currently processed date.
        mdate = (xp(:,1)==dates(i));
        %s is the sum of all values in column j that share the current date
        s = sum(xp(mdate,j));
        %divide all values in column j with the same date by s, which normalises them to sum to 1
        xp(mdate,j) = xp(mdate,j)./s;
end
end
I suggest stepping through this code with the debugger.
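If the two mechanisms themselves are the sticking point, here is a tiny stand-alone illustration with made-up numbers (nothing to do with the market data):
v    = [10 20 30 40];
mask = (v > 15);                     % logical indexing: mask = [false true true true]
v(mask) = v(mask) ./ sum(v(mask));   % element-wise division of only the selected entries
% v is now [10 0.2222 0.3333 0.4444] and the selected entries sum to 1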

strcmp files - Very large file size output

I'm reading in a csv file that is about 80MB - data_O3. It's about 250,000 x 5 in size. I created E, which is a little bit larger because it has all the days (data_O3 is missing some days). I want to compare the two so that if the date (saved in variable d3) and siteID (d4) are the same, the data point (column 5) is placed in E.
for j = 1:size(data_O3,1)
E(strcmp(d3,data_O3{j,3})&d4 == data_O3{j,4},5) = data_O3(j,5);
end
This script works fine, but for some reason, running it takes longer than expected. I've run the same code for other data that were only slightly smaller with no problem. Is this an issue with the strcmp code or something else?
The script and files used can be found here: https://www.dropbox.com/sh/7bzq3m1ixfeuhu6/i4oOvxHPkn
There are certainly a number of ways to speed this up significantly.
First of all, read all numeric data in as numbers. Matlab is not optimized to work with strings, and even cells should generally be avoided as much as possible. If you want to keep everything as strings, use another language (Python or Perl).
Once you have the state, county and site read in as numbers, then create a number instead of a string for the siteID. One approach would be to use the formula:
siteID = siteNum + 1e4*countyCode + 1e7*stateCode
That would generate unique siteIDs for all sites.
Use datenum to convert the date field into a number.
You are now in a position where the data_O3 defined on line 79 can be a purely numeric array (no cells!), as can your E matrix. That alone will make the process many times faster.
You might also want to initialize E to something other than NaN. Maybe give it values of -1.
There may be more optimizations you can do in the comparison, but do the above first and I expect you will see a huge improvement.
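To make the idea concrete, here is a hedged sketch of what the matching could look like once everything is numeric. The variable names dEnum, siteE and val3 are illustrative, not taken from the linked files; siteNum, countyCode, stateCode and d3 are assumed to be per-row vectors for data_O3.
siteO3 = siteNum + 1e4*countyCode + 1e7*stateCode;  % numeric siteID per data_O3 row, as suggested above
d3num  = datenum(d3);                               % dates of data_O3 as numbers
% ... build dEnum and siteE the same way for the rows of E, then match all rows at once:
[found, loc] = ismember([dEnum, siteE], [d3num, siteO3], 'rows');
E(found, 5)  = val3(loc(found));                    % val3 = the numeric 5th column of data_O3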

Converting Data to Timeseries - MATLAB

I have Excel data in the following format
Ticker Date Price
GOOG 1/1/12 100
GOOG 1/2/12 200
AAPL 1/1/12 50
etc.
I would like to convert this to a time series collection (or just a matrix of data) in the following format:
Date GOOG AAPL .... (variable number of tickers)
1/1/12 100 50
This would be easier to use in Matlab to do some calculations on.
The way I've done this in the past, and I don't believe it is the most efficient, was to run a unique(tickers) function to check how many tickers we have, then chop off the data accordingly in a for loop. I think this is very inefficient (and ugly) for larger data sets. I was hoping someone would have a better suggestion.
Here's a sample of previous attempts I've made on similar data, which assumes the data are sorted by ticker:
[uniqueSecurities, uniqueIndex] = unique(Tickers);
numberSecurities = length(uniqueSecurities);
The above code tells you at which locations a new ticker starts (at every uniqueIndex entry).
Now, assuming there is the same number of observations for each ticker, you can chop off the data in this manner:
numberObservations = whatever; % same number of observations for every ticker
j = 0;
for secIndex = 1:numberSecurities
    NewDataMatrix(:,secIndex) = Prices(j+1 : j+numberObservations);
    j = j + numberObservations;
end
Now if you have a variable number of observations for each security, instead of jumping by "numberObservations" intervals, you use the uniqueIndex I defined above, and, in a similar manner, chop everything with the indices between uniqueIndex(k) and uniqueIndex(k+1).
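For that variable-length case, a hedged sketch (assuming the rows are sorted by ticker and uniqueIndex holds the first row of each ticker; the names blockEdges and PricesByTicker are mine):
[uniqueSecurities, uniqueIndex] = unique(Tickers, 'first');
numberSecurities = length(uniqueSecurities);
blockEdges = [uniqueIndex; length(Tickers) + 1];   % one past the last row
PricesByTicker = cell(numberSecurities, 1);
for k = 1:numberSecurities
    PricesByTicker{k} = Prices(blockEdges(k) : blockEdges(k+1) - 1);
end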
The reason I'm posting is because I don't believe I am being very efficient, and in addition, is there some default MATLAB way of doing this? As I understand it, most databases will give me data in the above format (not the best of formats!) and I don't have any control over the format, unfortunately.

How does Labview save cluster data in a binary file and how do I read it out in MATLAB?

I have a very large number of files that were saved in binary in Labview, where each column is a timestamp cluster followed by a vector of singles.
I read each data file into Matlab r2013a using
fid = fopen(filename);
data = fread(fid,[N M],'*single',0,'b');
fclose(fid);
where I pre-calculate the size of the input array N,M. Since I know what the data is supposed to look like, I have figured out that data(1:5,:) is where the timestamp data is hidden, but it looks like something like this for M = 1:
[0 -842938.0625 -1.19209289550781e-07 0 4.48415508583941e-42]
The first element is always 0, the second element decreases monotonically with a constant step size, the third seems to be bistable, flipping back and forth between two very small values, the fourth is always 0, and the fifth is also constant.
I'm assuming it has something to do with how Labview encodes dates, but my google-fu has not helped me figure that out.
To make this a more general question, then:
How does Labview encode a timestamp cluster when it saves to a binary file, and how can I read it out and translate it into a meaningful number in another programming language, such as Matlab?
EDIT:
For posterity, here is my final code (appended to the code above):
datedata = data(5:-1:1,:);
data(1:5,:) = [];
dms = typecast(reshape(datedata(2:3,:),[],1),'uint64');
dsecs = typecast(reshape(datedata(4:5,:),[],1), 'int64');
timestamp = datenum(1904,1,1) + (double(dsecs) + double(dms)*2^-64)/(3600*24);
In the code @Floris posted from Mathworks, they typecast straight to double, but when I tried that, I got garbage. In order to get the correct date, I had to first convert to integer and then to double. Since my bottleneck is in the fread line (0.3 seconds to read off an external disk), the extra typecast step is minuscule in the grand scheme of things.
The extra column, 4.5e-42, converts to an integer value of 3200, the number of values in the subsequent vector of singles.
This is not a complete answer, but it should help (I don't have either Labview or Matlab available at home so I can't check this right now).
There is an article at http://www.mathworks.com/matlabcentral/newsreader/view_thread/292060 that describes a similar question. Couple of useful bits of information I extracted from that:
Time stamp is a double (not single)
Need to flip the order of bytes (little vs big endian) to make sense of things
There is a useful comment:
Note that the LabView time convention is milliseconds since Jan 1 1904. Here is one approach (may contain errors but will point you in the right direction),
The following code snippet is also given:
%% Read in date information
[ fid, msg ] = fopen(FileName, 'r') ;
NColumns = 60 ; % Number of data columns - probably different for your dataset!
[a, count] = fread(fid, [ NColumns Inf], '*single') ; % Force data to be read into the Matlab workspace as singles
a = a' ; % Convert to data in columns not rows
% The last two columns of a are the timestamp
b = fliplr(a(:, end-1:end)) ; % Must swap the order of the columns
d = typecast(reshape(b',[],1), 'double') ; % Now we can convert to double
time_local = datenum(1904, 1, 1) + d/(24*3600) ; % Convert from seconds to Matlab time format
fclose(fid) ;
It looks believable to me. Let me know if it works - if not, I may be able to help debug in the morning...
A LabVIEW timestamp is a 128-bit type consisting of a signed 64-bit integer measuring the offset in seconds since the LabVIEW epoch (January 1, 1904 00:00:00.00 UTC), and an unsigned 64-bit integer measuring the fractional second. Source: ni.com.
The byte order of the file however may be platform dependent. For example the time 8:02:58.147 AM July 3 2013 EDT may be stored as:
0x 00000000CDF9C372 25AA100000000000 (big/network)
or as
0x 000000000010AA25 72C3F9CD00000000 (little)
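For reference, here is a hedged MATLAB sketch that decodes the big-endian example above using the layout just described (top 8 bytes = signed seconds since the 1904 epoch, bottom 8 bytes = unsigned fraction of a second); it is only meant to illustrate the byte handling:
raw    = uint8(sscanf('00000000CDF9C372 25AA100000000000', '%2x'));   % 16 bytes, big-endian
secs   = double(swapbytes(typecast(raw(1:8),  'int64')));             % whole seconds since 1904-01-01 UTC
frac   = double(swapbytes(typecast(raw(9:16), 'uint64'))) * 2^-64;    % fractional second
tstamp = datenum(1904, 1, 1) + (secs + frac)/(24*3600);               % Matlab serial date (UTC)
datestr(tstamp)   % roughly 03-Jul-2013 12:02:58, i.e. 8:02:58 AM EDT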

Most compact way to encode a sequence of random variable length binary codes?

Let's say you have a List<List<Boolean>> and you want to encode that into binary form in the most compact way possible.
I don't care about read or write performance. I just want to use the minimal amount of space. Also, the example is in Java, but we are not limited to the Java system. The length of each "List" is unbounded. Therefore any solution that encodes the length of each list must in itself encode a variable length data type.
Related to this problem is encoding of variable length integers. You can think of each List<Boolean> as a variable length unsigned integer.
Please read the question carefully. We are not limited to the Java system.
EDIT
I don't understand why a lot of the answers talk about compression. I am not trying to do compression per se, just to encode random sequences of bits, except that each sequence of bits is of a different length and order needs to be preserved.
You can think of this question in a different way. Let's say you have an arbitrary list of random unsigned integers (unbounded). How do you encode this list in a binary file?
Research
I did some reading and found that what I am really looking for is a Universal code
Result
I am going to use a variant of Elias Omega Coding described in the paper A new recursive universal code of the positive integers
I now understand that a smaller representation for the small integers is a trade-off against the representation of the larger integers. By simply choosing a universal code with a "large" representation of the very first integers, you save a lot of space in the long run when you need to encode arbitrarily large integers.
I am thinking of encoding a bit sequence like this:
head | value
------+------------------
00001 | 0110100111000011
Head has variable length; its end is marked by the first occurrence of a 1. Count the number of zeroes in head: the length of the value field will be 2 ^ zeroes. Since the length of value is known, this encoding can be repeated. Since the size of head is logarithmic in the size of value, the overhead converges to 0% as the encoded values grow.
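As a sanity check of the scheme, here is a small MATLAB sketch of the encoder (written as a function file; the name headValueEncode is mine, and this implements only the head|value layout above, not full Elias omega):
function bits = headValueEncode(n)
    value = dec2bin(n) == '1';                      % n as a logical bit vector, MSB first
    z     = ceil(log2(length(value)));              % smallest z with 2^z >= number of value bits
    value = [false(1, 2^z - length(value)), value]; % left-pad the value field to exactly 2^z bits
    bits  = [false(1, z), true, value];             % head: z zeros, then the terminating 1
end
% headValueEncode(27075) reproduces the row above: head 00001, value 0110100111000011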
Addendum
If you want to fine tune the length of value more, you can add another field that stores the exact length of value. The length of the length field could be determined by the length of head. Here is an example with 9 bits.
head | length | value
------+--------+-----------
00001 | 1001 | 011011001
I don't know much about Java, so I guess my solution will HAVE to be general :)
1. Compact the lists
Since Booleans are inefficient, each List&lt;Boolean&gt; should be compacted into a List&lt;Byte&gt;; it's easy, just grab them 8 at a time.
The last "byte" may be incomplete, so of course you need to store how many bits were actually encoded.
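To illustrate step 1 (in MATLAB, since the question is not tied to Java), here is a hedged sketch of packing a logical vector into MSB-first bytes plus the count of bits used in the final byte; the variable names are mine:
bits   = logical([0 0 1 1 0 0 1 1 1 0 1]);                      % 11 bits -> 2 bytes + "3 bits used"
used   = mod(length(bits) - 1, 8) + 1;                          % bits actually used in the last byte
padded = [bits, false(1, 8*ceil(length(bits)/8) - length(bits))];
bytes  = uint8((2.^(7:-1:0)) * double(reshape(padded, 8, [])));  % one MSB-first byte per column
With these made-up bits the first packed byte comes out as 0x33, the same value that appears in the examples of section 5 below.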
2. Serializing a list of elements
You have 2 ways to proceed: either you encode the number of items in the list, or you use a pattern to mark the end. I would recommend encoding the number of items; the pattern approach requires escaping and it's creepy, plus it's more difficult with packed bits.
To encode the length you can use a variable scheme, i.e. the number of bytes necessary to encode a length should grow with the length. Here is one I have already used: you indicate how many bytes are used to encode the length itself with a prefix on the first byte:
0... .... > this byte encodes the number of items (7 effective bits)
10.. .... / .... .... > 2 bytes
110. .... / .... .... / .... .... > 3 bytes
It's quite space efficient, and decoding occurs on whole bytes, so not too difficult. One could remark it's very similar to the UTF8 scheme :)
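A hedged MATLAB sketch of that prefix, again just to pin the bit layout down (the function name lengthEncode is mine): k leading 1 bits before the first 0 mean the length occupies k+1 bytes, leaving 7*(k+1) bits for the length itself.
function bytes = lengthEncode(n)
    k = 0;
    while n >= 2^(7*(k+1)), k = k + 1; end                     % how many extra bytes we need
    payload = bitget(n, 7*(k+1):-1:1);                         % the length itself, MSB first
    allbits = [ones(1,k), 0, payload];                         % prefix + payload = 8*(k+1) bits
    bytes   = uint8((2.^(7:-1:0)) * reshape(allbits, 8, []));  % one byte per column
end
% lengthEncode(5)   -> 0x05        ("0000 0101")
% lengthEncode(300) -> 0x81 0x2C   ("1000 0001" "0010 1100")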
3. Apply recursively
List< List< Boolean > > becomes [Length Item ... Item] where each Item is itself the representation of a List<Boolean>
4. Zip
I suppose there is a zlib library available for Java, or anything else like deflate or LZW. Pass it your buffer and make sure to specify that you want as much compression as possible, whatever time it takes.
If there is any repetitive pattern (even ones you did not see) in your representation, it should be able to compress it. Don't trust it dumbly though, and DO check that the "compressed" form is lighter than the "uncompressed" one; it's not always the case.
5. Examples
Where one notices that keeping track of the edges of the lists is space-consuming :)
// Tricky here, we indicate how many bits are used, but they are packed into bytes ;)
List<Boolean> list = [false,false,true,true,false,false,true,true]
encode(list) == [0x08, 0x33] // [00001000, 00110011] (2 bytes)
// Easier: the length actually indicates the number of elements
List<List<Boolean>> super = [list,list]
encode(super) == [0x02, 0x08, 0x33, 0x08, 0x33] // [00000010, ...] (5 bytes)
6. Space consumption
Suppose we have a List<Boolean> of n booleans, the space consumed to encode it is:
booleans = ceil( n / 8 )
To encode the number of bits (n), we need:
length = 1 for 0 <= n < 2^7 ~ 128
length = 2 for 2^7 <= n < 2^14 ~ 16384
length = 3 for 2^14 <= n < 2^21 ~ 2097152
...
length = ceil( log(n) / 7 ) # for n != 0 ;)
Thus to fully encode a list:
bytes =
if n == 0: 1
else : ceil( log(n) / 7 ) + ceil( n / 8 )
7. Small Lists
There is one corner case though: the low end of the spectrum (ie almost empty list).
For n == 1, bytes is evaluated to 2, which may indeed seem wasteful. I would not however try to guess what will happen once the compression kicks in.
You may wish though to pack even more. It's possible if we abandon the idea of preserving whole bytes...
Keep the length encoding as is (on whole bytes), but do not "pad" the List<Boolean>. A one element list becomes 0000 0001 x (9 bits)
Try to 'pack' the length encoding as well
The second point is more difficult, we are effectively down to a double length encoding:
Indicates how many bits encode the length
Actually encode the length on these bits
For example:
0 -> 0 0
1 -> 0 1
2 -> 10 10
3 -> 10 11
4 -> 110 100
5 -> 110 101
8 -> 1110 1000
16 -> 11110 10000 (=> 1 byte and 2 bits)
It works pretty well for very small lists, but quickly degenerate:
# Original scheme
length = ceil( log(n) / 7 )
# New scheme
length = 2 * ceil( log(n) )
The breaking point? 8.
Yep, you read that right, it's only better for lists with fewer than 8 elements... and only better by "bits".
n -> bits spared
[0,1] -> 6
[2,3] -> 4
[4,7] -> 2
[8,15] -> 0 # Turn point
[16,31] -> -2
[32,63] -> -4
[64,127] -> -6
[128,255] -> 0 # Interesting eh ? That's the whole byte effect!
And of course, once the compression kicks in, chances are it won't really matter.
I understand you may appreciate recursive's algorithm, but I would still advise computing the figures of the actual space consumption, or even better, actually testing it with archiving applied on real test sets.
8. Recursive / Variable coding
I have read with interest TheDon's answer, and the link he submitted to Elias Omega Coding.
They are sound answers, in the theoretical domain. Unfortunately they are quite impractical. The main issue is that they have extremely interesting asymptotic behaviors, but when do we actually need to encode a gigabyte worth of data? Rarely, if ever.
A recent study of memory usage at work suggested that most containers were used for a dozen items (or a few dozen). Only in some very rare cases do we reach the thousands. Of course, for your particular problem the best way would be to actually examine your own data and see the distribution of values, but from experience I would say you cannot just concentrate on the high end of the spectrum, because your data lies in the low end.
An example of TheDon's algorithm. Say I have a list [0,1,0,1,0,1,0,1]
len('01010101') = 8 -> 1000
len('1000') = 4 -> 100
len('100') = 3 -> 11
len('11') = 2 -> 10
encode('01010101') = '10' '0' '11' '0' '100' '0' '1000' '1' '01010101'
len(encode('01010101')) = 2 + 1 + 2 + 1 + 3 + 1 + 4 + 1 + 8 = 23
Let's make a small table, with various 'thresholds' to stop the recursion. It represents the number of bits of overhead for various ranges of n.
threshold 2 3 4 5 My proposal
-----------------------------------------------
[0,3] -> 3 4 5 6 8
[4,7] -> 10 4 5 6 8
[8,15] -> 15 9 5 6 8
[16,31] -> 16 10 5 6 8
[32,63] -> 17 11 12 6 8
[64,127] -> 18 12 13 14 8
[128,255]-> 19 13 14 15 16
To be fair, I concentrated on the low end, and my proposal is suited for this task. I wanted to underline that it's not so clear-cut though. Especially because near 1, the log function is almost linear, and thus the recursion loses its charm. The threshold helps tremendously and 3 seems to be a good candidate...
As for Elias omega coding, it's even worse. From the wikipedia article:
17 -> '10 100 10001 0'
That's it, a whopping 11 bits.
Moral: You cannot choose an encoding scheme without considering the data at hand.
So, unless your List<Boolean> have a length in the hundreds, don't bother and stick to my little proposal.
I'd use variable-length integers to encode how many bits there are to read. The MSB would indicate if the next byte is also part of the integer. For instance:
11000101 10010110 00100000
Would actually mean:
10001 01001011 00100000
Since the integer is continued 2 times (the MSB of the first two bytes is set).
These variable-length integers would tell how many bits there are to read. And there'd be another variable-length int at the beginning of all to tell how many bit sets there are to read.
From there on, supposing you don't want to use compression, the only way I can see to optimize it size-wise is to adapt it to your situation. If you often have larger bit sets, you might want, for instance, to use short integers instead of bytes for the variable-length integer encoding, so you potentially waste fewer bits in the encoding itself.
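A hedged MATLAB sketch of that MSB-continuation layout, matching the byte order of the example above (most-significant 7-bit group first; the function name varintEncode is mine):
function bytes = varintEncode(n)
    nbits  = max(1, floor(log2(max(n, 1))) + 1);   % significant bits of n
    ngroup = ceil(nbits / 7);                      % 7 payload bits per byte
    bits   = bitget(n, 7*ngroup:-1:1);             % zero-padded payload, MSB first
    groups = reshape(bits, 7, []).';               % one row per output byte
    more   = [ones(ngroup-1, 1); 0];               % MSB = 1 on every byte but the last
    bytes  = uint8([more, groups] * (2.^(7:-1:0)).');
end
% varintEncode(1133344) reproduces the example bytes: 0xC5 0x96 0x20 (11000101 10010110 00100000)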
EDIT I don't think there exists a perfect way to achieve all you want, all at once. You can't create information out of nothing, and if you need variable-length integers, you obviously have to encode the integer length too. There is necessarily a tradeoff between space and information, but there is also minimal information that you can't cut out to use less space. No system where factors grow at different rates will ever scale perfectly. It's like trying to fit a straight line over a logarithmic curve. You can't do that. (And besides, that's pretty much exactly what you're trying to do here.)
You cannot encode the length of the variable-length integer outside of the integer and get unlimited-size variable integers at the same time, because that would require the length itself to be variable-length, and whatever algorithm you choose, it seems common sense to me that you'll be better off with just one variable-length integer instead of two or more of them.
So here is my other idea: in the integer "header", write one 1 for each byte the variable-length integer requires from there. The first 0 denotes the end of the "header" and the beginning of the integer itself.
I'm trying to grasp the exact equation to determine how many bits are required to store a given integer for the two ways I gave, but my logarithms are rusty, so I'll plot it down and edit this message later to include the results.
EDIT 2
Here are the equations:
Solution one, 7 bits per encoding bit (one full byte at a time):
y = 8 * ceil(log(x) / (7 * log(2)))
Solution one, 3 bits per encoding bit (one nibble at a time):
y = 4 * ceil(log(x) / (3 * log(2)))
Solution two, 1 byte per encoding bit plus separator:
y = 9 * ceil(log(x) / (8 * log(2))) + 1
Solution two, 1 nibble per encoding bit plus separator:
y = 5 * ceil(log(x) / (4 * log(2))) + 1
I suggest you take the time to plot them (best viewed with a logarithmic-linear coordinates system) to get the ideal solution for your case, because there is no perfect solution. In my opinion, the first solution has the most stable results.
I guess for "the most compact way possible" you'll want some compression, but Huffman Coding may not be the way to go as I think it works best with alphabets that have static per-symbol frequencies.
Check out Arithmetic Coding - it operates on bits and can adapt to dynamic input probabilities. I also see that there is a BSD-licensed Java library that'll do it for you, which seems to expect single bits as input.
I suppose for maximum compression you could concatenate each inner list (prefixed with its length) and run the coding algorithm again over the whole lot.
I don't see how encoding an arbitrary set of bits differs from compressing/encoding any other form of data. Note that you only impose a loose restriction on the bits you're encoding: namely, that they are lists of lists of bits. With this small restriction, this list of bits becomes just data, arbitrary data, and that's what "normal" compression algorithms compress.
Of course, most compression algorithms work on the assumption that the input is repeated in some way in the future (or in the past), as in the LZxx family of compressors, or has a given frequency distribution for symbols.
Given your prerequisites and how compression algorithms work, I would advise doing the following:
Pack the bits of each list using the fewest possible bytes, using bytes as bitfields, encoding the length, etc.
Try huffman, arithmetic, LZxx, etc on the resulting stream of bytes.
One can argue that this is the pretty obvious and easiest way of doing this, and that it won't work because your sequence of bits has no known pattern. But the fact is that this is the best you can do in any scenario.
UNLESS you know something about your data, or some transformation on those lists that makes them exhibit a pattern of some kind. Take for example the coding of the DCT coefficients in JPEG encoding. The way of listing those coefficients (diagonally and in zig-zag) is designed to favor a pattern in the output of the different coefficients of the transformation. This way, traditional compression can be applied to the resulting data. If you know something about those lists of bits that allows you to re-arrange them in a more compressible way (a way that shows more structure), then you'll get compression.
I have a sneaking suspicion that you simply can't encode a truly random set of bits into a more compact form in the worst case. Any kind of RLE is going to inflate the set on just the wrong input even though it'll do well in the average and best cases. Any kind of periodic or content specific approximation is going to lose data.
As one of the other posters stated, you've got to know SOMETHING about the dataset to represent it in a more compact form and / or you've got to accept some loss to get it into a predictable form that can be more compactly expressed.
In my mind, this is an information-theoretic problem with the constraint of infinite information and zero loss. You can't represent the information in a different way and you can't approximate it as something more easily represented. Ergo, you need at least as much space as you have information and no less.
http://en.wikipedia.org/wiki/Information_theory
You could always cheat, I suppose, and manipulate the hardware to encode a discrete range of values on the media to tease out a few more "bits per bit" (think multiplexing). You'd spend more time encoding it and reading it though.
Practically, you could always try the "jiggle" effect where you encode the data multiple times in multiple ways (try interpreting as audio, video, 3d, periodic, sequential, key based, diffs, etc...) and in multiple page sizes and pick the best. You'd be pretty much guaranteed to have the best REASONABLE compression and your worst case would be no worse than your original data set.
Dunno if that would get you the theoretical best though.
Theoretical Limits
This is a difficult question to answer without knowing more about the data you intend to compress; the answer to your question could be different with different domains.
For example, from the Limitations section of the Wikipedia article on Lossless Compression:
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any (lossless) data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm. This is easily proven with elementary mathematics using a counting argument. ...
Basically, since it's theoretically impossible to compress all possible input data losslessly, it's not even possible to answer your question effectively.
Practical compromise
Just use Huffman, DEFLATE, 7z, or some ZIP-like off-the-shelf compression algorithm and encode the bits as variable-length byte arrays (or lists, or vectors, or whatever they are called in Java or whatever language you like). Of course, reading the bits back out may require a bit of decompression, but that could be done behind the scenes. You can make a class which hides the internal implementation methods and returns a list or array of booleans for some range of indices, despite the fact that the data is stored internally in packed byte arrays. Updating the boolean at a given index or indices may be a problem, but is by no means impossible.
List-of-Lists-of-Ints-Encoding:
When you come to the beginning of a list, write down the bits for ASCII '['. Then proceed into the list.
When you come to any arbitrary binary number, write down the bits corresponding to the decimal representation of the number in ASCII. For example, for the number 100, write 0x31 0x30 0x30. Then write the bits corresponding to ASCII ','.
When you come to the end of a list, write down the bits for ']'. Then write ASCII ','.
This encoding will encode any arbitrarily-deep nesting of arbitrary-length lists of unbounded integers. If this encoding is not compact enough, follow it up with gzip to eliminate the redundancies in ASCII bit coding.
You could convert each List into a BitSet and then serialize the BitSet-s.
Well, first off you will want to pack those booleans together so that you are getting eight of them to a byte. C++'s standard bitset was designed for this purpose. You should probably be using it natively instead of vector, if you can.
After that, you could in theory compress it when you save to get the size even smaller. I'd advise against this unless your back is really up against the wall.
I say in theory because it depends a lot on your data. Without knowing anything about your data, I really can't say any more on this, as some algorithms work better than others on certain kinds of data. In fact, simple information theory tells us that in some cases any compression algorithm will produce output that takes up more space than you started with.
If your bitset is rather sparse (not a lot of 0's, or not a lot of 1's), or is streaky (long runs of the same value), then it is possible you could get big gains with compression. In almost every other circumstance it won't be worth the trouble. Even in that circumstance it may not be. Remember that any code you add will need to be debugged and maintained.
As you point out, there is no reason to store your boolean values using any more space than a single bit. If you combine that with some basic construct, such as each row begins with an integer coding the number of bits in that row, you'll be able to store a 2D table of any size where each entry in the row is a single bit.
However, this is not enough. A string of arbitrary 1's and 0's will look rather random, and any compression algorithm breaks down as the randomness of your data increases - so I would recommend a process like Burrows-Wheeler Block sorting to greatly increase the amount of repeated "words" or "blocks" in your data. Once that's complete a simple Huffman code or Lempel-Ziv algorithm should be able to compress your file quite nicely.
To allow the above method to work for unsigned integers, you would compress the integers using Delta Codes, then perform the block sorting and compression (a standard practice in Information Retrieval postings lists).
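For the delta-code step mentioned above, a minimal sketch (again in MATLAB, with made-up numbers) of gap-encoding a sorted list of unsigned integers:
postings = [3 17 24 90 91 100];            % sorted unsigned integers
gaps     = [postings(1), diff(postings)];  % store only the gaps: [3 14 7 66 1 9]
restored = cumsum(gaps);                   % recovers the original list exactly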
If I understood the question correctly, the bits are random, and we have a random-length list of independently random-length lists. Since there is nothing special about bytes here, I will discuss this as a bit stream. Since files actually contain bytes, you will need to pack eight bits into each byte and leave the remaining 0..7 bits of the last byte unused.
The most efficient way of storing the boolean values is as-is. Just dump them into the bitstream as a simple array.
In the beginning of the bitstream you need to encode the array lengths. There are many ways to do it and you can save a few bits by choosing the most optimal for your arrays. For this you will probably want to use Huffman coding with a fixed codebook so that commonly used and small values get the shortest sequences. If a list is very long, you probably won't care so much that its size gets encoded in a longer form.
A precise answer as to what the codebook (and thus the huffman code) is going to be cannot be given without more information about the expected list lengths.
If all the inner lists are of the same size (i.e. you have a 2D array), you only need the two dimensions, of course.
Deserializing: decode the lengths and allocate the structures, then read the bits one by one, assigning them to the structure in order.
@zneak's answer (he beat me to it), but use Huffman-encoded integers, especially if some lengths are more likely than others.
Just to be self-contained: encode the number of lists as a Huffman-encoded integer, then for each list, encode its bit length as a Huffman-encoded integer. The bits for each list follow with no intervening wasted bits.
If the order of the lists doesn't matter, sorting them by length would reduce the space needed, only the incremental length increase of each subsequent list need be encoded.
List-of-List-of-Ints-binary:
Start traversing the input list
For each sublist:
Output 0xFF 0xFE
For each item in the sublist:
Output the item as a stream of bits, LSB first.
If the pattern 0xFF appears anywhere in the stream,
replace it with 0xFF 0xFD in the output.
Output 0xFF 0xFC
Decoding:
If the stream has ended then end any previous list and end reading.
Read bits from input stream. If pattern 0xFF is encountered, read the next 8 bits.
If they are 0xFE, end any previous list and begin a new one.
If they are 0xFD, assume that the value 0xFF has been read (discard the 0xFD)
If they are 0xFC, end any current integer at the bit before the pattern, and begin reading a new one at the bit after the 0xFC.
Otherwise indicate error.
If I understand correctly our data structure is ( 1 2 ( 33483 7 ) 373404 9 ( 337652222 37333788 ) )
Format like so:
byte 255 - escape code
byte 254 - begin block
byte 253 - list separator
byte 252 - end block
So we have:
struct bigdat {
    int nmem;   /* Won't overflow -- out of memory first */
    int kind;   /* 0 = number, 1 = recurse */
    void *data; /* points to array of bytes for kind 0, array of bigdat for kind 1 */
};

void serialize(FILE *f, struct bigdat *op) {
    int i;
    if (op->kind == 0) {
        unsigned char *num = (unsigned char *)op->data;
        for (i = 0; i < op->nmem; i++) {
            if (num[i] >= 252)
                fputc(255, f);           /* escape code */
            fputc(num[i], f);
        }
    } else {
        struct bigdat *blocks = (struct bigdat *)op->data;
        fputc(254, f);                   /* begin block */
        for (i = 0; i < op->nmem; i++) {
            if (i) fputc(253, f);        /* list separator */
            serialize(f, &blocks[i]);
        }
        fputc(252, f);                   /* end block */
    }
}
There is a law about numeric digit distribution that says that, for sets of sets of arbitrary unsigned integers, the higher the byte value the less often it occurs, so the special codes are placed at the high end.
Not encoding a length in front of each item takes up far less room, but makes deserializing a more difficult exercise.
This question has a certain induction feel to it. You want a function: (bool list list) -> (bool list) such that an inverse function (bool list) -> (bool list list) generates the same original structure, and the length of the encoded bool list is minimal, without imposing restrictions on the input structure. Since this question is so abstract, I'm thinking these lists could be mind bogglingly large - 10^50 maybe, or 10^2000, or they can be very small, like 10^0. Also, there can be a large number of lists, again 10^50 or just 1. So the algorithm needs to adapt to these widely different inputs.
I'm thinking that we can encode the length of each list as a (bool list), and add one extra bool to indicate whether the next sequence is another (now larger) length or the real bitstream.
let encode2d(list1d::Bs) = encode1d(length(list1d), true) # list1d # encode2d(Bs)
encode2d(nil) = nil
let encode1d(1, nextIsValue) = true :: nextIsValue :: []
encode1d(len, nextIsValue) =
let bitList = toBoolList(len) # [nextIsValue] in
encode1d(length(bitList), false) # bitList
let decode2d(bits) =
let (list1d, rest) = decode1d(bits, 1) in
list1d :: decode2d(rest)
let decode1d(bits, n) =
let length = fromBoolList(take(n, bits)) in
let nextIsValue :: bits' = skip(n, bits) in
if nextIsValue then bits' else decode1d(bits', length)
assumed library functions
-------------------------
toBoolList : int -> bool list
this function takes an integer and produces the boolean list representation
of the bits. All leading zeroes are removed, except for input '0'
fromBoolList : bool list -> int
the inverse of toBoolList
take : int * a' list -> a' list
returns the first count elements of the list
skip : int * a' list -> a' list
returns the remainder of the list after removing the first count elements
The overhead is per individual bool list. For an empty list, the overhead is 2 extra list elements. For 10^2000 bools, the overhead would be 6645 + 14 + 5 + 4 + 3 + 2 = 6673 extra list elements.