Encode text in a comparable way - hash

Suppose we have two text documents and we want to compare them in a way that is not letter by letter. I am looking for some way to encode the text into a hash (if the word hash is proper here) of length N (256 characters, for instance) that allows for comparisons.
For example, let a = 'Text1', b = 'Text 1', c = 'Text 12' and d = 'John'. I want a kind of hashing (here of length 6) like this:
xyztrg
xyutrg
xyvtrg
abcdef

I think what you need is locality-sensitive hashing: https://en.wikipedia.org/wiki/Locality-sensitive_hashing
This technique hashes similar input items into the same "buckets" with high probability.
Depending on which programming language you are using, there are a lot of implementations available.
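For a concrete feel, here is a minimal MinHash sketch in Java (MinHash is one locality-sensitive scheme for set similarity); the character-trigram shingling and the crude per-slot mixing function are assumptions made for illustration, not part of any standard API:

import java.util.*;

public class MinHashSketch {
    // Signature of n minima over character trigrams; similar texts
    // agree in many slots, unrelated texts in few.
    static long[] signature(String text, int n) {
        Set<String> shingles = new HashSet<>();
        for (int i = 0; i + 3 <= text.length(); i++)
            shingles.add(text.substring(i, i + 3)); // 3-gram shingling (assumption)
        long[] sig = new long[n];
        Arrays.fill(sig, Long.MAX_VALUE);
        for (String s : shingles) {
            long h = s.hashCode() * 0x9E3779B97F4A7C15L;
            for (int i = 0; i < n; i++) {
                long mixed = Long.rotateLeft(h, i) * (2 * i + 1); // i-th hash function (crude)
                sig[i] = Math.min(sig[i], mixed);
            }
        }
        return sig;
    }

    // Estimated Jaccard similarity = fraction of agreeing slots.
    static double similarity(long[] a, long[] b) {
        int eq = 0;
        for (int i = 0; i < a.length; i++) if (a[i] == b[i]) eq++;
        return (double) eq / a.length;
    }
}

With signatures in hand, LSH proper would split each signature into bands and bucket by band; two texts landing in the same bucket for any band are candidate near-duplicates.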

Related

values for MAD compression method?

I am stuck trying to implement the perfect hashing technique from Cormen, with universal hashing at each level. Specifically, I think my problem is with the compression method.
I am working on strings, I think short strings (between 8 and 150 characters), and for that I have my set of hash functions: Murmur3/2, xxHash, FNV-1, CityHash and SpookyHash, using 64-bit keys (for hash functions like SpookyHash I take the lower 64 bits). The problem is that there are collisions with only three unique strings (two of 10 characters and one of 11 characters) in 9 buckets.
I am using Cormen's hash compression method for that:
h_ab(k) = ((a * k + b) mod p) mod m
with a = 3, p = 4294967291 (the largest 32-bit prime), b = 5 and m = 9 (because m_j should be the square of n_j). As k I am using the hash value returned by the hash function (like Murmur).
If, for example, I am using a hash function like Murmur2 (64-bit version), should p be the largest 64-bit prime number? That way I would be covering all possible hashes that Murmur could return; is that right?
Which other hash compression techniques (apart from division) exist, and which do you recommend?
Any reference, hint, book, paper or help is very welcome.
Sorry for the silly question, I am pretty new to hash functions and hash tables.
Thanks in advance.
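For what it's worth, here is a hedged Java sketch of that compression step. Two details from CLRS matter: p must exceed every possible key (so for full 64-bit hashes it has to be a prime above 2^64, computed below at run time), and a and b must be drawn at random per table; fixing a = 3 and b = 5 forfeits the universality guarantee:

import java.math.BigInteger;
import java.util.Random;

public class Compression {
    private final BigInteger p, a, b, m;

    Compression(int buckets, Random rnd) {
        // p must exceed every possible key; for 64-bit hash values take
        // the first (probable) prime above 2^64.
        p = BigInteger.TWO.pow(64).nextProbablePrime();
        // a in [1, p-1] and b in [0, p-1], chosen at random per table.
        a = new BigInteger(p.bitLength(), rnd).mod(p.subtract(BigInteger.ONE)).add(BigInteger.ONE);
        b = new BigInteger(p.bitLength(), rnd).mod(p);
        m = BigInteger.valueOf(buckets);
    }

    // h_ab(k) = ((a * k + b) mod p) mod m, with k the 64-bit hash
    // (e.g. from Murmur) treated as unsigned.
    int compress(long hash64) {
        BigInteger k = new BigInteger(Long.toUnsignedString(hash64));
        return a.multiply(k).add(b).mod(p).mod(m).intValue();
    }
}

With random a and b, any fixed pair of distinct keys collides with probability at most 1/m, which is exactly what the two-level perfect-hashing construction relies on.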

how to get the reverse (not complement or inverse) of a binary number

I am implementing the Cooley-Tukey FFT (radix-2 DIF/DIT) algorithm in MATLAB. For the bit reversing I want to take the reverse of a binary number, so can anyone suggest how I can get the reverse of a binary number (like 100111 -> 111001)? Anyone who has worked on an FFT implementation could also help me with the algorithm.
Topic: How to do bit reversal in Matlab?
If you're using double precision floating point ('double') numbers which are integers, you can do this:
dr = bin2dec(fliplr(dec2bin(d,n))); % Bits in dr are in reverse order
where n is the number of bits to be reversed and where 0 <= d < 2^n. You will experience no precision problems at all as long as the integers are no more than 52 bits long.
And
Re: How to do bit reversal in Matlab?
How large will the numbers be that you need to reverse? May I ask what the purpose of it is? Maybe there is a more efficient way to solve the whole problem. If the numbers are large you can just store the bits as a string. To reverse it, just read the string backwards! Or use fliplr().
(There may be better places to ask).
If it were VHDL I'd suggest an alias with 'REVERSE'RANGE.
Taken from the help section:
Y = swapbytes(X) reverses the byte ordering of each element in array X, converting little-endian values to big-endian (and vice versa). The input array must contain all full, noncomplex, numeric elements.
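Outside MATLAB, the same bit reversal is only a few lines in most languages; here is a minimal Java sketch, where n is the number of significant bits, mirroring the bin2dec/fliplr approach above:

// Reverse the lowest n bits of x, e.g. reverseBits(0b100111, 6) == 0b111001.
static int reverseBits(int x, int n) {
    int r = 0;
    for (int i = 0; i < n; i++) {
        r = (r << 1) | (x & 1); // peel the low bit of x, push it onto r
        x >>>= 1;
    }
    return r;
}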

how to find all the possible longest common subsequence from the same position

I am trying to find all the possible longest common subsequences drawn from the same positions of multiple fixed-length strings (there are 700 strings in total, each string has 25 characters). A longest common subsequence must contain at least 3 characters and belong to at least 3 strings. So if I have:
String test1 = "abcdeug";
String test2 = "abxdopq";
String test3 = "abydnpq";
String test4 = "hzsdwpq";
I need the answer to be:
String[] Answer = ["abd", "dpq"];
My one problem is that this needs to be as fast as possible. I am trying to find the answer with a suffix tree, but the suffix tree method gives ["ab","pq"]: a suffix tree can only find continuous substrings across multiple strings. The common longest-common-subsequence algorithm cannot solve this problem either.
Does anyone have any idea on how to solve this with low time cost?
Thanks
I suggest you cast this into a well known computational problem before you try to use any algorithm that sounds like it might do what you want.
Here is my suggestion: convert this into a graph problem. For each position in the string you create a set of nodes, one for each unique letter at that position amongst all the strings in your collection (so up to 700 nodes if all 700 strings differ in the same position). Once you have created all the nodes for each position, you go through your set of strings looking at whether two positions share the same pair of letters in at least 3 strings. In your example we would look first at positions 1 and 2 and see that three strings contain "a" in position 1 and "b" in position 2, so we add a directed edge between the node "a" in the first set of nodes and the node "b" in the second set. Continue doing this for all pairs of positions and all combinations of letters in those two positions, until you have added all necessary links.
Once you have your final graph, you must look for the longest path; I recommend the Wikipedia article here: Longest Path. In our case we will have a directed acyclic graph, so you can solve it in linear time! The preprocessing should be quadratic in the number of string positions, since I imagine your alphabet is of fixed size.
P.S: You sent me an email about the biclustering algorithm I am working on; it is not yet published but will be available sometime this year (fingers crossed). Thanks for your interest though :)
You may try to use hashing.
Each string has at most 25 characters, which means it has at most 2^25 subsequences. You take each string and calculate all 2^25 hashes. Then you join all the hashes of all strings and calculate which of them occur at least 3 times.
In order to get the lengths of those subsequences, you need to store not only the hashes but pairs <hash, subsequence_pointer>, where subsequence_pointer determines the subsequence with that hash (the easiest way is to enumerate all hashes of all strings and store the hash number).
Based on this algorithm, the program in the worst case (700 strings, 25 characters each) will run for a few minutes.
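A hedged Java sketch of that enumeration. It keys each subsequence on its position mask as well, which is one way to honor the "same position" requirement, and it stores the subsequences themselves rather than hash-plus-pointer pairs, so as written it is only feasible for strings much shorter than 25 characters:

import java.util.*;

// For each (position mask, letters) pair, count in how many strings it
// occurs; keying on the mask enforces the "same position" rule.
static Map<String, Integer> countAligned(List<String> strings, int minLen) {
    Map<String, Integer> counts = new HashMap<>();
    for (String s : strings) {
        for (int mask = 1; mask < (1 << s.length()); mask++) {
            if (Integer.bitCount(mask) < minLen) continue; // need >= 3 characters
            StringBuilder key = new StringBuilder().append(mask).append(':');
            for (int i = 0; i < s.length(); i++)
                if ((mask & (1 << i)) != 0) key.append(s.charAt(i));
            counts.merge(key.toString(), 1, Integer::sum);
        }
    }
    return counts; // keep keys with count >= 3, then report the longest
}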

Most compact way to encode a sequence of random variable length binary codes?

Let's say you have a List<List<Boolean>> and you want to encode that into binary form in the most compact way possible.
I don't care about read or write performance. I just want to use the minimal amount of space. Also, the example is in Java, but we are not limited to the Java system. The length of each "List" is unbounded, therefore any solution that encodes the length of each list must itself use a variable-length encoding.
Related to this problem is encoding of variable length integers. You can think of each List<Boolean> as a variable length unsigned integer.
Please read the question carefully. We are not limited to the Java system.
EDIT
I don't understand why a lot of the answers talk about compression. I am not trying to do compression per se, but just to encode random sequences of bits compactly. Except each sequence of bits is of a different length, and order needs to be preserved.
You can think of this question in a different way. Let's say you have an arbitrary list of random unsigned integers (unbounded). How do you encode this list in a binary file?
Research
I did some reading and found that what I am really looking for is a Universal code.
Result
I am going to use a variant of Elias Omega Coding, described in the paper "A new recursive universal code of the positive integers".
I now understand how a smaller representation for the smaller integers is a trade-off against the larger integers. By simply choosing a universal code with a "large" representation of the very first integer, you save a lot of space in the long run when you need to encode the arbitrarily large integers.
I am thinking of encoding a bit sequence like this:
head | value
------+------------------
00001 | 0110100111000011
Head has variable length; its end is marked by the first occurrence of a 1. Count the number of zeroes in head: the length of the value field will be 2 ^ zeroes. Since the length of value is known, this encoding can be repeated. Since the size of head is logarithmic in the size of value, the overhead converges to 0% as the size of the encoded value increases.
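A minimal encode/decode sketch of this scheme in Java, on '0'/'1' strings for readability. It assumes a nonempty payload and pads payloads whose length is not a power of two with leading zeroes; that padding is exactly the waste the addendum's length field below removes:

// encode("0110100111000011") -> "00001" + the 16 value bits, as in the table.
static String encode(String bits) {
    int zeros = 32 - Integer.numberOfLeadingZeros(bits.length() - 1); // ceil(log2)
    StringBuilder out = new StringBuilder();
    for (int i = 0; i < zeros; i++) out.append('0');
    out.append('1');                                    // head: zeroes, then the 1 marker
    for (int i = bits.length(); i < (1 << zeros); i++)
        out.append('0');                                // front padding (assumption)
    return out.append(bits).toString();
}

static String decode(String stream) {
    int zeros = stream.indexOf('1');                    // zeroes before the first 1
    int len = 1 << zeros;                               // value field is 2^zeroes bits
    return stream.substring(zeros + 1, zeros + 1 + len);
}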
Addendum
If you want to fine tune the length of value more, you can add another field that stores the exact length of value. The length of the length field could be determined by the length of head. Here is an example with 9 bits.
head | length | value
------+--------+-----------
00001 | 1001 | 011011001
I don't know much about Java, so I guess my solution will HAVE to be general :)
1. Compact the lists
Since Booleans are inefficient, each List<Boolean> should be compacted into a List<Byte>; it's easy, just grab them 8 at a time.
The last "byte" may be incomplete, so you need to store how many bits have been encoded of course.
2. Serializing a list of elements
You have 2 ways to proceed: either you encode the number of items of the list, or you use a pattern to mark the end. I would recommend encoding the number of items; the pattern approach requires escaping and it's creepy, plus it's more difficult with packed bits.
To encode the length you can use a variable-length scheme, i.e. the number of bytes necessary to encode a length should be proportional to the length. Here is one I have already used; you indicate how many bytes are used to encode the length itself by a prefix on the first byte:
0... .... > this byte encodes the number of items (7 effective bits)
10.. .... / .... .... > 2 bytes
110. .... / .... .... / .... .... > 3 bytes
It's quite space efficient, and decoding occurs on whole bytes, so not too difficult. One could remark it's very similar to the UTF8 scheme :)
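A hedged Java sketch of this prefix scheme, showing only the first three tiers; extend the pattern for larger lengths, matching the byte counts in section 6 below:

// Leading 1-bits in the first byte = number of continuation bytes:
// 0xxxxxxx                      7 payload bits  (n < 2^7)
// 10xxxxxx xxxxxxxx            14 payload bits  (n < 2^14)
// 110xxxxx xxxxxxxx xxxxxxxx   21 payload bits  (n < 2^21)
static byte[] encodeLength(int n) {
    if (n < (1 << 7))
        return new byte[] { (byte) n };
    if (n < (1 << 14))
        return new byte[] { (byte) (0x80 | (n >>> 8)), (byte) n };
    if (n < (1 << 21))
        return new byte[] { (byte) (0xC0 | (n >>> 16)), (byte) (n >>> 8), (byte) n };
    throw new IllegalArgumentException("add a tier per extra 7 bits");
}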
3. Apply recursively
List< List< Boolean > > becomes [Length Item ... Item] where each Item is itself the representation of a List<Boolean>
4. Zip
I suppose there is a zlib library available for Java, or anything else like deflate or LZW. Pass it your buffer and make sure to specify that you want as much compression as possible, whatever the time it takes.
If there is any repetitive pattern (even ones you did not see) in your representation it should be able to compress it. Don't trust it dumbly though and DO check that the "compressed" form is lighter than the "uncompressed" one, it's not always the case.
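For step 4 in Java specifically, java.util.zip wraps zlib; a minimal sketch at maximum compression (the 4096-byte chunk size is an arbitrary choice, and java.util.zip.Inflater reverses the process):

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

// Deflate a buffer at the highest compression level.
static byte[] deflate(byte[] input) {
    Deflater d = new Deflater(Deflater.BEST_COMPRESSION);
    d.setInput(input);
    d.finish();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    while (!d.finished())
        out.write(buf, 0, d.deflate(buf));
    d.end();
    return out.toByteArray(); // keep the original if this came out larger
}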
5. Examples
Where one notices that keeping track of the edges of the lists is space consuming :)
// Tricky here, we indicate how many bits are used, but they are packed into bytes ;)
List<Boolean> list = [false,false,true,true,false,false,true,true]
encode(list) == [0x08, 0x33] // [00001000, 00110011] (2 bytes)
// Easier: the length actually indicates the number of elements
List<List<Boolean>> super = [list,list]
encode(super) == [0x02, 0x08, 0x33, 0x08, 0x33] // [00000010, ...] (5 bytes)
6. Space consumption
Suppose we have a List<Boolean> of n booleans, the space consumed to encode it is:
booleans = ceil( n / 8 )
To encode the number of bits (n), we need:
length = 1 for 0 <= n < 2^7 ~ 128
length = 2 for 2^7 <= n < 2^14 ~ 16384
length = 3 for 2^14 <= n < 2^21 ~ 2097152
...
length = ceil( log(n) / 7 ) # for n != 0 ;)
Thus to fully encode a list:
bytes =
if n == 0: 1
else : ceil( log(n) / 7 ) + ceil( n / 8 )
7. Small Lists
There is one corner case though: the low end of the spectrum (i.e. almost empty lists).
For n == 1, bytes is evaluated to 2, which may indeed seem wasteful. I would not however try to guess what will happen once the compression kicks in.
You may wish though to pack even more. It's possible if we abandon the idea of preserving whole bytes...
Keep the length encoding as is (on whole bytes), but do not "pad" the List<Boolean>. A one element list becomes 0000 0001 x (9 bits)
Try to 'pack' the length encoding as well
The second point is more difficult; we are effectively down to a double length encoding:
Indicate how many bits encode the length
Actually encode the length in those bits
For example:
0 -> 0 0
1 -> 0 1
2 -> 10 10
3 -> 10 11
4 -> 110 100
5 -> 110 101
8 -> 1110 1000
16 -> 11110 10000 (=> 1 byte and 2 bits)
It works pretty well for very small lists, but quickly degenerate:
# Original scheme (in bytes)
length = ceil( log(n) / 7 )
# New scheme (in bits)
length = 2 * ceil( log(n) )
The breaking point? 8.
Yep, you read it right: it's only better for lists with fewer than 8 elements... and only better by a few bits.
n -> bits spared
[0,1] -> 6
[2,3] -> 4
[4,7] -> 2
[8,15] -> 0 # Turn point
[16,31] -> -2
[32,63] -> -4
[64,127] -> -6
[128,255] -> 0 # Interesting eh ? That's the whole byte effect!
And of course, once the compression kicks in, chances are it won't really matter.
I understand you may appreciate recursive's algorithm, but I would still advise to compute the figures of the actual space consumption or even better to actually test it with archiving applied on real test sets.
8. Recursive / Variable coding
I have read with interest TheDon's answer, and the link he submitted to Elias Omega Coding.
They are sound answers in the theoretical domain. Unfortunately, they are quite impractical. The main issue is that they have extremely interesting asymptotic behaviors, but when do we actually need to encode a gigabyte worth of data? Rarely, if ever.
A recent study of memory usage at work suggested that most containers were used for a dozen items (or a few dozen). Only in some very rare cases do we reach the thousands. Of course, for your particular problem the best way would be to actually examine your own data and see the distribution of values, but from experience I would say you cannot just concentrate on the high end of the spectrum, because your data lies in the low end.
An example of TheDon's algorithm. Say I have a list [0,1,0,1,0,1,0,1]
len('01010101') = 8 -> 1000
len('1000') = 4 -> 100
len('100') = 3 -> 11
len('11') = 2 -> 10
encode('01010101') = '10' '0' '11' '0' '100' '0' '1000' '1' '01010101'
len(encode('01010101')) = 2 + 1 + 2 + 1 + 3 + 1 + 4 + 1 + 8 = 23
Let's make a small table, with various 'thresholds' to stop the recursion. It represents the number of bits of overhead for various ranges of n.
threshold 2 3 4 5 My proposal
-----------------------------------------------
[0,3] -> 3 4 5 6 8
[4,7] -> 10 4 5 6 8
[8,15] -> 15 9 5 6 8
[16,31] -> 16 10 5 6 8
[32,63] -> 17 11 12 6 8
[64,127] -> 18 12 13 14 8
[128,255]-> 19 13 14 15 16
To be fair, I concentrated on the low end, and my proposal is suited for this task. I wanted to underline that it's not so clear-cut though. Especially because near 1 the log function is almost linear, and thus the recursion loses its charm. The threshold helps tremendously, and 3 seems to be a good candidate...
As for Elias omega coding, it's even worse. From the wikipedia article:
17 -> '10 100 10001 0'
That's it, a whopping 11 bits.
Moral: You cannot choose an encoding scheme without considering the data at hand.
So, unless your List<Boolean> have a length in the hundreds, don't bother and stick to my little proposal.
I'd use variable-length integers to encode how many bits there are to read. The MSB would indicate if the next byte is also part of the integer. For instance:
11000101 10010110 00100000
Would actually mean:
10001 01001011 00100000
Since the integer is continued 2 times.
These variable-length integers would tell how many bits there are to read. And there'd be another variable-length int at the beginning of all to tell how many bit sets there are to read.
From there on, supposing you don't want to use compression, the only way I can see to optimize its size is to adapt it to your situation. If you often have larger bit sets, you might want, for instance, to use short integers instead of bytes for the variable-length integer encoding, so that you potentially waste fewer bits in the encoding itself.
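A minimal Java sketch of this scheme; it emits the most significant 7-bit group first, matching the example above (little-endian-group variants such as LEB128 are equally valid):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

// Encode n as 7-bit groups, most significant group first; the MSB of
// each byte is 1 when another byte follows.
static void writeVarint(long n, ByteArrayOutputStream out) {
    int groups = Math.max(1, (64 - Long.numberOfLeadingZeros(n) + 6) / 7);
    for (int g = groups - 1; g >= 0; g--) {
        int payload = (int) ((n >>> (7 * g)) & 0x7F);
        out.write(g > 0 ? (0x80 | payload) : payload); // MSB = "more follows"
    }
}

static long readVarint(ByteArrayInputStream in) {
    long n = 0;
    int b;
    do {
        b = in.read();
        n = (n << 7) | (b & 0x7F);
    } while ((b & 0x80) != 0); // continue while the MSB is set
    return n;
}

For the 21-bit value above, writeVarint produces exactly the three bytes 11000101 10010110 00100000.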
EDIT I don't think there exists a perfect way to achieve all you want, all at once. You can't create information out of nothing, and if you need variable-length integers, you obviously have to encode the integer length too. There is necessarily a tradeoff between space and information, but there is also minimal information that you can't cut out to use less space. No system where factors grow at different rates will ever scale perfectly. It's like trying to fit a straight line over a logarithmic curve. You can't do that. (And besides, that's pretty much exactly what you're trying to do here.)
You cannot encode the length of the variable-length integer outside of the integer and get unlimited-size variable integers at the same time, because that would require the length itself to be variable-length, and whatever algorithm you choose, it seems common sense to me that you'll be better off with just one variable-length integer instead of two or more of them.
So here is my other idea: in the integer "header", write one 1 for each byte the variable-length integer requires from there. The first 0 denotes the end of the "header" and the beginning of the integer itself.
I'm trying to grasp the exact equation to determine how many bits are required to store a given integer for the two ways I gave, but my logarithms are rusty, so I'll plot it down and edit this message later to include the results.
EDIT 2
Here are the equations:
Solution one, 7 bits per encoding bit (one full byte at a time):
y = 8 * ceil(log(x) / (7 * log(2)))
Solution one, 3 bits per encoding bit (one nibble at a time):
y = 4 * ceil(log(x) / (3 * log(2)))
Solution two, 1 byte per encoding bit plus separator:
y = 9 * ceil(log(x) / (8 * log(2))) + 1
Solution two, 1 nibble per encoding bit plus separator:
y = 5 * ceil(log(x) / (4 * log(2))) + 1
I suggest you take the time to plot them (best viewed with a logarithmic-linear coordinates system) to get the ideal solution for your case, because there is no perfect solution. In my opinion, the first solution has the most stable results.
I guess for "the most compact way possible" you'll want some compression, but Huffman Coding may not be the way to go as I think it works best with alphabets that have static per-symbol frequencies.
Check out Arithmetic Coding - it operates on bits and can adapt to dynamic input probabilities. I also see that there is a BSD-licensed Java library that'll do it for you, which seems to expect single bits as input.
I suppose for maximum compression you could concatenate each inner list (prefixed with its length) and run the coding algorithm again over the whole lot.
I don't see how encoding an arbitrary set of bits differs from compressing/encoding any other form of data. Note that you only impose a loose restriction on the bits you're encoding: namely, they are lists of lists of bits. With this small restriction, this list of bits becomes just data, arbitrary data, and that's what "normal" compression algorithms compress.
Of course, most compression algorithms work on the assumption that the input is repeated in some way in the future (or in the past), as in the LZxx family of compressors, or that it has a given frequency distribution for symbols.
Given your prerequisites and how compression algorithms work, I would advice doing the following:
Pack the bits of each list using the fewest possible bytes, using bytes as bitfields, encoding the length, etc.
Try Huffman, arithmetic coding, LZxx, etc. on the resulting stream of bytes.
One can argue that this is the pretty obvious and easiest way of doing this, and that it won't work since your sequences of bits have no known pattern. But the fact is that this is the best you can do in any scenario.
UNLESS you know something about your data, or some transformation on those lists that makes them exhibit a pattern of some kind. Take for example the coding of the DCT coefficients in JPEG encoding. The order of listing those coefficients (diagonally, in zig-zag) is chosen to favor a pattern in the output of the transformation's different coefficients. This way, traditional compression can be applied to the resulting data. If you know something about those lists of bits that allows you to rearrange them in a more compressible way (a way that shows more structure), then you'll get compression.
I have a sneaking suspicion that you simply can't encode a truly random set of bits into a more compact form in the worst case. Any kind of RLE is going to inflate the set on just the wrong input even though it'll do well in the average and best cases. Any kind of periodic or content specific approximation is going to lose data.
As one of the other posters stated, you've got to know SOMETHING about the dataset to represent it in a more compact form and / or you've got to accept some loss to get it into a predictable form that can be more compactly expressed.
In my mind, this is an information-theoretic problem with the constraint of infinite information and zero loss. You can't represent the information in a different way and you can't approximate it as something more easily represented. Ergo, you need at least as much space as you have information and no less.
http://en.wikipedia.org/wiki/Information_theory
You could always cheat, I suppose, and manipulate the hardware to encode a discrete range of values on the media to tease out a few more "bits per bit" (think multiplexing). You'd spend more time encoding it and reading it though.
Practically, you could always try the "jiggle" effect where you encode the data multiple times in multiple ways (try interpreting it as audio, video, 3D, periodic, sequential, key-based, diffs, etc.) and in multiple page sizes, and pick the best. You'd be pretty much guaranteed to have the best REASONABLE compression, and your worst case would be no worse than your original data set.
Dunno if that would get you the theoretical best though.
Theoretical Limits
This is a difficult question to answer without knowing more about the data you intend to compress; the answer to your question could be different with different domains.
For example, from the Limitations section of the Wikipedia article on Lossless Compression:
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any (lossless) data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm. This is easily proven with elementary mathematics using a counting argument. ...
Basically, since it's theoretically impossible to compress all possible input data losslessly, it's not even possible to answer your question effectively.
Practical compromise
Just use Huffman, DEFLATE, 7z, or some ZIP-like off-the-shelf compression algorithm and encode the bits as variable-length byte arrays (or lists, or vectors, or whatever they are called in Java or whatever language you like). Of course, reading the bits back out may require a bit of decompression, but that can be done behind the scenes. You can make a class which hides the internal implementation methods and returns a list or array of booleans over some range of indices, despite the fact that the data is stored internally in packed byte arrays. Updating the boolean at a given index or indices may be a problem, but it is by no means impossible.
List-of-Lists-of-Ints-Encoding:
When you come to the beginning of a list, write down the bits for ASCII '['. Then proceed into the list.
When you come to any arbitrary binary number, write down the bits corresponding to the decimal representation of the number in ASCII. For example, for the number 100, write 0x31 0x30 0x30. Then write the bits corresponding to ASCII ','.
When you come to the end of a list, write down the bits for ']'. Then write ASCII ','.
This encoding will encode any arbitrarily-deep nesting of arbitrary-length lists of unbounded integers. If this encoding is not compact enough, follow it up with gzip to eliminate the redundancies in ASCII bit coding.
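A literal Java sketch of this encoding, treating each inner list as the unbounded integer it represents (two-level case shown; for [[1,2],[3]] it produces "[[1,2,],[3,],],"):

import java.math.BigInteger;
import java.util.List;

// e.g. encode([[1,2],[3]]) -> "[[1,2,],[3,],],"
static String encode(List<List<BigInteger>> lists) {
    StringBuilder out = new StringBuilder("[");
    for (List<BigInteger> inner : lists) {
        out.append('[');
        for (BigInteger n : inner) out.append(n).append(','); // decimal ASCII + ','
        out.append("],");                                     // ']' then ','
    }
    return out.append("],").toString(); // close the outer list the same way
}

Running the result through gzip, as suggested, squeezes out most of the ASCII redundancy.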
You could convert each List into a BitSet and then serialize the BitSet-s.
Well, first off you will want to pack those booleans together so that you are getting eight of them to a byte. C++'s standard bitset was designed for this purpose. You should probably be using it natively instead of vector<bool>, if you can.
After that, you could in theory compress it when you save to get the size even smaller. I'd advise against this unless your back is really up against the wall.
I say in theory because it depends a lot on your data. Without knowing anything about your data, I really can't say any more on this, as some algorithms work better than others on certain kinds of data. In fact, simple information theory tells us that in some cases any compression algorithm will produce output that takes up more space than you started with.
If your bitset is rather sparse (not a lot of 0's, or not a lot of 1's), or is streaky (long runs of the same value), then it is possible you could get big gains with compression. In almost every other circumstance it won't be worth the trouble. Even in that circumstance it may not be. Remember that any code you add will need to be debugged and maintained.
As you point out, there is no reason to store your boolean values using any more space than a single bit. If you combine that with some basic construct, such as each row begins with an integer coding the number of bits in that row, you'll be able to store a 2D table of any size where each entry in the row is a single bit.
However, this is not enough. A string of arbitrary 1's and 0's will look rather random, and any compression algorithm breaks down as the randomness of your data increases - so I would recommend a process like Burrows-Wheeler Block sorting to greatly increase the amount of repeated "words" or "blocks" in your data. Once that's complete a simple Huffman code or Lempel-Ziv algorithm should be able to compress your file quite nicely.
To allow the above method to work for unsigned integers, you would compress the integers using Delta Codes, then perform the block sorting and compression (a standard practice in Information Retrieval postings lists).
If I understood the question correctly, the bits are random, and we have a random-length list of independently random-length lists. Since nothing here deals with bytes, I will discuss this as a bit stream. Since files actually contain bytes, you will need to pack eight bits into each byte and leave the last 0..7 bits of the final byte unused.
The most efficient way of storing the boolean values is as-is. Just dump them into the bitstream as a simple array.
At the beginning of the bitstream you need to encode the array lengths. There are many ways to do it, and you can save a few bits by choosing the one most optimal for your arrays. For this you will probably want to use Huffman coding with a fixed codebook, so that commonly used and small values get the shortest sequences. If the list is very long, you probably won't care so much if its size gets encoded in a longer form.
A precise answer as to what the codebook (and thus the Huffman code) is going to be cannot be given without more information about the expected list lengths.
If all the inner lists are of the same size (i.e. you have a 2D array), you only need the two dimensions, of course.
Deserializing: decode the lengths and allocate the structures, then read the bits one by one, assigning them to the structure in order.
Same as zneak's answer (he beat me to it), but use Huffman-encoded integers, especially if some lengths are more likely than others.
Just to be self-contained: encode the number of lists as a Huffman-encoded integer, then for each list encode its bit length as a Huffman-encoded integer. The bits for each list follow with no intervening wasted bits.
If the order of the lists doesn't matter, sorting them by length would reduce the space needed, only the incremental length increase of each subsequent list need be encoded.
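A small sketch of that last idea (assuming order really is irrelevant): sort the lengths and keep only the increments, which are small nonnegative numbers that any of the integer coders above handles cheaply:

import java.util.Arrays;

// Sort the list lengths and store only the increase at each step.
static int[] incrementalLengths(int[] lengths) {
    int[] sorted = lengths.clone();
    Arrays.sort(sorted);
    int[] deltas = new int[sorted.length];
    int prev = 0;
    for (int i = 0; i < sorted.length; i++) {
        deltas[i] = sorted[i] - prev; // nonnegative, usually tiny
        prev = sorted[i];
    }
    return deltas; // feed to the Huffman/varint integer coder of choice
}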
List-of-List-of-Ints-binary:
Start traversing the input list
For each sublist:
Output 0xFF 0xFE
For each item in the sublist:
Output the item as a stream of bits, LSB first.
If the pattern 0xFF appears anywhere in the stream,
replace it with 0xFF 0xFD in the output.
Output 0xFF 0xFC
Decoding:
If the stream has ended then end any previous list and end reading.
Read bits from input stream. If pattern 0xFF is encountered, read the next 8 bits.
If they are 0xFE, end any previous list and begin a new one.
If they are 0xFD, assume that the value 0xFF has been read (discard the 0xFD)
If they are 0xFC, end any current integer at the bit before the pattern, and begin reading a new one at the bit after the 0xFC.
Otherwise indicate error.
If I understand correctly our data structure is ( 1 2 ( 33483 7 ) 373404 9 ( 337652222 37333788 ) )
Format like so:
byte 255 - escape code
byte 254 - begin block
byte 253 - list separator
byte 252 - end block
So we have:
#include <stdio.h>

struct bigdat {
    int nmem;   /* Won't overflow -- out of memory first */
    int kind;   /* 0 = number, 1 = recurse */
    void *data; /* array of bytes for kind 0, array of bigdat for kind 1 */
};

int serialize(FILE *f, struct bigdat *op) {
    int i;
    if (op->kind == 0) {        /* number: write bytes, escaping values >= 252 */
        unsigned char *num = (unsigned char *)op->data;
        for (i = 0; i < op->nmem; i++) {
            if (num[i] >= 252)
                fputc(255, f);  /* escape code */
            fputc(num[i], f);
        }
    } else {                    /* list: begin block, items separated by 253 */
        struct bigdat *blocks = (struct bigdat *)op->data;
        fputc(254, f);
        for (i = 0; i < op->nmem; i++) {
            if (i) fputc(253, f);
            serialize(f, &blocks[i]);
        }
        fputc(252, f);
    }
    return 0;
}
There is a law about numeric digit distribution that says, for sets of sets of arbitrary unsigned integers, the higher the byte value, the less often it occurs, so the special codes are placed at the top of the range.
Not encoding a length in front of each item takes up far less room, but makes deserializing a difficult exercise.
This question has a certain induction feel to it. You want a function (bool list list) -> (bool list) such that an inverse function (bool list) -> (bool list list) regenerates the original structure, and the length of the encoded bool list is minimal, without imposing restrictions on the input structure. Since this question is so abstract, I'm thinking these lists could be mind-bogglingly large - 10^50 maybe, or 10^2000 - or they can be very small, like 10^0. Also, there can be a large number of lists, again 10^50, or just 1. So the algorithm needs to adapt to these widely different inputs.
I'm thinking that we can encode the length of each list as a (bool list), and add one extra bool to indicate whether the next sequence is another (now larger) length or the real bitstream.
let encode2d(list1d::Bs) = encode1d(length(list1d), true) # list1d # encode2d(Bs)
encode2d(nil) = nil
let encode1d(1, nextIsValue) = true :: nextIsValue :: []
encode1d(len, nextIsValue) =
let bitList = toBoolList(len) # [nextIsValue] in
encode1d(length(bitList), false) # bitList
let decode2d(bits) =
let (list1d, rest) = decode1d(bits, 1) in
list1d :: decode2d(rest)
let decode1d(bits, n) =
let length = fromBoolList(take(n, bits)) in
let nextIsValue :: bits' = skip(n, bits) in
if nextIsValue then bits' else decode1d(bits', length)
assumed library functions
-------------------------
toBoolList : int -> bool list
this function takes an integer and produces the boolean list representation
of the bits. All leading zeroes are removed, except for input '0'
fromBoolList : bool list -> int
the inverse of toBoolList
take : int * a' list -> a' list
returns the first count elements of the list
skip : int * a' list -> a' list
returns the remainder of the list after removing the first count elements
The overhead is per individual bool list. For an empty list, the overhead is 2 extra list elements. For 10^2000 bools, the overhead would be 6645 + 14 + 5 + 4 + 3 + 2 = 6673 extra list elements.

Where can I find a cheat sheet for hungarian notation?

I'm working on a legacy COM C++ project that makes use of Systems Hungarian notation. Because it's maintenance of legacy code, the convention is to code in the original style it was written in - our newer code isn't written this way. So I'm not interested in changing that standard or having a discussion of our past sins =)
Is there an online cheat sheet available out there for Systems Hungarian notation?
The best I can find thus far is a pre stack-overflow discussion post, but it doesn't quite have everything I've needed in the past. Does anyone have any other links?
(making this community wiki in the hope this becomes a self populating list)
If this is for a legacy COM project, you'll probably want to follow Microsoft's Hungarian Notation specifications, which are documented on MSDN.
Note that this is Apps Hungarian, i.e. the "good" kind of Hungarian Notation. Systems Hungarian is the "bad" kind, where names are prefixed with their compiler types, e.g. i for int.
Tables from the MSDN article
Table 1. Some examples for procedure names
Name Description
InitSy Takes an sy as its argument and initializes it.
OpenFn fn is the argument. The procedure will "open" the fn. No value is returned.
FcFromBnRn Returns the fc corresponding to the bn,rn pair given. (The names cannot tell us what the types sy, fn, fc, and so on, are.)
The following is a list of standard type constructions. (X and Y stand for arbitrary tags. According to standard punctuation, the actual tags are lowercase.)
Table 2. Standard type constructions
pX Pointer to X.
dX Difference between two instances of type X. X + dX is of type X.
cX Count of instances of type X.
mpXY An array of Ys indexed by X. Read as "map from X to Y."
rgX An array of Xs. Read as "range X." The indices of the array are called:
iX index of the array rgX.
dnX (rare) An array indexed by type X. The elements of the array are called:
eX (rare) Element of the array dnX.
grpX A group of Xs stored one after another in storage. Used when the X elements are of variable size and standard array indexing would not apply. Elements of the group must be referenced by means other than direct indexing. A storage allocation zone, for example, is a grp of blocks.
bX Relative offset to a type X. This is used for field displacements in a data structure with variable size fields. The offset may be given in terms of bytes or words, depending on the base pointer from which the offset is measured.
cbX Size of instances of X in bytes.
cwX Size of instances of X in words.
The following are standard qualifiers. (The letter X stands for any type tag. Actual type tags are in lowercase.)
Table 3. Standard qualifiers
XFirst The first element in an ordered set (interval) of X values.
XLast The last element in an ordered set of X values. XLast is the upper limit of a closed interval, hence the loop continuation condition should be: X <= XLast.
XLim The strict upper limit of an ordered set of X values. Loop continuation should be: X < XLim.
XMax Strict upper limit for all X values (excepting Max, Mac, and Nil) for all other X: X < XMax. If X values start with X=0, XMax is equal to the number of different X values. The allocated length of a dnx vector, for example, will be typically XMax.
XMac The current (as opposed to constant or allocated) upper limit for all X values. If X values start with 0, XMac is the current number of X values. To iterate through a dnx array, for example:
for x=0 step 1 to xMac-1 do ... dnx[x] ...
or
for ix=0 step 1 to ixMac-1 do ... rgx[ix] ...
XNil A distinguished Nil value of type X. The value may or may not be 0 or -1.
XT Temporary X. An easy way to qualify the second quantity of a given type in a scope.
Table 4. Some common primitive types
f Flag (Boolean, logical). If qualifier is used, it should describe the true state of the flag. Exception: the constants fTrue and fFalse.
w Word with arbitrary contents.
ch Character, usually in ASCII text.
b Byte, not necessarily holding a coded character, more akin to w. Distinguished from the b constructor by the capital letter of the qualifier immediately following.
sz Pointer to first character of a zero terminated string.
st Pointer to a string. First byte is the count of characters cch.
h pp (in heap), i.e. a handle.
Here's one for 'Systems Hungarian', which in my experience was the more commonly used (and less useful):
http://web.mst.edu/~cpp/common/hungarian.html
But how universally followed this is, I have no idea.
The other form of Hungarian Notation is "Apps Hungarian", which apparently is Simonyi's original intent (the use of the variable was encoded rather than the type). See http://en.wikipedia.org/wiki/Hungarian_notation for some details.
Because this is a legacy project, your software department manager should have a copy of the style guide for whatever version of Hungarian Notation the original programmers used. (I'm assuming that the original programmers have long since fled to more enlightened workplaces.)
You really should reconsider your use of Hungarian notation. It was originally a patch for the lack of strong typing (and compiler type-checking) in C. Modern compilers enforce type-correctness, making Hungarian notation redundant at best, and erroneous otherwise.
There doesn't seem to be any one exhaustive resource for looking up Hungarian Notation prefixes, probably because a lot of it varied from code base to code base. There, of course, were a lot of very commonly used ones.
The best list I could find was here
The rest cover the commonly used conventions such as this entry
MSDN's entry on Hungarian Notation is here
and a couple of short papers on the subject (overlapping each other perhaps) here and here
Your best bet would be to see how the variables are used, and that (may) help you figure out the definition of the prefixes (though in practice the naming rarely reflected the use of the variable, sadly).
You might be able to piece together some semblance of notation from those various links.
Just to be complete(!), how about Hungarian Object Notation for Visual Basic, from Microsoft Support no less.