LZW TIFF decoding - Scala

When decoding TIFF files with LZW compression, the first 9-bit code in the encoded bitstream should be 256, the clear code.
But when I read it I get 128, which I just can't figure out. I created the file with GDAL.
My code reading the file is:
val res = (for {
  i <- 0 until next
  if bitSet.get(i + index)
} yield (1 << i)).sum
Here index is the position in the encoded bitstream and next is how many bits I should read (starting with 9).
So my question is: why do I read 128 instead of 256? When printing the bitstream input, the first bit that is set to 1 is bit number 8 (index 7).
The file in question is: https://dl.dropboxusercontent.com/u/42266515/aspect_lzw.tif
Thanks!

Thanks for posting the sample image. There's nothing wrong with the image; the first code is 0x100 (256). You must remember that TIFF LZW is encoded in "Motorola" (big-endian) bit order. The first two bytes of the compressed data are 0x80 0x00; in binary, that's 10000000 00000000. The first 9 bits, read in the correct order, are 100000000, which is 256. Gather the bits most-significant-first and you'll be able to decode it correctly. Here is a sample byte stream:
If the data from the file is: 0x80 0x01 0x25 0x43 0x7E
The bits are (laid out in big-endian order)
10000000 00000001 00100101 01000011 01111110
Taking 9-bit codes from this bitstream will get you:
100000000 (256), 000000100 (4), 100101010 (298), ...
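In Scala, a sketch of reading MSB-first codes from a byte array could look like this (readCode is an illustrative helper, not your code):
// Read a `width`-bit LZW code starting at bit `bitPos`, MSB-first (TIFF order).
def readCode(bytes: Array[Byte], bitPos: Int, width: Int): Int = {
  var code = 0
  for (i <- 0 until width) {
    val byteIdx = (bitPos + i) / 8
    val bitIdx  = 7 - ((bitPos + i) % 8)   // the MSB of each byte comes first
    code = (code << 1) | ((bytes(byteIdx) >> bitIdx) & 1)
  }
  code
}

val data = Array(0x80, 0x01, 0x25, 0x43, 0x7E).map(_.toByte)
println(readCode(data, 0, 9))   // 256, the clear code
println(readCode(data, 9, 9))   // 4
println(readCode(data, 18, 9))  // 298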

Related

Is this a bug in the passlib base64 encoding?

I am trying to decode and re-encode a bytestring using passlib's base64 encoding:
from passlib.utils import binary
engine = binary.Base64Engine(binary.HASH64_CHARS)
s2 = engine.encode_bytes(engine.decode_bytes(b"1111111111111111111111w"))
print(s2)
This prints b'1111111111111111111111A' which is of course not what I expected. The last character is different.
Where is my mistake? Is this a bug?
No, it's not a bug.
In all variants of Base64, every encoded character represents just 6 bits, and depending on the number of bytes encoded you can end up with 0, 2 or 4 insignificant bits at the end.
In this case the encoded string 1111111111111111111111w is 23 characters long; that means 23*6 = 138 bits, which decode to 17 bytes (136 bits) plus 2 insignificant bits.
The encoding you use here is not Base64 but Hash64, described in the docs as:
Base64 character map used by a number of hash formats; the ordering is wildly different from the standard base64 character map.
In the character map
HASH64_CHARS = u("./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz")
we find A at index 12 (001100) and w at index 60 (111100).
Now the 'trick' here is that
binary.Base64Engine(binary.HASH64_CHARS) has a default parameter big=False, which means that the encoding is done in little-endian bit order by default.
In your example that means w is emitted as 001111 and A as 001100. During decoding the last two bits are cut off, as they are not needed, as explained above. When you encode again, A is taken as the first character in the character map that can encode 0011 plus two insignificant bits.
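A small sketch of why the round trip lands on A, redoing the bit arithmetic by hand (in Scala, without calling passlib). With little-endian bit order the low-order bits of the final character are emitted first, so only its low 4 bits survive the decode, and w and A share those:
val charmap = "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
val w = charmap.indexOf('w')   // 60 = 111100 in binary
val a = charmap.indexOf('A')   // 12 = 001100 in binary
// 138 bits decoded, 136 kept: the last character loses its 2 high-order bits.
println(w & 0xF)               // 12 -- low 4 bits of 'w'
println(a & 0xF)               // 12 -- same payload, hence 'A' on re-encode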

How can U+203C be represented as (226, 128, 188) in the Swift chapter Strings and Characters?

When I read The Swift Programming Language's Strings and Characters chapter, I didn't understand how U+203C (‼, the double exclamation mark) can be represented by (226, 128, 188) in UTF-8.
How does that happen?
I hope you already know how UTF-8 reserves certain bits to indicate that the Unicode character occupies several bytes. (This website can help).
First, write 0x203C in binary:
0x203C = 10000000111100
So this character takes 14 bits to represent, more than the 11 payload bits a two-byte UTF-8 sequence can carry. Due to the "header bits" in the UTF-8 encoding scheme, it takes 3 bytes to encode (a three-byte sequence carries 4 + 6 + 6 = 16 payload bits):
0x203C = 0010 000000 111100 (zero-padded to 16 bits)
            1st byte 2nd byte 3rd byte
            -------- -------- --------
header      1110     10       10
actual data 0010     000000   111100
--------------------------------------------
full byte   11100010 10000000 10111100
decimal     226      128      188
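A quick sanity check, here in Scala (any language with a UTF-8 codec gives the same bytes):
val bytes = "\u203C".getBytes("UTF-8")
println(bytes.map(_ & 0xFF).mkString(", "))   // 226, 128, 188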

How does word2vec retrieve results from binary files?

from gensim.models.keyedvectors import KeyedVectors
model = KeyedVectors.load_word2vec_format('google_news.bin', binary=True)
print(model['the']) # this prints the 300D vector for the word 'the'
The code loads the google_news binary file into model.
My question is: how does line 3 compute the output from a binary file (since binary files contain 0s and 1s)?
I'm not sure exactly what the question is here, but I assume you're asking how to load the binary into your Python app. You can use gensim, for example, which has built-in tools to decode the binary:
from gensim.models.keyedvectors import KeyedVectors
model = KeyedVectors.load_word2vec_format('google_news.bin', binary=True)
print(model['the']) # this prints the 300D vector for the word 'the'
EDIT
I feel your question is more about binary files in general; this is not really specific to word2vec. Anyway, in a word2vec binary file each line is a pair of a word and its weights, in binary format. First the word is decoded into a string by reading characters until the byte for a space is met. Then the rest is decoded from binary into floats. We know the number of floats because word2vec binary files start with a header, such as "3000000 300", which tells us there are 3M words and each word is a 300-dimensional vector.
A binary file is organized as a series of bytes, each 8 bits. Read more about binary on the wiki page.
The number 0.0056 in decimal becomes, in binary:
00111011 10110111 10000000 00110100
So here 4 bytes make up one float. How do we know this? Because we assume the binary encodes 32-bit floats.
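You can verify that bit pattern yourself, for example in Scala:
println(java.lang.Float.floatToIntBits(0.0056f).toBinaryString)
// 111011101101111000000000110100 -- the pattern above with leading zeros dropped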
What if the binary file represents 64-bit precision floats? Then the decimal 0.0056 in binary becomes:
00111111 01110110 11110000 00000110 10001101 10111000 10111010 11000111
Yes, twice the length, because twice the precision. So when we decode the word2vec file, if the weights are 300-dimensional with 64-bit encoding, each number takes 8 bytes, and a word embedding takes 300*64 = 19,200 binary digits in each line of the file. Get it?
You can google "how do binary digits work"; there are millions of examples.
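For the curious, here is a rough sketch (in Scala, not gensim's actual code) of decoding such a file by hand; the little-endian float order is an assumption that holds for files written on x86 machines:
import java.io.{BufferedInputStream, DataInputStream, FileInputStream}
import java.nio.{ByteBuffer, ByteOrder}

val in = new DataInputStream(new BufferedInputStream(new FileInputStream("google_news.bin")))

// Read characters up to the next space/newline, skipping leading whitespace.
def readToken(): String = {
  val sb = new StringBuilder
  var c = in.read()
  while (c == ' ' || c == '\n') c = in.read()
  while (c != ' ' && c != '\n' && c != -1) { sb.append(c.toChar); c = in.read() }
  sb.toString
}

val vocabSize = readToken().toInt          // e.g. 3000000
val dims      = readToken().toInt          // e.g. 300

val word = readToken()                     // the word, up to the space
val buf  = new Array[Byte](dims * 4)
in.readFully(buf)                          // dims floats, 4 bytes each
val vec = new Array[Float](dims)
ByteBuffer.wrap(buf).order(ByteOrder.LITTLE_ENDIAN).asFloatBuffer().get(vec)
println(s"$word -> ${vec.take(5).mkString(", ")} ...")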

What is the real purpose of Base64 encoding?

Why do we have Base64 encoding? I am a beginner and I really don't understand why you would obfuscate the bytes into something else (unless it is encryption). In one of the books I read that Base64 encoding is useful when binary transmission is not possible, e.g. when we post a form it is encoded. But why do we convert bytes into letters? Couldn't we just convert bytes into string format with a space in between, for example 00000001 00000100? Or simply 0000000100000100 without any space, because bytes always come in groups of 8?
Base64 is a way to encode binary data into an ASCII character set known to pretty much every computer system, in order to transmit the data without loss or modification of the contents itself.
For example, mail systems cannot deal with binary data because they expect ASCII (textual) data. So if you transfer an image or another file through them, it will get corrupted because of the way they handle the data.
Note: Base64 encoding is NOT a way of encrypting, nor a way of compacting data. In fact a Base64-encoded piece of data is 4/3 ≈ 1.333 times bigger than the original. It is only a way to be sure that no data is lost or modified during the transfer.
Base64 is a mechanism for representing and transferring binary data over media that allow only printable characters. It is the most popular of the "base encodings"; the others in common use are Base16 and Base32.
The need for Base64 arose from the need to attach binary content to emails, such as images, videos or arbitrary binary files. Since SMTP [RFC 5321] only allowed 7-bit US-ASCII characters within messages, there was a need to represent those binary octet streams using the seven-bit ASCII characters...
Hope this answers the Question
Base64 is a more or less compact way of transmitting (encoding, in fact, but with goal of transmitting) any kind of binary data.
See http://en.wikipedia.org/wiki/Base64
"The general rule is to choose a set of 64 characters that is both part of a subset common to most encodings, and also printable."
That's a very general purpose, and the common requirement is not to waste more space than needed.
Historically, it's based on the fact that there is a common subset of (almost) all encodings used to store chars in bytes, and that many of the 2^8 possible byte values risk loss or transformation during simple data transfer (for example a copy-paste-emailsend-emailreceive-copy-paste sequence).
(please redirect upvote to Brian's comment, I just make it more complete and hopefully more clear).
For data transmission, data can be textual or non-text (binary), like images, video or arbitrary files.
As we know, during transmission only a stream of textual/printable characters can reliably be sent or received, hence we need a way to encode non-text data such as images, videos and files.
The binary representation of non-text data (image, video, file) is easily obtainable.
That binary representation is then encoded in textual format such that each character takes one out of sixty-four (A-Z, a-z, 0-9, + and /) possible values.
Table 1: The Base 64 Alphabet
Value Encoding Value Encoding Value Encoding Value Encoding
0 A 17 R 34 i 51 z
1 B 18 S 35 j 52 0
2 C 19 T 36 k 53 1
3 D 20 U 37 l 54 2
4 E 21 V 38 m 55 3
5 F 22 W 39 n 56 4
6 G 23 X 40 o 57 5
7 H 24 Y 41 p 58 6
8 I 25 Z 42 q 59 7
9 J 26 a 43 r 60 8
10 K 27 b 44 s 61 9
11 L 28 c 45 t 62 +
12 M 29 d 46 u 63 /
13 N 30 e 47 v
14 O 31 f 48 w (pad) =
15 P 32 g 49 x
16 Q 33 h 50 y
This set of sixty-four characters is called Base64, and encoding a given piece of data into this sixty-four-character alphabet is called Base64 encoding.
Let us take a few examples of ASCII strings encoded to Base64:
1 ==> MQ==
12 ==> MTI=
123 ==> MTIz
1234 ==> MTIzNA==
12345 ==> MTIzNDU=
123456 ==> MTIzNDU2
Here a few points are to be noted:
Base64 output comes in blocks of 4 characters: every 3 input bytes (24 bits) map onto 4 six-bit characters. If the input length is not a multiple of 3 bytes, the final block is padded with =.
= is not part of the Base64 character set; it is used just for padding.
Hence, one can see that Base64 encoding is not encryption, but just a way to transform any given data into a stream of printable characters which can be transmitted over a network.
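The examples above can be reproduced with the JDK's built-in codec (shown here in Scala):
import java.util.Base64
val enc = Base64.getEncoder
for (s <- Seq("1", "12", "123", "1234", "12345", "123456"))
  println(s"$s ==> ${enc.encodeToString(s.getBytes("US-ASCII"))}")
// 1 ==> MQ== ... 123456 ==> MTIzNDU2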

Compression in Scala

I'm working in Scala with VERY large lists of Int (maybe Long) and I need to compress them and hold them in memory.
The only requirement is that I can pull (and decompress) the first number on the list to work with, without touching the rest of the list.
I have many good ideas but most of them translate the numbers to bits.
Example:
you can write any number x as the tuple (⌊log2(x)⌋, x - 2^⌊log2(x)⌋): the first element is written as a string of 1's with a 0 at the end (unary code) and the second in binary, e.g.:
1 -> 0,0 -> 0
...
5 -> 2,1 -> 110 01
...
8 -> 3,0 -> 1110 000
9 -> 3,1 -> 1110 001
...
While an Int takes a fixed 32 bits of memory and a Long 64, with this compression x requires about 2⌊log2(x)⌋ + 1 bits of storage and can grow indefinitely. This compression does reduce memory in most cases.
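A sketch of the encoder I have in mind, using java.util.BitSet (this scheme is known as Elias gamma coding; gammaEncode is just an illustrative name):
import java.util.BitSet

// Append the code for x >= 1 at bit position pos; returns the next free position.
def gammaEncode(bits: BitSet, pos: Int, x: Int): Int = {
  val n = 31 - Integer.numberOfLeadingZeros(x)   // n = floor(log2(x))
  var p = pos
  for (_ <- 0 until n) { bits.set(p); p += 1 }   // n ones (unary part)...
  p += 1                                         // ...then the terminating 0
  for (i <- n - 1 to 0 by -1) {                  // n-bit remainder, MSB first
    if (((x >> i) & 1) == 1) bits.set(p)
    p += 1
  }
  p
}
// e.g. gammaEncode(bits, 0, 5) writes 11001 ("110 01") and returns 5.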
How would you handle this type of data? Is there something like a bit array?
Any other way to compress such data in Scala?
Thanks
Depending on the sparseness and range of your data set, you may keep your data as a list of deltas instead of absolute numbers. That technique is used in sound compression, for instance, and can be either lossy or lossless, depending on your needs.
For instance, if you have Int numbers but know they will hardly ever be more than a (signed) Byte apart, you could do something like this list of bytes:
-1 // Use -1 to imply the next number cannot be computed as a byte delta
0, 0, 4, 0 // 1024 encoded as bytes
1 // 1025 as a delta
-5 // 1020 as a delta
-1 // Next number can't be computed as a byte delta
0, 0, -1, -1 // 65535 encoded as bytes -- -1 doesn't have special meaning here
10 // 65545 as a delta
So you don't have to handle bits using this particular encoding. But, really, you won't get good answers without a very clear indication of the particular problem, the characteristics of the data, etc.
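A minimal decoder for this byte-level scheme might look like the following (names are illustrative; note that the encoder must escape a genuine delta of -1 the same way):
// Decode the byte list back into Ints; -1 marks a raw 4-byte big-endian value.
def decode(bytes: List[Byte]): List[Int] = {
  def loop(rest: List[Byte], prev: Int, acc: List[Int]): List[Int] = rest match {
    case Nil => acc.reverse
    case b :: b3 :: b2 :: b1 :: b0 :: tail if b == -1 =>
      val v = ((b3 & 0xFF) << 24) | ((b2 & 0xFF) << 16) | ((b1 & 0xFF) << 8) | (b0 & 0xFF)
      loop(tail, v, v :: acc)
    case delta :: tail =>
      loop(tail, prev + delta, (prev + delta) :: acc)
  }
  loop(bytes, 0, Nil)
}

val encoded = List[Byte](-1, 0, 0, 4, 0, 1, -5, -1, 0, 0, -1, -1, 10)
println(decode(encoded))   // List(1024, 1025, 1020, 65535, 65545)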
Rereading your question, it seems you are not ruling out compression techniques that turn the data into bits. In that case, I suggest Huffman coding -- predictive if needed -- or something from the Lempel-Ziv family.
And, no, Scala has no library to handle binary data, unfortunately. Though paulp probably has something like that in the compiler itself.