I'm using the OWASP EnDe web-based tool to understand nibbles and encoding in general. I'm testing a sample input of abcdwxyz.
The results of encoding it based on the first nibble and the second nibble are given
as 36,1,36,2,36,3,36,4,37,7,37,8,37,9,37,A and 6,31,6,32,6,33,6,34,7,37,7,38,7,39,7,61 respectively.
A simple hex representation of the above sample input is 61 62 63 64 77 78 79 7a.
In simple terms, would nibble 1 and nibble 2 mean the LSB nibble and the MSB nibble respectively? Can someone explain how this relates to the way the tool uses them?
Thanks
When looking at the code that performs the encoding, it seems to work on the hex string of the ASCII code rather than on the nibbles of the ASCII code itself. So for your example of "abcdwxyz" and the nibble 1 encoding, it works as follows.
'a' -> 0x61 -> '61'
First nibble of '61' is '6', with '6' -> 0x36 -> '36'
So 'a' ends up being encoded as %%361
'b' -> 0x62 -> '62'
First nibble of '62' is '6' and again will be '36'.
So 'b' ends up being encoded as %%362
....
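If it helps, here is a minimal Python sketch (not the tool's actual JavaScript, just a reimplementation of the steps above) that reproduces the nibble 1 output:

def encode_nibble1(text):
    """Rough reimplementation of the 'nibble 1' scheme described above:
    hex-encode the first hex digit of each character's ASCII code and
    leave the second hex digit as-is."""
    parts = []
    for ch in text:
        hex_str = format(ord(ch), "02X")            # 'a' -> '61'
        first, second = hex_str[0], hex_str[1]      # '6', '1'
        parts.append(format(ord(first), "02X"))     # '6' -> '36'
        parts.append(second)                        # '1'
    return ",".join(parts)

print(encode_nibble1("abcdwxyz"))
# 36,1,36,2,36,3,36,4,37,7,37,8,37,9,37,A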
I am not sure where this encoding is documented, perhaps you can try Google.
You can find the function that performs the encodings at https://github.com/EnDe/EnDe/blob/master/EnDe.js#L982
I am trying to decode and re-encode a bytestring using passlib's base64 encoding:
from passlib.utils import binary
engine = binary.Base64Engine(binary.HASH64_CHARS)
s2 = engine.encode_bytes(engine.decode_bytes(b"1111111111111111111111w"))
print(s2)
This prints b'1111111111111111111111A' which is of course not what I expected. The last character is different.
Where is my mistake? Is this a bug?
No, it's not a bug.
In all variants of Base64, every encoded character represents just 6 bits, and depending on the number of bytes encoded you can end up with 0, 2 or 4 insignificant bits at the end.
In this case the encoded string 1111111111111111111111w is 23 characters long; that means 23 * 6 = 138 bits, which decode to 17 bytes (136 bits) plus 2 insignificant bits.
The encoding you use here is not Base64 but Hash64
Base64 character map used by a number of hash formats; the ordering is wildly different from the standard base64 character map.
In the character map
HASH64_CHARS = u("./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz")
we find A on index 12 (001100) and w on index 60 (111100)
Now the 'trick' here is that binary.Base64Engine(binary.HASH64_CHARS) has a default parameter big=False, which means the encoding is done in little-endian order by default.
In your example that means w is read as 001111 and A as 001100. During decoding the last two bits are cut off, as they are not needed (as explained above). When you encode again, A is chosen because it is the first character in the character map that can encode 0011 plus two insignificant bits.
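You can check this yourself (assuming passlib is installed); both strings decode to the same 17 bytes, and re-encoding always produces the canonical final character A:

from passlib.utils import binary

engine = binary.Base64Engine(binary.HASH64_CHARS)   # little-endian (big=False) by default

# Both encoded strings decode to the same 17 bytes, because only 4 of the
# final character's 6 bits are significant:
d_w = engine.decode_bytes(b"1111111111111111111111w")
d_a = engine.decode_bytes(b"1111111111111111111111A")
print(d_w == d_a)                    # True
print(len(d_w))                      # 17 (23 * 6 = 138 bits -> 136 used, 2 dropped)

# Re-encoding picks the canonical final character, here 'A':
print(engine.encode_bytes(d_w))      # b'1111111111111111111111A'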
I am trying to understand the string character encoding of a proprietary file format (fp7-file format from Filemaker Pro).
I found that each character is obfuscated by XOR with 0b01011010 and that the string length is encoded using a single starting byte (max string length in Filemaker is 100 characters).
Encoding is a variable byte encoding, where by default ISO 8859-1 (Western) is used to encode most characters.
If a Unicode character outside ISO 8859-1 is to be encoded, some sort of control characters are included in the string that modify the decoding of the next character or of several following characters. These control characters use the ASCII control character range (0x01 to 0x1f in particular). This is where I am stuck, as I can't seem to find a pattern in how these control characters work.
Some examples of what I think I have found:
When encountering the control character 0x11, the following characters are created by adding 0x40 to the byte value, e.g. the character Ā (U+0100) is encoded as 0x11 0xC0 (0xC0 + 0x40 = 0x100).
When encountering the control character 0x10 the previous control character seems to be reset.
When encountering the control character 0x03, the next (and only the next!) character is created by adding 0x100 to the byte value. If the control character 0x03 is preceded by 0x1B, then all following characters are created by adding 0x100.
An example string (0_ĀĐĠİŀŐŠŰƀƐƠưǀǐǠǰȀ), its unicode code points and the encoding in Filemaker:
char   unicode   encoded
0      30        30
_      5f        5f
Ā      100       11 c0
Đ      110       d0
Ġ      120       e0
İ      130       f0
ŀ      140       03 40
Ő      150       03 50
Š      160       03 60
Ű      170       03 70
ƀ      180       1b 03 80
Ɛ      190       90
Ơ      1a0       a0
ư      1b0       b0
ǀ      1c0       c0
ǐ      1d0       d0
Ǡ      1e0       e0
ǰ      1f0       f0
Ȁ      200       1c 04 80
As you can see, the characters 0 and _ are encoded with their direct Unicode/ASCII values. The characters ĀĐĠİ are encoded using the 0x11 control byte, ŀŐŠŰ are each encoded using a 0x03 prefix, and then 0x1B 0x03 is used to encode the next 8 characters, etc.
Does this encoding scheme look familiar to anybody?
The rules are simple for characters up to 0x200, but then become more and more confusing, even to the point where they seem position dependent.
I can provide more examples for a weekend of puzzles and joy.
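For reference, here is a minimal Python sketch of a decoder implementing only the rules I think I have found so far (the XOR with 0b01011010 and the length byte are assumed to be stripped already, and the trailing 0x1C 0x04 of the example is not handled):

def decode_fp7_run(data):
    """Decode a run of fp7 string bytes using only the hypothesized rules:
    0x11 adds 0x40 to all following bytes, 0x10 resets that offset,
    0x03 adds 0x100 to the next byte only, 0x1B 0x03 adds 0x100 to all
    following bytes. (XOR and length byte already removed.)"""
    out = []
    offset = 0      # sticky offset set by 0x11 or 0x1B 0x03, cleared by 0x10
    i = 0
    while i < len(data):
        b = data[i]
        if b == 0x11:
            offset = 0x40
        elif b == 0x10:
            offset = 0
        elif b == 0x1B and i + 1 < len(data) and data[i + 1] == 0x03:
            offset = 0x100
            i += 1
        elif b == 0x03:
            # Assumption needed to reproduce the example: 0x03 overrides any
            # sticky offset for the next byte and adds exactly 0x100.
            i += 1
            out.append(chr(0x100 + data[i]))
        else:
            out.append(chr(b + offset))
        i += 1
    return "".join(out)

# The example from above, up to (but not including) the unexplained 0x1C 0x04 0x80:
encoded = bytes([0x30, 0x5F,
                 0x11, 0xC0, 0xD0, 0xE0, 0xF0,
                 0x03, 0x40, 0x03, 0x50, 0x03, 0x60, 0x03, 0x70,
                 0x1B, 0x03, 0x80, 0x90, 0xA0, 0xB0, 0xC0, 0xD0, 0xE0, 0xF0])
print(decode_fp7_run(encoded))       # 0_ĀĐĠİŀŐŠŰƀƐƠưǀǐǠǰ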
When I read The Swift Programming Language's Strings and Characters chapter, I didn't understand how U+203C (which means ‼) can be represented by (226, 128, 188) in UTF-8.
How does that happen?
I hope you already know how UTF-8 reserves certain bits to indicate that the Unicode character occupies several bytes. (This website can help).
First, write 0x203C in binary:
0x203C = 10000000111100
So this character needs 14 bits, which is more than fits in the 1-byte (7 data bits) or 2-byte (11 data bits) UTF-8 forms. Due to the "header bits" in the UTF-8 encoding scheme, it takes 3 bytes to encode; the 14 bits are zero-padded to 16 and split 4 + 6 + 6:
0x203C = 0010 000000 111100
              1st byte   2nd byte   3rd byte
              --------   --------   --------
header        1110       10         10
actual data   0010       000000     111100
--------------------------------------------
full byte     11100010   10000000   10111100
decimal       226        128        188
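You can verify those byte values quickly, for example with Python:

s = "\u203c"                          # '‼' (U+203C)
data = s.encode("utf-8")
print(list(data))                     # [226, 128, 188]
print(data.hex())                     # e280bc
print(format(ord(s), "016b"))         # 0010000000111100 -> the 16 padded bits above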
Is representing a UTF-8 encoding in decimal even possible? I think only values up to 255 would be correct, am I right?
As far as I know, we can only represent UTF-8 in hex or binary form.
I think it is possible. Let's look at an example:
The Unicode code point for ∫ is U+222B.
Its UTF-8 encoding is E2 88 AB, in hexadecimal representation. In octal, this would be 342 210 253. In decimal, it would be 226 136 171. That is, if you represent each byte separately.
If you look at the same 3 bytes as a single number, you have E288AB in hexadecimal; 70504253 in octal; and 14846123 in decimal.
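Just to illustrate the conversions (Python here is only for demonstration, not part of UTF-8 itself):

data = "∫".encode("utf-8")

# Each byte separately, in hex, octal and decimal:
print([format(b, "02x") for b in data])    # ['e2', '88', 'ab']
print([format(b, "03o") for b in data])    # ['342', '210', '253']
print(list(data))                          # [226, 136, 171]

# The same three bytes read as one number:
n = int.from_bytes(data, "big")
print(format(n, "x"), format(n, "o"), n)   # e288ab 70504253 14846123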
I had always been taught 0–9 to represent values zero to nine, and A, B, C, D, E, F for 10-15.
I see this format 0x00000000 and it doesn't fit into that pattern of hexadecimal. Is there a guide or tutorial somewhere that can explain it?
I googled for hexadecimal but I can't find an explanation of it.
So my second question is: is there a name for the 0x00000000 format?
0x simply tells you the number after it will be in hex, so 0x00 is 0, 0x10 is 16, 0x11 is 17, etc.
The 0x is just a prefix (used in C and many other programming languages) to mean that the following number is in base 16.
Other notations that have been used for hex include:
$ABCD
ABCDh
X'ABCD'
"ABCD"X
Yes, it is hexadecimal.
Otherwise, you couldn't represent A, for example; a C or Java compiler would treat it as a variable identifier. The added prefix 0x tells the compiler it's a hexadecimal number, so:
int ten_i = 10;
int ten_h = 0xA;
ten_i == ten_h; // this boolean expression is true
The leading zeroes indicate the size: 0x0080 hints that the number will be stored in two bytes, and 0x00000080 represents four bytes. Such notation is often used for flags: if a certain bit is set, that feature is enabled.
P.S. As an off-topic note: if a number literal starts with 0, it's interpreted as an octal number, for example 010 == 8. Here 0 is also a prefix.
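A few lines of Python (the 0x literal syntax works the same way there; this is just an illustration, not tied to any particular compiler) showing the points above:

# The 0x prefix marks a hexadecimal literal:
ten_i = 10
ten_h = 0xA
print(ten_i == ten_h)                  # True

# Leading zeroes change only the written width, not the value:
print(0x0080 == 0x00000080 == 128)     # True

# Typical flag usage: each constant sets one bit.
FLAG_READ  = 0x01
FLAG_WRITE = 0x02
FLAG_EXEC  = 0x04
mode = FLAG_READ | FLAG_EXEC           # 0x05
print(bool(mode & FLAG_WRITE))         # False: the write bit is not set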
Everything after the x is hex digits (the 0x is just a prefix to designate hex), here representing 32 bits (0xFFFFFFFF in binary is 1111 1111 1111 1111 1111 1111 1111 1111).
Hexadecimal digits are often prefixed with 0x to indicate they are hexadecimal digits.
In this case there are 8 digits, each representing 4 bits, so that is 32 bits, or a word. I'm guessing you saw this in an error message and it is a memory address; this value means null, as the hex value is 0.