I want to identify the encoding of these video titles

I have a bunch of videos I downloaded 20 years ago now. The website, I believe, had them titled in Japanese. My PC at the time didn't understand Unicode characters, and I downloaded them with Download Accelerator Plus, I believe! So all of the video titles look like a mixture of broken ASCII and URL-encoded characters.
Is there any way to get these titles back? Here are some samples:
%ec†%a1%ecŠ%b9%ec„%a0.avi
%ea%b0•%ec%a2…%ea%b5%ac, %ec†%ec%a3%bc%ed™˜.avi
%ea%b5%ac%ec%a2…%eb%a7Œ.avi
%ecœ%a4%ec%b0%bd%ec%bc.avi
%ea%b6Œ%eb%af%bc%ec%a3%bc (%e2˜…%e2˜…).avi
I don't remember the URL, so I cannot check web archives.
Any input welcome.
Thank you

How did you translate it all?
First, assume UTF-8, since 0xEC, 0xED and 0xEA are first bytes of three-byte UTF-8 sequences; then
convert every URL-encoded character to its byte value (e.g. %a1 to 0xA1), and
take the Windows-1252 ("ANSI") byte value of every literal character, e.g. † Š • … ™ ˜ Œ œ , ( ).
Then you have the UTF-8 byte sequence for the whole string and can simply decode it.
Exceptions:
a missing byte in string #17 (the two-character %ec† should have been a three-byte sequence; I added 0x81);
the same in string #19 (the two-character %ec%bc should have been a three-byte sequence).
Example (manual conversion; all bytes shown in hex, with the added 0x81 bytes from the exceptions marked as [81]):
#16: EC 86 A1, EC 8A B9, EC 84 A0
송승선
#17: EA B0 95, EC A2 85, EA B5 AC, 2C, 20, EC 86 [81], EC A3 BC, ED 99 98
강종구, 솁주환
#18: EA B5 AC, EC A2 85, EB A7 8C
구종만
#19: EC 9C A4, EC B0 BD, EC BC [81]
윤창켁
#20: EA B6 8C, EB AF BC, EC A3 BC, 20, 28, E2 98 85, E2 98 85, 29
권민주 (★★)
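For anyone with the same kind of mangled filenames, here is a minimal Python sketch of the procedure above (my own naming; the two truncated strings from the exceptions come out with a replacement character instead of the guessed byte):

import re

def recover_title(mangled: str) -> str:
    # Undo URL-encoded UTF-8 that was partially percent-decoded as cp1252.
    raw = bytearray()
    i = 0
    while i < len(mangled):
        if mangled[i] == "%" and re.fullmatch(r"[0-9a-fA-F]{2}", mangled[i+1:i+3]):
            raw.append(int(mangled[i+1:i+3], 16))   # %xx escape -> its byte value
            i += 3
        else:
            raw += mangled[i].encode("cp1252")      # literal char -> its cp1252 byte
            i += 1
    return raw.decode("utf-8", errors="replace")

print(recover_title("%ea%b5%ac%ec%a2…%eb%a7Œ"))     # 구종만 (sample #18)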

Related

Change of char encoding in Eclipse

I am working on an assignment where I need to XOR the bits of each char of a given text. For example, weird chars like '��'.
When trying to save, Eclipse prompts that "Some characters cannot be mapped with Cp1252...", after which I can choose to save as UTF-8.
My knowledge of character encoding is quite fuzzy; wouldn't saving to UTF-8 change the bits? If so, how may I instead work with the original message (original bits) to XOR them and do my assignment?
Thanks!
I am assuming you are using Java in this answer.
The file encoding only changes how the data is represented in the file. When you read the file again (using the correct encoding) it will be converted back to Unicode in your String, so the program will see the same bits.
Encoding Cp1252 can only represent a small number of characters (less than 256) compared to the 113,021 characters in Unicode 7 all of which can be encoded with UTF-8.
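A quick illustration of that point in Python (hypothetical string; the same holds for Java Strings): the bytes on disk differ per encoding, but decoding with the matching encoding yields the identical characters.

s = "héllo"
utf8_bytes = s.encode("utf-8")      # 6 bytes: 'é' takes two bytes in UTF-8
cp1252_bytes = s.encode("cp1252")   # 5 bytes: 'é' is the single byte 0xE9
assert utf8_bytes.decode("utf-8") == cp1252_bytes.decode("cp1252") == s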

How do I shorten a base64 string?

What is the easiest way to shorten a base 64 string? E.g.
PHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIKICAgICAgICAgICAgeG1sbnM6eG1wPSJodHRwOi8v
I just learned how to convert binary to base64. If I'm correct, groups of 24 bits are made and groups of 6 bits are used to create the 64 characters A-Z a-z 0-9 +/
I was wondering is it possible to further shrink a base 64 string and make it smaller; I was hoping to reduce a 100 character base64 string to 20 or less characters.
A base64 character carries 6 bits of information, so a 100-character base64 string contains 600 bits and needs all 100 characters to represent your data. It is encoded in US-ASCII (by definition) and described in RFC 4648. In order to represent your data in 20 characters you would need 30 bits in each character (600/20).
In a contrived fashion, using a very large Unicode alphabet (for example the unified CJK range), it would be possible to pack more bits into each character, but it would still require a minimum of about 40 glyphs (~75 bytes) to represent the data. It would also be really difficult to debug the encoding and really prone to misinterpretation. Further, the purpose of base64 encoding is to present a representation that is not destroyed by broken intermediate systems. This would very likely not hold for anything as obscure as a base-2-billion encoding.
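To make the 4/3 overhead concrete, here is a small Python check using the sample string from the question. Real shrinkage can only come from compressing the underlying data before re-encoding, and for inputs this short the compression header may even make it bigger:

import base64, zlib

s = "PHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIKICAgICAgICAgICAgeG1sbnM6eG1wPSJodHRwOi8v"
raw = base64.b64decode(s)                     # strip the 4/3 base64 overhead
print(len(s), "base64 chars ->", len(raw), "raw bytes")
repacked = base64.b64encode(zlib.compress(raw))
print(len(repacked), "base64 chars after zlib compression")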

NSString unicode encoding problem

I'm having problems converting the string to something readable. I'm using
NSString *substring = [NSString stringWithUTF8String:[symbol.data cStringUsingEncoding:NSUTF8StringEncoding]];
but I can't convert \U7ab6\U51b1 into '.
It shows as 窶冱, which is not what I want; it should show as an '. Can anyone help me?
it is shown as a ’
That's character U+2019 RIGHT SINGLE QUOTATION MARK.
What has happened is you've had the character sequence ’s submitted to you, in the UTF-8 encoding, which comes out as bytes:
’ s
E2 80 99 73
That byte sequence has then, incorrectly, been interpreted as if it were encoded in Windows code page 932 (Japanese; more or less Shift-JIS):
E2 80 99 73
窶 冱
So in this one particular case, you could recover the ’s string by firstly encoding the characters into cp932 bytes, and then decoding those bytes back to characters using UTF-8.
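In Python (for brevity; the equivalent is possible with NSString's dataUsingEncoding: and initWithData:encoding:), the one-off repair described above looks like this. It is a sketch of the recovery step only, not a fix for the underlying bug:

mangled = "窶冱"
print(mangled.encode("cp932").decode("utf-8"))   # ’s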
However, this will not solve your real problem, which is that the strings were read in incorrectly in the first place. You got 窶冱 in this case because the UTF-8 byte sequence resulting from encoding ’s happened also to be a valid Shift-JIS byte sequence. But that won't be the case for all possible UTF-8 byte sequences you might get. Many other characters will be unrecoverably mangled.
You need to find where bytes are being read into the system and decoded as Shift-JIS, and fix that to use UTF-8 instead.

What multi-byte character set starts with 0x7F and is 4 bytes long?

I'm trying to get some legacy code to display Chinese characters properly. One character encoding I'm trying to work with starts with a 0x7F and is 4 bytes long (including the 0x7F byte). Does anyone know what kind of encoding this is and where I can find information for it? Thanks.
UPDATE:
I've also had to work with some Japanese encoding that starts every character with a 0xE3 and is three bytes long. It displays on my computer properly if I choose the Japanese locale in Windows; however, it doesn't display properly in our application. And if any locale other than Japanese is selected, I cannot even view the filenames properly. So I'm guessing this encoding is not Unicode. Anyone know what it is? Is it ANSI? Is it Shift JIS?
For the Chinese one, I've tested it with Unicode and UTF-8 characters and I'm getting the same pattern; 0x7F followed by three bytes. Are Unicode and UTF-8 the same?
One character encoding I'm trying to work with starts with a 0x7F and is 4 bytes long
What are the other bytes? Do you have any Latin text in this encoding?
If it's “0x7f 0x... 0x00 0x00” you are looking at UTF-32LE. It could also be two UTF-16 (either LE or BE) characters.
Most East Asian encodings use 0x80-0xFF as lead bytes for non-ASCII characters; there is none I know of that would use a leading 0x7F as anything other than an ASCII delete.
ETA:
are there supposed to be Byte Order Marks?
There doesn't need to be a BOM if there is an out-of-band way of signalling that the encoding is ‘UTF-32LE’ (possibly one that is lost before it gets to you).
I've also had to work with some Japanese encoding that starts every character with a 0xE3 and is three bytes long.
That's surely UTF-8. Sequence 0xE3 0x... 0x... would result in a character between U+3000 and U+4000, which is where the hiragana/katakana live.
It displays on my computer properly if I choose the Japanese locale in Windows, however, it doesn't display properly in our application.
Then chances are your application is one of the regrettable horde of non-Unicode-compliant apps, still using the ‘A’(*) versions of the Win32 interfaces instead of the ‘W’-suffixed ones. Whether you can read in the string according to its real encoding is moot: a non-Unicode-compliant app will never be able to display an East Asian ideograph on a Western locale.
(*: named for “ANSI”, which is Windows's misleading term for “whatever the system codepage is set to at the moment”. That's why changing your locale affected it.)
ETA(2):
OK, cracked it. It's not any standardised encoding I've met before, but it's relatively easy to decipher if you assume the premise that Unicode code points are being encoded.
0x00-0x7E: plain ASCII
0x7F A B C: Unicode character
The character encoded in a Unicode escape can be calculated by taking the indexes of A, B and C in a key string and adding them together:
A*0x1000 + B*0x40 + C
That is, it's a base-64 character set, but it's not the usual Base64 standard. A little experimentation gives a key string of:
.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz
The ‘.’ and ‘_’ characters are guesses, since none of the characters you posted uses them. We'd need more data to find out the exact string.
So, for example:
0x7F 3 u g
A=4 B=58 C=44
4*0x1000 + 58*0x40 + 44 = 0x4EAC
U+4EAC = 京
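Putting ETA(2) together, a decoder sketch in Python (the key string, including the guessed ‘.’ and ‘_’, is the one above):

KEY = ".0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz"

def decode_legacy(data: bytes) -> str:
    out = []
    i = 0
    while i < len(data):
        if data[i] == 0x7F:                       # escape byte: a Unicode char follows
            a, b, c = (KEY.index(chr(x)) for x in data[i+1:i+4])
            out.append(chr(a * 0x1000 + b * 0x40 + c))
            i += 4
        else:                                     # plain ASCII
            out.append(chr(data[i]))
            i += 1
    return "".join(out)

print(decode_legacy(b"\x7f3ug"))                  # 京 (U+4EAC), the worked example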
ETA(3):
Yeah, it should be easy to create a native Unicode string by sucking out each code point manually and joining as a character. Not quite sure what's available on whatever platform you're using, but any Unicode-capable platform should be able to make a string from codepoints simply (and hopefully without having to manually re-encode to UTF-16LE bytes).
I figured it must be Unicode codepoints by noticing that the three example characters had first escape-characters in the same general range, and in the same numerical order as their Unicode codepoints. The other two characters seemed to change randomly, so it was very likely a big-endian encoding of the code point, and probably a base-64 encoding as 6 is as many bits as you can get out of readable ASCII.
Standard Base64 itself starts with letters, which would put something starting with a number too far up to be in the Basic Multilingual Plane. So I started guessing with ‘0123456789ABCDEFG...’ which would be the other obvious choice of key string. That got resulting numbers that were close to the code points for the given characters, but a bit too low. Inserting an extra character at the start of the key string (so digit ‘0’ doesn't map to number 0) got one of the characters right and the other two very close; the one that was right had no lower-case letters, so to change only the lower-case letters I inserted another character between the upper and lower cases. This came up with the right numbers.
It's not guaranteed that this is actually right, but (apart from the arbitrary choice of inserted characters) it's very likely to be it.
You might want to look at the Chinese character encoding page on Wikipedia. The only encoding there that I can see that is always 4 bytes is UTF-32.
GB 18030 is the current standard Chinese character set, but it can be 1 to 4 bytes long.
Try chardet. It does a good job of guessing the character encoding of a string of bytes.
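For example (detect() takes raw bytes and returns its best guess with a confidence score; short inputs make the guess less reliable):

import chardet  # pip install chardet

guess = chardet.detect("こんにちは".encode("utf-8"))
print(guess["encoding"], guess["confidence"])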
Are Unicode and UTF-8 the same?
No. UTF-8 is just one way to represent Unicode characters as a sequence of bytes. Unicode is the full standard, assigning numeric and human-readable identifiers to each character, as well as lots of metadata about the characters.
It might be a valid Unicode encoding, such as UTF-8 or a UTF-16 surrogate pair.
Yes, the Chinese one is UTF-8, an implementation (encoding) of Unicode.
UTF-8 is 1 byte long for ASCII characters and up to 4 bytes for all others.

What is base 64 encoding used for?

I've heard people talking about "base 64 encoding" here and there. What is it used for?
When you have some binary data that you want to ship across a network, you generally don't do it by just streaming the bits and bytes over the wire in a raw format. Why? Because some media are made for streaming text. You never know -- some protocols may interpret your binary data as control characters (like a modem), or your binary data could be screwed up because the underlying protocol might think that you've entered a special character combination (like how FTP translates line endings).
So to get around this, people encode the binary data into characters. Base64 is one of these types of encodings.
Why 64?
Because you can generally rely on the same 64 characters being present in many character sets, and you can be reasonably confident that your data's going to end up on the other side of the wire uncorrupted.
It's basically a way of encoding arbitrary binary data in ASCII text. It takes 4 characters per 3 bytes of data, plus potentially a bit of padding at the end.
Essentially each 6 bits of the input is encoded in a 64-character alphabet. The "standard" alphabet uses A-Z, a-z, 0-9 and + and /, with = as a padding character. There are URL-safe variants.
Wikipedia is a reasonably good source of more information.
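A quick demonstration with Python's standard library, picking bytes that exercise the last two alphabet positions so the standard and URL-safe variants visibly differ:

import base64

data = b"\xfb\xef\xff"
print(base64.b64encode(data))          # b'++//' - standard alphabet uses + and /
print(base64.urlsafe_b64encode(data))  # b'--__' - URL-safe variant uses - and _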
Years ago, when mailing functionality was first introduced, it was utterly text based. As time passed, the need for attachments like images and media (audio, video, etc.) came into existence. When these attachments are sent over the internet (which is basically binary data), the probability of the binary data getting corrupted in its raw form is high. BASE64 came along to tackle this problem.
The problem with binary data is that it contains null characters, which in some languages like C and C++ represent the end of a character string, so sending binary data in raw form containing NULL bytes can stop a file from being fully read and lead to corrupted data.
For Example :
In C and C++, this "null" character shows the end of a string. So "HELLO" is stored like this:
H E L L O
72 69 76 76 79 00
The 00 says "stop here".
Now let’s dive into how BASE64 encoding works.
Point to be noted : the length of the string should be a multiple of 3.
Example 1 :
String to be encoded : “ace”, Length=3
Convert each character to decimal.
a= 97, c= 99, e= 101
Change each decimal to 8-bit binary representation.
97= 01100001, 99= 01100011, 101= 01100101
Combined : 01100001 01100011 01100101
Separate into groups of 6 bits.
011000 010110 001101 100101
Convert each binary group to decimal.
011000= 24, 010110= 22, 001101= 13, 100101= 37
Convert the decimal values to base64 characters using the base64 chart.
24= Y, 22= W, 13= N, 37= l
“ace” => “YWNl”
Example 2 :
String to be encoded : “abcd”, Length=4, which is not a multiple of 3, so the output will need padding. Padding is represented by the “=” sign.
Point to be noted : one padding “=” stands for two zeroes 00, so two of them stand for four zeroes 0000.
So let's start the process :–
Convert each character to decimal.
a= 97, b= 98, c= 99, d= 100
Change each decimal to 8-bit binary representation.
97= 01100001, 98= 01100010, 99= 01100011, 100= 01100100
Separate into groups of 6 bits.
011000, 010110, 001001, 100011, 011001, 00
The last group is not a complete 6 bits, so we append four zeroes “0000”, signalled by two padding signs “==”.
011000, 010110, 001001, 100011, 011001, 000000 ==
Now it is complete. The two equals signs at the end show that four zeroes were added (which helps in decoding).
Convert binary to decimal.
011000= 24, 010110= 22, 001001= 9, 100011= 35, 011001= 25, 000000= 0 ==
Convert the decimal values to base64 characters using the base64 chart.
24= Y, 22= W, 9= J, 35= j, 25= Z, 0= A ==
“abcd” => “YWJjZA==”
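The whole procedure fits in a few lines of Python. This sketch follows the steps above literally (bit strings for clarity, not efficiency) and reproduces both examples:

B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def b64encode_manual(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)             # 8-bit binary per byte
    bits += "0" * ((-len(bits)) % 6)                     # zero-fill the last 6-bit group
    out = "".join(B64[int(bits[i:i+6], 2)] for i in range(0, len(bits), 6))
    return out + "=" * ((-len(data)) % 3)                # one '=' per missing input byte

assert b64encode_manual(b"ace") == "YWNl"
assert b64encode_manual(b"abcd") == "YWJjZA=="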
Base-64 encoding is a way of taking binary data and turning it into text so that it's more easily transmitted in things like e-mail and HTML form data.
http://en.wikipedia.org/wiki/Base64
It's a textual encoding of binary data where the resultant text has nothing but letters, numbers and the symbols "+", "/" and "=". It's a convenient way to store/transmit binary data over media that is specifically used for textual data.
But why Base-64? The two alternatives for converting binary data into text that immediately spring to mind are:
Decimal: store the decimal value of each byte as three numbers: 045 112 101 037 etc. where each byte is represented by 3 bytes. The data bloats three-fold.
Hexadecimal: store the bytes as hex pairs: AC 47 0D 1A etc. where each byte is represented by 2 bytes. The data bloats two-fold.
Base-64 maps 3 bytes (8 × 3 = 24 bits) onto 4 characters of 6 bits each (6 × 4 = 24 bits). The result looks something like "TWFuIGlzIGRpc3Rpb...". Therefore the bloat is a mere 4/3 ≈ 1.33 times the original.
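The three options side by side in Python (random payload, so the exact bytes vary, but the ratios are fixed):

import base64, os

raw = os.urandom(30)
print(len(raw), "raw bytes")
print(len(raw.hex()), "hex characters (two-fold bloat)")
print(len(base64.b64encode(raw)), "base64 characters (4/3 bloat)")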
Aside from what's already been said, two very common uses that have not been listed are
Hashes:
Hashes are one-way functions that transform a block of bytes into another block of bytes of a fixed size, such as 128-bit or 256-bit (SHA/MD5). Converting the resulting bytes into Base64 makes it much easier to display the hash, especially when you are comparing a checksum for integrity. Hashes are so often seen in Base64 that many people mistake Base64 itself for a hash.
Cryptography:
Since an encryption key does not have to be text but raw bytes, it is sometimes necessary to store it in a file or database, which Base64 comes in handy for. The same goes for the resulting encrypted bytes.
Note that although Base64 is often used in cryptography, it is not a security mechanism. Anyone can convert a Base64 string back to its original bytes, so it should not be used as a means of protecting data, only as a format to display or store raw bytes more easily.
Certificates
x509 certificates in PEM format are base 64 encoded. http://how2ssl.com/articles/working_with_pem_files/
In the early days of computers, when telephone line inter-system communication was not particularly reliable, a quick & dirty method of verifying data integrity was used: "bit parity". In this method, every byte transmitted would have 7-bits of data, and the 8th would be 1 or 0, to force the total number of 1 bits in the byte to be even.
Hence 0x01 would be transmitted as 0x81; 0x02 would be 0x82; 0x03 would remain 0x03, etc.
To further this system, when the ASCII character set was defined, only 00-7F were assigned characters. (Still today, all characters in the range 80-FF are non-standard and vary by character set.)
Many routers of the day put the parity check and byte translation into hardware, forcing the computers attached to them to deal strictly with 7-bit data. This forced email attachments (and all other data, which is why the HTTP and SMTP protocols are text-based) to be converted into a text-only format.
Few of the routers survived into the 90s. I severely doubt any of them are in use today.
From http://en.wikipedia.org/wiki/Base64
The term Base64 refers to a specific MIME content transfer encoding.
It is also used as a generic term for any similar encoding scheme that
encodes binary data by treating it numerically and translating it into
a base 64 representation. The particular choice of base is due to the
history of character set encoding: one can choose a set of 64
characters that is both part of the subset common to most encodings,
and also printable. This combination leaves the data unlikely to be
modified in transit through systems, such as email, which were
traditionally not 8-bit clean.
Base64 can be used in a variety of contexts:
Evolution and Thunderbird use Base64 to obfuscate e-mail passwords[1]
Base64 can be used to transmit and store text that might otherwise cause delimiter collision
Base64 is often used as a quick but insecure shortcut to obscure secrets without incurring the overhead of cryptographic key management
Spammers use Base64 to evade basic anti-spamming tools, which often do not decode Base64 and therefore cannot detect keywords in encoded messages.
Base64 is used to encode character strings in LDIF files
Base64 is sometimes used to embed binary data in an XML file, using a syntax similar to ...... e.g. Firefox's bookmarks.html.
Base64 is also used when communicating with government Fiscal Signature printing devices (usually over serial or parallel ports) to minimize the delay when transferring receipt characters for signing.
Base64 is used to encode binary files such as images within scripts, to avoid depending on external files.
Can be used to embed raw image data into a CSS property such as background-image.
Some transport protocols only allow alphanumeric characters to be transmitted. Just imagine a situation where control characters are used to trigger special actions, and/or where only a limited bit width per character is supported. Base64 transforms any input into an encoding that uses only alphanumeric characters, +, /, and = as a padding character.
Base64 is a binary-to-text encoding scheme that represents binary data in an ASCII string format. It is designed to carry data stored in binary format across network channels.
The Base64 mechanism uses 64 characters to encode data. These characters consist of:
10 numeric values, i.e. 0,1,2,3,...,9
26 uppercase letters, i.e. A,B,C,D,...,Z
26 lowercase letters, i.e. a,b,c,d,...,z
2 special characters, i.e. +,/ (these two vary between Base64 variants)
How base64 works
The steps to encode a string with the base64 algorithm are as follows:
Count the number of characters in the string. If it is not a multiple of 3, the output will be padded with special characters (i.e. =) accordingly.
Convert the string to 8-bit ASCII binary using the ASCII table.
After converting to binary format, divide the binary data into chunks of 6 bits.
Convert the chunks of 6-bit binary data to decimal numbers.
Convert the decimals to characters according to the base64 index table. (This table is one example; as I said, the 2 special characters may vary.)
Now we have the encoded version of the input string.
Let's make an example: convert the string THS to its base64 encoding.
Count the number of characters: it is already a multiple of 3.
Convert to 8-bit ASCII binary. We get (T)01010100 (H)01001000 (S)01010011.
Divide the binary data into chunks of 6 bits. We get 010101 000100 100001 010011.
Convert the chunks of 6-bit binary data to decimal numbers. We get 21 4 33 19.
Convert the decimals to characters according to the base64 index table. We get VEhT.
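The standard library agrees with the walk-through:

import base64
print(base64.b64encode(b"THS").decode("ascii"))  # VEhT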
It's used for converting arbitrary binary data to ASCII text.
For example, e-mail attachments are sent this way.
“Base64 encoding schemes are commonly used when there is a need to encode binary data that needs to be stored and transferred over media that are designed to deal with textual data. This is to ensure that the data remains intact without modification during transport” (Wiki, 2017).
An example could be the following: you have a web service that accepts only ASCII chars. You want to save and then transfer a user's data to some other location (API), but the recipient wants to receive the data untouched. Base64 is for that. . . The only downside is that base64 encoding requires around 33% more space than regular strings.
Another example: uenc = url encoded = aHR0cDovL2xvYy5tYWdlbnRvLmNvbS9hc2ljcy1tZW4tcy1nZWwta2F5YW5vLXhpaS5odG1s = http://loc.querytip.com/asics-men-s-gel-kayano-xii.html.
As you can see, we can't put the char “/” in a URL if we want to send the last visited URL as a parameter, because we would break the attribute/value rule for “MOD rewrite” GET parameters.
A full example would be: “http://loc.querytip.com/checkout/cart/add/uenc/http://loc.magento.com/asics-men-s-gel-kayano-xii.html/product/93/”
I use it in a practical sense when we transfer large binary objects (images) via web services. So when I am testing a C# web service using a python script, the binary object can be recreated with a little magic.
[In python]
import base64

# dataFromWS holds the base64 string returned by the web service
imageAsBytes = base64.b64decode(dataFromWS)
The usage of Base64 I'm going to describe here is somewhat a hack. So if you don't like hacks, please do not go on.
I ran into trouble when I discovered that MySQL's utf8 does not support 4-byte Unicode characters, since it uses a 3-byte version of utf8. So what did I do to support full 4-byte Unicode over MySQL's utf8? Well, base64-encode strings when storing into the database and base64-decode when retrieving.
Since base64 encoding and decoding is very fast, the above worked perfectly.
You have the following points to take note of:
Base64 encoding uses 33% more storage
Strings stored in the database won't be human readable (you could sell that as a feature: database strings use a basic form of encryption).
You could use the above method for any storage engine that does not support unicode.
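A sketch of that wrapper pair in Python (my own function names; the database layer itself is omitted):

import base64

def to_db(s: str) -> str:
    return base64.b64encode(s.encode("utf-8")).decode("ascii")

def from_db(stored: str) -> str:
    return base64.b64decode(stored).decode("utf-8")

text = "I 💙 MySQL"   # 💙 needs 4 bytes in UTF-8, which MySQL's 3-byte utf8 rejects
assert from_db(to_db(text)) == text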
Mostly, I've seen it used to encode binary data in contexts that can only handle ASCII, or another simple character set.
Base64 is a binary-to-text encoding scheme that represents binary data in an ASCII string format. base64 is designed to carry data stored in binary format across channels. It takes any form of data and transforms it into a long string of plain text. Earlier we could not reliably transfer large amounts of data, like files, because files are made up of 8-bit bytes while much of the network infrastructure handled only 7-bit bytes. This is where base64 encoding came into the picture. But what does base64 actually mean?
Let's understand the meaning of base64.
base64 = base + 64
We can call base64 a radix-64 representation. base64 uses only 6 bits per character (2⁶ = 64 characters) to ensure that the encoded data is printable and human readable. But how? We could also define base65 or base78, but why 64? Let's prove it.
base64 encoding contains 64 characters to encode any string.
base64 contains:
10 numeric values, i.e. 0,1,2,...,9.
26 uppercase letters, i.e. A,B,C,...,Z.
26 lowercase letters, i.e. a,b,c,...,z.
2 special characters, i.e. +,/ (these two vary between Base64 variants).
The steps followed by the base64 algorithm are as follows:
Count the number of characters in the string. If it is not a multiple of 3, the output is padded with the special character (i.e. =) accordingly.
Encode the string in ASCII format.
Convert each ASCII value to its 8-bit binary representation.
After converting to binary format, divide the binary data into chunks of 6 bits each.
Convert each chunk of 6-bit binary data to a decimal number.
Using the base64 index table, convert the decimals back to characters according to the table.
Finally, we get the encoded version of our input string.
To expand a bit on what Brad is saying: many transport mechanisms for email and Usenet and other ways of moving data are not "8 bit clean", which means that characters outside the standard ASCII character set might be mangled in transit - for instance, 0x0D might be seen as a carriage return and turned into a carriage return and line feed. Base 64 maps all the binary characters into several standard ASCII letters, numbers and punctuation so they won't be mangled this way.
One hexadecimal digit represents one nibble (4 bits). Two nibbles make 8 bits, which are also called 1 byte.
MD5 generates a 128-bit output which is represented using a sequence of 32 hexadecimal digits, which in turn are 32*4=128 bits. 128 bits make 16 bytes (since 1 byte is 8 bits).
Each Base64 character encodes 6 bits (except the last non-pad character which can encode 2, 4 or 6 bits; and final pad characters, if any). Therefore, per Base64 encoding, a 128-bit hash requires at least ⌈128/6⌉ = 22 characters, plus pad if any.
By truncating the base64 output, we can produce a shorter identifier of a desired length (6, 8, or 10 characters).
If we decide on an 8-character output, it occupies only 8 bytes, whereas the raw 128-bit hash output occupies 16 bytes.
So, in addition to its use alongside security functions like hashes, base64 encoding is also used to reduce the space consumed.
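For example, with an MD5 digest in Python (the 22 significant base64 characters appear as 24 because of '=' padding):

import base64, hashlib

digest = hashlib.md5(b"hello").digest()           # 16 raw bytes (128 bits)
print(len(digest.hex()))                          # 32 hex characters
print(len(base64.b64encode(digest)))              # 24 base64 characters (22 + '==')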
Base64 can be used for many purposes.
The primary reason is to convert binary data to something passable.
I sometimes use it to pass JSON data around from one site to another, or to store information about a user in cookies.
Note:
You "can" use it for encryption - I don't see why people say you can't, and that it's not encryption, although it would be easily breakable and is frowned upon. Encryption means nothing more than converting one string of data to another string of data that can be either later decrypted or not, and that's what base64 does.