Why is importdata not working here? - matlab

I have two text files with the same textual content, but they have different sizes. The following screenshot compares them (using Beyond Compare).
It seems that the hex content of the files is different.
The MATLAB function importdata reads the file on the left fine, but gives the following error for the file on the right (the larger file):
Unable to load file. Use TEXTSCAN or FREAD for more complex formats.
What exactly is the difference between the two files?
How can I make importdata work with the file on the right?

The problem and solution are already mentioned in the comments:
It seems like one text file is in ASCII encoding (that is, 8 bits per char) while the second one is Unicode (16 bits per char). Try converting the big file to plain ASCII and re-reading it.
You can already see the difference in Beyond Compare: on the left you have an ANSI file, on the right a Unicode file with a BOM (the hex code looks like UTF-16LE). I'm not sure which version of Beyond Compare you used for the screenshot; mine would show 'UTF-16LE' rather than 'Unicode'.
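If you'd prefer to convert the file programmatically rather than in an editor, here is a minimal Java sketch (the filenames big.txt and big_ascii.txt are placeholders, and it assumes the file really is UTF-16LE as the hex dump suggests) that decodes the text, strips the BOM, and re-encodes it as plain ASCII so importdata can read the result:

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class Utf16ToAscii {
        public static void main(String[] args) throws Exception {
            // Decode the file as UTF-16LE (as the hex dump suggests).
            String text = new String(Files.readAllBytes(Paths.get("big.txt")),
                                     StandardCharsets.UTF_16LE);
            // Drop the byte-order mark, which UTF-16LE decoding leaves in place.
            if (!text.isEmpty() && text.charAt(0) == '\uFEFF') {
                text = text.substring(1);
            }
            // Re-encode as plain ASCII; any non-ASCII character becomes '?'.
            Files.write(Paths.get("big_ascii.txt"),
                        text.getBytes(StandardCharsets.US_ASCII));
        }
    }

Alternatively, MATLAB's own fopen accepts an encoding argument (e.g. fopen(name, 'r', 'n', 'UTF-16LE')), which may let you read the file with lower-level functions such as fread without converting it at all.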

Related

Compare filenames with different encoding in Octave

I'm trying to accomplish the following tasks in Octave:
Read filename from text file
Search for this file in particular location on hard drive
My script works for most files, but for certain files containing Unicode characters I'm unable to match the filename from the text file with the filename as it appears in the file system.
Filenames in the text file are UTF-8 encoded, and I read them in Octave with the function fgetl().
Filenames from the file system are obtained via the function readdir(). I'm on Windows, with an NTFS file system.
For example, one problematic filename contains character "Č".
When printed in the Octave console, the characters appear exactly the same. However, a hex viewer reveals that they are not: in the first case the character is encoded as 0x010C, in the second case as 0x0043 + 0x030C. Comparing the two via strcmp() fails, of course.
I tried omitting all non-ASCII characters from the filenames and comparing what was left, but this didn't work, probably because in the second variant the first part of the character (0x0043) is actually the ASCII letter "C".
Now I'm looking for some way of converting one format to another to be able to compare them. Any ideas?
EDIT:
As I discovered later, the character Č in the filename on Windows is actually written as C + ˇ, which is just another way to write that character. So the difference probably isn't in the encoding standard, but in two different ways of producing one visible character (glyph).
The question then basically becomes a task of matching characters written "at once" (precomposed) against the corresponding pair of base letter plus combining character.
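The standard remedy for exactly this mismatch is Unicode normalization: convert both strings to the same normalization form before comparing (NFC composes a base letter plus combining mark into the single precomposed code point). A minimal Java sketch of the idea:

    import java.text.Normalizer;

    public class NormalizeCompare {
        public static void main(String[] args) {
            String precomposed = "\u010C";        // "Č" as one code point
            String decomposed  = "C\u030C";       // "C" + combining caron
            // The raw strings differ, so a plain comparison fails...
            System.out.println(precomposed.equals(decomposed));  // false
            // ...but after normalizing both to NFC they compare equal.
            String a = Normalizer.normalize(precomposed, Normalizer.Form.NFC);
            String b = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
            System.out.println(a.equals(b));                     // true
        }
    }

Java's java.text.Normalizer is shown here because a Java-enabled Octave build can reach the same class through its Java interface, along the lines of javaMethod('normalize', 'java.text.Normalizer', name, javaMethod('valueOf', 'java.text.Normalizer$Form', 'NFC')); that Octave call is untested and assumes the Java package is available.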

Found some square boxes in an XLIFF file and not sure what they are?

I'm looking at an XLIFF file and found some weird boxes that I can't identify. (Please see the screenshot.)
Do you have any idea what these weird boxes are?
Thank you very much; I'm looking forward to your reply!
I have never seen that character, but here is how I would go about finding out what it is:
The first thing to do is to check the source and target language of the XLIFF file, which should be defined in the XLIFF header. Perhaps this character is a valid character in either the source or the target language script.
The next step depends on whether you can contact the person who created the XLIFF file. If yes, you can show them what the file looks like for you and ask them if the file has perhaps been garbled during transmission.
If not, you could check the encoding of the XLIFF file. If it is UTF-16, just open the file in a hex editor, find the code point for this character, and look it up on unicode.org. If the file is encoded as UTF-8, open it in Notepad++ (or any other text editor that allows you to change the encoding), convert it to UTF-16, then proceed as described above.
If you don't know the encoding of the file, it becomes a matter of guessing. You can look at some other <trans-unit> elements (assuming that there are more than this one in your XLIFF file): if they contain other extended characters and those are displayed correctly, your editor has probably guessed the right encoding, and you can convert to Unicode and look up the character code. Different text editors have different ways of guessing encodings: try a few.
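If you'd rather not eyeball a hex dump, a small program can print the code point of every character directly. A minimal Java sketch (the filename strings.xlf and the UTF-8 encoding are assumptions; substitute your file's actual name and encoding):

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class DumpCodePoints {
        public static void main(String[] args) throws Exception {
            // Hypothetical filename and encoding; adjust both for your file.
            String text = new String(Files.readAllBytes(Paths.get("strings.xlf")),
                                     StandardCharsets.UTF_8);
            // Print every code point as U+XXXX so it can be looked up on unicode.org.
            text.codePoints().forEach(cp ->
                System.out.printf("U+%04X %s%n", cp, new String(Character.toChars(cp))));
        }
    }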
It's possible that those characters are the result of an encoding conversion error, the kind commonly called mojibake.
It's also possible that this is some sort of emoji or unusual glyph that isn't rendering correctly in your editor. That would be unusual, but given that it appears to be a UI string, it is possible.

read() function for FileReader returns 8212

I'm trying to read the binary content of a text file (which contains the compressed version of a different text file). The first two characters (01111011 and 00100110) are correct, going by the values that were originally put there during the compression process.
However, when it gets to the third character, it should be reading 10010111 (again, going by what was written during the compression process), but instead it reads 10000000010100 (i.e. 8212). Does anyone know what is causing this discrepancy, or how to fix it? Thanks!
Java's FileReader should not be used to read binary data from files, since it reads a character at a time using the platform's default encoding, which is most likely not appropriate for binary reading. That is most likely what happened here: in the common Windows-1252 default encoding, the byte 10010111 (0x97) decodes to the em-dash character U+2014, which is 8212 in decimal.
Instead, use FileInputStream, which has read methods that return the actual raw bytes with no character decoding applied.
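A minimal sketch of the byte-level read (compressed.bin is a placeholder filename):

    import java.io.FileInputStream;
    import java.io.IOException;

    public class RawBytes {
        public static void main(String[] args) throws IOException {
            // try-with-resources closes the stream automatically.
            try (FileInputStream in = new FileInputStream("compressed.bin")) {
                int b;
                // read() returns each raw byte as 0-255, or -1 at end of file;
                // no charset decoding is applied, so 0x97 stays 0x97.
                while ((b = in.read()) != -1) {
                    System.out.println(Integer.toBinaryString(b));
                }
            }
        }
    }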

Strange character rendered correctly in Notepad, but as a control character elsewhere

I have a .csv list of businesses. The file contains some strange characters. For example, in the field Stocktonon-Tees, the first hyphen, between Stockton and on, seems to be a character with the value 6 rather than a hyphen (value 45). Stack Overflow will probably sanitize this so you can't see it, so here is a pastebin:
http://pastebin.com/NuyyaQy9
Can anyone explain why this could be? Is it some encoding issue that I have missed? Or a corruption in the dataset?
Yes, it's almost certainly an encoding issue. A file just consists of binary data; it's how you interpret that binary data that matters. It sounds like Notepad is guessing at the originally intended encoding, but whatever else you're using isn't.
Unfortunately you haven't said anything about what software is trying to read the file or what wrote it in the first place, but you should look at what encoding Notepad thinks it is and work from there.
If it's your code that wrote the file out, and you get to decide the encoding, I'd recommend UTF-8 as a good general-purpose, platform-portable encoding.
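For example, if the file is produced from Java, the encoding can be pinned down explicitly rather than left to the platform default. A minimal sketch (the filename and rows are made up):

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Arrays;
    import java.util.List;

    public class WriteUtf8 {
        public static void main(String[] args) throws Exception {
            List<String> rows = Arrays.asList("name,town",
                                              "Acme Ltd,Stockton-on-Tees");
            // Name the charset explicitly instead of relying on the platform
            // default, so every consumer can decode the file the same way.
            Files.write(Paths.get("businesses.csv"), rows, StandardCharsets.UTF_8);
        }
    }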

Are there any special variations of uuencoding / uudecoding?

I have written a small program which can encode/decode text with uuencode/uudecode. The code is based on the algorithm described on Wikipedia. It works fine when I encode/decode a string, but I have found a uuencoded file which I can't decode. This website can decode the file, but when I encode the decoded output again I don't get the same file. In addition, when I decode only one line of the file I don't get readable text (neither with my program nor with the decoder linked before), yet in uuencoding all lines are independent of each other, so this should be possible.
Does anyone know whether there are special variations of uuencoding that are not described on Wikipedia? I can decode some strings, so my decoder can't be totally wrong. Perhaps someone has written their own decoder, so I am posting the whole file:
begin 666 Restricted.zip
M4$L#!!0````(`%T[="_]<LYX`P(``'0#```.````4F5S=')I8W1E9"YT>'1M
M4\MNVT`0NQOP/TSNM#PT0!/X4N16`RE0%.GC.I9&TE;2CKH/J_K[<E;IX]"+
M'UJ20W)6^]U3)SX=]KO][D*]SD(7XHD2CX/S'26EU`L%U_6)9#E1?46NQ4,7
MR?E6P\3)J:=%#ABZY7'$P2MO"0J1GGT3Z;B1YJ#?I4ZT:!X;N#KI34)%3Y%6
MS8#>A#I-&[;E`-H%'(EY#G[/(-I',=GI;XN"H49?''YXT#LE]BNU.<!&,*(W
M0&4Y7V#,F_&11NV<-TNU-!D!>HZP5"MF91^YE0-D&H2C5CAL\T&P:#/'A*<+
M#F6(!IEXW?Q?13Q=#P[XLBHJ>L[UX,;U8+`"X3I)0S^RJX=Q+3-28)##+IK:
MEAD#AQRM7DY)ICG%BK[:(,\=L$C>20*EUCR/8BP'&'H+.OT5:+`V>,*NK$%9
MZ<;>Q1X"1WJOBZ#_8HQ+`3?K%(U<1U-:7.HI6A]_+/V[\RU,J]DW!SMV#<37
M89W+>5QCL6/"MDHTQPV&UT5-<R!=?%D)MG^AR&Y3^>]::JP0H2MZ4>3UR?F,
M[>18,L'"..I2K'.,BP8TF<K)YT_/IG1S#<#VZ^,KX$QO'[\\WC_<W;V[?_-P
MW>^`/%.?TGP^G99EJ29MCC^K6JL\G%H78CJQC[CGU=S/V_M2KEN<A0?;A5U`
M[AC.U2*6OUOE0<KD#Q#\MM_]`E!+`0(4`!0````(`%T[="_]<LYX`P(``'0#
M```.``````````$`(`"V#0````!297-T<FEC=&5D+G1X=%!+!08``````0`!
+`#P````O`#``````
`
end
I found the solution! The problem was that I did not pay attention to the first line. This line holds information about the encoded data: it is a file named Restricted.zip. So the decoded data is a ZIP file, which I just had to unpack. That also explains why decoding a single line never produced readable text: the decoded bytes are compressed ZIP data, not plain text.
I got a text file named Restricted.txt which contains the readable data.
The problem was so simple, but it took me days to see the solution.
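For reference, the per-line independence the question relies on does hold for standard uuencoding: each body line starts with a length character, and then every group of four characters carries three bytes, each character holding six bits (its value minus 32). A minimal Java sketch of the standard per-line decode (decodeLine is just an illustrative helper), using Wikipedia's classic "Cat" example:

    public class UudecodeLine {
        // Decode one line of standard uuencoded data into raw bytes
        // (assumes a well-formed line, padding characters included).
        static byte[] decodeLine(String line) {
            int n = (line.charAt(0) - 32) & 0x3F;   // first char = byte count
            byte[] out = new byte[n];
            int outPos = 0;
            for (int i = 1; outPos < n; i += 4) {
                // Each group of four characters carries 24 bits (4 x 6 bits);
                // masking with 0x3F also maps the '`' used for zero correctly.
                int bits = 0;
                for (int j = 0; j < 4; j++) {
                    bits = (bits << 6) | ((line.charAt(i + j) - 32) & 0x3F);
                }
                // Unpack up to three bytes from the 24-bit group.
                for (int k = 16; k >= 0 && outPos < n; k -= 8) {
                    out[outPos++] = (byte) ((bits >> k) & 0xFF);
                }
            }
            return out;
        }

        public static void main(String[] args) {
            // Wikipedia's classic example: "#0V%T" decodes to "Cat".
            System.out.println(new String(decodeLine("#0V%T")));
        }
    }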
That's a good segue into packing algorithms; perhaps the next thing I'll do is write my own program that can pack and unpack ZIP files.