When I do
u','.join([u'\u4e8c\u6797', unicode(10)])
it fails with UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128). Both items in the list are of type unicode.
Why is it trying to encode in ASCII? I don't want it to convert to an ASCII string. How do I avoid it?
Not really enough information, since it works just fine as displayed:
>>> u','.join([u'\u4e8c\u6797', unicode(10)])
u'\u4e8c\u6797,10'
But my crystal ball says you are trying to print it in some manner:
>>> print u','.join([u'\u4e8c\u6797', unicode(10)])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\dev\Python27\lib\encodings\cp437.py", line 12, in encode
    return codecs.charmap_encode(input,errors,encoding_map)
UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-1: character maps to <undefined>
The reason, at least on my system, is that the terminal uses the cp437 encoding, which can't represent those Unicode characters.
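You can see which encoding print will use by checking sys.stdout.encoding; on my system it matches the cp437 in the traceback above:
>>> import sys
>>> sys.stdout.encoding
'cp437'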
Or more likely writing to a file or pipe:
>>> with open('out.txt','w') as f:
...     f.write(u','.join([u'\u4e8c\u6797', unicode(10)]))
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128)
Files have to be told the encoding, such as UTF-8, or writes of unicode strings fall back to the default ASCII codec. Either encode explicitly before writing, or open the file with an encoding via codecs.open:
>>> import codecs
>>> with codecs.open('out.txt', 'w', encoding='utf8') as f:
...     f.write(u','.join([u'\u4e8c\u6797', unicode(10)]))
...
>>> # SUCCESS!
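The same applies when reading the data back: open the file with the encoding and you get unicode out again:
>>> with codecs.open('out.txt', 'r', encoding='utf8') as f:
...     f.read()
...
u'\u4e8c\u6797,10'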
I have a file with this unicode character ỗ
The file was saved in Notepad as UTF-8.
I tried this line
C:\blah>perl -wln -e "/\x{1ed7}/ and print;" blah.txt
But it's not picking it up. If the file has a letter like 'a' (Unicode hex 61), then \x{61} picks it up. But for a four-digit Unicode character, I have an issue picking it up.
You had the right idea with using /\x{1ed7}/. The problem is that your regex wants to match characters but you're giving it bytes. You'll need to tell Perl to decode the bytes from UTF-8 when it reads them and then encode them to UTF-8 when it writes them:
perl -CiO -ne "/\x{1ed7}/ and print" blah.txt
The -C option controls how Unicode semantics are applied to input and output filehandles. So for example -CO (capital 'o' for 'output') is equivalent to adding this before the start of your script:
binmode(STDOUT, ":utf8")
Similarly, -CI is equivalent to:
binmode(STDIN, ":utf8")
But in your case, you're not using STDIN. Instead, the -n wraps a loop around your code that opens each file listed on the command line. So you can instead use -Ci to add the ':utf8' I/O layer to each file Perl opens for input. You can combine -Ci and -CO as -CiO.
Your script works fine. The problem is the form of the Unicode you're searching for. Since your file is UTF-8, the byte sequence you need to search for is 0xE1 0xBB 0x97. Note below how the encoding changes the search criteria:
UTF-8 Encoding: 0xE1 0xBB 0x97
UTF-16 Encoding: 0x1ED7
UTF-32 Encoding: 0x00001ED7
Resource: https://www.compart.com/en/unicode/U+1ED7
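You can confirm those byte sequences with a quick check in, say, Python:
>>> u'\u1ed7'.encode('utf-8')      # the bytes a UTF-8 search must look for
'\xe1\xbb\x97'
>>> u'\u1ed7'.encode('utf-16-be')  # the single UTF-16 code unit 0x1ED7
'\x1e\xd7'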
I am importing a CSV file into a PostgreSQL table through the PostgreSQL import functionality, but I get this error:
character with byte sequence 0xe2 0x80 0xa6 in encoding utf8 has no equivalent in encoding latin1
Please help me out with this...
You are trying to import a UTF-8 file that contains the character … (“horizontal ellipsis”, Unicode code point U+2026).
This character cannot be encoded in LATIN1, so you will not be able to do that.
Either use a database with encoding UTF8 or edit the import file to remove the character.
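To illustrate, a quick check in Python 2 (the UTF-8 bytes are exactly the ones from the error message):
>>> u'\u2026'.encode('utf8')
'\xe2\x80\xa6'
>>> u'\u2026'.encode('latin1')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'latin-1' codec can't encode character u'\u2026' in position 0: ordinal not in range(256)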
I am using a Perl tokenizer for German. The tokenizer works fine for some files, but now I am facing the following error:
perl tokenizer.perl -l de < ~/Desktop/me.txt > ~/Desktop/me.txt.tok
Tokenizer v3
Language: de
utf8 "\xFF" does not map to Unicode at tokenizer.perl line 44, <STDIN> line 1.
Malformed UTF-8 character (byte 0xff) in pattern match (m//) at tokenizer.perl line 45, <STDIN> line 1.
Malformed UTF-8 character (byte 0xff) in pattern match (m//) at tokenizer.perl line 45, <STDIN> line 1.
Malformed UTF-8 character (fatal) at tokenizer.perl line 64, <STDIN> line 1.
Any thoughts?
Thanks in advance.
Neg.
The error message is misleading, but the intended information is correct and useful: the byte FF (hexadecimal) was encountered in the data, but it cannot appear in UTF-8 data. So “utf8 "\xFF"” is nonsense as such, but read it as “byte FF encountered as data purported to be UTF-8 encoded”. Similarly, read “Malformed UTF-8 character (byte 0xff)” as “Invalid data (byte FF) encountered in purported UTF-8 data”.
To find out why your data contains the byte FF, you need to reveal more of it. My guess is that it is actually part of a byte order mark in UTF-16 encoding, but this is just a guess.
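A quick way to check is to dump the first few bytes of the file; for example, a sketch in Python (the filename is taken from the command line above):
# Inspect the raw bytes at the start of the file; a leading '\xff\xfe'
# would be the UTF-16LE byte order mark.
with open('me.txt', 'rb') as f:
    head = f.read(4)
print repr(head)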
I need to create and export an Excel file in my iPhone app. Unfortunately, Excel won't read it if the line endings are LF (the Unix default when I write the file) instead of CRLF (the Windows standard)... Is there any way to write a file using CRLF line breaks?
I can tell this is the issue because if I open the file in TextWrangler after outputting it and change the line breaks to CRLF, Excel opens it fine.
Thanks,
Toby
If you're using printf or fprintf in C, you typically terminate lines like this:
printf( "this is a line of text.\n" );
The \n outputs a line feed. You can output a carriage return with \r, so to get CRLF you write:
printf( "this is a CRLF terminated line.\r\n" );
I am using Perl to read UTF-16LE files in Windows 7.
If I read in an ASCII file with the following code, then each "\r\n" in the file is converted into "\n" in memory:
open CUR_FILE, "<", $asciiFile;
However, if I read in a UTF-16LE (Windows code page 1200) file with the following code:
open CUR_FILE, "<:encoding(UTF-16LE)", $utf16leFile;
then "\r\n" is left unchanged. This inconsistency causes problems when I try to match lines with line breaks using a regexp.
Update:
For each line of a UTF-16LE file:
$line =~ /(.*)$/
Then the string captured in $1 will include a "\r" at the end...
What version of Perl are you using? UTF-16 and CRLF handling did not mix properly before 5.8.9 (Unicode changes in 5.8.9). I'm not sure about 5.10.0, but it works in 5.10.1 and 5.8.9. You might need to use "<:encoding(UTF-16LE):crlf" when opening the file.
That is Windows performing that magic for you. If you specify a UTF encoding layer, it is the equivalent of opening the file in binary mode rather than text mode.
Newer versions of Perl (5.10 and later) have \R, which is a generic newline (i.e., it matches both \r\n and \n), as well as \v, which matches the OS and Unicode notions of vertical whitespace (e.g., \r, \n, vertical tab, form feed, and the Unicode line and paragraph separators).
Does your regex logic allow using \R instead of \n?