Apologies if this is a dupe (I tried all manner of searches!). This is driving me nuts...
I need a quick fix to replace à with a space.
I've tried the following, with no success:
$str =~ s/Ã/ /g;
$str =~ s/\xC3/ /g;
What am I doing wrong here?
The statement "replace à with a space" is meaningless on its own, because it does not specify which encoding is used for the character in question.
The character could be encoded as UTF-8, for example, or as one of several ISO-8859 encodings, or maybe even UTF-16 or UTF-32.
So, for starters, you need to specify, at least, which encoding you are using. And after that, it's also necessary to specify where the input or the output is coming from.
Assuming:
1) You are using UTF-8 encoding
2) You are reading/writing STDIN and STDOUT
Then here's a short example of a filter that shows how to replace this character with a space. Assuming, of course, that the Perl script itself is also encoded in UTF-8.
use utf8;
use feature 'unicode_strings';
binmode(STDIN, ":utf8");
binmode(STDOUT, ":utf8");
while (<STDIN>)
{
    s/Ã/ /g;
    print;
}
You need to specify that you want UNICODE and not Latin-1 (or another encoding).
If you're reading from a file then:
#!/usr/bin/perl
open INFILE, '<:encoding(UTF-8)', '/mypath/file';
while (<INFILE>) {
    s/\xc3/ /g;
    print;
}
I'll break that down better for you:
In <:encoding(UTF-8) you are specifying that you want to read (the <), and that the data should be decoded as UTF-8 (the :encoding(UTF-8) part).
If you weren't using UTF-8 you would use:
open INFILE, '<', '/mypath/file';
or
open INFILE, '/mypath/file';
because by default perl opens for reading. If you want to write, you use >:encoding(UTF-8), and if you want to append (because > overwrites the file), you use >>:encoding(UTF-8).
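For example, here's a rough sketch of writing and then appending with the same layer (lexical filehandles, and the path is just a placeholder):
# '>' truncates/creates the file; characters are encoded to UTF-8 bytes on the way out
open my $out, '>:encoding(UTF-8)', '/mypath/file' or die "Can't write: $!";
print {$out} "first line\n";
close $out;
# '>>' appends instead of overwriting, with the same encoding layer
open my $app, '>>:encoding(UTF-8)', '/mypath/file' or die "Can't append: $!";
print {$app} "another line\n";
close $app;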
Hope it helped!
There is another answer that specifies how to do binmode(STDIN, ":utf8") if you're trying to read Unicode from STDIN.
Following this, for the simple "quick fix" Wonko was looking for:
tr/ -~//cd;
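That tr deletes (the d) every character that is not (the c) in the printable ASCII range from space to ~. A tiny sketch of the effect:
my $str = "fa\x{E7}ade";            # "façade": \x{E7} is ç, a non-ASCII character
(my $clean = $str) =~ tr/ -~//cd;   # delete (d) everything not (c) in the range space..~
print "$clean\n";                   # prints "faade"
Note that this deletes the offending characters rather than replacing them with a space.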
I have a script for reading HTML files in Perl. It works, but it breaks the encoding.
This is my script:
use utf8;
use Data::Dumper;
open my $fr, '<', 'file.html' or die "Can't open file $!";
my $content_from_file = do { local $/; <$fr> };
print Dumper($content_from_file);
Content of file.html:
<span class="previews-counter">Počet hodnotení: [%product.rating_votes%]</span>
[%L10n.msg('Zobraziť recenzie')%]
Output from reading:
<span class=\"previews-counter\">Po\x{10d}et hodnoten\x{ed}: [%product.rating_votes%]</span>
[%L10n.msg('Zobrazi\x{165} recenzie')%]
As you can see, a lot of characters are escaped. How can I read this file and show its content as it is?
You open the file with perl's default encoding:
open my $fh, '<', ...;
If that encoding doesn't match the actual encoding, Perl might translate some characters incorrectly. If you know the encoding, specify it in the open mode:
open my $fh, '<:utf8', ...;
You aren't done yet, though. Now that you have a probably decoded string, you want to output it. You have the same problem again. The standard output file handle's encoding has to match what you are trying to print to. If you've set up your terminal (or whatever) to expect UTF-8, you need to actually output UTF-8. One way to fix that is to make the standard filehandles use UTF-8:
use open qw(:std :utf8);
You have use utf8, but that only signals the encoding for your program file.
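Putting both pieces together, a sketch of the reading script with the layers set (printing the content directly):
use strict;
use warnings;
use open qw(:std :utf8);   # STDOUT (and STDIN/STDERR) will encode/decode UTF-8

open my $fr, '<:utf8', 'file.html' or die "Can't open file: $!";
my $content_from_file = do { local $/; <$fr> };
print $content_from_file;  # the decoded text, encoded back to UTF-8 on output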
I've written a much longer primer for Perl and Unicode in the back of Learning Perl. The StackOverflow question Why does modern Perl avoid UTF-8 by default? has lots of good advice.
I need to sort lines from a file saved as UTF-8. These lines can start with Cyrillic or Latin characters. My code sorts the Cyrillic ones incorrectly.
sub sort_by_default {
    my @sorted_lines = sort {
        $a <=> $b
            ||
        fc($a) cmp fc($b)
    } @_;
}
The cmp operator used with sort can't help with this; it has no notion of encodings or locales and merely compares by codepoint, character by character, with surprises in many languages. Use Unicode::Collate.† See this post for a bit more, and for far more see this post by tchrist and this perl.com article.
The other issue is reading (decoding) input and writing (encoding) output in UTF-8 correctly. One way to ensure that data on the standard streams is handled correctly is via the open pragma, with which you can set "layers" so that input and output are decoded/encoded as data is read/written.
Altogether, an example:
use warnings;
use strict;
use feature 'say';
use Unicode::Collate;
use open ":std", ":encoding(UTF-8)";
my $file = ...;
open my $fh, '<', $file or die "Can't open $file: $!";
my @lines = <$fh>;
chomp @lines;
my $uc = Unicode::Collate->new();
my @sorted = $uc->sort(@lines);
say for @sorted;
The module's cmp method can be used for individual comparisons (if data
is in a complex data structure and not just a flat list of lines, for instance)
my @sorted = sort { $uc->cmp($a, $b) } @data;
where the sort block may need to extract from $a and $b whatever parts of @data's elements should actually be compared.
If you have UTF-8 data right in the source you need use utf8, while if you receive UTF-8 via yet other channels (including from @ARGV) you may need to manually Encode::decode those strings.
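For example, a sketch of decoding the command-line arguments by hand, assuming the terminal hands them to you as UTF-8 bytes:
use Encode qw(decode);

# @ARGV arrives as raw bytes; decode it if the terminal sends UTF-8
my @args = map { decode('UTF-8', $_) } @ARGV;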
Please see the linked post (and links in it) and documentation for more detail. See this perlmonks post for far more rounded information. See this Effective Perler article on custom sorting.
† Example: by codepoint comparison ä > b while the accepted order in German is ä < b
perl -MUnicode::Collate -wE'use utf8; binmode STDOUT, ":encoding(UTF-8)";
    @s = qw(ä b);
    say join " ", sort { $a cmp $b } @s;            #--> b ä
    say join " ", Unicode::Collate->new->sort(@s);  #--> ä b
'
so we need to use Unicode::Collate (or a custom sort routine).
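If you go the custom-sort route and compare the same strings many times, one option (a sketch using the module's getSortKey method) is to precompute the sort keys once:
use strict;
use warnings;
use utf8;
use Unicode::Collate;

my $uc    = Unicode::Collate->new();
my @words = ('ä', 'b', 'a');   # assumes already-decoded strings

# getSortKey() returns a binary key; comparing keys with cmp gives the same
# order as $uc->cmp(), but each key is computed only once.
my @sorted = map  { $_->[1] }
             sort { $a->[0] cmp $b->[0] }
             map  { [ $uc->getSortKey($_), $_ ] } @words;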
To open a file saved as UTF-8, use the appropriate layer:
open my $FH, '<:encoding(UTF-8)', 'filename' or die $!;
Don't forget to set the same layer for the output.
#! /usr/bin/perl
use warnings;
use strict;
binmode *DATA, ':encoding(UTF-8)';
binmode *STDOUT, ':encoding(UTF-8)';
print for sort <DATA>;
__DATA__
Борис
Peter
John
Владимир
The key to handling UTF-8 correctly in Perl is to make sure that Perl knows that a certain source or destination of information is in UTF-8. This is done differently depending on the way you get info in or out. If the UTF-8 is coming from an input file, the way to open the file is:
open( my $fh, '<:encoding(UTF-8)', "filename" ) or die "Cannot open file: $!\n";
If you are going to have UTF-8 inside the source of your script, then make sure you have:
use utf8;
at the beginning of the script.
If you are going to get UTF-8 characters from STDIN, use this at the beginning of the script:
binmode(STDIN, ':encoding(UTF-8)');
For STDOUT use:
binmode(STDOUT, ':encoding(UTF-8)');
Also, make sure you read UTF-8 vs. utf8 vs. UTF8 to know the difference between the encoding names. utf8 or UTF8 will allow valid UTF-8 as well as non-valid UTF-8 (according to the first proposed UTF-8 standard) and will not complain about invalid codepoints. UTF-8 will allow valid UTF-8 but not invalid codepoint combinations; it is a short name for utf-8-strict. You may also want to read the question How do I sanitize invalid UTF-8 in Perl?
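To illustrate the difference, here's a sketch (exact behaviour may vary a little between Encode versions; the byte sequence encodes a code point just past the Unicode maximum, which strict UTF-8 rejects):
use Encode qw(decode);

# Bytes that would encode code point 0x110000, one past the Unicode maximum.
my $bytes = "\xF4\x90\x80\x80";

my $lax    = decode('utf8', $bytes);                              # lax: accepted
my $strict = eval { decode('UTF-8', $bytes, Encode::FB_CROAK) };  # strict: croaks

printf "lax decoded to U+%X\n", ord $lax;
print defined $strict ? "strict accepted it\n" : "strict UTF-8 rejected it\n";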
Finally, following @zdim's advice, you may use this at the beginning of the script:
use open ':encoding(UTF-8)';
And other variants as described here. That will set the encoding layer for all open instructions that do not specify a layer explicitly.
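Putting the pieces together, a typical header for a script that reads and writes UTF-8 everywhere might look like this sketch (the ':std' variant also covers the standard handles; the filename is just a placeholder):
use strict;
use warnings;
use utf8;                              # only if the source file itself contains UTF-8 text
use open ':std', ':encoding(UTF-8)';   # standard handles, plus the default layer for open

# handles opened without an explicit layer now default to UTF-8
open my $in, '<', 'input.txt' or die "Cannot open file: $!\n";
print while <$in>;                     # decoded on input, encoded again on output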
How can this character in the original file:
" – "
end up being displayed like this when my perl script prints it in the output console of np++?
" ÔÇô "
The original encoding is UTF-8 (according to np++) and to open and read in the file I use this line:
open(DATA, '<:encoding(utf-8)', "C:\\test.csv") or die "Can't open data";
@lines = <DATA>;
If I iterate over the lines with:
foreach (@lines) {
    print $_;
}
the character is represented as mentioned above. I display the output in the Notepad++ console, not in a new file.
Before your print statement, try to add this:
binmode(STDOUT, ":utf8");
foreach (@lines) {
    print $_;
}
On Windows systems,
use Encode;
binmode(STDOUT, ':encoding(cp850)');
The code page number (850) on your system may be different; run this command in a DOS console to get yours:
C:\>chcp
That said, it might not work even if you do everything right, because the character in question, U+2013, is not part of the two most common console code pages, cp850 and cp437. It cannot be displayed in consoles using those encodings.
If that's the case, your best bet is switching the console's encoding to UTF-8 by entering chcp 65001 at the prompt. You'll also need to edit the console's properties to switch to an appropriate TrueType font (e.g. Lucida Console). After doing that, you can use :encoding(UTF-8).
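A sketch of what the Perl side can look like once the console is on code page 65001:
# Assumes the console has been switched with "chcp 65001" and uses a TrueType font.
binmode(STDOUT, ':encoding(UTF-8)');
print "a \x{2013} b\n";   # U+2013 EN DASH, now printable on a UTF-8 console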
I am scraping a German-language site and trying to store its content in a CSV using Perl, but I am getting garbage values in the CSV. The code I use is:
open my $fh, '>> :encoding(UTF-8)', 'output.csv';
print {$fh} qq|"$title"\n|;
close $fh;
For example: I expect Weiß, Römersandalen, but I get garbage like WeiÃŸ, RÃ¶mersandalen.
Update:
Code
use strict;
use warnings;
use utf8;
use WWW::Mechanize::Firefox;
use autodie qw(:all);
my $m = WWW::Mechanize::Firefox->new();
print "\n\n *******Program Begins********\n\n";
my $url = '...';    # actual URL omitted in the question
$m->get($url) or die "unable to get $url";
my $Home_Con=$m->content;
my $title='';
if ($Home_Con =~ m/<span id="btAsinTitle">([^<]*?)<\/span>/is) {
    $title = $1;
    print "title ::$1\n";
}
open my $fh, '>> :encoding(UTF-8)', 's.txt'; #<= (Weiß)
print {$fh} qq|"$title"\n|;
close $fh;
open $fh, '>> :encoding(UTF-8)', 's1.csv'; #<= (Weiß)
print {$fh} qq|"$title"\n|;
close $fh;
print "\n\n *******Program ends********";
<>;
This is the relevant part of the code. The method works fine for text files, but not for the CSV.
You've shown us the code where you're encoding the data correctly as you write it to the file.
What we also need to see is how the data gets into your program. Are you decoding it correctly at that point?
Update:
If the code was really just my $title='Weiß ,Römersandalen' as you say in the comments, then the solution would be as simple as adding use utf8 to your code.
The point is that Perl needs to know how to interpret the stream of bytes that it's dealing with. Outside your program, data exists as bytes in various encodings. You need to decode that data as it enters your program (decoding turns a stream of bytes into a string of characters) and encode it again as it leaves your program. You're doing the encoding step correctly, but not the decoding step.
The reason that use utf8 fixes that in the simple example you've given is that use utf8 tells Perl that your source code should be interpreted as a stream of bytes encoded as utf8. It then converts that stream of bytes into a string of characters containing the correct characters for 'Weiß ,Römersandalen'. It can then successfully encode those characters into bytes representing those characters encoded as utf8 as they are written to the file.
Your data is actually coming from a web page. I assume you're using LWP::Simple or something like that. That data might be encoded as utf8 (I doubt it, given the problems you're having) but it might also be encoded as ISO-8859-1 or ISO-8859-9 or CP1252 or any number of other encodings. Unless you know what the encoding is and correctly decode the incoming data, you will see the results that you are getting.
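As an illustration only (this is not your WWW::Mechanize::Firefox setup): with plain LWP you can either let the response decode itself from the charset the server declares, or decode the raw bytes yourself once you know the real encoding. The URL below is a placeholder.
use LWP::UserAgent;
use Encode qw(decode);

my $ua  = LWP::UserAgent->new;
my $res = $ua->get('http://example.com/product');   # placeholder URL

# Option 1: let HTTP::Response pick the charset the server declared
my $text = $res->decoded_content;

# Option 2: decode the raw bytes yourself once you know the page's encoding
my $raw     = $res->content;                 # bytes as received
my $decoded = decode('ISO-8859-1', $raw);    # or 'UTF-8', 'cp1252', ...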
Check whether there are any weird characters at the start of the file, or anywhere else in it, using commands like head or tail.
I'm new to Perl.
I was getting errors from my print statement: "Wide character in print".
Adding this line of code made it work: binmode(STDOUT, ":utf8");
I read the docs; simply put, binmode makes Perl encode characters in a manner that the platform can understand.
Without it, the platform may be expecting the characters to mean something else because it is using a different encoding.
Or is my understanding of binmode off?
Is there a way with Perl to find out what encoding the platform is using?
use open ':std', ':locale';
can help. Doesn't work on all systems, though.
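To the last question: on POSIX-ish systems you can ask the locale which encoding it uses; a sketch (I18N::Langinfo is core but not available on every platform):
use POSIX qw(setlocale LC_CTYPE);
use I18N::Langinfo qw(langinfo CODESET);

setlocale(LC_CTYPE, '');           # adopt the locale from the environment
my $codeset = langinfo(CODESET);   # e.g. "UTF-8" or "ANSI_X3.4-1968"
print "Locale encoding: $codeset\n";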