Remove 2-byte white space in Perl - perl

I have a text document converted from pdf that contains white space I am not able to match and replace. I managed to print its ord() value and got 194, and length() on the character returned 2 (thus I assume it's 2 bytes). How can I remove this character in Perl? Thanks.

The first character is 194 in decimal, which is C2 in hex: Â.
Seeing as Â is not whitespace, and seeing that the byte C2 commonly starts a UTF-8 multi-byte sequence, it appears that you forgot to decode the text. That's the first thing you need to do.
Then, you'll probably find that you have U+00A0 NO BREAK SPACE. You can remove it with
s/\xA0//
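Putting both steps together, here's a minimal, self-contained sketch (the sample string and variable names are illustrative):

```perl
use strict;
use warnings;
use Encode qw(decode encode);

# Raw bytes as read from the file: "\xC2\xA0" is the UTF-8
# encoding of U+00A0 NO-BREAK SPACE.
my $bytes = "foo\xC2\xA0bar";

# Step 1: decode the bytes into a Perl character string.
my $text = decode('UTF-8', $bytes);

# Step 2: now the no-break space is a single character,
# so a simple substitution removes every occurrence.
$text =~ s/\x{A0}//g;

print encode('UTF-8', $text), "\n";   # prints "foobar"
```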

Related

Perl regex presumably removing non ASCII characters

I found a code with regex where it is claimed that it strips the text of any non-ASCII characters.
The code is written in Perl and the part of code that does it is:
$sentence =~ tr/\000-\011\013-\014\016-\037\041-\055\173-\377//d;
I want to understand how this expression works, and in order to do this I have used regexr. I found out that \000, \011, \013, \014, \016, \037, \041, \055, \173, \377 each denote a single character, such as NUL, TAB, VERTICAL TAB ... But I still do not get why the "-" symbols are used. Do they really mean a literal dash, as shown in regexr, or something else? Is this really suited for deleting non-ASCII characters?
This isn't really a regex. The dash indicates a character range, like inside a regex character class [a-z].
The expression deletes some ASCII characters, too (mainly whitespace) and spares a range of characters which are not ASCII; the full ASCII range would simply be \000-\177.
To be explicit, the d flag says to delete any character that appears in the search list (between the first pair of slashes) and has no corresponding replacement; since the replacement list here is empty, every matched character is deleted. See further the tr/// documentation in perlop.
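A short sketch of the two behaviours discussed above (the sample strings are made up for illustration):

```perl
use strict;
use warnings;

# With the d flag and an empty replacement list, tr/// deletes
# every character that appears in the search list.
my $s = "keep!these?chars";
$s =~ tr/!?//d;
print "$s\n";                 # "keepthesechars"

# To really delete non-ASCII characters, complement (c) the full
# ASCII range \000-\177 and delete (d) whatever matches.
my $t = "caf\x{E9} au lait";
$t =~ tr/\000-\177//cd;
print "$t\n";                 # "caf au lait"
```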

Are there any character sets that don't respect ASCII?

As far as I understand, a character encoding maps bits to integers and a character set maps integers to characters.
So in the Unicode character set there is a telephone character. It is represented using the integer 9742, more commonly represented using Hexadecimal as 260E. This is then saved to a file using UTF-8 which translates the integer 9742 into 10011000001110. Please correct me if I am wrong.
Yesterday I created a text file that used the Unicode character set and UTF-8 encoding and I saved it to my desktop. I then reopened the file in my text editor and started to manually switch the character sets for fun. Unsurprisingly there were problems, and odd characters started displaying! I noticed that only some of the characters are misrepresented, though. This got me thinking: why do only some of the characters break? Why not all?
Someone told me that the characters breaking are those outside the original ASCII specification. Upon reflection this seemed to make sense, as it's only non US characters that break. I was told that because all character sets use the ASCII character set up to the first 128 characters they will remain unbroken, and that it's the characters above 127 that break. Please correct me if I am wrong.
Finally, I got thinking. Are there any character sets that don't respect ASCII? If so, what are they called and what are they used for?
Based on my findings from the comments I am able to answer my own question. Thank you to everyone who commented!
Yes, there are a couple: EBCDIC and Baudot.

understanding file encodings

In Eclipse, I have a file where in one place this is written:
onclick='obj1.help_open_new_window(fn1(), "/redir/url_name")'
and in Eclipse's Edit menu -> Set Encoding dialog, I see this:
Now I change the encoding to UTF-8 using the same dialog box and the text changes to:
onclick='obj1.help_open_new_window(fn1(),�"/redir/url_name")'
All I know is if this was not happening, then my website would be working fine. Why is this happening and what do I do to prevent this?
I do have some knowledge about encodings: Â and nbsp mystery explained The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) but still I do not understand why this is happening. Feel free to go to byte level(how file is stored) just to explain it.
UPDATE: Here's what I understand: if the file is encoded in latin-1, then every character is one byte, and so is this one; it should be hex(32). Now when I convert it to UTF-8, it still remains hex(32), and that is definitely the same character. This leads me to believe that in latin-1 this character is not hex(32) but a combination of two bytes. How is that possible?
The character you have between the comma and the quote appears to be not a normal space but some other whitespace character, probably the famous U+00A0 NO-BREAK SPACE. Since the file is encoded in latin1, the character is stored on disk as the single byte \xA0, which does not form a valid character in UTF-8. This means that if you reload the file in your editor as UTF-8, you will see the universal replacement character � in its stead. (The proper UTF-8 encoding of a no-break space is the two bytes \xC2\xA0.)
To get rid of the problem replace the no-break space with a normal space (U+0020). There is no reason why you should use a no-break space in this context, i.e. in program text.
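The byte-level story above can be reproduced with the core Encode module; a sketch, with an illustrative sample string:

```perl
use strict;
use warnings;
use Encode qw(decode encode);

# In latin1 the no-break space is the single byte \xA0.
my $latin1_bytes = ",\xA0\"";

# Read those bytes as UTF-8: the lone \xA0 is not a valid UTF-8
# sequence, so Encode substitutes U+FFFD, shown as � in editors.
my $misread = decode('UTF-8', $latin1_bytes);
printf "U+%04X\n", ord(substr $misread, 1, 1);   # U+FFFD

# Decode with the correct source encoding instead, then re-encode:
# the no-break space becomes the two bytes \xC2\xA0 in UTF-8.
my $text       = decode('ISO-8859-1', $latin1_bytes);
my $utf8_bytes = encode('UTF-8', $text);
printf "%vX\n", $utf8_bytes;                     # 2C.C2.A0.22
```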

Why is this A0 character appearing in my HTML::Element output?

I'm parsing an HTML document with a couple of Perl modules: HTML::TreeBuilder and HTML::Element. For some reason, whenever the content of a tag is just &nbsp;, which is to be expected, it gets returned by HTML::Element as a strange character I've never seen before:
[screenshot: http://www.freeimagehosting.net/uploads/2acca201ab.jpg]
I can't copy the character so can't Google it, couldn't find it in character map, and strangely when I search with a regular expression, \w finds it. When I convert the returned document to ANSI or UTF-8 it disappears altogether. I couldn't find any info on it in the HTML::Element documentation either.
How can I detect and replace this character with something more useful like null and how should I deal with strange characters like this in the future?
The character is "\xa0" (i.e. 160), which is the standard Unicode translation for &nbsp;. (That is, it's Unicode's non-breaking space.) You should be able to replace them with ordinary spaces using s/\xa0/ /g if you like.
The character is a non-breaking space, which is what &nbsp; stands for:
In word processing and digital typesetting, a non-breaking space (" ") (also called no-break space, non-breakable space (NBSP), hard space, or fixed space) is a space character that prevents an automatic line break at its position. In some formats, including HTML, it also prevents consecutive whitespace characters from collapsing into a single space.
In HTML, the common non-breaking space, which is the same width as the ordinary space character, is encoded as &nbsp; or &#160;. In Unicode, it is encoded as U+00A0.
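The equivalences in the quoted text are easy to verify; a small sketch, assuming HTML::Entities is available (it ships alongside HTML::TreeBuilder's prerequisites):

```perl
use strict;
use warnings;
use HTML::Entities qw(decode_entities);

# &nbsp; and &#160; both decode to the single character U+00A0.
my $a = decode_entities('&nbsp;');
my $b = decode_entities('&#160;');
printf "U+%04X U+%04X\n", ord($a), ord($b);   # U+00A0 U+00A0

# Replacing no-break spaces with ordinary spaces after parsing:
my $text = "foo\x{A0}bar";
$text =~ s/\x{A0}/ /g;
print "$text\n";                              # "foo bar"
```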

How do I replace literal \xNN with their character in Perl?

I have a Perl script that takes text values from a MySQL table and writes it to a text file. The problem is, when I open the text file for viewing I am getting a lot of hex characters like \x92 and \x93 which stands for single and double quotes, I guess.
I am using DBI->quote function to escape the special chars before writing the values to the text file. I have tried using Encode::Encoder, but with no luck. The character set on both the tables is latin1.
How do I get rid of those hex characters and get the character to show in the text file?
ISO Latin-1 does not define characters in the range 0x80 to 0x9f, so displaying these bytes in hex is expected. Most likely your data is actually encoded in Windows-1252, which is the same as Latin1 except that it defines additional characters (including left/right quotes) in this range.
\x92 and \x93 are undefined in the latin1 character set. If you are certain that you are indeed dealing with latin1, you can simply delete them.
It sounds like you need to change the character sets on the tables, or translate the non-latin-1 characters into latin-1 equivalents. I'd prefer the first solution. Get used to Unicode; you're going to have to learn it at some point. :)
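The Windows-1252 reading suggested above can be checked with the core Encode module; a sketch with a made-up sample string:

```perl
use strict;
use warnings;
use Encode qw(decode encode);

# Latin-1 leaves \x80-\x9F undefined, but Windows-1252 maps
# \x92 to a right single quote and \x93/\x94 to curly double quotes.
my $bytes = "It\x92s \x93quoted\x94";

my $text = decode('cp1252', $bytes);
printf "U+%04X U+%04X\n",
    ord(substr $text, 2, 1),    # U+2019 RIGHT SINGLE QUOTATION MARK
    ord(substr $text, 5, 1);    # U+201C LEFT DOUBLE QUOTATION MARK

# Re-encode as UTF-8 so the quotes display correctly in a text file.
print encode('UTF-8', $text), "\n";
```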