How to convert special ASCII characters to hex in Perl

I have written a serial port program in Perl. Reading the output on STDOUT (the screen), I see special ASCII characters instead of readable data: a black smiley and a white heart. How do I convert them back to hex format?

See perldoc -f ord.
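For example, ord gives you a character's numeric code and sprintf (or unpack) turns it into hex. A minimal sketch, with $data standing in for the bytes read from the serial port:

use strict;
use warnings;

my $data = "\x01\x03";               # stand-in for bytes read from the serial port

# ord gives each character's numeric code, sprintf formats it as hex
for my $ch (split //, $data) {
    printf "0x%02X ", ord $ch;       # prints: 0x01 0x03
}
print "\n";

# or convert the whole string at once
print unpack("H*", $data), "\n";     # prints: 0103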

Related

How do I find a 4-digit Unicode character using this Perl one-liner?

I have a file with this Unicode character: ỗ
The file was saved in Notepad as UTF-8.
I tried this one-liner:
C:\blah>perl -wln -e "/\x{1ed7}/ and print;" blah.txt
but it's not picking it up. If the file has a letter like 'a' (Unicode hex 61), then \x{61} picks it up, but for a 4-digit Unicode character I have trouble matching it.
You had the right idea with using /\x{1ed7}/. The problem is that your regex wants to match characters but you're giving it bytes. You'll need to tell Perl to decode the bytes from UTF-8 when it reads them and then encode them to UTF-8 when it writes them:
perl -CiO -ne "/\x{1ed7}/ and print" blah.txt
The -C option controls how Unicode semantics are applied to input and output filehandles. So for example -CO (capital 'o' for 'output') is equivalent to adding this before the start of your script:
binmode(STDOUT, ":utf8")
Similarly, -CI is equivalent to:
binmode(STDIN, ":utf8")
But in your case, you're not using STDIN. Instead, the -n is wrapping a loop around your code that opens each file listed on the command-line. So you can instead use -Ci to add the ':utf8' I/O layer to each file Perl opens for input. You can combine the -Ci and the -CO as: -CiO
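Spelled out as a script instead of a one-liner, the rough equivalent (a sketch using the blah.txt file from the question) is:

use strict;
use warnings;

# the :utf8 layer decodes bytes to characters on read, like -Ci does
open my $fh, '<:utf8', 'blah.txt' or die "blah.txt: $!";
# encode characters back to UTF-8 on output, like -CO does
binmode STDOUT, ':utf8';

while (my $line = <$fh>) {
    print $line if $line =~ /\x{1ed7}/;
}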
Your one-liner itself is fine. The problem is the value you're searching for: since your file is UTF-8, the character is stored as the byte sequence E1 BB 97, and that is what a byte-level search has to match. Compare the encodings below and note how they change the search criteria.
UTF-8 Encoding: 0xE1 0xBB 0x97
UTF-16 Encoding: 0x1ED7
UTF-32 Encoding: 0x00001ED7
Resource https://www.compart.com/en/unicode/U+1ED7
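If you do want to search at the byte level instead of decoding, one way is to spell out that UTF-8 byte sequence in the pattern:

perl -ne "print if /\xE1\xBB\x97/" blah.txt

Here the input is left undecoded, so the pattern matches the three bytes that encode ỗ in UTF-8.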

What does \x do in print

I would like to start by saying that I am not familiar with Perl. That being said, I came across this piece of code and I could not figure out what the \x was for in the code below. In addition, I was unsure why nothing was displayed when I ran the following:
perl -e 'print "\x7c\x8e\x04\x08"'
It's not about print: it's about string literal notation, where escape codes represent characters from your character set. For more information, read Quote and Quote-like Operators (in perlop) and Effects of Character Semantics (in perlunicode).
In your case the character code is in hex. You should look in your character set table, and you may need to convert to decimal first.
You said "I was unsure why nothing was displayed when I ran the following:"
perl -e 'print "\x7c\x8e\x04\x08"'
That command outputs 4 characters to STDOUT. Each of the characters is specified in hexadecimal. The "\x7c" part will output the vertical bar character |. The other three characters are control characters, so probably wouldn't produce any visible output. If you redirect output to a file, you will end up with a 4 byte file.
It's possible that you're not seeing the vertical bar character because it's being overwritten by your command prompt. Unlike the shell echo or Python's print, Perl's print function does not automatically append a newline to all output. If you want new lines, you can insert them in the string using \n.
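One way to see all four bytes, printable or not, is to convert them back to hex:

perl -e 'print unpack("H*", "\x7c\x8e\x04\x08"), "\n"'

which prints 7c8e0408.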
\x signifies the start of a hexadecimal character notation.

Zebra printer: how to print UTF-8 special character

I need to print a label with a special character like the degree sign (°).
I'm using the qz print applet on my website.
How can I tell the applet that I'm going to print UTF-8 characters? As it is, the character doesn't print correctly.
Thanks!
Well, you need to escape characters above ASCII by putting ^FH (Field Hexadecimal Indicator) before any ^FD field that might contain a UTF-8 character, and you also need to write each UTF-8 byte as its hex code prefixed with an underscore, as in this other question: Unicode characters on ZPL printer
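As a rough sketch (the label layout is made up for illustration, and ^CI28 assumes firmware that supports UTF-8): the degree sign is 0xC2 0xB0 in UTF-8, so a field printing "25°C" could be built like this:

use strict;
use warnings;

# ^FH marks the field as containing hex escapes; each UTF-8 byte of the
# degree sign (0xC2 0xB0) is written as _hh
my $zpl = "^XA"
        . "^CI28"                    # switch the printer to UTF-8 (firmware permitting)
        . "^FO50,50^A0N,40,40"
        . "^FH^FD25_C2_B0C^FS"       # 25, the escaped degree sign, then C
        . "^XZ";
print $zpl;                          # hand this string to the qz applet / printer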

CAM::PDF returning non-ASCII character instead of quotes

I am having trouble with non-ASCII characters being returned. I am not sure at which level the issue resides: the actual PDF encoding, the decoding used by CAM::PDF (which is FlateDecode), or CAM::PDF itself. The following returns a string full of the commands used to create the PDF (Tm, Tj, etc.):
use CAM::PDF;
my $filename = "sample.pdf";
my $cam_obj = CAM::PDF->new($filename) or die "$CAM::PDF::errstr\n";
my $tree = $cam_obj->getPageContentTree(1);
my $page_string = $tree->toString();
print $page_string;
You can download sample.pdf here
The text returned in the Tj often has one character which is non ASCII. In the PDF, the actual character is almost always a quote, single or double.
While reproducing this I found that the returned character is consistent within the PDF but varies amongst PDFs. I also noticed the PDF is using a specific font file. I'm now looking into font files to see if the same character can be mapped to varying binary values.
Edit:
Regarding Windows-1252: my PDF returns an "Õ" instead of apostrophes. The Õ character is hex 0xD5 in Windows-1252 (code point U+00D5). If the idea is that the character is encoded as Windows-1252, then a curly apostrophe should come back as 0x91 or 0x92, which it does not. That is why the following does nothing to the character:
use Encode qw(decode encode);
my $page_string = 'Õ';
my $characters = decode 'Windows-1252', $page_string;
my $octets = encode 'UTF-8', $characters;
open my $sts, '>', 'TEST.txt' or die "TEST.txt: $!";
print {$sts} $octets . "\n";
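One quick way to confirm which code points are actually coming back is to dump the string in hex before any decoding, for example:

use strict;
use warnings;

my $page_string = "\x{D5}";   # stand-in for the value returned by CAM::PDF
print join(' ', map { sprintf '%02X', ord } split //, $page_string), "\n";
# prints: D5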
I'm the author of CAM-PDF. Your PDF is non-compliant. From the PDF 1.7 specification, section 3.2.3 "String Objects":
"Within a literal string, the backslash (\) is used as an escape
character for various purposes, such as to include newline characters,
nonprinting ASCII characters, unbalanced parentheses, or the backslash
character itself in the string. [...] The \ddd escape sequence provides
a way to represent characters outside the printable ASCII character set."
If you have large quantities of non-ASCII characters, you can represent them using hexadecimal string notation.
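For reference, the \ddd escape is just the byte's value in octal, which you can compute in Perl; 0xD5, for example, becomes \325:

perl -e 'printf "\\%03o\n", 0xD5'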
EDIT: Perhaps my interpretation of the spec is incorrect, given a_note's alternative answer. I'll have to revisit this... Certainly, the spec could be clearer in this area.
Sorry to intrude, and with all due respect, sir, but the file IS compliant. Section 3.2.3 further states:
[The \ddd] notation provides a way to specify characters outside the
7-bit ASCII character set by using ASCII characters only. However,
any 8-bit value may appear in a string.
"receiving" - where? You get "Õ" instead of expected what? And doing exactly what? You know that windows command prompt uses dos code page, not windows-1252, right? (oops, new thread again... probably i should register here :-) )

Length of string in Perl independent of character encoding

The length function counts Chinese characters as more than one character each. How do I determine the length of a string in Perl independent of the character encoding, i.e. treating each Chinese character as one character?
The length function operates on characters, not octets (AKA bytes). The definition of a character depends on the encoding. Chinese characters are still single characters (if the encoding is correctly set!) but they take up more than one octet of space. So, the length of a string in Perl is dependent on the character encoding that Perl thinks the string is in; the only string length that is independent of the character encoding is the simple byte length.
Make sure that the string in question is flagged as UTF-8 and encoded in UTF-8. For example, this yields 3:
$ perl -e 'print length("长")'
whereas this yields 1:
$ perl -e 'use utf8; print length("长")'
as does:
$ perl -e 'use Encode; print length(Encode::decode("utf-8", "长"))'
If you're getting your Chinese characters from a file, make sure that you binmode $fh, ':utf8' the file before reading or writing it; if you're getting your data from a database, make sure the database is returning strings in UTF-8 format (or use Encode to do it for you).
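As a minimal sketch (assuming a UTF-8 encoded file named chinese.txt), that looks like:

use strict;
use warnings;

open my $fh, '<:utf8', 'chinese.txt' or die "chinese.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    # length now counts characters, so each Chinese character counts as one
    printf "%d characters\n", length $line;
}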
I don't think you have to have everything in UTF-8; you really only need to ensure that the string is flagged as having the correct encoding. I'd go with UTF-8 front to back (and even sideways), though, as that's the lingua franca for Unicode and it will make things easier if you use it everywhere.
You might want to spend some time reading the perlunicode man page if you're going to be dealing with non-ASCII data.