Sending ESC/P cut command to Brother VC-500w - command-line

For my current project, in which I'm using the Brother VC-500W as a printer, I need to perform a full cut. I intend to use ESC/P commands to do it.
As a first step I tried to send data from the command line to the printer, as shown in this answer: https://stackoverflow.com/a/47655000/12818636
In the same manner I have tried the following:
data="\x1Bia\x00"
data="${data}\x1B@"
data="${data}\x1BiC\x1"
data="${data}\x0C"
echo -ne $data | hexdump -C
Resulting in:
00000000 1b 69 61 00 1b 40 1b 69 43 01 0c |.ia..@.iC..|
echo -ne $data | lp -d Brother_VC_500W_6615
Resulting in: request ID is Brother_VC_500W_6615-1465 (0 file(s))
I want to mention that when I send plain strings to the printer, it prints them using its default/configured parameters. But when I send the cut command shown above, it prints nothing; only the LED indicators behave a little strangely.
I'd be happy to hear any ideas on how to solve this issue.
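For reference, here is one way to build the same job and send it past any CUPS filtering (a sketch; the `-o raw` option is an assumption about the queue setup, and the queue name is taken from the question):

```shell
# Full-cut job built from the bytes in the question:
# ESC i a 0x00, ESC @, ESC i C 0x01, FF.
# Octal escapes (\033 = ESC) keep printf portable across shells.
printf '\033ia\000\033@\033iC\001\014' > /tmp/cutjob.bin
od -An -tx1 /tmp/cutjob.bin   # verify: 1b 69 61 00 1b 40 1b 69 43 01 0c
# Send the job unfiltered so CUPS cannot reinterpret the bytes
# (queue name from the question; skipped if lp is unavailable):
command -v lp >/dev/null && lp -o raw -d Brother_VC_500W_6615 /tmp/cutjob.bin || true
```

Sending the file raw rules out the possibility that a CUPS filter is mangling the escape sequence before it reaches the printer.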

Related

Can raku avoid this Malformed UTF-8 error?

When I run this raku script...
my $proc = run( 'tree', '--du', :out);
$proc.out.slurp(:close).say;
I get this error on MacOS...
Malformed UTF-8 near bytes ef b9 5c
... instead of something like this tree output from zsh which is what I want...
.
├── 00158825_20210222_0844.csv
├── 1970-Article\ Text-1971-1-2-20210118.docx
├── 1976-Article\ Text-1985-1-2-20210127.docx
├── 2042-Article\ Text-2074-1-10-20210208.pdf
├── 2045-Article\ Text-2076-1-10-20210208.pdf
├── 6.\ Guarantor\ Form\ (A).pdf
I have tried slurp(:close, enc=>'utf8-c8') and the error is the same.
I have also tried...
shell( "tree --du >> .temp.txt" );
my @lines = open(".temp.txt").lines;
dd @lines;
... and the error is the same.
Opening .temp.txt reveals this...
.
â<94><9c>â<94><80>â<94><80> [ 1016739] True
â<94><9c>â<94><80>â<94><80> [ 9459042241] dir-name
â<94><82>   â<94><9c>â<94><80>â<94><80> [ 188142] Business
â<94><82>   â<94><82>   â<94><9c>â<94><80>â<94><80> [ 9117] KeyDates.xlsx
â<94><82>   â<94><82>   â<94><9c>â<94><80>â<94><80> [ 13807] MondayNotes.docx
file -I gives this...
.temp.txt: text/plain; charset=unknown-8bit
Any advice?
[this is Catalina 10.15.17, Terminal encoding Unicode(UTF-8)
Welcome to 𝐑𝐚𝐤𝐮𝐝𝐨™ v2020.10.
Implementing the 𝐑𝐚𝐤𝐮™ programming language v6.d.
Built on MoarVM version 2020.10.]
It seems like you have a codepage/locale that is not UTF-8. (Or tree is ignoring the codepage and using something different.)
A quick way to get something, anything, out of it is to use an 8-bit single-byte encoding:
run( 'tree', '--du', :out, :enc<latin1> );
That is generally enough to see where UTF-8 decoding starts to go wrong.
That said, let's look at your expected output, and the file output.
say '├──'.encode; # utf8:0x<E2 94 9C E2 94 80 E2 94 80>
In your file you have
â<94><9c>â<94><80>â<94><80> [ 1016739] True
Wait …
say 'â'.encode('latin1'); # Blob[uint8]:0x<E2>
<E2><94><9c><E2><94><80><E2><94><80>
<E2 94 9c E2 94 80 E2 94 80>
utf8:0x<E2 94 9C E2 94 80 E2 94 80>
Yeah, those look an awful lot alike.
In that they are exactly the same.
So it does appear to be producing the expected output to some extent.
Which seems to confirm that, yes, there is an encoding problem between tree and your code. That indicates that the codepage/locale is set wrong.
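One quick way to see which locale (and therefore which encoding) child processes inherit is to ask the shell (a sketch using the POSIX locale utility, which macOS ships):

```shell
# Show all locale categories; LC_CTYPE controls character encoding,
# which is what tools like tree use when emitting non-ASCII output.
locale
# Print only the character encoding currently in effect:
locale charmap
```

If charmap prints something other than UTF-8 (for example US-ASCII or ANSI_X3.4-1968), that would explain why tree's output does not decode as UTF-8.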
You haven't really provided enough information to figure out exactly what's going wrong where.
You should have used run in binary mode to give us the exact output.
say run('echo', 'hello', :out, :bin).out.slurp;
# Buf[uint8]:0x<68 65 6C 6C 6F 0A>
You also didn't say if <9c> is literally in the file as four text characters, or if it is a feature of whatever you used to open the file turning binary data into text.
It also would be nice if all of the example data was of the same thing.
On a slightly related note…
Since tree gives filenames, and filenames are not Unicode, using utf8-c8 is appropriate here.
(Same generally goes for usernames and passwords.)
Here's some code that I ran on my computer to hopefully show why.
say dir(:test(/^ r.+sum.+ $/)).map: *.relative.encode('utf8-c8').decode
# (résumé résumé résumé résumé)
dir(:test(/^ r.+sum.+ $/)).map: *.relative.encode('utf8-c8').say
# Blob[uint8]:0x<72 65 CC 81 73 75 6D 65 CC 81>
# Blob[uint8]:0x<72 C3 A9 73 75 6D 65 CC 81>
# Blob[uint8]:0x<72 C3 A9 73 75 6D C3 A9>
# Blob[uint8]:0x<72 65 CC 81 73 75 6D C3 A9>
say 'é'.NFC;
# NFC:0x<00e9>
say 'é'.NFD
# NFD:0x<0065 0301>
sub to-Utf8 ( Uni:D $_ ){
.map: *.chr.encode
}
say to-Utf8 'é'.NFC
# (utf8:0x<C3 A9>)
say to-Utf8 'é'.NFD
# (utf8:0x<65> utf8:0x<CC 81>)
So é is either encoded as one composed codepoint <C3 A9> or two decomposed codepoints <65> <CC 81>.
Did I really create 4 files with the “same name” just for this purpose?
Yes. Yes I did.
Update I had deleted this nanswer because Brad's excellent answer and Valle Lukas's spot-on comment seemed to render it moot. Then @p6steve confirmed that both Brad's and Valle Lukas's solutions worked for them, so all the more reason to keep it deleted. But too late! A mistake in my nanswer had misled @p6steve, who made a similar mistake in a follow-up SO question. Wea Culpa. To atone for my sins, I'm now permanently undeleting it and leaving my shameful past for all to see.
This is a nanswer. I don't know Mac, but do love investigation, and what I've got to say won't fit in the comments.
Update The 'find .' in the following should be 'find', '.'. See run doc.
What do you get with this?:
say .out.lines given run 'find .', :out
If find . works, the problem is presumably tree.
If find . doesn't work, then try something really simple, that's built into MacOS, something that really should work. If it doesn't work, then the problem isn't tree but something more basic.
Malformed UTF-8 near bytes ef b9 5c
That means Raku was expecting UTF-8 but the input wasn't UTF-8.
Translating the message from computerese into English:
The supposedly English string "[Linux] xshell远程登陆CentOS时中文乱码解决_Cindy的博客 ..." is Malformed near 远程登.
In other words, the tree command is not generating UTF-8.
(Therefore using utf8-c8 will almost certainly be useless in the first instance. Its purpose is to cheat. It's for when text is either almost all UTF-8 except for a handful of rogue bytes, and you can't be bothered to sort out the input, or when you have absolutely no choice but to accept the input as it is and still want to muddle through. But in this case you surely ought either sort the problem out by getting to the bottom of things, or find some alternative to tree.)
Terminal encoding Unicode(UTF-8)
A google for "Terminal encoding Unicode(UTF-8)" yields just 7 matches. None appeared to be exact matches for "Terminal encoding Unicode(UTF-8)". All but one look to me like ... ef b9 5c looks to Rakudo. :)
If you copy/pasted that string, where did you copy it from?
If you yourself wrote that string, why were you so sure that MacOS really was encoding tree's output as UTF-8 when run via the kernel (not a shell)?
run doesn't use a shell.
The current doc claims shell uses /bin/sh -c on MacOS.
What's the output of this?:
readlink -e $(which sh)
Is the output zsh?
If so sh -c should be using it.
If not, that may be the problem.
When one uses shell, one has to ensure the passed string is appropriately quoted and escaped. What do you get when you try these?:
say .out.lines given shell "'find .'", :out;
say .out.lines given shell "'tree --du'", :out;
What exactly is tree invoking? Is it a shell alias in zsh? If it's a binary, where did you install it from and how did you configure it, especially in terms of influencing zsh's handling of encodings?

perl perlpacktut not making sense for me

I am REALLY confused about the definitions of pack and unpack for Perl.
Below is the excerpt from perldoc.perl.org:
The pack function converts values to a byte sequence containing
representations according to a given specification, the so-called
"template" argument. unpack is the reverse process, deriving some values
from the contents of a string of bytes.
So I get the idea that pack takes human-readable things (such as A) and turns them into a binary format. Am I wrong in this interpretation?
That is my interpretation, but then the same doc immediately proceeds with this example, which suggests exactly the opposite of my understanding:
my( $hex ) = unpack( 'H*', $mem );
print "$hex\n";
What am I missing?
The pack function puts one or more things together in a single string. It represents things as octets (bytes) in a way that it can unpack reliably in some other program. That program might be far away (like, the distance to Mars far away). It doesn't matter if it starts as something human readable or not. That's not the point.
Consider some task where you have a numeric ID that's up to about 65,000 and a string that might be up to six characters.
print pack 'S A6', 137, $ARGV[0];
It's easier to see what this is doing if you run the output through a hex dumper:
$ perl test.pl Snoopy | hexdump -C
00000000 89 00 53 6e 6f 6f 70 79 |..Snoopy|
The first column counts the position in the output, so ignore that. The first two octets represent the S (short, 'word', whatever, but two octets) format. I gave it the number 137 and it stored that as the two octets 89 00 (the native little-endian order on my machine). Then it stored 'Snoopy' in the next six octets.
Now try it with a shorter name:
$ perl test.pl Linus | hexdump -C
00000000 89 00 4c 69 6e 75 73 20 |..Linus |
Now there's a space character at the end (0x20). The packed data still has six octets. Try it with a longer name:
$ perl test.pl 'Peppermint Patty' | hexdump -C
00000000 89 00 50 65 70 70 65 72 |..Pepper|
Now it truncates the string to fit the six available spaces.
Consider the case where you immediately send this through a socket or some other way of communicating with something else. The thing on the other side knows it's going to get eight octets. It also knows that the first two will be the short and the next six will be the name. Suppose the other side stores that in $tidy_little_package. It gets the separate values by unpacking them:
my( $id, $name ) = unpack 'S A6', $tidy_little_package;
That's the idea. You can represent many values of different types in a binary format that's completely reversible. You send that packed string wherever it needs to be used.
I have many more examples of pack in Learning Perl and Programming Perl.

Snort rules for byte code

I just started to learn how to use Snort today.
However, I need a bit of help with my rules setup.
I am trying to look for the following code in traffic sent to a machine. This machine has Snort installed on it (I installed it just now).
The code I want to analyze on the network is in bytes:
\xAA\x00\x00\x00\x00\x00\x00\x0F\x00\x00\x02\x74\x00\x00 (total of 14 bytes)
Now, I want to analyze the first bytes of the code: if the 1st byte is (AA) and the 7th byte is (0F), then I want Snort to set off an alarm.
So far my rules are:
alert tcp any any -> any any \
(content:"|aa 00 00 00 00 00 00 0f|"; msg:"break in attempt"; sid:10; rev:1; \
classtype:shellcode-detect; rawbytes;)
byte_test:1, =, aa, 0, relative;
byte_test:7 =, 0f, 7, relative;
I'm guessing I have obviously made a mistake somewhere. Maybe someone who is familiar with Snort could help me out?
Thanks.
Congrats on deciding to learn snort.
Assuming the bytes are going to be found in the payload of a TCP packet your rule header should be fine:
alert tcp any any -> any any
We can then specify the content match using pipes (||) to let snort know that these characters should be interpreted as hex bytes and not ascii:
content:"|AA 00 00 00 00 00 00 0F|"; depth:8;
And since we only want the rule to match if these bytes are found in the first 8 bytes of the packet or buffer we can add "depth". The "depth" keyword modifier tells snort to check where in the packet or buffer the content match was found. For the above content match to return true all eight bytes must be found within the first eight bytes of the packet or buffer.
"rawbytes" is not necessary here and should only ever be used for one specific purpose: to match on telnet control characters. "byte_test" isn't needed either, since we've already verified that bytes 1 and 8 are "AA" and "0F" respectively using a content match.
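For reference, if you did want to use byte_test, the options have to sit inside the rule body, and the values are numeric (a sketch; 170 and 15 are decimal for 0xAA and 0x0F, and offsets count from 0):

```
alert tcp any any -> any any ( \
msg:"break in attempt (byte_test variant)"; \
byte_test:1,=,170,0; \
byte_test:1,=,15,7; \
classtype:shellcode-detect; sid:11; rev:1;)
```

The content/depth form above remains preferable, since content matches go through the fast pattern matcher.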
So, the final rule becomes:
alert tcp any any -> any any ( \
msg:"SHELLCODE Break in attempt"; \
content:"|AA 00 00 00 00 00 00 0F|"; depth:8; \
classtype:shellcode-detect; sid:10;)
If you decide that this should only match inside a file you can use the "sticky" buffer "file_data" like so:
alert tcp any any -> any any ( \
msg:"SHELLCODE Break in attempt"; file_data; \
content:"|AA 00 00 00 00 00 00 0F|"; depth:8; \
classtype:shellcode-detect; sid:10;)
This will alert if the shellcode is found inside the alternate data (file data) buffer.
If you'd like for your rule to only look inside certain file types for this shellcode you can use "flowbits" like so:
alert tcp any any -> any any ( \
msg:"SHELLCODE Break in attempt"; \
flowbits:isset,file.pdf; file_data; \
content:"|AA 00 00 00 00 00 00 0F|"; depth:8; \
classtype:shellcode-detect; sid:10;)
This will alert if these bytes are found when the file.pdf flowbit is set. You will need the rule enabled that sets the pdf flowbit. Rules that set file flowbits and other good examples can be found in the community ruleset available for free here https://www.snort.org/snort-rules.

Base64 Encoding and Decoding

I would appreciate if someone could please explain this to me.
I came across this post (not important, just for reference) and saw a token encoded with Base64, which the guy decoded:
EYl0htUzhivYzcIo+zrIyEFQUE1PQkk= -> t3+(:APPMOBI
I then tried to encode t3+(:APPMOBI again using base64 to see if I would get the same result, but was very surprised to get:
t3+(:APPMOBI - > dDMrKDpBUFBNT0JJ
A completely different token.
I then tried to decode the original token EYl0htUzhivYzcIo+zrIyEFQUE1PQkk= and got t3+(:APPMOBI with seemingly random characters in between. (I got ◄ëtå╒3å+╪═┬(√:╚╚APPMOBI; this could be wrong, I quickly did it off the top of my head.)
What is the reason for the difference in tokens? Were they not supposed to be the same?
The whole purpose of Base64 encoding is to encode binary data into a text representation so that it can be transmitted over the network or displayed without corruption. Yet, ironically, corruption is exactly what happened in the original post you were referring to:
EYl0htUzhivYzcIo+zrIyEFQUE1PQkk= does NOT decode to t3+(:APPMOBI
Instead, it contains some binary bytes (not random, by the way) that you correctly showed. So the problem was due to the original post, where either the author, or the tool/browser that they used, "cleaned up", or rather corrupted, the decoded binary data.
There is always one-to-one relationship between encoded and decoded data (provided the same "base" is used, i.e. the same set of characters are used for encoded text.)
t3+(:APPMOBI indeed will be encoded into dDMrKDpBUFBNT0JJ
The problem is in the encoding that displayed the output to you, or in the encoding that you used to input the data to base64. This is actually the problem that base64 encoding was invented to help solve.
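You can verify the one-to-one round trip yourself (a sketch; the base64 -d flag is GNU coreutils syntax, and od is used to show the raw bytes):

```shell
# Encoding the plain ASCII string gives the second token from the question:
printf 't3+(:APPMOBI' | base64
# dDMrKDpBUFBNT0JJ

# Decoding the original token yields 23 bytes, several of them non-printable,
# which is why copy/pasting the "decoded" text mangles it:
printf 'EYl0htUzhivYzcIo+zrIyEFQUE1PQkk=' | base64 -d | od -An -tx1
#  11 89 74 86 d5 33 86 2b d8 cd c2 28 fb 3a c8 c8
#  41 50 50 4d 4f 42 49
```

Note that only the last 7 bytes (41 50 50 4d 4f 42 49) are the printable text APPMOBI; everything before them is binary data that no terminal will display faithfully.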
Instead of trying to copy and paste the non-ASCII characters, save the output as a binary file, then examine it. Then, encode the binary file. You'll see the same base64 string.
c:\TEMP>type b.txt
EYl0htUzhivYzcIo+zrIyEFQUE1PQkk=
c:\TEMP>base64 -d b.txt > b.bin
c:\TEMP>od -t x1 b.bin
0000000 11 89 74 86 d5 33 86 2b d8 cd c2 28 fb 3a c8 c8
0000020 41 50 50 4d 4f 42 49
c:\TEMP>base64 -e b.bin
EYl0htUzhivYzcIo+zrIyEFQUE1PQkk=
od is a tool (octal dump) that outputs binary data using hexadecimal notation, and shows each of the bytes.
EDIT:
You asked about a different string in your comments, dDMrKDpBUFBNT0JJ, and why does that decode to the same thing? Well, it doesn't decode to the same thing. It decodes to this string of bytes: 74 33 2b 28 3a 41 50 50 4d 4f 42 49. Your original string decoded to this string of bytes: 11 89 74 86 d5 33 86 2b d8 cd c2 28 fb 3a c8 c8 41 50 50 4d 4f 42 49.
Notice the differences: your original string decoded to 23 bytes, your second string decoded to only 12 bytes. The original string included non-ASCII bytes like 11, d5, d8, cd, c2, fb, c8, c8. These bytes don't print the same way on every system. You referred to them as "random bytes", but they're not. They're part of the data, and base64 is designed to make sure they can be transmitted.
I think to understand why these strings are different, you need to first understand the nature of character data, what base64 is, and why it exists. Remember that computers work only on numbers, but people need to work with familiar concepts like letters and digits. So ASCII was created as an "encoding" standard that represents a little number (we call this little number a "byte") as a letter or a digit, so that we humans can read it. If we line up a group of bytes, we can spell out a message. 41 50 50 4d 4f 42 49 are the bytes that represent the word APPMOBI. We call a group of bytes like this a "string".
Every letter from A-Z and every digit from 0-9 has a number specified in ASCII that represents it. But there are many extra numbers that are not in the standard, and not all of those represent visible or sensible letters or digits. We say they're non-printable. Your longer message includes many bytes that aren't printable (you called them random.)
When a computer program like email is dealing with a string, if the bytes are printable ASCII characters, it's easy. The email program knows what to do with them. But if your bytes instead represent a picture, the bytes could have values that aren't ASCII, and various email programs won't know what to do with them. Base64 was created to take all kinds of bytes, both printable and non-printable bytes, and translate them into a string of bytes representing only printable letters. Because they're all printable, a program like email or a web server can easily handle them, even if it doesn't know that they actually contain a picture.
Here's the decode of your new string:
c:\TEMP>type c.txt
dDMrKDpBUFBNT0JJ
c:\TEMP>base64 -d c.txt
t3+(:APPMOBI
c:\TEMP>base64 -d c.txt > c.bin
c:\TEMP>od -t x1 c.bin
0000000 74 33 2b 28 3a 41 50 50 4d 4f 42 49
0000014
c:\TEMP>type c.bin
t3+(:APPMOBI
c:\TEMP>

EMV application selection using AID

I am trying to read a Visa Credit Card, using the command:
00 A4 04 07 A0 00 00 00 03 10 10
but I'm getting this response
61 2E
I am unable to understand this response, because EMV Book 1 says (page 146):
6A 81: command not supported
90 00 or 62 83: command is successful
Any help on how to proceed? What am I missing? What should I do?
Thanks.
Found the issue; posting it here in case anyone runs into a similar problem:
From EMV Book 1, page 114:
The GET RESPONSE command is issued by the TTL to obtain available data
from the ICC when processing case 2 and 4 commands. It is employed
only when the T=0 protocol type is in use.
61 2E is not an error: under T=0, SW1 = 61 means that SW2 bytes of response data (here 0x2E = 46 bytes) are waiting to be fetched. So the next command to send in this case is:
00 C0 00 00 2E
in order to receive the actual data.