Snort rules for byte code

I just started to learn how to use Snort today.
However, I need a bit of help with my rules setup.
I am trying to detect the following byte sequence in network traffic sent to a machine that has Snort installed on it (I installed it just now).
The sequence I want to match is the following bytes:
\xAA\x00\x00\x00\x00\x00\x00\x0F\x00\x00\x02\x74\x00\x00 (14 bytes total)
I want to examine the first 8 bytes of the sequence: if the 1st byte is AA and the 8th byte is 0F, I want Snort to raise an alert.
So far my rules are:
alert tcp any any -> any any \
(content:"|aa 00 00 00 00 00 00 0f|"; msg:"break in attempt"; sid:10; rev:1; \
classtype:shellcode-detect; rawbytes;)
byte_test:1, =, aa, 0, relative;
byte_test:7 =, 0f, 7, relative;
I'm guessing I have obviously made a mistake somewhere. Maybe someone who is familiar with Snort could help me out?
Thanks.

Congrats on deciding to learn Snort.
Assuming the bytes are going to be found in the payload of a TCP packet, your rule header should be fine:
alert tcp any any -> any any
We can then specify the content match using pipes (|) to let Snort know that these characters should be interpreted as hex bytes and not ASCII:
content:"|AA 00 00 00 00 00 00 0F|"; depth:8;
And since we only want the rule to match if these bytes are found in the first 8 bytes of the packet or buffer, we add the "depth" modifier. "depth" tells Snort where in the packet or buffer the content match must be found: for the above content match to return true, all eight bytes must be found within the first eight bytes of the packet or buffer.
"rawbytes" is not necessary here and should only ever be used for one specific purpose: matching telnet control characters. "byte_test" isn't needed either, since the content match already verifies that bytes 1 and 8 are "AA" and "0F" respectively.
So, the final rule becomes:
alert tcp any any -> any any ( \
msg:"SHELLCODE Break in attempt"; \
content:"|AA 00 00 00 00 00 00 0F|"; depth:8; \
classtype:shellcode-detect; sid:10;)
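If you want to sanity-check the rule, one option is to replay the question's 14-byte payload with a short scapy sketch (an illustration only: it assumes scapy is installed, you run it as root, and Snort is watching the interface the packet crosses; depending on stream reassembly settings, Snort may want an established session):
from scapy.all import IP, TCP, Raw, send

# The 14 bytes from the question; the rule only inspects the first 8.
payload = b"\xAA\x00\x00\x00\x00\x00\x00\x0F\x00\x00\x02\x74\x00\x00"
# Destination address and port are placeholders.
send(IP(dst="192.0.2.10") / TCP(dport=80) / Raw(load=payload))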
If you decide that this should only match inside a file you can use the "sticky" buffer "file_data" like so:
alert tcp any any -> any any ( \
msg:"SHELLCODE Break in attempt"; file_data; \
content:"|AA 00 00 00 00 00 00 0F|"; depth:8; \
classtype:shellcode-detect; sid:10;)
This will alert if the shellcode is found inside the alternate data (file data) buffer.
If you'd like for your rule to only look inside certain file types for this shellcode you can use "flowbits" like so:
alert tcp any any -> any any ( \
msg:"SHELLCODE Break in attempt"; \
flowbits:isset,file.pdf; file_data; \
content:"|AA 00 00 00 00 00 00 0F|"; depth:8; \
classtype:shellcode-detect; sid:10;)
This will alert if these bytes are found while the file.pdf flowbit is set. You will need to enable the rule that sets the pdf flowbit. Rules that set file flowbits, along with other good examples, can be found in the community ruleset, available for free here: https://www.snort.org/snort-rules.

Sending ESC/P cut command to Brother VC-500w

For my current project, in which I'm using the Brother VC-500w as a printer, I need to perform a full cut. I intend to use ESC/P commands to do it.
As a first step, I tried to send data from the command line to the printer, as shown in this answer: https://stackoverflow.com/a/47655000/12818636
In the same manner, I tried the following:
data="\x1Bia\x00"
data="${data}\x1B#"
data="${data}\x1BiC\x1"
data="${data}\x0C"
echo -ne $data | hexdump -C
Resulting in:
00000000 1b 69 61 00 1b 40 1b 69 43 01 0c |.ia..@.iC..|
echo -ne "$data" | lp -d Brother_VC_500W_6615
Resulting in: Anfrage-ID ist Brother_VC_500W_6615-1465 (0 Datei(en)) (German: "Request ID is Brother_VC_500W_6615-1465 (0 file(s))")
I want to mention that when I send plain strings to the printer, it prints them using its default/configured parameters. But when I send the cut command shown above, it prints nothing; only the LED indicators behave a little strangely.
I'd be happy to hear your ideas for solving this issue.
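One avenue worth exploring is bypassing the CUPS filter chain entirely, so nothing rewrites the control bytes on their way to the printer. A minimal Python sketch of that idea (the queue name is taken from above; -o raw is the standard lp option for passing data through unfiltered):
import subprocess

# Same byte sequence as the shell version above.
data = b"\x1Bia\x00\x1B@\x1BiC\x01\x0C"

# "-o raw" asks CUPS to send the bytes unfiltered; "-" reads from stdin.
subprocess.run(["lp", "-d", "Brother_VC_500W_6615", "-o", "raw", "-"],
               input=data, check=True)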

Parser is not encoding string correctly

Text I'm trying to get:
przełącznica
This is what I actually have (your browser might not display it properly; there are two squares instead of "łą"):
przecznica
BLOB:
70 72 7A 65 C5 82 C4 85 63 7A 6E 69 63 61
EDIT: This is what I get from the parser:
70 72 7A 65 1A 1A 63 7A 6E 69 63 61
ESQL used to parse the BLOB:
DECLARE blobMsg BLOB InputRoot.BLOB.BLOB;
CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg DOMAIN ('XMLNSC') NAME 'XMLNSC';
CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg.XMLNSC PARSE(blobMsg OPTIONS FolderBitStream CCSID 1208 FORMAT 'XMLNSC');
I have tried CCSIDs: 1208 (UTF-8), 912 (ISO-8859-2), and 1200 (UTF-16, I guess):
https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_71/nls/rbagsccsidcdepgscharsets.htm
EDIT: Working code:
DECLARE blobMsg BLOB InputRoot.BLOB.BLOB;
DECLARE remove BLOB X'EFBBBF';
DECLARE message BLOB REPLACE(InputRoot.BLOB.BLOB, remove, CAST('' AS BLOB));
CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg DOMAIN ('XMLNSC') NAME 'XMLNSC';
CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg.XMLNSC PARSE(message OPTIONS FolderBitStream CCSID 05348 FORMAT 'XMLNSC');
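For what it's worth, the X'EFBBBF' stripped by the working code is the UTF-8 byte order mark (BOM). A quick Python illustration of why it has to go (plain Python as a stand-in for the IIB behaviour):
import codecs

data = codecs.BOM_UTF8 + "przełącznica".encode("utf-8")
print(repr(data.decode("utf-8")))      # '\ufeffprzełącznica' - the BOM survives a plain decode
print(repr(data.decode("utf-8-sig")))  # 'przełącznica' - utf-8-sig strips the BOM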
Firstly, przełącznica by itself is not valid XML, so you'll get an exception when you invoke the XMLNSC parser using the code you have outlined. You need to do a CAST instead.
I generated a little test Application/MsgFlow in IIB 10 to illustrate CASTing the BLOB.
The code in ConvertAndParse is
CREATE COMPUTE MODULE ConvertAndParse
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
DECLARE blobMsg BLOB X'70727A65C582C485637A6E696361';
CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg DOMAIN 'XMLNSC';
CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg.XMLNSC NAME 'AsUtf8' VALUE CAST(blobMsg AS CHAR CCSID 1208);
CREATE LASTCHILD OF OutputRoot DOMAIN 'XMLNSC';
CREATE LASTCHILD OF OutputRoot.XMLNSC.EncodingResponse NAME 'AsUtf8InTag' VALUE CAST(blobMsg AS CHAR CCSID 1208);
CREATE LASTCHILD OF OutputRoot.XMLNSC.EncodingResponse NAME CAST(blobMsg AS CHAR CCSID 1208) VALUE 'As a tag name';
RETURN TRUE;
END;
END MODULE;
When I run a debug session, the value put into the LocalEnvironment tree looks correct, as does the result of invoking the flow from a browser (screenshots omitted).
Now let's work out which encoding we are looking at. Taking what I assume is the input BLOB, let's see if it matches up with UTF-8:
70 72 7A 65 C5 82 C4 85 63 7A 6E 69 63 61
UTF-8 is a variable-width character encoding that sets the high-order bit of a byte to indicate a multi-byte sequence. We also want a page that shows the common code points, such as the Complete Character List for UTF-8 (which, despite the name, is not actually complete).
Looking at the first 4 bytes, none of them have the high-order bit set:
70 72 7A 65
And the aforementioned Character List says that's prze; so far so good.
Then we hit C5, which has the high-order bit set. Doing a bit of visual parsing, we get two probable two-byte character pairs:
C5 82
C4 85
Referring to the Character List, our two candidate pairs do in fact match the two characters we want (C5 82 is ł, C4 85 is ą), and the next six bytes, none of which have their high-order bits set, translate to cznica. Looking really good.
Now to eliminate the other candidate encodings, if we can.
UTF-16 uses 2 or 4 bytes to represent each character, with the byte order indicated by the Byte Order Mark; prze would be encoded as
UTF-16BE - CP 1200 - 00 70 00 72 00 7A 00 65
UTF-16LE - CP 1202 - 70 00 72 00 7A 00 65 00
Given that the BLOB does not contain lots and lots of null (00) bytes, it is reasonable to discount UTF-16.
ISO-8859-2 - CP 912 - is a single-byte character set in which the C5 and C4 code points do not map to the two desired characters, so we can eliminate it too.
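The same elimination can be reproduced in a few lines of Python (plain Python as a stand-in for the IIB decoder, using the BLOB from the question):
blob = bytes.fromhex("70727A65C582C485637A6E696361")
print(blob.decode("utf-8"))       # przełącznica - exactly the text we want
print(blob.decode("iso-8859-2"))  # C5/C4 come out as Ĺ/Ä plus control characters, not ł/ą
print(blob.decode("utf-16-be"))   # CJK-looking garbage - clearly not "prze..."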

How does 1-wire decide what address to use?

I've been following this simple tutorial to get a temperature reading from a Raspberry Pi:
http://blog.vokiel.com/raspberry-pi-odczyt-temperatury-przez-nodejs/?lang=en
Under w1/devices, what I'm calling the address is the name of the directory where the 1-wire device's values are stored.
For example, the tutorial says
/sys/bus/w1/devices/28-00000249bf39 $ cat w1_slave
c3 01 4b 46 7f ff 0d 10 2f : crc=2f YES
c3 01 4b 46 7f ff 0d 10 2f t=28187
Where the address is 28-00000249bf39. On my device, the address is 28-000004acb882.
How are these addresses set? Is it possible to define your own?
As the docs say:
Each Device has a Unique 64-Bit Serial Code
Stored in an On-Board ROM
So you can't set your own.
To read the temperature from your device, just type
cat 28-00000249bf39/temperature
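For completeness, a minimal Python sketch that reads the same sensor through sysfs, parsing the w1_slave file shown in the question (device ID assumed from the tutorial):
# Path built from the device ID in the question.
path = "/sys/bus/w1/devices/28-00000249bf39/w1_slave"
with open(path) as f:
    crc_line, temp_line = f.read().splitlines()
if crc_line.endswith("YES"):  # CRC check passed
    print(int(temp_line.split("t=")[1]) / 1000.0, "degrees C")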

EMV application selection using AID

I am trying to read a Visa credit card, using the command:
00 A4 04 00 07 A0 00 00 00 03 10 10
but I'm getting this response
61 2E
I am unable to understand this response, because EMV Book 1 says (page 146):
6A 81: command not supported
90 00 or 62 83: command is successful
Any help on how to proceed? What am I missing? What should I do?
Thanks.
Found the issue; posting it here in case anyone runs into a similar problem.
From EMV Book 1, page 114:
The GET RESPONSE command is issued by the TTL to obtain available data
from the ICC when processing case 2 and 4 commands. It is employed
only when the T=0 protocol type is in use.
So, the next command to send in this case is:
00 C0 00 00 2E
in order to receive the actual data.
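Putting the two commands together, here is a minimal Python sketch using the pyscard library (reader index 0 and the APDUs above are assumptions):
from smartcard.System import readers

conn = readers()[0].createConnection()
conn.connect()

# SELECT the Visa AID from the question.
select = [0x00, 0xA4, 0x04, 0x00, 0x07,
          0xA0, 0x00, 0x00, 0x00, 0x03, 0x10, 0x10]
data, sw1, sw2 = conn.transmit(select)

# 61 XX means XX bytes are waiting; fetch them with GET RESPONSE.
if sw1 == 0x61:
    data, sw1, sw2 = conn.transmit([0x00, 0xC0, 0x00, 0x00, sw2])
print(data, "%02X %02X" % (sw1, sw2))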

Base64 decoding gives different result

I'm working on a little Streamserve project (google it :P) where I get some Base64-encoded content. I've tried to decode the Base64 string with multiple decoders, and all return the correct result, except the Base64DecodeString method in Streamserve.
The encoded string is: 'VABlAHMAdABpAG4AZwAgAGIAYQBzAGUANgA0AA==' The expected result is: 'Testing base64'
However within Streamserve the result is: 'Tsig ae4'
It simply skips every other letter. Now, I know most people don't know Streamserve, but I have a hunch that this might be a character encoding problem, and I was hoping someone has a clue what might be happening here.
I can encode/decode strings within Streamserve without any problem, just not the strings I get as input.
The issue is that you're encoding in UTF-16 and decoding back to ASCII or UTF-8. Change your string encoding to UTF-8 before Base64-encoding the string and it should work fine.
Here's the hex dump of that base64 blob:
54 00 65 00 73 00 74 00 69 00 6e 00 67 00 20 00 62 00 61 00 73 00 65 00 36 00 34 00
If you remove the null bytes, you get this:
54 65 73 74 69 6e 67 20 62 61 73 65 36 34
Which translates to the following ASCII text:
Testing base64
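A few lines of Python make the behaviour visible (plain Python as a stand-in for the Streamserve call, not its actual API):
import base64

raw = base64.b64decode("VABlAHMAdABpAG4AZwAgAGIAYQBzAGUANgA0AA==")
print(raw.hex(" "))             # 54 00 65 00 ... - the null bytes betray UTF-16LE
print(raw.decode("utf-16-le"))  # Testing base64

# Encoding the text as UTF-8 before Base64 avoids the null bytes entirely:
print(base64.b64encode("Testing base64".encode("utf-8")).decode())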
The result of decoding base64 is binary data (and likewise the input when encoding it is binary data). To go from binary data to a string or vice versa, you need to apply an encoding such as UTF-8 or UTF-16. You need to find out which encoding Streamserve is using, and use the same encoding when you convert your text data to binary data to start with, before base64-encoding it.
It sounds like you might want to use UTF-16 to encode your text to start with, although in that case I'm surprised you're not just getting garbage out... it looks like it's actually ignoring every other byte in the decoded base64, rather than taking it as the high byte in a UTF-16 code unit.