Text I'm trying to get:
przełącznica
This is what I actually have (your browser might not render it properly - there are two squares instead of "łą"):
przecznica
BLOB:
70 72 7A 65 C5 82 C4 85 63 7A 6E 69 63 61
EDIT: This is what I get from the parser (0x1A is the ASCII substitute/SUB character):
70 72 7A 65 1A 1A 63 7A 6E 69 63 61
ESQL used to parse the BLOB:
DECLARE blobMsg BLOB InputRoot.BLOB.BLOB;
CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg DOMAIN ('XMLNSC') NAME 'XMLNSC';
CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg.XMLNSC PARSE(blobMsg OPTIONS FolderBitStream CCSID 1208 FORMAT 'XMLNSC');
I have tried CCSIDs 1208 (UTF-8), 912 (ISO-8859-2), and 1200 (UTF-16, I guess):
https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_71/nls/rbagsccsidcdepgscharsets.htm
EDIT: Working code:
DECLARE blobMsg BLOB InputRoot.BLOB.BLOB;
-- X'EFBBBF' is the UTF-8 byte order mark (BOM), which was making the parse fail
DECLARE remove BLOB X'EFBBBF';
DECLARE message BLOB REPLACE(InputRoot.BLOB.BLOB, remove, CAST('' AS BLOB));
CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg DOMAIN ('XMLNSC') NAME 'XMLNSC';
CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg.XMLNSC PARSE(message OPTIONS FolderBitStream CCSID 05348 FORMAT 'XMLNSC');
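For reference, here is the same BOM-stripping idea as a minimal Python sketch (my own illustration, assuming the incoming bytes start with a UTF-8 BOM):

BOM = bytes.fromhex("EFBBBF")  # the UTF-8 byte order mark that broke the parse

blob = BOM + bytes.fromhex("70727A65C582C485637A6E696361")
payload = blob[len(BOM):] if blob.startswith(BOM) else blob
print(payload.decode("utf-8"))  # przełącznica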
First, przełącznica by itself is not valid XML, so you'll get an exception when you try to invoke the XMLNSC parser using the code you have outlined. You need to do a CAST instead.
I generated a little test Application/MsgFlow in IIB 10 to illustrate CASTing the BLOB.
The code in ConvertAndParse is
CREATE COMPUTE MODULE ConvertAndParse
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        DECLARE blobMsg BLOB X'70727A65C582C485637A6E696361';
        CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg DOMAIN 'XMLNSC';
        CREATE LASTCHILD OF OutputLocalEnvironment.Variables.inpMsg.XMLNSC NAME 'AsUtf8' VALUE CAST(blobMsg AS CHAR CCSID 1208);
        CREATE LASTCHILD OF OutputRoot DOMAIN 'XMLNSC';
        CREATE LASTCHILD OF OutputRoot.XMLNSC.EncodingResponse NAME 'AsUtf8InTag' VALUE CAST(blobMsg AS CHAR CCSID 1208);
        CREATE LASTCHILD OF OutputRoot.XMLNSC.EncodingResponse NAME CAST(blobMsg AS CHAR CCSID 1208) VALUE 'As a tag name';
        RETURN TRUE;
    END;
END MODULE;
When I run a debug session, the value placed into the LocalEnvironment tree shows the correctly decoded text, as does the result of invoking the flow from a browser (screenshots omitted).
Now let's deal with which encoding we are looking at. Taking what I assume is the input BLOB, let's see if it matches up with UTF-8.
70 72 7A 65 C5 82 C4 85 63 7A 6E 69 63 61
UTF-8 is a variable-width character encoding that sets the high-order bit of a byte to indicate a multi-byte sequence. We also want a page that shows the common code points, such as the "Complete Character List for UTF-8" (note it's not actually complete).
Looking at the first 4 bytes, none of them have the high-order bit on:
70 72 7A 65
And the aforementioned Character List says that's prze. So far, so good.
Then we hit C5, which has the high-order bit on. Doing a bit of visual parsing, we get two probable two-byte character pairs:
C5 82
C4 85
Referring to the Character List, our two candidate pairs do in fact match the two characters we want (C5 82 is ł and C4 85 is ą), and the next six bytes, none of which have their high-order bits on, translate to cznica. Looking really good.
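A quick way to confirm the visual parse, as a Python sketch:

data = bytes.fromhex("70727A65C582C485637A6E696361")

# bytes with the high-order bit set belong to multi-byte UTF-8 sequences
print([f"{b:02X}" for b in data if b & 0x80])  # ['C5', '82', 'C4', '85']
print(data.decode("utf-8"))                    # przełącznica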
Now to eliminate the other candidate encodings, if we can.
UTF-16 uses 2 or 4 bytes to represent each character, with the byte order indicated by the Byte Order Mark or fixed by the BE/LE variant; prze is encoded as
UTF-16BE - CP 1200 - 00 70 00 72 00 7A 00 65
UTF-16LE - CP 1202 - 70 00 72 00 7A 00 65 00
Given that there are not lots and lots of null (00) bytes, it is reasonable to discount UTF-16.
ISO-8859-2 (CP 912) is a single-byte character set, and its C5 and C4 code points do not match the two desired characters, thus we can eliminate it.
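Both eliminations can be sketched in Python as well:

# UTF-16 would be full of 00 bytes, which the BLOB does not have
print("prze".encode("utf-16-be").hex(" "))  # 00 70 00 72 00 7a 00 65
print("prze".encode("utf-16-le").hex(" "))  # 70 00 72 00 7a 00 65 00

# in ISO-8859-2, code points C5 and C4 are different characters entirely
print(bytes.fromhex("C5C4").decode("iso8859-2"))  # ĹÄ, not ł and ą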
I have a DESFire EV1 card (not blank). When I try to select an application (ISO 7816, as mentioned in the DESFire 7816 manual), I get a "6A 82" response.
I use this APDU to select the application by its 3-byte identifier:
-> 00 A4 04 00 03 XX XX XX
<- 6A 82
I can select this application with a wrapped APDU of the native DESFire command.
There is also another issue: I have a problem with EXTERNAL AUTHENTICATE. I need to define the algorithm ID byte, and I want to select "3DES diversify".
How can I do that?
To select an application you have to execute the command below:
90 5A 00 00 03 01 00 00 00
where the application ID is "010000".
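For illustration, a small Python sketch of the ISO 7816-4 wrapping that produces this APDU (the helper name is my own):

def wrap_desfire(native_cmd: int, data: bytes = b"") -> bytes:
    # ISO 7816-4 wrapping: CLA=90, INS=<native command>, P1=P2=00, Lc, data, Le=00
    return bytes([0x90, native_cmd, 0x00, 0x00, len(data)]) + data + b"\x00"

# native SELECT APPLICATION (0x5A) with the AID bytes from above
apdu = wrap_desfire(0x5A, bytes([0x01, 0x00, 0x00]))
print(apdu.hex(" ").upper())  # 90 5A 00 00 03 01 00 00 00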
After going through some basic documents, what I understood is that a Base Address Register is an address space that can be accessed by the PCIe IP: the PCIe IP can either transmit data through a Base Address Register or write received data into it.
Am I right, or am I missing anything?
Linux kernel point of view
A good way to learn something is to interact with it, so let's use the Linux kernel for that.
Here is a minimal PCI example on a QEMU emulated device: https://github.com/cirosantilli/linux-kernel-module-cheat/blob/366b1c1af269f56d6a7e6464f2862ba2bc368062/kernel_module/pci.c
The first 64 bytes of the PCI configuration space are standardized as shown in the header-layout figure from LDD3 (image omitted).
So we can see that there are 6 BARs. The osdev wiki page linked below then shows the contents of each BAR.
The region width requires a magic write however: How is a PCI / PCIe BAR size determined?
This memory is set up by the PCI device and gives information to the kernel.
Each BAR corresponds to an address range that serves as a separate communication channel to the PCI device.
The length of each region is defined by the hardware, and communicated to software via the configuration registers.
Each region also has further hardware defined properties besides length, notably the memory type:
IORESOURCE_IO: must be accessed with inX and outX
IORESOURCE_MEM: must be accessed with ioreadX and iowriteX
Several Linux kernel PCI functions take the BAR as a parameter to identify which communication channel is to be used, e.g.:
mmio = pci_iomap(pdev, BAR, pci_resource_len(pdev, BAR));
pci_resource_flags(dev, BAR);
pci_resource_start(pdev, BAR);
pci_resource_end(pdev, BAR);
By looking into the QEMU device source code, we see that QEMU devices register those regions with:
memory_region_init_io(&edu->mmio, OBJECT(edu), &edu_mmio_ops, edu,
"edu-mmio", 1 << 20);
pci_register_bar(pdev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &edu->mmio);
and it is clear that the properties of the BAR are hardware defined: e.g. BAR number 0 has the memory type PCI_BASE_ADDRESS_SPACE_MEMORY, and the memory region is 1 MiB long (1 << 20).
See also: http://wiki.osdev.org/PCI#Base_Address_Registers of course.
I think this is a very basic question, and I would suggest reading:
PCI Express Base 3.1 Specification (pcisig.com) or
PCI Express Technology 3.0 (MindShare Press) book
A Base Address Register (BAR) is used to:
- specify how much memory a device wants to be mapped into main memory, and
- after device enumeration, it holds the (base) address, where the mapped memory block begins.
A device can have up to six 32-bit BARs, or it can combine two adjacent BARs into one 64-bit BAR.
A BAR records where the device's address region starts in memory.
root@Ubuntu:~$ lspci -s 00:04.0 -x
00:04.0 USB controller: Intel Corporation 82801DB/DBM (ICH4/ICH4-M) USB2 EHCI Controller (rev 10)
00: 86 80 cd 24 06 00 00 00 10 20 03 0c 10 00 00 00
10: 00 10 02 f3 00 00 00 00 00 00 00 00 00 00 00 00
20: 00 00 00 00 00 00 00 00 00 00 00 00 f4 1a 00 11
30: 00 00 00 00 00 00 00 00 00 00 00 00 05 04 00 00
root@Ubuntu:~$ lspci -s 00:04.0 -v
00:04.0 USB controller: Intel Corporation 82801DB/DBM (ICH4/ICH4-M) USB2 EHCI Controller (rev 10) (prog-if 20 [EHCI])
Subsystem: Red Hat, Inc QEMU Virtual Machine
Physical Slot: 4
Flags: bus master, fast devsel, latency 0, IRQ 35
Memory at f3021000 (32-bit, non-prefetchable) [size=4K]
Kernel driver in use: ehci-pci
root@Ubuntu:~$ grep 00:04.0 /proc/iomem
f3021000-f3021fff : 0000:00:04.0
0xfff equals 4095, so the region is 4K. The memory starting at 0xf3021000 is this USB device as seen by the CPU. This address is initialized by the BIOS, and in this example it lives in BAR0. Why BAR0?
To answer that, one first needs to understand the PCI spec, especially the two configuration header layouts, type 0 and type 1 (diagrams omitted).
Notice that both layouts define the header type in the dword at offset 0x0c (its third byte, offset 0x0e); that is how you tell which BAR layout applies. In this example it is 00, which means type 0. Thus BAR0 stores the address, which appears in the dump as 00 10 02 f3.
One may wonder why this is not exactly f3021000: it is because lspci dumps the bytes in little-endian order. (For where the term "endian" comes from, one may read "Gulliver's Travels".)
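That byte order is easy to verify with a short Python sketch:

import struct

raw = bytes.fromhex("001002f3")     # BAR0 bytes as dumped by lspci
(bar0,) = struct.unpack("<I", raw)  # read as a little-endian 32-bit value
print(hex(bar0))                    # 0xf3021000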
A BAR generally has three states: uninitialized, all 1s, and holding a written address; we are now in the third state, since the device is already initialized. In the uninitialized state, bits 11~4 read as 0 (for this 4K device). Bit 3 means non-prefetchable (NP) when 0 and prefetchable (P) when 1; bits 2~1 mean 32-bit when 00 and 64-bit when 10; bit 0 means a memory request when 0 and an I/O request when 1.
0xf3021000
====>>>>
1111 0011 0000 0010 0001 0000 0000 0000
From this we can tell that this device is a 32-bit, non-prefetchable, memory request. The writable address bits are 31~12, since 2^12 = 4K.
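Decoding those low bits programmatically, as a Python sketch:

bar0 = 0xF3021000

io_request   = bar0 & 0x1         # bit 0: 0 = memory request, 1 = I/O request
bar_type     = (bar0 >> 1) & 0x3  # bits 2~1: 0b00 = 32-bit, 0b10 = 64-bit
prefetchable = (bar0 >> 3) & 0x1  # bit 3: 0 = non-prefetchable, 1 = prefetchable
base_address = bar0 & ~0xFFF      # bits 31~12 hold the 4K-aligned base

print(io_request, bar_type, prefetchable, hex(base_address))
# 0 0 0 0xf3021000 -> 32-bit, non-prefetchable memory at 0xf3021000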
More device and vendor IDs can be looked up at https://pcilookup.com/
Roughly speaking, the root complex (aka the host computer) acts as the "dealer" and talks to each endpoint device in a process called enumeration, where each device has its own set of configuration registers. It does this access through configuration space rather than normal memory space; memory space for the PCI device doesn't exist until the BAR registers are set up and mapped by the root complex.
Using configuration space, the root complex sequentially writes all 1s to the BAR register in each PCI device and reads it back to determine the size of the BAR address space each device is requesting. Bits above bit 4 that read back as zero are hardwired by the device and encode the size of the region; the root complex then picks a physical memory address and writes it into the bits that read back as ones...
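The size computation itself can be sketched like this in Python (the readback value here is hypothetical):

readback = 0xFFFFF000  # hypothetical readback after writing all 1s to a memory BAR

# mask off the low control bits (3:0), invert the hardwired zeros, add one
size = (~(readback & ~0xF) & 0xFFFFFFFF) + 1
print(hex(size))  # 0x1000 -> the device is requesting a 4 KiB region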
For a PCIe device with 32-bit BARs, the configuration space has the following 32-bit DWORDs:
UInt32 PCIEBAR32_0, PCIEBAR32_1, PCIEBAR32_2,
PCIEBAR32_3, PCIEBAR32_4, PCIEBAR32_5;
// low 3 bits of 000b indicate a 32-bit memory BAR
bool cond32_0 = ((PCIEBAR32_0 & 0x7) == 0x0);
bool cond32_1 = ((PCIEBAR32_1 & 0x7) == 0x0);
bool cond32_2 = ((PCIEBAR32_2 & 0x7) == 0x0);
bool cond32_3 = ((PCIEBAR32_3 & 0x7) == 0x0);
bool cond32_4 = ((PCIEBAR32_4 & 0x7) == 0x0);
bool cond32_5 = ((PCIEBAR32_5 & 0x7) == 0x0);
For a PCIe device with 64-bit BARs, each pair of adjacent 32-bit DWORDs is concatenated to form a 64-bit BAR:
UInt64 PCIEBAR64_0, PCIEBAR64_1, PCIEBAR64_2;
// low 3 bits of 100b in the even DWORD indicate a 64-bit memory BAR
bool cond64_0 = ((PCIEBAR32_0 & 0x7) == 0x4);
bool cond64_1 = ((PCIEBAR32_2 & 0x7) == 0x4);
bool cond64_2 = ((PCIEBAR32_4 & 0x7) == 0x4);
if (!(cond64_0 && cond64_1 && cond64_2)) {
    Console.WriteLine("Whoops, we don't have 3 adjacent 64-bit bars");
    return -1;
}
PCIEBAR64_0 = ((UInt64)PCIEBAR32_1 << 32) | (UInt64)PCIEBAR32_0;
PCIEBAR64_1 = ((UInt64)PCIEBAR32_3 << 32) | (UInt64)PCIEBAR32_2;
PCIEBAR64_2 = ((UInt64)PCIEBAR32_5 << 32) | (UInt64)PCIEBAR32_4;
// Note: since the lower 4 bits of the least significant DWORD flag it as a
// 64-bit BAR, the adjacent upper DWORD holds pure address bits and doesn't
// knock out the bottom 4 bits, so the two halves can simply be concatenated.
I'm not really sure what happens on a system with a mix of 32-bit and 64-bit BARs... maybe you need to check the BARs in order from 0 to 5 to find the non-aligned cases...
I just started to learn how to use Snort today.
However, I need a bit of help with my rules setup.
I am trying to look for the following byte sequence sent over the network to a machine. This machine has Snort installed on it (I installed it just now).
The sequence I want to analyze on the network is in bytes:
\xAA\x00\x00\x00\x00\x00\x00\x0F\x00\x00\x02\x74\x00\x00 (a total of 14 bytes)
Now, I want to analyze the first 8 bytes of the sequence: if the 1st byte is (AA) and the 8th byte is (0F), then I want Snort to raise an alert.
So far my rules are:
alert tcp any any -> any any \
(content:"|aa 00 00 00 00 00 00 0f|"; msg:"break in attempt"; sid:10; rev:1; \
classtype:shellcode-detect; rawbytes;)
byte_test:1, =, aa, 0, relative;
byte_test:7 =, 0f, 7, relative;
I'm guessing I have obviously made a mistake somewhere. Maybe someone who is familiar with Snort could help me out?
Thanks.
Congrats on deciding to learn snort.
Assuming the bytes are going to be found in the payload of a TCP packet, your rule header should be fine:
alert tcp any any -> any any
We can then specify the content match using pipes (||) to let snort know that these characters should be interpreted as hex bytes and not ascii:
content:"|AA 00 00 00 00 00 00 0F|"; depth:8;
And since we only want the rule to match if these bytes are found in the first 8 bytes of the packet or buffer we can add "depth". The "depth" keyword modifier tells snort to check where in the packet or buffer the content match was found. For the above content match to return true all eight bytes must be found within the first eight bytes of the packet or buffer.
"rawbytes" is not necessary here and should only ever be used for one specific purpose; to match on telnet control characters. "byte_test" isn't needed either since we've already verified that bytes 1 and 8 are "AA" and "0F" respectively using a content match.
So, the final rule becomes:
alert tcp any any -> any any ( \
msg:"SHELLCODE Break in attempt"; \
content:"|AA 00 00 00 00 00 00 0F|"; depth:8; \
classtype:shellcode-detect; sid:10;)
If you decide that this should only match inside a file you can use the "sticky" buffer "file_data" like so:
alert tcp any any -> any any ( \
msg:"SHELLCODE Break in attempt"; file_data; \
content:"|AA 00 00 00 00 00 00 0F|"; depth:8; \
classtype:shellcode-detect; sid:10;)
This will alert if the shellcode is found inside the alternate data (file data) buffer.
If you'd like for your rule to only look inside certain file types for this shellcode you can use "flowbits" like so:
alert tcp any any -> any any ( \
msg:"SHELLCODE Break in attempt"; \
flowbits:isset,file.pdf; file_data; \
content:"|AA 00 00 00 00 00 00 0F|"; depth:8; \
classtype:shellcode-detect; sid:10;)
This will alert if these bytes are found when the file.pdf flowbit is set. You will need the rule enabled that sets the pdf flowbit. Rules that set file flowbits and other good examples can be found in the community ruleset available for free here https://www.snort.org/snort-rules.
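If you want to sanity-check the rule, one hypothetical way is to replay the payload yourself with scapy (this Python sketch assumes scapy is installed and that 10.0.0.5:4444 is a host/port you control; adjust for your network):

from scapy.all import IP, TCP, Raw, send

# the 14-byte payload from the question; the rule matches on bytes 1-8
payload = bytes.fromhex("AA0000000000000F000002740000")

send(IP(dst="10.0.0.5") / TCP(dport=4444, flags="PA") / Raw(load=payload))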
I am trying to read a Visa credit card using the command:
00 A4 04 00 07 A0 00 00 00 03 10 10
but I'm getting this response
61 2E
I am unable to understand this response, because EMV Book 1 says (page 146):
6A 81: command not supported
90 00 or 62 83: command is successful
Any help on how to proceed? What am I missing? What should I do?
Thanks.
Found the issue; posting here in case anyone runs into a similar problem.
From EMV Book 1, page 114:
The GET RESPONSE command is issued by the TTL to obtain available data
from the ICC when processing case 2 and 4 commands. It is employed
only when the T=0 protocol type is in use.
So, the next command to send in this case is:
00 C0 00 00 2E
in order to receive the actual data. (The status 61 2E means the command succeeded and 0x2E response bytes are waiting, so 2E becomes the Le byte of GET RESPONSE.)
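For completeness, a hedged sketch of the whole SELECT + GET RESPONSE sequence using pyscard (assuming a PC/SC reader and a T=0 card; not the only way to do this):

from smartcard.System import readers

conn = readers()[0].createConnection()
conn.connect()

# SELECT the Visa AID A0 00 00 00 03 10 10
select = [0x00, 0xA4, 0x04, 0x00, 0x07,
          0xA0, 0x00, 0x00, 0x00, 0x03, 0x10, 0x10]
data, sw1, sw2 = conn.transmit(select)

if sw1 == 0x61:  # 61 XX: XX bytes are available, fetch them with GET RESPONSE
    data, sw1, sw2 = conn.transmit([0x00, 0xC0, 0x00, 0x00, sw2])

print("SW: %02X %02X" % (sw1, sw2), bytes(data).hex(" ").upper())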