Wiegand RFID reader vs. USB RFID reader on Raspberry Pi

I have two Raspberry Pis running Python code to retrieve the serial number of an RFID tag. One has an RFID reader with a Wiegand interface hooked to GPIO pins and the other has an RFID reader that behaves like a keyboard connected over USB. However, I get different numbers from the two readers when scanning the same RFID tag.
For example, for one tag, I get 57924897 from the Raspberry Pi with the Wiegand reader and 0004591983 from the Raspberry Pi with the USB keyboard reader.
Can somebody explain the difference? Are both readers reading the same thing? Or are they just reading different parameters?

Looking at those two values, it seems that your code does not properly read and convert the value from the Wiegand interface.
The USB keyboard reader reports the serial number as a 10-digit decimal number. A Wiegand reader typically truncates the serial number into a 26-bit value (1 parity bit + 8-bit site code + 16-bit tag ID + 1 parity bit).
So let's look at the two values that you get:
+--------------+------------+-------------+-----------------------------------------+
| READER       | DECIMAL    | HEXADECIMAL | BINARY                                  |
+--------------+------------+-------------+-----------------------------------------+
| USB keyboard | 0004591983 | 0046116F    | 0000 0000 0100 0110 0001 0001 0110 1111 |
| Wiegand      |   57924897 | 0373DD21    | 1 1011 1001 1110 1110 1001 0000 1       |
+--------------+------------+-------------+-----------------------------------------+
When you take a close look at the binary representation of those two values, you will see that they correlate with each other:
USB keyboard:  0000 0000 0100 0110 0001 0001 0110 1111
Wiegand:               1 1011 1001 1110 1110 1001 0000 1
So it seems as if the Wiegand value is the bitwise inverse of the value obtained from the USB keyboard reader:
USB keyboard:  0000 0000 0100 0110 0001 0001 0110 1111
NOT(Wiegand):          0 0100 0110 0001 0001 0110 1111 0
So the inverted value (bitwise NOT) from the Wiegand interface matches the value read by the USB reader.
Next, let's look at the two parity bits. The data over the Wiegand interface typically looks like:
b0 b1 b2 b3 b4 b5 b6 b7 b8 b9 b10 b11 b12 b13 b14 b15 b16 b17 b18 b19 b20 b21 b22 b23 b24 b25
PE D23 D22 D21 D20 D19 D18 D17 D16 D15 D14 D13 D12 D11 D10 D9 D8 D7 D6 D5 D4 D3 D2 D1 D0 PO
The first line being the bits numbered as they arrive over the Wiegand wires. The second line being the same bits as they need to be interpreted by the receiver, where PE (b0) is an even parity bit over D23..D12 (b1..b12), PO (b25) is an odd parity bit over D11..D0 (b13..b24), and D23..D0 are the data bits representing an unsigned integer number.
So looking at your number, you would have received:
PE D23 D22 D21 D20 D19 D18 D17 D16 D15 D14 D13 D12 D11 D10 D9 D8 D7 D6 D5 D4 D3 D2 D1 D0 PO
0 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 1 0 1 1 0 1 1 1 1 0
If we check the parity bits PE and PO, we get:
PE D23........D12
0 0100 0110 0001
contains 4 ones (1), hence even parity is met.
D11.........D0 PO
0001 0110 1111 0
contains 7 ones (1), hence odd parity is met.
So, to summarize the above, your code reading from the Wiegand interface does not properly handle the Wiegand data format. First, it does not trim the parity bits, and second, it reads the bits with the wrong polarity (zeros are ones and ones are zeros).
In order to get the correct number from the Wiegand reader, you either have to fix your code for reading from the Wiegand interface (fix the polarity, skip the first and the last bit of the data value, and possibly check the parity bits), or you can take the value that you currently get, invert it, and strip the lowest and highest bits. In C, that would look something like this:
unsigned int currentWiegandValue = ...;  /* raw 26-bit value as currently read */
unsigned int newWiegandValue = ((~currentWiegandValue) >> 1) & 0x00FFFFFF;  /* invert, drop PO, keep the 24 data bits */
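If you also want to verify the two parity bits, a minimal sketch in C could look like the following (the helper names count_ones and wiegand26_parity_ok are illustrative and not part of your existing code; the frame is assumed to be the 26-bit value after fixing the polarity, with PE in bit 25 and PO in bit 0):
#include <stdbool.h>
#include <stdio.h>

/* Count the 1-bits in a value. */
static int count_ones(unsigned int v) {
    int n = 0;
    while (v) { n += v & 1u; v >>= 1; }
    return n;
}

/* Check both parity bits of a 26-bit Wiegand frame (PE = bit 25, PO = bit 0). */
static bool wiegand26_parity_ok(unsigned int frame) {
    unsigned int pe   = (frame >> 25) & 1u;      /* even parity bit, first bit sent */
    unsigned int po   = frame & 1u;              /* odd parity bit, last bit sent */
    unsigned int hi12 = (frame >> 13) & 0xFFFu;  /* D23..D12 */
    unsigned int lo12 = (frame >> 1) & 0xFFFu;   /* D11..D0 */
    return ((count_ones(hi12) + pe) % 2 == 0) && /* PE covers D23..D12, even */
           ((count_ones(lo12) + po) % 2 == 1);   /* PO covers D11..D0, odd */
}

int main(void) {
    unsigned int raw   = 57924897;                /* value read with inverted polarity */
    unsigned int frame = (~raw) & 0x03FFFFFFu;    /* fix polarity, keep 26 bits */
    printf("parity ok: %d\n", wiegand26_parity_ok(frame));  /* expect 1 */
    printf("tag id:    %u\n", (frame >> 1) & 0x00FFFFFFu);  /* expect 4591983 */
    return 0;
}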

Related

Why Go `uint32` integers are not equal after being converted to `float32`

Why is the integer value inconsistent with the original uint32 after the uint32 is converted to float32, while the float64 value is consistent with the original uint32 after converting it to float64?
package main

import (
    "fmt"
)

const (
    deadbeef = 0xdeadbeef
    aa       = uint32(deadbeef)
    bb       = float32(deadbeef)
)

func main() {
    fmt.Println(aa)
    fmt.Printf("%f\n", bb)
}
The result printed is:
3735928559
3735928576.000000
The explanation given in the book is that float32 is rounded up
Analyzing the binary representations of 3735928559 and 3735928576:
3735928559: 1101 1110 1010 1101 1011 1110 1110 1111
3735928576: 1101 1110 1010 1101 1011 1111 0000 0000
It turns out that 3735928576 is the result of setting the last eight bits of 3735928559 to 0 and the ninth-last bit to 1.
In other words, this is the result of rounding away the last 8 bits.
package main

import (
    "fmt"
)

const (
    deadbeef = 0xdeadbeef - 238
    aa       = uint32(deadbeef)
    bb       = float32(deadbeef)
)

func main() {
    fmt.Println(aa)
    fmt.Printf("%f\n", bb)
}
The result printed is:
3735928321
3735928320
Their binary representations are:
3735928321: 1101 1110 1010 1101 1011 1110 0000 0001
3735928320: 1101 1110 1010 1101 1011 1110 0000 0000
Why is the integer part of float32(deadbeef) not equal to the integer part of uint32(deadbeef)? There is no fractional part in deadbeef.
Why does the integer part of float32(deadbeef) differ from the integer part of uint32(deadbeef) by more than 1?
Why does this happen? If there is rounding up or down, what is the logic?
Because float32 does not have enough precision to represent every uint32 value exactly; the number of significant bits it can store is simply too small.
float32 provides only about 6 significant decimal digits (a 24-bit significand).
float64 provides about 15 significant decimal digits (a 53-bit significand).
A uint32 can need up to 32 significant bits, so converting it to float32 rounds it to the nearest value float32 can represent, while float64 can hold every uint32 exactly.
So it is normal for this to happen, and it is not specific to Go: every language that uses IEEE 754 floating point behaves the same way.
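To illustrate that this is an IEEE 754 property rather than anything Go-specific, here is a minimal sketch in C (the Go program above produces the corresponding values): a float carries only a 24-bit significand, so a 32-bit integer that needs more significant bits is rounded to the nearest representable value, while a double holds every uint32 exactly.
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t a = 0xDEADBEEFu;   /* 3735928559, needs 32 significant bits */
    float    f = (float)a;      /* rounded to a 24-bit significand */
    double   d = (double)a;     /* 53-bit significand: exact */

    printf("%u\n", a);          /* 3735928559 */
    printf("%.0f\n", f);        /* 3735928576 (0xDEADBF00) */
    printf("%.0f\n", d);        /* 3735928559 */
    return 0;
}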

Safenet HSM doesn't respond to the message

I'm new to HSMs and I'm using a TCP connection to communicate with a 'SafeNet ProtectHost EFT' HSM. As a start, I tried to call the 'HSM_STATUS' method by sending the following message.
full message (with header) :
0000 0001 0000 0001 0001 1001 1011 1111 0000 0000 0000 0001 0000 0001
this message can be broken down as follows (in reverse order):
message:
0000 0001 is the 1 byte long command: '01' (the function code of the HSM_STATUS method is '01')
safenet header:
0000 0000 0000 0001 is the 2 byte long length of the message: '1' (the length of the function call '0000 0001' is '1')
0001 1001 1011 1111 is the 2 byte long sequence number (an arbitrary value in the request message which is returned with the response message and is not interpreted by the ProtectHost EFT)
0000 0001 is the 1 byte long version number (binary 1 as in the manual)
0000 0001 is the 1 byte long ASCII Start of Header character (hex 01)
But the HSM does not give any output for this message.
Could anyone please tell me what might be the reason for this? Am I doing something wrong in forming this message?
I faced the same issue and resolved it. The root cause is that the HSM expects the user to provide the input with the exact field lengths.
That is, the SOH value needs to be exactly 1 byte long, whereas your input is 4 bytes long. So the following input to the HSM will give you the correct output:
String command = "01"          // SOH
        + "01" + "00" + "00"   // version, arbitrary sequence number
        + "00" + "01"          // length
        + "01";                // function call (HSM_STATUS)
Hope this helps :)
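For reference, here is a minimal sketch in C that builds the same 7-byte request, based only on the byte layout described in the question (SOH, version, 2-byte sequence number, 2-byte length, then the function code); the values and field order are taken from the message breakdown above, not from the ProtectHost EFT manual:
#include <stdio.h>

int main(void) {
    unsigned char  request[7];
    unsigned short sequence = 0x19BF;  /* arbitrary, echoed back by the HSM */
    unsigned short length   = 0x0001;  /* length of the message body: 1 byte */

    request[0] = 0x01;                    /* SOH */
    request[1] = 0x01;                    /* version */
    request[2] = (sequence >> 8) & 0xFF;  /* sequence number, high byte */
    request[3] = sequence & 0xFF;         /* sequence number, low byte */
    request[4] = (length >> 8) & 0xFF;    /* length, high byte */
    request[5] = length & 0xFF;           /* length, low byte */
    request[6] = 0x01;                    /* function code: HSM_STATUS */

    for (int i = 0; i < 7; i++)
        printf("%02X ", request[i]);      /* prints: 01 01 19 BF 00 01 01 */
    printf("\n");
    return 0;
}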

Decoding a Time and Date Format

The Olympus web server on my camera returns dates that I cannot decode into a human-readable format.
I have a couple of values to work with.
17822 is supposed to be 30.12.2014
17953 is supposed to be 01.01.2015 (dd mm yyyy)
17954 is supposed to be 02.01.2015
So I assumed this was just the number of days since some epoch, but counting back from these dates puts that epoch at 05.11.1965, so I guess this is wrong.
Also the time is an integer value as well.
38405 is 18:48
27032 is 13:12
27138 is 13:16
The right values are UTC+1
Maybe somebody has an idea how to decode these two formats.
They are DOS timestamps.
DOS timestamps are a bitfield format with the parts of the date and time encoded into adjacent bits of the number. Here are some worked examples.
number  hex       binary
17822   0x459E  = 0100 0101 1001 1110
                  YYYY YYYM MMMD DDDD
                  Y = 010 0010 = 34 (add 1980 to get 2014)
                  M = 1100 = 12
                  D = 1 1110 = 30
17953   0x4621  = 0100 0110 0010 0001
                  Y = 010 0011 = 35 (2015)
                  M = 0001 = 1
                  D = 0 0001 = 1
17954   0x4622  = 0100 0110 0010 0010
                  Y = 010 0011 = 35 (2015)
                  M = 0001 = 1
                  D = 0 0010 = 2
and the times are similar:
38405 = 0x9605 = 1001 0110 0000 0101
HHHH HMMM MMMS SSSS
H= 1 0010 = 18
M=11 0000 = 48
S= 0 0101 = 5 (double it to get 10)
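Put into code, the decoding could look like this minimal C sketch (the field positions are taken from the worked examples above, not from any camera documentation):
#include <stdio.h>

int main(void) {
    unsigned int date = 17822;   /* expected: 30.12.2014 */
    unsigned int time = 38405;   /* expected: 18:48 (seconds are stored halved) */

    unsigned int year   = ((date >> 9) & 0x7F) + 1980;  /* bits 15..9 */
    unsigned int month  = (date >> 5) & 0x0F;           /* bits  8..5 */
    unsigned int day    = date & 0x1F;                  /* bits  4..0 */

    unsigned int hour   = (time >> 11) & 0x1F;          /* bits 15..11 */
    unsigned int minute = (time >> 5) & 0x3F;           /* bits 10..5  */
    unsigned int second = (time & 0x1F) * 2;            /* bits  4..0, 2-second units */

    printf("%02u.%02u.%04u %02u:%02u:%02u\n",
           day, month, year, hour, minute, second);     /* 30.12.2014 18:48:10 */
    return 0;
}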

DEFLATE Encoding with static Huffman Codes

I need some help to understand how DEFLATE encoding works. I know that it is a combination of the LZSS algorithm and Huffman coding.
So let's encode, for example, "Deflate late". Params: [search buffer: 8 KB, look-ahead buffer: 4 KB]. Well, the output of the LZSS algorithm is "Deflate <5, 4>". The next step uses static Huffman coding to reduce the redundancy. Here is my problem: I don't know how I should encode this pair <5, 4> with Huffman.
[Edited]
D 000
f 001
l 010
a 011
t 100
_ 101
e 11
So, according to this table, the string "Deflate " is written as 000 11 001 010 011 100 11 101. As a next step, let's encode the pair (5, 4). The fixed prefix code for length 4, according to the book "Data Compression - The Complete Reference", is 258, followed by the fixed prefix code for distance 5 (code 4 + 1 extra bit).
That can be summarized as:
length 4 -> 258 -> 0000010
distance 5 -> 4 + 1 extra bit -> 00100|0
So, the encoded string is written as [header: 1 01] 000 11 001 010 011 100 11 101 0000010 001000 [end-of-block: 0000000], BUT if I create a Huffman tree, it is not static Huffman anymore, right?
Good day
D 000
f 001
l 010
a 011
t 100
_ 101
e 11
is not the Deflate static code. The static literal/length codes are all 7, 8, or 9 bits, and the distance codes are all 5 bits. You asked about the static codes.
'Deflate late' encoded in static deflate format as the literals 'Deflate ' and a length 4, distance 5 match in hex is:
73 49 4d cb 49 2c 49 55 00 11 00
That is broken down as follows (bits are read from the least significant part of each byte first):
011 - 01 means fixed code, 1 means last block
00101110 - D
10101001 - e
01101001 - f
00111001 - l
10001001 - a
00100101 - t
10101001 - e
00001010 - space
0100000 - length 4
00100 - distance 5 or 6 depending on one extra bit
0 - extra bit -> distance 5
0000000 - end code
0 - fill bit to byte boundary
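If you want to verify those bytes yourself, a small sketch using zlib's raw inflate should decode them back to 'Deflate late' (this assumes zlib is available and the program is linked with -lz; windowBits = -15 selects raw deflate data without a zlib header):
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void) {
    unsigned char in[] = { 0x73, 0x49, 0x4d, 0xcb, 0x49, 0x2c,
                           0x49, 0x55, 0x00, 0x11, 0x00 };
    unsigned char out[64] = { 0 };
    z_stream strm;

    memset(&strm, 0, sizeof(strm));
    strm.next_in   = in;
    strm.avail_in  = sizeof(in);
    strm.next_out  = out;
    strm.avail_out = sizeof(out);

    if (inflateInit2(&strm, -15) != Z_OK)   /* -15 = raw deflate, no zlib wrapper */
        return 1;
    int ret = inflate(&strm, Z_FINISH);     /* expect Z_STREAM_END */
    inflateEnd(&strm);

    printf("ret = %d, output = \"%s\"\n", ret, out);   /* output = "Deflate late" */
    return 0;
}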

Hex combination of binary flags

Which of the following give back 63 as a long (in Java), and how?
0x0
0x1
0x2
0x4
0x8
0x10
0x20
I'm working with the NetworkManager API flags, if that helps. I'm getting 63 from one of the operations but don't know how I should match the return value to the descriptions.
Thanks
63 is 32 | 16 | 8 | 4 | 2 | 1, where | is the bitwise OR operator.
Or in other words (in hex): 63 (which is 0x3F) is 0x20 | 0x10 | 0x8 | 0x4 | 0x2 | 0x1. If you look at them all in binary, it is obvious:
0x20 : 00100000
0x10 : 00010000
0x08 : 00001000
0x04 : 00000100
0x02 : 00000010
0x01 : 00000001
And 63 is:
0x3F : 00111111
If you're getting some return status and want to know what it means, you'll have to use bitwise AND. For example:
if ((status & 0x02) != 0)
{
}
Will execute if the flag 0x02 (that is, the 2nd bit from the right) is turned on in the returned status. Most often, these flags have names (descriptions), so the code above will read something like:
if ((status & CONNECT_ERROR_FLAG) != 0)
{
}
Again, the status can be a combination of stuff:
// Check if both flags are set in the status
if ((status & (CONNECT_ERROR_FLAG | WRONG_IP_FLAG)) == (CONNECT_ERROR_FLAG | WRONG_IP_FLAG))
{
}
P.S.: To learn why this works, read up on binary flags and their combinations.
I'd give you the same answer as Chris: your return value 63 seems like a combination of all the flags you mention in your list (except 0x0).
When dealing with flags, one easy way to figure out by hand which flags are set is by converting all numbers to their binary representation. This is especially easy if you already have the numbers in hexadecimal, since every digit corresponds to four bits. First, your list of numbers:
0x01  ->  0000 0001
0x02  ->  0000 0010
0x04  ->  0000 0100
 ...
0x20  ->  0010 0000
Now, if you take your value 63, which is 0x3F (= 3 * 16^1 + F * 16^0, where F = 15) in hexadecimal, it becomes:
0x3F  ->  0011 1111
You quickly see that the lower 6 bits are all set, which is an "additive" combination (bitwise OR) of the above binary numbers.
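As a small illustration (written in C here; in Java the test has to be an explicit (status & flag) != 0 because the condition must be a boolean), you can walk over the candidate flags and print the ones contained in 63:
#include <stdio.h>

int main(void) {
    unsigned int status  = 63;  /* 0x3F, the value returned by the API */
    unsigned int flags[] = { 0x01, 0x02, 0x04, 0x08, 0x10, 0x20 };

    for (int i = 0; i < 6; i++) {
        if (status & flags[i])            /* test one flag bit at a time */
            printf("0x%02X is set\n", flags[i]);
    }
    return 0;
}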
63 (decimal) equals 0x3F (hex). So 63 is a combination of all of the following flags:
0x20
0x10
0x08
0x04
0x02
0x01
Is that what you were looking for?