Why `Go` `uint32` integers are not equal after being converted to `float32`

Why is the integer part of a float32 inconsistent with the original uint32 after converting the uint32 to float32, while the integer part of a float64 is consistent with the original uint32 after converting it to float64?
package main

import (
    "fmt"
)

const (
    deadbeef = 0xdeadbeef
    aa       = uint32(deadbeef)
    bb       = float32(deadbeef)
)

func main() {
    fmt.Println(aa)
    fmt.Printf("%f\n", bb)
}
The result printed is:
3735928559
3735928576.000000
The explanation given in the book is that the float32 value is rounded up.
Comparing the binary representations of 3735928559 and 3735928576:
3735928559: 1101 1110 1010 1101 1011 1110 1110 1111
3735928576: 1101 1110 1010 1101 1011 1111 0000 0000
3735928576 is the result of setting the last eight bits of 3735928559 to 0 and the ninth-from-last bit to 1.
This is exactly what rounding the value up at the last 8 bits produces: float32 keeps only 24 significand bits, so the low 8 bits of this 32-bit value are rounded away.
package main

import (
    "fmt"
)

const (
    deadbeef = 0xdeadbeef - 238
    aa       = uint32(deadbeef)
    bb       = float32(deadbeef)
)

func main() {
    fmt.Println(aa)
    fmt.Printf("%f\n", bb)
}
The result printed is:
3735928321
3735928320.000000
The binary representations are:
3735928321: 1101 1110 1010 1101 1011 1110 0000 0001
3735928320: 1101 1110 1010 1101 1011 1110 0000 0000
This time the value is rounded down.
Why is the integer part of float32(deadbeef) not equal to the integer part of uint32(deadbeef), even though deadbeef has no fractional part?
Why does the integer part of float32(deadbeef) differ from that of uint32(deadbeef) by more than 1?
Why does this happen, and if there is rounding up or down, what is the logic behind it?

Because float32 does not have enough precision to represent every uint32 value, the number of effective digits is simply insufficient: its 24-bit significand cannot hold all 32 bits of the integer.
float32 provides only about 6 significant decimal digits (in scientific notation, about 6 digits of the mantissa are reliable).
float64 provides about 15 significant decimal digits, and its 53-bit significand is wide enough to hold any 32-bit integer exactly.
So it is normal for this to happen.
And this is not unique to Go: every language that works with IEEE 754 floating-point numbers has the same problem.
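To see both conversions side by side, here is a minimal sketch (the constant name deadbeef is taken from the question):
package main

import "fmt"

func main() {
    const deadbeef = 0xdeadbeef

    // float32 has a 24-bit significand: the low 8 bits of this
    // 32-bit value are rounded away.
    fmt.Println(uint32(float32(deadbeef))) // 3735928576

    // float64 has a 53-bit significand: every uint32 fits exactly.
    fmt.Println(uint32(float64(deadbeef))) // 3735928559
}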

Related

Safenet HSM doesn't response to the message

I'm new to HSM. I'm using a TCP connection to communicate with a 'safenet ProtectHost EFT' HSM. So to begin with, I tried to call the 'HSM_STATUS' method by sending the following message.
full message (with header) :
0000 0001 0000 0001 0001 1001 1011 1111 0000 0000 0000 0001 0000 0001
this message can be broken down as follows (in reverse order):
message:
0000 0001 is the 1 byte long command: '01' (the function code of the HSM_STATUS method is '01')
safenet header:
0000 0000 0000 0001 is the 2 byte long length of the message: '1' (the length of the function call '0000 0001' is '1')
0001 1001 1011 1111 is the 2 byte long sequence number (an arbitrary value in the request message which is returned with the response message and is not interpreted by the ProtectHost EFT).
0000 0001 is the 1 byte long version number (binary 1, as in the manual)
0000 0001 is the 1 byte long ASCII Start of Header character (hex 01)
But the HSM does not give any output for this message.
Could anyone please tell me what might be the reason for this? Am I doing something wrong in forming this message?
I faced the same issue and resolved it. The root cause is that the HSM expects the user to provide the accurate length of the input.
That is, the SOH value needs to be exactly 1 byte long, whereas your input is 4 bytes long. So the following input to the HSM will give you the correct output:
String command = "01"  // SOH
               + "01"  // version
               + "00"  // arbitrary value
               + "00"  // arbitrary value
               + "00"  // length (high byte)
               + "01"  // length (low byte)
               + "01"; // function call
Hope this helps :)
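For comparison, here is a minimal Go sketch of the same byte layout (the field order follows this answer; consult the ProtectHost EFT manual for the authoritative framing):
package main

import "fmt"

func main() {
    // HSM_STATUS request, one field per byte as described above.
    msg := []byte{
        0x01,       // SOH (ASCII Start of Header)
        0x01,       // version number
        0x00, 0x00, // arbitrary sequence number
        0x00, 0x01, // 2-byte length of the message body
        0x01, // function code: HSM_STATUS
    }
    fmt.Printf("% x\n", msg) // 01 01 00 00 00 01 01
}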

print binary strings to a textfile with specific row lengths

Unix+matlab R2016b
I have a 1-dimensional array of numbers that I'd like to export as a 12-bit binary table in a text file.
x = [0:1:250];
a = exp(-((x*8)/350));
b = exp(-((x*8)/90));
y = a-b;
y_12bit = y*4095;
y_12bitRound = ceil(y_12bit);
y_12bitRoundBinary = de2bi(y_12bitRound,12);
fileID = fopen('expo_1.txt','w');
fprintf(fileID,'%12s',y_12bitRoundBinary);
Now, y_12bitRoundBinary looks good when I print it in the console.
y_12bitRoundBinary =
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 0 0 0
0 0 1 0 0 1 1 1 1 0 0 0
0 0 0 0 1 1 0 1 0 1 0 0
0 0 1 0 0 1 1 0 1 1 0 0
0 0 1 0 0 0 0 0 0 0 1 0
This looks good, but I would like the bit order to be reversed: the MSBs are towards the right, and I would like them on the left instead. Also, I don't know if this printing actually proves that y_12bitRoundBinary is correct.
Are there spaces in between each bit, or is this just the format of the printing function?
How do I change the ordering?
The first lines of the data written to file expo_1.txt:
0000 0000 0000 0000 0000 0101 0001 0001
0001 0000 0101 0001 0101 0100 0000 0100
0101 0000 0000 0100 0100 0101 0000 0100
0000 0001 0001 0101 0100 0100 0000 0100
0101 0101 0001 0101 0000 0101 0101 0100
0100 0000 0101 0001 0101 0101 0100 0101
As you see, the data is beyond recognition compared with how y_12bitRoundBinary was printed in the Matlab console.
Would anyone have any pointers on where my mistake is concerning the data written to file?
Assuming that you would like to print the numbers as ASCII strings, this will work:
x = 0:250;
a = exp(-((x*8)/350));
b = exp(-((x*8)/90));
y = a-b;
% Reduce to 12 bit precision
y_12bit = y*2^12;
y_12bitRound = floor(y_12bit);
% Convert y_12bitRound to char arrays with 12 bits precision
y_12bitCharArray = dec2bin(y_12bitRound,12);
% Convert to cell array of string
y_12bitCellArray = cellstr(y_12bitCharArray);
fileID = fopen('expo_1.txt','wt');
fprintf(fileID,'%s\n', y_12bitCellArray{:});
fclose(fileID);
This will print the following to the file expo_1.txt
000000000000
000011111111
000111100100
001010101111
001101100011
010000000011
010010010000
...
The trick is to convert the char array to a cell array of strings, which is easier to print as desired, using the {:} operator to expand the cell array.
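For readers outside MATLAB, here is a rough Go equivalent of the same pipeline (a sketch using the same quantization as the code above: multiply by 2^12 and round down):
package main

import (
    "bufio"
    "fmt"
    "math"
    "os"
)

func main() {
    f, err := os.Create("expo_1.txt")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    w := bufio.NewWriter(f)
    defer w.Flush()

    for x := 0.0; x <= 250; x++ {
        y := math.Exp(-x*8/350) - math.Exp(-x*8/90)
        v := int(math.Floor(y * 4096)) // quantize to 12 bits
        fmt.Fprintf(w, "%012b\n", v)   // zero-padded binary, MSB first
    }
}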

Wiegand RFID reader VS USB RFID reader Raspberry PI

I have two Raspberry Pis running Python code to retrieve the serial number of an RFID tag. One has an RFID reader with a Wiegand interface hooked to GPIO pins, and the other has an RFID reader that behaves like a keyboard connected over USB. However, I get different numbers from the two readers when scanning the same RFID tag.
For example, for one tag, I get 57924897 from the Raspberry Pi with the Wiegand reader and 0004591983 from the Raspberry Pi with the USB keyboard reader.
Can somebody explain the difference? Are both readers reading the same thing? Or are they just reading different parameters?
Looking at those two values, it seems that you do not properly read and convert the value from the Wiegand interface.
The USB keyboard reader reads the serial number in 10-digit decimal form. A Wiegand reader typically truncates the serial number into a 26-bit value (1 parity bit + 8-bit site code + 16-bit tag ID + 1 parity bit).
So let's look at the two values that you get:
+--------------+------------+-------------+-----------------------------------------+
| READER | DECIMAL | HEXADECIMAL | BINARY |
+--------------+------------+-------------+-----------------------------------------+
| USB keyboard | 0004591983 | 0046116F | 0000 0000 0100 0110 0001 0001 0110 1111 |
| Wiegand | 57924897 | 373DD21 | 1 1011 1001 1110 1110 1001 0000 1 |
+--------------+------------+-------------+-----------------------------------------+
When you take a close look at the binary representation of those two values, you will see that they correlate with each other:
USB keyboard: 0000 0000 0100 0110 0001 0001 0110 1111
Wiegand: 1 1011 1001 1110 1110 1001 0000 1
So it seems as if the Wiegand value matches the inverted value obtained from the USB keyboard reader:
USB keyboard: 0000 0000 0100 0110 0001 0001 0110 1111
NOT(Wiegand): 0 0100 0110 0001 0001 0110 1111 0
So the inverted value (logical NOT) from the Wiegand interface matches the value read by the USB reader.
Next, let's look at the two parity bits. The data over the Wiegand interface typically looks like:
b0 b1 b2 b3 b4 b5 b6 b7 b8 b9 b10 b11 b12 b13 b14 b15 b16 b17 b18 b19 b20 b21 b22 b23 b24 b25
PE D23 D22 D21 D20 D19 D18 D17 D16 D15 D14 D13 D12 D11 D10 D9 D8 D7 D6 D5 D4 D3 D2 D1 D0 PO
The first line being the bits numbered as they arrive over the Wiegand wires. The second line being the same bits as they need to be interpreted by the receiver, where PE (b0) is an even parity bit over D23..D12 (b1..b12), PO (b25) is an odd parity bit over D11..D0 (b13..b24), and D23..D0 are the data bits representing an unsigned integer number.
So looking at your number, you would have received:
PE D23 D22 D21 D20 D19 D18 D17 D16 D15 D14 D13 D12 D11 D10 D9 D8 D7 D6 D5 D4 D3 D2 D1 D0 PO
0 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 1 0 1 1 0 1 1 1 1 0
If we check the parity bits PE and PO, we get:
PE D23........D12
0 0100 0110 0001
contains 4 ones (1), hence even parity is met.
D11.........D0 PO
0001 0110 1111 0
contains 7 ones (1), hence odd parity is met.
So, to summarize the above, your code reading from the Wiegand interface does not properly handle the Wiegand data format. First, it does not trim the parity bits, and second, it reads the bits with the wrong polarity (zeros are ones and ones are zeros).
In order to get the correct number from the Wiegand reader, you either have to fix your code for reading from the Wiegand interface (to fix the polarity, to skip the first and the last bit of the data value, and possibly to check the parity bits), or you can take the value that you currently get, invert it, and strip the lower and upper bits. In C, that would look something like this:
unsigned int currentWiegandValue = ...;
unsigned int newWiegandValue = ((~currentWiegandValue) >> 1) & 0x0FFFFFF;
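The same fix, sketched more fully in Go (the frame layout and polarity follow this answer; the function name and error handling are made up for illustration):
package main

import (
    "fmt"
    "math/bits"
)

// decodeWiegand26 takes a raw 26-bit frame with inverted polarity
// (as in the question) and returns the 24-bit data value D23..D0.
func decodeWiegand26(raw uint32) (uint32, error) {
    frame := ^raw & 0x3FFFFFF       // fix the polarity, keep 26 bits
    data := (frame >> 1) & 0xFFFFFF // strip PO (bit 0) and PE (bit 25)
    pe := int(frame >> 25 & 1)
    po := int(frame & 1)
    // PE plus D23..D12 must contain an even number of ones.
    if (bits.OnesCount32(data>>12)+pe)%2 != 0 {
        return 0, fmt.Errorf("even parity check failed")
    }
    // D11..D0 plus PO must contain an odd number of ones.
    if (bits.OnesCount32(data&0xFFF)+po)%2 != 1 {
        return 0, fmt.Errorf("odd parity check failed")
    }
    return data, nil
}

func main() {
    fmt.Println(decodeWiegand26(57924897)) // 4591983 <nil>
}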

Decoding a Time and Date Format

The Olympus web server on my camera returns dates that I cannot decode to a human-readable format.
I have a couple of values to work with.
17822 is supposed to be 30.12.2014
17953 is supposed to be 01.01.2015 (dd mm yyyy)
17954 is supposed to be 02.01.2015
So I assumed this was just the number of days since some epoch, but that epoch would turn out to be 05.11.1965, so I guess this is wrong.
The time is an integer value as well.
38405 is 18:48
27032 is 13:12
27138 is 13:16
The right values are UTC+1
Maybe somebody has an idea how to decode these two formats.
They are DOS timestamps.
DOS timestamps are a bitfield format, with the parts of the date and time encoded into adjacent bits of the number. Here are some worked examples.
number hex binary
17822 0x459E = 0100 0101 1001 1110
               YYYY YYYM MMMD DDDD
Y=010 0010 = 34 (add 1980 to get 2014)
M=1100 = 12
D=1 1110 = 30
17953 0x4621 = 0100 0110 0010 0001
Y=010 0011 = 35 (2015)
M=0001 = 1
D=0 0001 = 1
17954 0x4622 = 0100 0110 0010 0010
Y=010 0011 = 35 (2015)
M=0001 = 1
D=0 0010 = 2
and the times are similar
38405 = 0x9605 = 1001 0110 0000 0101
HHHH HMMM MMMS SSSS
H= 1 0010 = 18
M=11 0000 = 48
S= 0 0101 = 5 (double it to get 10)
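The same unpacking in Go (a sketch; the helper names are invented, the bit widths are the ones shown above):
package main

import "fmt"

// decodeDOSDate unpacks a 16-bit DOS date:
// 7 bits of year (since 1980), 4 bits of month, 5 bits of day.
func decodeDOSDate(d uint16) (year, month, day int) {
    return int(d>>9) + 1980, int(d >> 5 & 0xF), int(d & 0x1F)
}

// decodeDOSTime unpacks a 16-bit DOS time:
// 5 bits of hour, 6 bits of minute, 5 bits of seconds/2.
func decodeDOSTime(t uint16) (hour, min, sec int) {
    return int(t >> 11), int(t >> 5 & 0x3F), int(t&0x1F) * 2
}

func main() {
    y, m, d := decodeDOSDate(17822)
    h, mi, s := decodeDOSTime(38405)
    fmt.Printf("%02d.%02d.%04d %02d:%02d:%02d\n", d, m, y, h, mi, s) // 30.12.2014 18:48:10
}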

DEFLATE Encoding with static Huffman Codes

I need some help understanding how DEFLATE encoding works. I know that it is a combination of the LZSS algorithm and Huffman coding.
So let's encode, for example, "Deflate late". Params: [search buffer: 8 KB and look-ahead buffer: 4 KB]. Well, the output of the LZSS algorithm is "Deflate <5, 4>". The next step uses static Huffman coding to reduce the redundancy. Here is my problem: I don't know how I should encode this pair <5, 4> with Huffman.
[Edited]
D 000
f 001
l 010
a 011
t 100
_ 101
e 11
So, according to this table, the string "Deflate " is written as 000 11 001 010 011 100 11 101. As a next step, let's encode the pair (5, 4). The fixed prefix code of the length 4, according to the book "Data Compression - The Complete Reference", is 258, followed by the fixed prefix code of the distance 5 (code 4 + 1 extra bit).
That can be summarized as:
length 4 -> 258 -> 0000010
distance 5 -> 4 + 1 extra bit -> 00100|0
So, the encoded string is written as [header: 1 01] 000 11 001 010 011 100 11 101 0000010 001000 [end-of-block: 0000000], BUT if I create a Huffman tree, it is not a static Huffman code anymore, right?
Good day
D 000
f 001
l 010
a 011
t 100
_ 101
e 11
is not the Deflate static code. The static literal/length codes are all 7, 8, or 9 bits, and the distance codes are all 5 bits. You asked about the static codes.
'Deflate late' encoded in static deflate format as the literals 'Deflate ' and a length 4, distance 5 match in hex is:
73 49 4d cb 49 2c 49 55 00 11 00
That is broken down as follows (bits are read from the least significant part of each byte first):
011 - 01 means fixed code, 1 means last block
00101110 - D
10101001 - e
01101001 - f
00111001 - l
10001001 - a
00100101 - t
10101001 - e
00001010 - space
0100000 - length 4
00100 - distance 5 or 6 depending on one extra bit
0 - extra bit -> distance 5
0000000 - end code
0 - fill bit to byte boundary
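As a quick sanity check (a sketch, not part of the original answer), Go's compress/flate consumes exactly this kind of raw DEFLATE stream and recovers the original text:
package main

import (
    "bytes"
    "compress/flate"
    "fmt"
    "io"
)

func main() {
    // The static-Huffman DEFLATE stream broken down above.
    stream := []byte{0x73, 0x49, 0x4d, 0xcb, 0x49, 0x2c, 0x49, 0x55, 0x00, 0x11, 0x00}

    r := flate.NewReader(bytes.NewReader(stream))
    defer r.Close()

    out, err := io.ReadAll(r)
    if err != nil {
        panic(err)
    }
    fmt.Printf("%q\n", out) // "Deflate late"
}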