How do I shift bits using Swift?

In Objective-C the code is
uint16_t majorBytes;
[data getBytes:&majorBytes range:majorRange];
uint16_t majorBytesBig = (majorBytes >> 8) | (majorBytes << 8);
In Swift
var majorBytes: CConstPointer<UInt16> = nil
data.getBytes(&majorBytes, range: majorRange)
How about majorBytesBig?

The bit-shifting syntax has not changed from Objective-C to Swift. Check the Advanced Operators chapter of the Swift book for a deeper understanding of what's going on here.
// as binary: 0000 0001 1010 0101 (421)
let majorBytes: UInt16 = 421
// as binary: 1010 0101 0000 0000 (42240)
let majorBytesShiftedLeft: UInt16 = (majorBytes << 8)
// as binary: 0000 0000 0000 0001 (1)
let majorBytesShiftedRight: UInt16 = (majorBytes >> 8)
// as binary: 1010 0101 0000 0001 (42241)
let majorBytesBig = majorBytesShiftedRight | majorBytesShiftedLeft
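If you are on a current Swift toolchain, the standard library also exposes this swap directly through the byteSwapped property on fixed-width integers, so you don't have to spell out the shifts yourself. A short sketch of the same computation as above:

```swift
let majorBytes: UInt16 = 421  // 0x01A5

// Manual swap, exactly as in the shifting example above.
// Swift's << and >> discard bits shifted past the type's width.
let manual = (majorBytes >> 8) | (majorBytes << 8)

// Built-in swap available on all fixed-width integer types
let builtin = majorBytes.byteSwapped

print(manual, builtin)  // both print 42241 (0xA501)
```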

Why `Go` integer `uint32` numbers are not equal after being converted to `float32`

Why is the integer part of a float32 inconsistent with the original uint32 after converting the uint32 to float32, while converting the same value to float64 preserves the integer part exactly?
package main

import (
	"fmt"
)

const (
	deadbeef = 0xdeadbeef
	aa       = uint32(deadbeef)
	bb       = float32(deadbeef)
)

func main() {
	fmt.Println(aa)
	fmt.Printf("%f\n", bb)
}
The result printed is:
3735928559
3735928576.000000
The explanation given in the book is that float32 rounds the value.
Comparing the binary representations of 3735928559 and 3735928576:
3735928559: 1101 1110 1010 1101 1011 1110 1110 1111
3735928576: 1101 1110 1010 1101 1011 1111 0000 0000
3735928576 is 3735928559 with the last eight bits cleared and a carry into the ninth bit; in other words, the value was rounded up to the nearest multiple of 256.
package main

import (
	"fmt"
)

const (
	deadbeef = 0xdeadbeef - 238
	aa       = uint32(deadbeef)
	bb       = float32(deadbeef)
)

func main() {
	fmt.Println(aa)
	fmt.Printf("%f\n", bb)
}
The result printed is:
3735928321
3735928320.000000
The binary representations are:
3735928321: 1101 1110 1010 1101 1011 1110 0000 0001
3735928320: 1101 1110 1010 1101 1011 1110 0000 0000
This time the value was rounded down.
Why is the integer part of float32(deadbeef) not equal to uint32(deadbeef), even though deadbeef has no fractional part? Why can they differ by more than 1? Why does this happen, and if there is rounding up or down, what is the logic?
Because float32 does not have enough precision to represent every uint32 value: its significand holds only 24 bits, while 0xdeadbeef needs 32 significant bits, so the low bits are rounded away.
float32 provides about 6-7 significant decimal digits.
float64 provides about 15-16 significant decimal digits, which is enough to represent any uint32 exactly.
So this result is expected, and it is not specific to Go: every language that uses IEEE 754 floating point behaves the same way.
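A small Go program makes the significand limit visible; nothing here goes beyond the 0xdeadbeef example above:

```go
package main

import "fmt"

func main() {
	const deadbeef = 0xdeadbeef // 3735928559; needs 32 significant bits

	u := uint32(deadbeef)
	f := float32(deadbeef) // 24-bit significand: the low 8 bits get rounded away
	g := float64(deadbeef) // 53-bit significand: exact for every uint32

	fmt.Println(u)          // 3735928559
	fmt.Printf("%.0f\n", f) // 3735928576 (nearest multiple of 256)
	fmt.Printf("%.0f\n", g) // 3735928559 (exact)
}
```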

Safenet HSM doesn't respond to the message

I'm new to HSMs. I'm using a TCP connection to communicate with a 'SafeNet ProtectHost EFT' HSM. To start, I tried to call the 'HSM_STATUS' method by sending the following message.
full message (with header) :
0000 0001 0000 0001 0001 1001 1011 1111 0000 0000 0000 0001 0000 0001
this message can be broken down as follows (in reverse order):
message :
0000 0001 is the 1 byte long command : '01' (function code of
HSM_STATUS method is '01')
safenet header :
0000 0000 0000 0001 is the 2 byte long length of the message : '1'
(length of the function call '0000 0001' is '1')
0001 1001 1011 1111 is 2 byte long Sequence Number (An arbitrary
value in the request message which is returned with the response
message and is not interpreted by the ProtectHost EFT).
0000 0001 is the 1 byte long version number(binary 1 as in the
manual)
0000 0001 is the 1 byte long ASCII Start of Header character (Hex
01)
But the HSM does not give any output for this message.
Could anyone please tell me what the reason for this might be? Am I doing something wrong in forming this message?
I faced the same issue and resolved it. The root cause is that the HSM expects each input field to have exactly the right length: the SOH value must be exactly 1 byte long, whereas your input encodes it in 4 bytes. The following input to the HSM will give you the correct output:
String command = "01"   // SOH
               + "01"   // version
               + "00"   // sequence number (arbitrary)
               + "00"   // sequence number (arbitrary)
               + "00"   // length, high byte
               + "01"   // length, low byte
               + "01";  // function code (HSM_STATUS)
Hope this helps :)

print binary strings to a textfile with specific row lengths

Unix+matlab R2016b
I have a 1-dimensional array of numbers that I'd like to export as a 12-bit binary table in a text file.
x = [0:1:250];
a = exp(-((x*8)/350));
b = exp(-((x*8)/90));
y = a-b;
y_12bit = y*4095;
y_12bitRound = ceil(y_12bit);
y_12bitRoundBinary = de2bi(y_12bitRound,12);
fileID = fopen('expo_1.txt','w');
fprintf(fileID,'%12s',y_12bitRoundBinary);
Now, y_12bitRoundBinary looks good when I print it in the console.
y_12bitRoundBinary =
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 0 0 0
0 0 1 0 0 1 1 1 1 0 0 0
0 0 0 0 1 1 0 1 0 1 0 0
0 0 1 0 0 1 1 0 1 1 0 0
0 0 1 0 0 0 0 0 0 0 1 0
This looks good, but I would like the bit order to be reversed. The MSBs are towards the right and I would like a little-endian ordering. Also, I don't know if this printing actually proves that y_12bitRoundBinary is correct.
Are there spaces in between each bit or is this just the format of the printing function?
How to change the ordering?
The first lines of the data written to file expo_1.txt:
0000 0000 0000 0000 0000 0101 0001 0001
0001 0000 0101 0001 0101 0100 0000 0100
0101 0000 0000 0100 0100 0101 0000 0100
0000 0001 0001 0101 0100 0100 0000 0100
0101 0101 0001 0101 0000 0101 0101 0100
0100 0000 0101 0001 0101 0101 0100 0101
As you see, the data is beyond recognition compared with how y_12bitRoundBinary was printed in the Matlab console.
Would anyone have any pointers on where my mistake is concerning the data written to file?
Assuming that you would like to print the numbers as ASCII strings, this will work:
x = 0:250;
a = exp(-((x*8)/350));
b = exp(-((x*8)/90));
y = a-b;
% Reduce to 12 bit precision
y_12bit = y*2^12;
y_12bitRound = floor(y_12bit);
% Convert y_12bitRound to char arrays with 12 bits precision
y_12bitCharArray = dec2bin(y_12bitRound,12);
% Convert to a cell array of strings
y_12bitCellArray = cellstr(y_12bitCharArray);
fileID = fopen('expo_1.txt','wt');
fprintf(fileID,'%s\n', y_12bitCellArray{:});
fclose(fileID);
This will print the following to the file expo_1.txt
000000000000
000011111111
000111100100
001010101111
001101100011
010000000011
010010010000
...
The trick is to convert the char array to a cell array of strings, which is easier to print as desired, using the {:} operator to expand the cell array.

Decoding a Time and Date Format

The Olympus web server on my camera returns dates that I cannot decode into a human-readable format.
I have a couple of values to work with.
17822 is supposed to be 30.12.2014
17953 is supposed to be 01.01.2015 (dd mm yyyy)
17954 is supposed to be 02.01.2015
So I assumed this was just the number of days since some epoch, but counting back works out to 05.11.1965, so I guess this is wrong.
The time is an integer value as well.
38405 is 18:48
27032 is 13:12
27138 is 13:16
The right values are UTC+1
Maybe somebody has an idea how to decode these two formats.
They are DOS timestamps.
DOS timestamps are a bitfield format, with the parts of the date and time packed into adjacent bits of the number. Here are some worked examples:
number hex binary
17822 0x459E = 0100 0101 1001 1110
YYYY YYYM MMMD DDDD
Y=010 0010 = 34 (add 1980 to get 2014)
M=1100 = 12
D=1 1110 = 30
17953 0x4621 = 0100 0110 0010 0001
Y=010 0011 = 35 (2015)
M=0001 = 1
D=0 0001 = 1
17954 0x4622 = 0100 0110 0010 0010
Y=010 0011 = 35 (2015)
M=0001 = 1
D=0 0010 = 2
and the times are similar:
38405 = 0x9605 = 1001 0110 0000 0101
HHHH HMMM MMMS SSSS
H= 1 0010 = 18
M=11 0000 = 48
S= 0 0101 = 5 (double it to get 10)
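The bit layouts worked through above can be packed into two small helper functions. This Go sketch simply mirrors the field widths shown (7/4/5 bits for the date, 5/6/5 bits for the time, with seconds stored halved):

```go
package main

import "fmt"

// decodeDOSDate unpacks a 16-bit DOS date: 7 bits of years-since-1980,
// 4 bits of month, 5 bits of day.
func decodeDOSDate(d uint16) (year, month, day int) {
	return int(d>>9) + 1980, int(d >> 5 & 0x0F), int(d & 0x1F)
}

// decodeDOSTime unpacks a 16-bit DOS time: 5 bits of hours, 6 bits of
// minutes, 5 bits of seconds divided by two.
func decodeDOSTime(t uint16) (hour, min, sec int) {
	return int(t >> 11), int(t >> 5 & 0x3F), int(t&0x1F) * 2
}

func main() {
	y, m, d := decodeDOSDate(17822)
	fmt.Printf("%02d.%02d.%04d\n", d, m, y) // 30.12.2014

	h, mi, s := decodeDOSTime(38405)
	fmt.Printf("%02d:%02d:%02d\n", h, mi, s) // 18:48:10
}
```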

add new column with data split from existing column using awk

Can this be achieved using awk?
I am looking to add a new column which has delimited data from an existing column.
My input file is a tab delimited text file.
The column with device-0, device-1 is the new column derived from phone-0-1, phone-1-2.
Input:
category1 phone-0-1 working 0000 0000 new 0
category1 phone-1-2 working 0000 0000 new 0
category1 phone-2-4 working 0000 0000 new 0
category1 phone-3-5 working 0000 0000 new 0
category1 phone-4-6 working 0000 0000 new 0
Output:
category1 device-0 phone-0-1 working 0000 0000 new 0
category1 device-1 phone-1-2 working 0000 0000 new 0
category1 device-2 phone-2-4 working 0000 0000 new 0
category1 device-3 phone-3-5 working 0000 0000 new 0
category1 device-4 phone-4-6 working 0000 0000 new 0
Try this sed line and see if it works:
sed 's/\s\(phone-[0-9-]\+\)/\t\1\t\1/' file
Same idea with awk (this duplicates the phone column; renaming it comes next):
awk -F'\t' -v OFS='\t' '{$2 = $2 OFS $2} 1' file
EDIT, to rename the new column to device (the \([0-9]\+\) capture stops at the first hyphen, so phone-0-1 yields device-0):
sed 's/\(\s\+\)phone-\([0-9]\+\)/\1device-\2&/' file
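If you'd rather build the device-N column directly instead of duplicating and renaming, a single awk pass can split the phone field on the hyphen. This assumes the tab-delimited layout shown above, with the input in a file named file:

```shell
awk -F'\t' -v OFS='\t' '{
    split($2, a, "-")            # a[2] holds the first number of phone-N-M
    $2 = "device-" a[2] OFS $2   # prepend the new device-N column
} 1' file
```

With the sample input this prints each row with device-0, device-1, ... inserted between category1 and the phone column.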