Unusual unsigned short to bits swapping byte order - Perl

I'm reading in a stream of data, 64 bytes to be exact. I want to read 16 bits starting at the 480th bit of the incoming data. Unfortunately, I do not know what the incoming data type is; it's a bunch of random characters/boxes. Reading it in as an unsigned short (v), I get the number I am looking for, which for this example is 13.
my $satt_id = unpack("x60v1", $msgdata); # $satt_id == 13
This results in $satt_id == 13, which is 00000000 00001101.
If I pull the data as 16 bits (b or B), the string does not reflect the value of 13, but rather is byte-swapped or reversed.
my $satt_idb = unpack("x60b16", $msgdata); #satt_idb == "10110000 00000000"
my $satt_idB = unpack("x60B16", $msgdata); #satt_idB == "00001101 00000000"
Why is this occurring? I want to alter the data and resend the message, which would be relatively easy if all of the message elements were the same size (16 bits: just pack it back as it was unpacked), but some are 6, 4, 2, and 1 bits. Should I just use little-endian b and then reverse? After altering the data, reverse it back to the original order and then pack it back as b?
Completely separate and not Perl-related, but this haunted me in a different utility. I just conceded by swapping the values in the Enum designation. It worked; it just wasn't very viable once the number of bits got higher than 4 (16 different values).
Thanks!
EDIT: I'm guessing this is just related to binary notation, which apparently starts from the right? So $satt_idb is correct if you read right to left. So to make it more user-friendly: just reverse, alter, then reverse again and repack?
EDIT2: Basically I'm trying to make a user-friendly method of editing messages coming through a data stream. As I mentioned in the comments, if I want to edit a single bit from 0 to 1 (which in the message represents something as true/false), I don't want the user to have to worry about editing the octet of data received; they should just select true/false from a dropdown.

If it works with v, it means the data is in little-endian byte order, which means
0b0000000000001101
is stored as
0b00001101 0b00000000
which is what you got.
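You can see all three views of the same two bytes with a short, self-contained demo (here $msg stands in for the 2-byte field cut out of $msgdata):
use strict;
use warnings;

# 13 packed with "v" (little-endian unsigned short) gives the bytes
# 0x0D 0x00 -- least significant byte first.
my $msg = pack("v", 13);
printf "bytes: %02X %02X\n", unpack("C2", $msg);   # bytes: 0D 00

# "b" walks each byte least-significant bit first, "B" most-significant
# bit first; neither reorders the bytes themselves.
print unpack("b16", $msg), "\n";   # 1011000000000000
print unpack("B16", $msg), "\n";   # 0000110100000000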
Should I just use little-endian b and then reverse?
No. You are likely doing something incorrect if you are converting the numbers to a text representation (binary).
If you did somehow want the binary representation of the number, you could use
sprintf("%016b", $num)   # "0000000000001101" for 13
(the 0 in the format pads with zeros; plain "%16b" would pad with spaces).
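If the goal (per EDIT2) is to let a user flip a single true/false bit, you can stay numeric the whole way and never touch a text representation. A minimal sketch, assuming the field sits at byte offset 60 as in the question; set_flag() and the sample message are illustrative, not from the original post:
use strict;
use warnings;

# Pull the field out as a number with "v", set or clear one bit
# arithmetically, and splice the repacked field straight back in.
sub set_flag {
    my ($msgdata, $bit, $true_or_false) = @_;
    my $field = unpack("x60 v", $msgdata);
    if ($true_or_false) { $field |=  1 << $bit }
    else                { $field &= ~(1 << $bit) }
    substr($msgdata, 60, 2) = pack("v", $field);
    return $msgdata;
}

my $msg = ("\0" x 60) . pack("v", 13) . ("\0" x 2);   # fake 64-byte message
$msg = set_flag($msg, 2, 0);                          # clear bit 2: 13 -> 9
print unpack("x60 v", $msg), "\n";                    # 9
The same idea extends to the 6-, 4-, 2-, and 1-bit elements: mask and shift within the unpacked number, then repack with the same template that unpacked it.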

ModBus to Click PLC

Looking for help with understanding how to change the value in address DS1 (400001). First, the Click appears to use a 6-digit Modbus address, so I am not sure how to fit that into 2 bytes. I think I read that 40001 is the same address, but I do not see how. I am able to receive and understand the data when the Click PLC is the master. I would like my PC to be the master and change the value at that address.
Here is the data I am sending to the PLC. I am expecting this data to be sent to PLC slave 02 and change the data in DS1 (400001) to the value of zero.
frame(0) = 2 'Slave Address =2
frame(1) = 6 'Mode =6
frame(2) = CByte(40001 / 256) ' Address Hi byte
frame(3) = CByte(40001 Mod 256) ' Address Lo byte
frame(4) = 0 ' Value Hi byte
frame(5) = 0 ' Value Lo byte
Dim crc As Byte() = CRC(frame) ' Call CRC Calculate.
frame(6) = crc(0) '=59 Error Check Lo
frame(7) = crc(1) '=189 Error Check Hi
SerialPort1.Write(frame, 0, frame.Length)
Realize that Application Layer addressing in Modbus is different from the bytes on the wire. The leading digit of an Application Layer address (e.g. 4xxxx for a Holding Register) is implied by the function code (e.g. Read Holding Registers).
So you drop the leading 4 and are left with an offset of 1-65536 (yes, Application Layer offsets are 1-based). But on the WIRE they are 0-based, so you then subtract 1 from the offset to get a value of 0-65535.
So you sometimes see Application Layer Modbus HRs written as 4001, 40001, or 400001, all referencing the first HR in the device. Five digits is most common; I see four digits on old RTU devices, and six digits every once in a while when the remote device has a ton of registers (or not, like the Click).
Realize also that a lot of devices are implemented by people who only understand the low-level protocol, so when they say something is at address 40001, it may actually be at offset 0x0001, or 0x0000 (the correct offset on the wire). I even saw one implementation that put address 40001 at literally 0x9C41 on the wire (maybe 0x9C40). Yes, that is the 6-digit Application Layer Holding Register 440001.
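A sketch of the corrected frame, shown in Perl purely for illustration (the question's code is VB.NET): Application Layer address 400001 becomes wire offset 0x0000, and the CRC routine is the standard Modbus RTU CRC-16 (poly 0xA001, init 0xFFFF), appended low byte first just like crc(0)/crc(1) above.
use strict;
use warnings;

# Standard Modbus RTU CRC-16.
sub crc16 {
    my ($data) = @_;
    my $crc = 0xFFFF;
    for my $byte (unpack("C*", $data)) {
        $crc ^= $byte;
        for (1 .. 8) {
            $crc = ($crc & 1) ? ($crc >> 1) ^ 0xA001 : $crc >> 1;
        }
    }
    return pack("v", $crc);   # little-endian: CRC Lo, then CRC Hi
}

# Write Single Register (function 06) on slave 2: register 400001 is
# wire offset 0x0000, new value 0. "n" packs big-endian 16-bit fields.
my $frame = pack("C C n n", 0x02, 0x06, 0x0000, 0x0000);
$frame .= crc16($frame);
print join(" ", map { sprintf "%02X", $_ } unpack("C*", $frame)), "\n";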

What is the correct behavior of C_Decrypt in pkcs#11?

I am using C_Decrypt with the CKM_AES_CBC_PAD mechanism. I know that my ciphertext which is 272 bytes long should actually decrypt to 256 bytes, which means a full block of padding was added.
I know that according to the standard, when invoking C_Decrypt with a NULL output buffer, the function may return an output length which is somewhat longer than the actual required length; in particular, when padding is used this is understandable, as the function can't know how many padding bytes are in the final block without carrying out the actual decryption.
So the question is: if I know that I should get exactly 256 bytes back, as in the scenario explained above, does it make sense that I am still getting a CKR_BUFFER_TOO_SMALL error despite passing a 256-byte buffer? (To make it clear: I am indicating this as the length of the output buffer in the appropriate output buffer length parameter; see the parameters of C_Decrypt to observe what I mean.)
I am encountering this behavior with a SafeNet Luna device and am not sure what to make of it. Is it my code's fault for not querying for the length first by passing NULL as the output buffer, or is this a bug on the HSM/PKCS#11 library side?
One more thing I should perhaps mention is that when I provide a 272 (256+16) bytes output buffer, the call succeeds and I am noticing that I am getting back my expected plaintext, but also the padding block which means 16 final bytes with the value 0x10. However, the output length is updated correctly to 256, not 272 - this also proves that I am not using CKM_AES_CBC instead of CKM_AES_CBC_PAD accidentally, which I suspected for a moment as well :)
I have used the CKM.AES_CBC_PAD mechanism with C_Decrypt in the past. You have to make two calls to C_Decrypt (the first to get the size of the plaintext, the second to do the actual decryption); see the documentation here, which talks about determining the length of the buffer needed to hold the plaintext.
Below is the step-by-step code to show the behavior of decryption:
//Defining the decryption mechanism
CK_MECHANISM mechanism = new CK_MECHANISM(CKM.AES_CBC_PAD);
//Initialize to zero -> variable to hold size of plain text
LongRef lRefDec = new LongRef();
// Get ready to decrypt
CryptokiEx.C_DecryptInit(session_1, mechanism, key_handleId_in_hsm);
// Get the size of the plain text -> 1st call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, null, lRefDec);
// Allocate space to the buffer to store plain text.
byte[] clearText = new byte[(int)lRefDec.value];
// Actual decryption -> 2nd call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, clearText, lRefDec);
Sometimes decryption fails even though the corresponding encryption succeeded, because the plaintext bytes were assembled inconsistently on the two sides. So rather than sending raw, ambiguously-encoded text to the encryption algorithm, encode it explicitly (e.g. as UTF-8) before encrypting and decode it the same way after decrypting, so both sides interpret the bytes identically.

How to view the actual bytes in a Data variable in Swift

I have a variable of type Data in Swift code, using Xcode 10.1, called data. I can see it in the debugger, but I don't know where the actual values are stored. It should contain a letter (one byte) and three UInt8 values, all 0-255, so it should be 4 bytes. The first _length is shown to be 6, so I don't know what else could be added in (one reason I want to see what is actually in there). But I do not understand where the binary value is. The _rawValue does not seem to be it, because it contains 4.5 bytes. Perhaps it is a pointer, as it says "RawPointer"?
Where are the actual bytes stored?
Edit:
By setting a new variable equal to data[i], I did figure out that the number of bytes is correct (I found the code was putting things in that I didn't know about). My string is, for example, "!C 0 21 255 17", so 6 bytes.
However, I would still love to find an answer to my question: Is there way during debug to view the elements without creating new variables to inspect?
Create an extension of Data as follows:
extension Data {
    public var bytes: [UInt8] {
        return [UInt8](self)
    }
}
You can view the bytes of data during debugging as:
po data.bytes
Just type po data as NSData in the debug console. You will see the hex bytes like <066465666768>

How can I store raw data with respect to time and sort it?

Due to Internet communication issues I could have two (or more) ASCII files in RINEX format (a GPS ASCII format) covering the same data period, which I would like to merge into one file.
Each data set (epoch) contains more than one line (in this example, 19 lines). I would like to merge these files, which may partially overlap each other.
Here is an example of a RINEX epoch data set:
09 2 21 12 59 59.9000000 0 9G31G23G11G13G32G17G14G20G19
23152606.238 121667768.06047 94806069.43545 23152606.540 23152606.521
1262.605 43.750 31.500
22765313.352 119632547.53447 93220179.18745 22765312.252 22765311.072
3252.769 46.250 32.250
20798168.896 109295128.07748 85165036.96747 20798168.642 20798168.578
-2252.493 52.750 43.250
25363206.177 133284559.23845
3776.403 32.750
20350616.203 106943239.96448 83332404.31147 20350615.386 20350616.499
-929.443 51.000 44.500
21994260.713 115580595.93348 90062809.84446 21994260.826 21994260.114
416.327 49.500 38.250
23964108.994 125932271.15846 98129049.02843 23964107.689 23964107.603
-3561.500 39.250 20.250
20225257.452 106284459.64448 82819085.85247 20225256.341 20225256.964
956.944 52.750 45.250
25623383.323 134651746.21445 104923415.17742 25623386.202 25623384.504
-3991.096 34.250 12.250
The first line contains the time info and below are the raw data for each GPS satellite.
My idea was to open each file separately and store the raw data in some kind of array relative to time. Each time I read a new epoch, I ask my array whether I already have something for that time, and if not, I place the raw data there.
My question is how to store the raw data with respect to time, since an epoch is not one line but a block of dynamic size that could always change.
If you have a better idea, please share it with me.
Regards
To store the raw data with respect to time, I would:
Encode the time as a number (number of seconds since the Unix epoch, or since some arbitrary start time; use microseconds instead of seconds depending on the RINEX time precision).
Store the raw data as an array (data for each line is 1 array element - stored either as a string, an arrayref of words or a hash of values).
Store the reference to that array as a value in a hash, with the key being the time-encoded-as-a-number.
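A minimal Perl sketch of that structure; the header regex and the epoch_key()/add_epoch() helpers are assumptions based on the epoch header shown in the question, so adjust them to the RINEX version actually in use.
use strict;
use warnings;
use Time::Local qw(timegm);

# Hash keyed by the epoch time encoded as a number; each value is an
# arrayref holding the raw lines of that epoch.
my %epochs;

sub epoch_key {
    my ($header) = @_;
    # Matches a header like "09  2 21 12 59 59.9000000 ..." (RINEX v2).
    my ($yy, $mon, $day, $hh, $mm, $ss) =
        $header =~ /^\s*(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+([\d.]+)/
        or die "not an epoch header: $header";
    my $t = timegm(0, $mm, $hh, $day, $mon - 1,
                   $yy < 80 ? $yy + 100 : $yy);   # 2-digit year: 09 => 2009
    return sprintf "%.7f", $t + $ss;              # keep the fractional seconds
}

sub add_epoch {
    my ($header, @data_lines) = @_;
    # Store the block only if this epoch has not already been seen in
    # an earlier (overlapping) file.
    $epochs{ epoch_key($header) } //= [ $header, @data_lines ];
}

# After reading all files, write the merged epochs out in time order:
# print "$_\n" for map { @{ $epochs{$_} } } sort { $a <=> $b } keys %epochs;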

MIDI messages: need help interpreting dwParam1

How do I interpret dwParam1 from the midiInProc delegate as a MIDI status message, like note-off, note-on, or control change?
Because whenever I try, dwParam1 is 254, which is not equal to note-off or anything else.
You won't necessarily receive note-offs from every input device. IIRC it is legal for a device to send a note-on with velocity=0 as a substitute for a note-off. Also, a drum stream (from a drum machine and/or on MIDI channel 10) I believe commonly contains only note-ons, no note-offs.
Given that your question mentions dwParam1 and midiInProc, I'm assuming this is for Windows. When you receive MIM_DATA in your midiInProc, you can parse dwParam1 as follows:
For the status byte (command and channel), use LOBYTE(dwParam1).
For the first data byte, use HIBYTE(dwParam1).
If applicable, for the second data byte, use LOBYTE(HIWORD(dwParam1)).
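Those macros are just shifts and masks; here is the same unpacking as a small self-contained sketch (in Perl purely for illustration, with a made-up dwParam1 value):
use strict;
use warnings;

# Example dwParam1 of 0x00404A90: note-on (0x90) on the first channel,
# note 74, velocity 64.
my $dwParam1 = 0x00404A90;

my $status = $dwParam1 & 0xFF;           # LOBYTE: status byte
my $data1  = ($dwParam1 >> 8)  & 0xFF;   # HIBYTE: first data byte
my $data2  = ($dwParam1 >> 16) & 0xFF;   # LOBYTE(HIWORD): second data byte

my $command = $status & 0xF0;            # 0x80 note-off, 0x90 note-on, 0xB0 control change
my $channel = $status & 0x0F;            # 0-15 on the wire, usually displayed as 1-16
printf "cmd=%02X ch=%d data1=%d data2=%d\n", $command, $channel, $data1, $data2;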
I'm not entirely sure what you are asking, but I think you are trying to figure out how to interpret MIDI data.
I suggest this resource:
http://www.midi.org/techspecs/midimessages.php
MIDI messages related to notes are differentiated by the first 4 bits, not by the whole byte. The last four bits of the first byte specify the channel.
The answer by @Conrad Albrecht is mostly right, but I wanted to chip in with an answer (instead of a comment), as I think the original poster is probably being confused by MIDI running status.
If you are seeing bytes which don't resemble normal MIDI status bytes, you can assume they share the status of the previous message you received: running status lets a sender omit the status byte when it would simply repeat. That is also why it is not only legal, but very common, to use MIDI note-on events with a velocity of 0 as a substitute for MIDI note-offs, since the whole stream of notes can then run under a single note-on status.
You should just interpret these bytes as the usual two data bytes of a MIDI note-on event.
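To make running status concrete, here is a small sketch (Perl, with a made-up byte stream) that remembers the last status byte and reuses it when a bare data byte arrives; it assumes 2-data-byte channel messages such as note-on/off:
use strict;
use warnings;

my @stream = (0x90, 0x3C, 0x40,   # note-on, middle C, velocity 64
              0x3C, 0x00);        # running status: same note, velocity 0 = note-off
my $status;

while (@stream) {
    my $byte = shift @stream;
    # A byte >= 0x80 is a new status; otherwise reuse the previous one.
    if ($byte >= 0x80) { $status = $byte; $byte = shift @stream; }
    my $data2 = shift @stream;
    printf "status=%02X data1=%02X data2=%02X\n", $status, $byte, $data2;
}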