How to generate hex sequence in Swift? - swift

Hey guys, I am trying to create a tool that calculates how many 6-byte sequences it can generate within a certain time set by me, like 10 s or 1 min and so on. A sequence, for example, is: 4F B0 33 47 A3 BC.
So it's hex numbers, each 6-byte sequence as a whole must be unique, and the sequences would run from 00 00 00 00 00 00 to FF FF FF FF FF FF.
The problem is that I can't figure out how I could set up a counter that goes from 0 to F in each position and covers all the possible combinations.
All I know is that it can't be done randomly, because that can produce duplicates along the way, and as I said each 6-byte sequence must be unique.
So does anyone have an idea of how I could do that?

Six bytes represent positive numbers in the range from zero, inclusive, to 2^48, exclusive*. Each of these values can be uniquely converted to a sequence of six bytes. All these values fit in the UInt64 type, so if you would like to generate all possible combinations, start at zero and keep incrementing the counter until you reach 0xFFFFFFFFFFFF.
You can do the conversion to a hex sequence in many different ways - for example, you could use shifts and bitwise operators to "cut out" each byte, and format it as a two-digit hex value.
* Other interpretations are possible, too, but positive numbers in the range from 0 to 0xFFFFFFFFFFFF is good enough for the purposes of this task.
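To make this concrete, here is a minimal Swift sketch (the function names and the time-window loop are my own, not something from the question): a UInt64 counter runs from 0 up to 0xFFFFFFFFFFFF, and shifts plus a bitwise AND cut each value into six bytes, each printed as two hex digits.

import Foundation

// Format a counter value as six space-separated hex bytes, most significant first.
func hexSequence(for value: UInt64) -> String {
    return stride(from: 40, through: 0, by: -8)
        .map { shift -> String in
            let byte = UInt8((value >> shift) & 0xFF)   // "cut out" one byte
            let hex = String(byte, radix: 16, uppercase: true)
            return byte < 0x10 ? "0" + hex : hex        // pad to two digits
        }
        .joined(separator: " ")
}

// Count how many unique sequences the counter gets through in a given time window.
// (Checking the clock on every iteration is slow; a real tool would check it less often.)
func countSequences(seconds: TimeInterval) -> UInt64 {
    var counter: UInt64 = 0
    let deadline = Date().addingTimeInterval(seconds)
    while counter <= 0xFFFFFFFFFFFF && Date() < deadline {
        _ = hexSequence(for: counter)   // use or store the sequence here
        counter += 1
    }
    return counter
}

print(hexSequence(for: 0x4FB03347A3BC))   // "4F B0 33 47 A3 BC"

Because the counter only ever moves forward, every sequence it produces is unique by construction.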

Related

CRC16 (ModBus) - computing algorithm

I am using the ModBus RTU, and I'm trying to figure out how to calculate the CRC16.
I don't need a code example. I am simply curious about the mechanism.
I have learned that a basic CRC is the remainder of a polynomial division of the data word, which is padded with zeros according to the degree of the polynomial.
The following test example is supposed to check if my basic understanding is correct:
data word: 0100 1011
polynomial: 1001 (x^3+1)
padded by 3 bits because the highest exponent is x^3
calculation: 0100 1011 000 / 1001 -> remainder: 011
Calculation:

01001011000
 1001
 ----
 0000011000
      1001
      ----
      01010
       1001
       ----
       0011  ->  remainder: 011
Edit1: So far verified by Mark Adler in previous comments/answers.
Searching for an answer, I have seen a lot of different approaches involving reversing, dependence on little or big endian, etc., which alter the outcome from the given 011.
Modbus RTU CRC16
Of course I would love to understand how different versions of CRCs work, but my main interest is to simply understand what mechanism is applied here. So far I know:
x^16+x^15+x^2+1 is the polynomial: 0x18005 or 0b11000000000000101
initial value is 0xFFFF
example message in hex: 01 10 C0 03 00 01
CRC16 of above message in hex: C9CD
I did calculate this manually like the example above, but I'd rather not write it all out in binary in this question. I presume my transformation into binary is correct. What I don't know is how to incorporate the initial value -- is it used to pad the data word instead of zeros? Do I need to reverse the answer? Something else?
1st attempt: Padding by 16 bits with zeros.
The calculated remainder in binary would be 1111 1111 1001 1011, which is FF9B in hex and incorrect for CRC16/Modbus, but correct for CRC16/BUYPASS.
2nd attempt: Padding by 16 bits with ones, due to the initial value.
The calculated remainder in binary would be 0000 0000 0110 0100, which is 0064 in hex and incorrect.
It would be great if someone could explain this, or clarify my assumptions. I honestly did spend many hours searching for an answer, but every explanation is based on code examples in C/C++ or other languages, which I don't understand. Thanks in advance.
EDIT1: According to this site, the "1st attempt" matches another CRC16 method with the same polynomial but a different initial value (0x0000), which tells me the calculation itself should be correct.
How do I incorporate the initial value?
EDIT2: Mark Adler's answer does the trick. However, now that I can compute CRC16/Modbus, there are some questions left for clarification. Not needed, but appreciated.
A) The order of computation would be: ... ?
1st: apply RefIn to the complete input (including the padded bits)
2nd: XOR the InitValue with the first 16 bits (in CRC16)
3rd: apply RefOut to the complete output/remainder (the remainder is at most 16 bits in CRC16)
B) Referring to RefIn and RefOut: is it always 8 bits that get reflected for the input and all bits for the output, regardless of whether I use CRC8, CRC16 or CRC32?
C) What do the 3rd (Check) and 8th (XorOut) columns on the website I am referring to mean? The latter seems rather easy; I am guessing it's applied by XORing the value after RefOut, just like the InitValue?
Let's take this a step at a time. You now know how to correctly calculate CRC-16/BUYPASS, so we'll start from there.
Let's take a look at CRC-16/CCITT-FALSE. That one has an initial value that is not zero, but still has RefIn and RefOut as false, like CRC-16/BUYPASS. To compute the CRC-16/CCITT-FALSE on your data, you exclusive-or the first 16 bits of your data with the Init value of 0xffff. That gives fe ef C0 03 00 01. Now do what you know on that, but with the polynomial 0x11021. You will get what is in the table, 0xb53f.
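For illustration, a small Swift sketch of just that step (Swift chosen arbitrarily here, and the function name is made up) might look like this: apply the Init value to the first 16 bits, append 16 zero bits, and do a plain MSB-first division by 0x11021, with no reflection anywhere.

func crc16CcittFalse(_ message: [UInt8]) -> UInt16 {
    var bytes = message
    bytes[0] ^= 0xFF                         // exclusive-or the Init value 0xffff
    bytes[1] ^= 0xFF                         // into the first 16 bits of the data
    var remainder: UInt32 = 0
    for byte in bytes + [0, 0] {             // two zero bytes = 16 bits of padding
        for bit in (0..<8).reversed() {
            remainder = (remainder << 1) | UInt32((byte >> bit) & 1)
            if remainder & 0x10000 != 0 { remainder ^= 0x11021 }
        }
    }
    return UInt16(remainder & 0xFFFF)
}

print(String(crc16CcittFalse([0x01, 0x10, 0xC0, 0x03, 0x00, 0x01]), radix: 16))  // b53f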
Now you know how to apply Init. The next step is dealing with RefIn and RefOut being true. We'll use CRC-16/ARC as an example. RefIn means that we reflect the bits in each byte of input. RefOut means that we reflect the bits of the remainder. The input message is then: 80 08 03 c0 00 80. Dividing by the polynomial 0x18005 we get 0xb34b. Now we reflect all of those bits (not in each byte, but all 16 bits), and we get 0xd2cd. That is what you see as the result in the table.
We now have what we need to compute CRC-16/MODBUS, which has both a non-zero Init value (0xffff) and RefIn and RefOut as true. We start with the message with the bits in each byte reflected and the first 16 bits inverted. That is 7f f7 03 c0 00 80. Divide by 0x18005 and you get the remainder 0xb393. Reflect those bits and we get 0xc9cd, the expected result.
The exclusive-or of Init is applied after the reflection, which you can verify using CRC-16/RIELLO in that table.
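Putting the whole recipe together, a small Swift sketch (an illustration only; the helper names are mine) can follow the CRC-16/MODBUS steps above literally: reflect each input byte, exclusive-or the Init value into the first 16 bits, do the plain division by 0x18005, then reflect the 16-bit remainder. It performs the reflections explicitly for clarity rather than using the reflected-polynomial shortcut mentioned in answer A) below.

// Reverse the low `bits` bits of a value (used for RefIn and RefOut).
func reflect(_ value: UInt32, bits: Int) -> UInt32 {
    var v = value
    var r: UInt32 = 0
    for _ in 0..<bits { r = (r << 1) | (v & 1); v >>= 1 }
    return r
}

func crc16Modbus(_ message: [UInt8]) -> UInt16 {
    // RefIn: reflect the bits of each input byte.
    var bytes = message.map { UInt8(reflect(UInt32($0), bits: 8)) }
    // Init: exclusive-or 0xffff into the first 16 bits.
    bytes[0] ^= 0xFF
    bytes[1] ^= 0xFF
    // Long division by x^16+x^15+x^2+1 (0x18005), MSB first, with 16 zero bits appended.
    var remainder: UInt32 = 0
    for byte in bytes + [0, 0] {
        for bit in (0..<8).reversed() {
            remainder = (remainder << 1) | UInt32((byte >> bit) & 1)
            if remainder & 0x10000 != 0 { remainder ^= 0x18005 }
        }
    }
    // RefOut: reflect all 16 bits of the remainder.
    return UInt16(reflect(remainder & 0xFFFF, bits: 16))
}

print(String(crc16Modbus([0x01, 0x10, 0xC0, 0x03, 0x00, 0x01]), radix: 16))  // c9cd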
Answers for added questions:
A) RefIn has nothing to do with the padded bits. You reflect the input bytes. However, in a real calculation you reflect the polynomial instead, which takes care of both reflections.
B) Yes.
C) Yes, XorOut is what you exclusive-or the final result with. Check is the CRC of the nine bytes "123456789" in ASCII.

Hex to / from datetime stamp

I have an application running on Windows. I don't have the source code. The GUI presents the date as 22/06/2018 08:44, and this date/time is written to and read from a file. The file contains a hex representation of the date; some examples are below (the latter two have been edited by myself - hence the weird years).
2C 05 0A D4 01 (22/06/2018 08:44)
2C 06 0A D4 01 (22/06/2018 08:51)
2C 08 11 D4 01 (01/07/2018 06:53)
B4 AE 08 D4 01 (06/12/5671 13:13)
B4 AE 11 12 10 (31/07/5270 10:53)
I'm trying to understand the conversion from hex to the GUI date/time, so that I can modify the hex in the file directly and see the GUI date/time change accordingly.
Thanks
Edit: The hex numbers are standard Windows 64-bit values representing the number of 100-nanosecond intervals since January 1, 1601, with the three least significant bytes omitted and the rest written as little endian (least significant byte first). For example, your first hex string, 2C 05 0A D4 01, means hex 01D4 0A05 2C00 0000 units of 100 nanoseconds since January 1, 1601 UTC (this is precisely 22/06/2018 08:44:02.9898752 UTC, but your GUI omits the seconds and the fraction of a second).
You can read more here: File Times on MSDN.
For the conversion from date and time to hex you may for example use http://www.silisoftware.com/tools/date.php?inputdate=2018-06-22T08%3A44%3A00%2B00%3A00&inputformat=text, enter your date as 2018-06-22T08:44:00+00:00 and get the hex out as 01D40A05:2A37C800. Round up so it ends in three zero bytes: 01D40A05:2B000000. Reorder the remaining bytes: 2B 05 0A D4 01.
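For completeness, here is a quick Swift sketch of the decoding direction under exactly those assumptions (five stored bytes, least significant first, forming the top five bytes of a Windows FILETIME; the function name is made up):

import Foundation

// Turn the five stored bytes into a Date.
func dateFromStoredBytes(_ bytes: [UInt8]) -> Date {
    // Reassemble the 64-bit FILETIME: the bytes are stored least significant first,
    // and the three lowest bytes of the full value were dropped when writing.
    var ticks: UInt64 = 0
    for byte in bytes.reversed() {
        ticks = (ticks << 8) | UInt64(byte)
    }
    ticks <<= 24                                           // restore the three omitted zero bytes

    let secondsSince1601 = Double(ticks) / 10_000_000      // 100 ns ticks -> seconds
    let secondsFrom1601To1970: Double = 11_644_473_600     // FILETIME epoch vs. Unix epoch
    return Date(timeIntervalSince1970: secondsSince1601 - secondsFrom1601To1970)
}

// 2C 05 0A D4 01 represents 22/06/2018 08:44:02.99 UTC; the GUI shows 22/06/2018 08:44.
print(dateFromStoredBytes([0x2C, 0x05, 0x0A, 0xD4, 0x01]))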
Original answer
It’s not a date-time encoding scheme that I have met before, and from the data you have provided I am not able to deduce the full scheme. I believe I have found a bit of it, but I cannot get further.
Assuming some linear correspondence, I first note by comparing the first two samples that a difference of 1 unit in the second group of hex digits (the second byte, if you will) makes for a difference of 7 minutes. Or approximately so: we don’t know whether the times have seconds, or maybe even fractions of seconds, that are not displayed.
I used this information when comparing to the third sample. The third byte has increased by 7 from the first to the third sample (hex 11 - hex 0A = 7). Taking the increase on the second byte into account it would seem that one unit of the third byte approximates 1832 minutes, which is suspiciously close to 256 * 7 minutes = 1792 minutes. So it would seem that the 2nd and 3rd bytes have a “little endian” relationship, where the 3rd byte is more significant than the 2nd. Using this information we can obtain a little more accuracy: The difference in the times is 12849 minutes, and the difference on the 2nd and 3rd byte is hex 1108 - 0A05 = decimal 1795, so each unit is 7.1582 minutes (it agrees with the 7 minutes from before, only it’s more precise). Using this value I interpolated the second date-time from the hex value 2C 06 0A D4 01 and got 2018-06-22T08:51:09. It agrees. Hypothesis confirmed!
The information found so far suffices for encoding values between 09/06/2018 14:43 (2C 00 00 D4 01) and 01/05/2019 09:17 (2C FF FF D4 01) with a precision of 7 minutes. I’d be surprised if that were enough for you.
Comparing to the value in the 4th sample it would seem that one unit on the first byte corresponds to 14 128 940 minutes (26.86 years). It doesn’t divide nicely by the 7.1582 minutes from before, as we might have hoped, so I’m not sure how we might use this observation.
Comparing the last two samples, it seems that the 4th and 5th bytes cannot have the same little endian relationship, since the 5th byte increases while the date decreases. It’s still possible, though, if we assume that at least one of the years is before the common era (“BC”), since the era is not printed. Another possibility might be that the fifth byte is ignored. This leads to a unit of the fourth byte corresponding to 1 088 006 minutes. Again it bears no nice relationship to the 7.15 minutes from bytes 2 and 3, and it’s suspiciously close to the unit of the first byte, so it is probably incorrect.
To learn more: First try to see if you get a meaningful date-time from editing (hex) 00 00 00 00 00 into your file. If you do, next try one F at a time:
F0 00 00 00 00
0F 00 00 00 00
…
00 00 00 00 0F
If this doesn’t make a pattern that is clear enough, try one bit at a time, using hex digits 1, 2, 4 and 8 instead of F.

Oddity when encoding large integers using asn.1

I have found numerous references to the encoding requirements of Integers in ASN.1
and that Integers are inherently signed objects
TLV 02 02 0123, for example.
However, I have a 256-byte (2048-bit) integer (within a certificate) encoded as follows:
30 82 01 09 02 82 01 00 d1 a5 xx xx xx… 02 03 010001
30 start
82 2 byte length
0109 265 bytes
02 Integer
82 2 byte length
0100 256 bytes
d1 a5 xxxx
The d1 is the troubling part: because its leading bit is 1, this 256-byte number would be interpreted as negative, when in fact it is an unsigned number, an RSA public key in fact. Does the signed constraint apply to integers larger than 64 bits?
Thanks,
BER/DER uses two's-complement representation for encoding integer values. This means that the first bit (not byte) determines whether a number is positive or negative, and that sometimes an extra leading zero byte needs to be added to prevent the first bit from causing the integer to be interpreted as a negative number. Note that it is invalid BER/DER to have the first 9 bits all zero.
Yes, you are right. For any non-negative DER/BER-encoded INTEGER - no matter its length - the MSB of the first payload byte is 0.
The program that generated such key is incorrect.
The "signed constraint" (actually, a rule) applies to integers of any size. However, depending on the domain, you might find all sorts of oddities in how domain objects are encoded. This is something that has to be learned and accounted for the hard way, unfortunately.
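To see the rule in action, here is a small Swift sketch (the function name and the length handling are mine, and it only illustrates the leading-zero behaviour; it is not a DER library) of how an unsigned big-endian integer such as an RSA modulus would normally be wrapped as a DER INTEGER:

import Foundation

func derInteger(fromUnsignedBytes bytes: [UInt8]) -> [UInt8] {
    // Drop redundant leading zero bytes, but keep at least one byte.
    var content = Array(bytes.drop(while: { $0 == 0 }))
    if content.isEmpty { content = [0] }
    // If the most significant bit is set, prepend 0x00 so the two's-complement
    // interpretation stays non-negative.
    if content[0] & 0x80 != 0 { content.insert(0x00, at: 0) }

    // Definite-length encoding: short form below 128 bytes, long form otherwise.
    var length: [UInt8]
    if content.count < 128 {
        length = [UInt8(content.count)]
    } else {
        var n = content.count
        var lenBytes: [UInt8] = []
        while n > 0 { lenBytes.insert(UInt8(n & 0xFF), at: 0); n >>= 8 }
        length = [0x80 | UInt8(lenBytes.count)] + lenBytes
    }
    return [0x02] + length + content   // 0x02 is the INTEGER tag
}

// A 256-byte modulus starting with d1 a5 picks up a leading 0x00, so the encoding
// begins 02 82 01 01 00 D1 A5 rather than 02 82 01 00 d1 a5 (the 0xAB bytes are filler).
let modulus: [UInt8] = [0xD1, 0xA5] + [UInt8](repeating: 0xAB, count: 254)
print(derInteger(fromUnsignedBytes: modulus).prefix(7)
        .map { String(format: "%02X", Int($0)) }
        .joined(separator: " "))   // 02 82 01 01 00 D1 A5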

typecast to single in MATLAB

What does this call to typecast do in MATLAB?
y=typecast(x,'single');
What does it mean? When I run typecast(3,'single'), it gives 0 2.1250.
I don't understand what that is.
I am trying to convert this to Java; how can I do that?
From the MATLAB manual:
single - Convert to single precision
Syntax
B = single(A)
Description
B = single(A) converts the matrix A to single precision, returning that value in B. A can be any numeric object (such as a double). If A is already single precision, single has no effect. Single-precision quantities require less storage than double-precision quantities, but have less precision and a smaller range.
typecast reinterprets the bytes used to represent a value of one type as if those same bytes were representing a different type. For example, the constant 3 in MATLAB is an IEEE double-precision value, meaning it takes 8 bytes to store it. Those eight bytes in this case are
40 08 00 00 00 00 00 00
A value of type single in MATLAB is an IEEE single-precision value, meaning it takes only 4 bytes to store it. So the eight bytes of the double will map to two 4-byte singles, those being
40 08 00 00, and
00 00 00 00
It turns out that 40 08 00 00 is the single-precision representation of the value 2.125, and as you might guess, 00 00 00 00 is the single-precision representation of 0. I believe they come out in reverse order due to the endian-ness of the machine, and on a big-endian machine I think you'd get 2.125 0 instead.
In C++ this would be something like a reinterpret_cast. In Java, there doesn't appear to be as direct a mapping, but the answers to this Stack Overflow question discuss some alternatives such as Serialization.
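If it helps to see the same idea outside MATLAB, here is a rough Swift analogue (just an illustration of the byte reinterpretation, not what MATLAB does internally): the 8 bytes of the double 3.0 are reinterpreted as two 4-byte singles without any numeric conversion.

let x: Double = 3.0
let doubleBits = x.bitPattern   // 0x4008000000000000, i.e. the bytes 40 08 00 00 00 00 00 00

// Reinterpret the two 32-bit halves as single-precision floats.
let low  = Float(bitPattern: UInt32(truncatingIfNeeded: doubleBits))        // bytes 00 00 00 00 -> 0.0
let high = Float(bitPattern: UInt32(truncatingIfNeeded: doubleBits >> 32))  // bytes 40 08 00 00 -> 2.125

// On a little-endian machine the all-zero half sits first in memory,
// which is why MATLAB prints "0  2.1250".
print(low, high)   // 0.0 2.125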
From running help typecast it looks like it changes the datatype, but keeps the bit assignment the same, whereas single( ) keeps the number the same, but changes the bit arrangement.
If I understand it correctly, you could think of it like this: you have two boxes, each holding up to 8 balls. Let's say box 1 is full, whilst box 2 contains 3 balls. We now typecast this into a system where a box holds 4 balls.
This system will need three boxes to hold our balls. So we have boxes 1 and 2 which are full. Box 3 contains 3 balls.
So you'd have [8,3] converted to [4,4,3].
Alternatively, if you converted the number into our new system in the same way as single( ) works (e.g. for changing an int8 to a single), you'd change the number of balls, not the container.

PNG file format endianness?

I'm not sure if endian is the right word, but...
I have been parsing through a PNG file and I have noticed that all of the integer values are in big endian. Is this true?
For example, the width and height are stored in the PNG file as 32-bit unsigned integers. My image is 16x16, and in the file it's stored as:
00 00 00 10
when it should be:
10 00 00 00
Is this true or is there something I am missing?
Yes, according to the specification, integers must be in network byte order (big endian):
All integers that require more than one byte shall be in network byte order: the most significant byte comes first, then the less significant bytes in descending order of significance (MSB LSB for two-byte integers, MSB B2 B1 LSB for four-byte integers). The highest bit (value 128) of a byte is numbered bit 7; the lowest bit (value 1) is numbered bit 0. Values are unsigned unless otherwise noted. Values explicitly noted as signed are represented in two's complement notation.
http://www.w3.org/TR/2003/REC-PNG-20031110/#7Integers-and-byte-order
Integers in PNG are in network byte order (big endian).
See: the spec.
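If you want to read those fields yourself, here is a minimal Swift sketch (assuming data holds the complete PNG file; the fixed offsets 16 and 20 come from the layout 8 signature bytes + 4 chunk-length bytes + 4 chunk-type bytes) that assembles each 32-bit value most significant byte first:

import Foundation

// Read the width and height from the IHDR chunk, which stores them big endian.
func pngDimensions(of data: Data) -> (width: UInt32, height: UInt32)? {
    guard data.count >= 24 else { return nil }
    func beUInt32(at offset: Int) -> UInt32 {
        // Most significant byte first (network byte order).
        return data[offset..<offset + 4].reduce(0 as UInt32) { ($0 << 8) | UInt32($1) }
    }
    return (beUInt32(at: 16), beUInt32(at: 20))
}

// Hypothetical usage: for a 16x16 image the width bytes are 00 00 00 10, read as 16.
if let data = try? Data(contentsOf: URL(fileURLWithPath: "/tmp/example.png")),
   let size = pngDimensions(of: data) {
    print(size)   // (width: 16, height: 16)
}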