Hex to / from datetime stamp

I have an application running on Windows for which I don't have the source code. The GUI presents the date as 22/06/2018 08:44, and this date/time is written to and read from a file. The file contains a hex representation of the date; some examples are below (the latter two have been edited by me, hence the weird years).
2C 05 0A D4 01 (22/06/2018 08:44)
2C 06 0A D4 01 (22/06/2018 08:51)
2C 08 11 D4 01 (01/07/2018 06:53)
B4 AE 08 D4 01 (06/12/5671 13:13)
B4 AE 11 12 10 (31/07/5270 10:53)
I'm trying to understand the conversion from hex to the GUI date/time, so that I can modify the hex in the file directly and see the GUI date/time change accordingly.
Thanks

Edit: The hex numbers are standard Windows 64-bit file time values representing the number of 100-nanosecond intervals since January 1, 1601, with the three least significant bytes omitted and the remaining bytes written little endian (least significant byte first). For example, your first hex string, 2C 05 0A D4 01, means hex 01D4 0A05 2C00 0000 units of 100 nanoseconds since January 1, 1601 UTC (this is precisely 22/06/2018 08:44:02.9898752 UTC, but your GUI omits seconds and fractions of a second).
You can read more here: File Times on MSDN.
For the conversion from date and time to hex you may for example use http://www.silisoftware.com/tools/date.php?inputdate=2018-06-22T08%3A44%3A00%2B00%3A00&inputformat=text, enter your date as 2018-06-22T08:44:00+00:00 and get the hex out as 01D40A05:2A37C800. Round up so it ends in three zero bytes: 01D40A05:2B000000. Reorder the remaining bytes: 2B 05 0A D4 01.
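To sketch the same conversion programmatically, here is a small Swift example (purely illustrative; the date 22/06/2018 08:44 UTC and the round-up step are taken from the explanation above):

import Foundation

// Sketch: convert a UTC date/time to the 5-byte on-disk format described above
// (a Windows file time with the three least significant bytes dropped, little endian).
let secondsFrom1601To1970: UInt64 = 11_644_473_600

var calendar = Calendar(identifier: .gregorian)
calendar.timeZone = TimeZone(identifier: "UTC")!
let components = DateComponents(year: 2018, month: 6, day: 22, hour: 8, minute: 44)
let date = calendar.date(from: components)!

// Full file time: 100-nanosecond units since January 1, 1601 UTC.
let fileTime = (UInt64(date.timeIntervalSince1970) + secondsFrom1601To1970) * 10_000_000

// Round up so the value ends in three zero bytes, then drop those bytes
// and emit the remaining five bytes least significant first.
let truncated = (fileTime + 0xFF_FFFF) >> 24
let bytes = (0..<5).map { UInt8((truncated >> ($0 * 8)) & 0xFF) }
print(bytes.map { String(format: "%02X", $0) }.joined(separator: " "))
// Prints "2B 05 0A D4 01", matching the rounded value above.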
Original answer
It’s not a date-time encoding scheme that I have met before, and from the data you have provided I am not able to deduce the full scheme. I believe I have found a bit of the scheme, but I cannot get further.
Assuming some linear correspondence, I first note by comparing the first two samples that a difference of 1 unit in the second group of hex digits (the second byte, if you will) makes for a difference of 7 minutes. Or approximately 7 minutes: we don’t know whether the times have seconds, and maybe even fractions of seconds, that are not displayed.
I used this information when comparing to the third sample. The third byte has increased by 7 from the first to the third sample (hex 11 - hex 0A = 7). Taking the increase on the second byte into account it would seem that one unit of the third byte approximates 1832 minutes, which is suspiciously close to 256 * 7 minutes = 1792 minutes. So it would seem that the 2nd and 3rd bytes have a “little endian” relationship, where the 3rd byte is more significant than the 2nd. Using this information we can obtain a little more accuracy: The difference in the times is 12849 minutes, and the difference on the 2nd and 3rd byte is hex 1108 - 0A05 = decimal 1795, so each unit is 7.1582 minutes (it agrees with the 7 minutes from before, only it’s more precise). Using this value I interpolated the second date-time from the hex value 2C 06 0A D4 01 and got 2018-06-22T08:51:09. It agrees. Hypothesis confirmed!
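A quick check of that arithmetic in Swift, using only the figures already derived above (illustrative; the variable names are made up):

// Bytes 2 and 3 read as a little-endian 16-bit counter, with the minutes-per-unit
// estimated from samples 1 and 3.
let minutesBetweenSamples1And3 = 12_849.0        // 22/06 08:44 to 01/07 06:53
let counterDifference = Double(0x1108 - 0x0A05)  // 1795 units
let minutesPerUnit = minutesBetweenSamples1And3 / counterDifference
print(minutesPerUnit)                            // about 7.1582 minutes per unit

// Sample 2 (2C 06 0A D4 01) is one unit above sample 1 (2C 05 0A D4 01),
// so it should be about 7 minutes 9 seconds later than 08:44:00, i.e. 08:51:09.
print(minutesPerUnit * 60)                       // about 429.5 seconds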
The information found so far suffices for encoding values between 09/06/2018 14:43 (2C 00 00 D4 01) and 01/05/2019 09:17 (2C FF FF D4 01) with a precision of 7 minutes. I’d be surprised if that were enough for you.
Comparing to the value in the 4th sample it would seem that one unit on the first byte corresponds to 14 128 940 minutes (26.86 years). It doesn’t divide nicely by the 7.1582 minutes from before, as we might have hoped, so I’m not sure how we might use this observation.
Comparing the last two samples, it seems that the 4th and 5th bytes cannot have the same little endian relationship, since the 5th byte increases while the date decreases. It’s still possible, though, if we assume that at least one of the years is before the common era (“BC”), since the era is not printed. Another possibility might be that the fifth byte is ignored. This leads to a unit of the fourth byte corresponding to 1 088 006 minutes. Again it bears no nice relationship to the 7.15 minutes from bytes 2 and 3, and it’s suspiciously close to the unit of the first byte, so it is probably incorrect.
To learn more: First try to see if you get a meaningful date-time from editing (hex) 00 00 00 00 00 into your file. If you do, next try one F at a time:
F0 00 00 00 00
0F 00 00 00 00
…
00 00 00 00 0F
If this doesn’t make a pattern that is clear enough, try one bit at a time, using hex digits 1, 2, 4 and 8 instead of F.

Related

Midi Hexa-Code Notation Different in one file

I have these 3 events in a MIDI file:
00 FF 51 03 0E 15 C3 86 A6
20 FF 51 03 15 20 A5 83
5C FF 51 03 0E 15 C3
But what is important in this case is that FF 51 stands for a tempo change and the 03 for the number of following byte pairs describing the tempo. As it is "3 byte pairs" in each event, why are there 5 byte pairs describing the first event, 4 describing the second, and 3 describing the third?
How does the encoding program know when a new event starts? The file can be played without any problems.
All three events have three data bytes.
The delta times between the events are encoded as variable-length quantities, so you have to keep reading bytes until you reach one whose most significant bit is clear. The delta times before the three events are 00, 86 A6 20, and 83 5C, which decode to 0, 103200, and 476.
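Here is a small Swift sketch of that decoding rule (illustrative only; the function name decodeVLQ is made up):

// MIDI variable-length-quantity decoding: each byte contributes seven bits,
// and a set most significant bit means "more bytes follow".
func decodeVLQ(_ bytes: [UInt8]) -> (value: UInt32, bytesRead: Int) {
    var value: UInt32 = 0
    var count = 0
    for byte in bytes {
        value = (value << 7) | UInt32(byte & 0x7F)
        count += 1
        if byte & 0x80 == 0 { break }   // most significant bit clear: last byte of the quantity
    }
    return (value, count)
}

// The delta times from the question:
print(decodeVLQ([0x00]).value)              // 0
print(decodeVLQ([0x86, 0xA6, 0x20]).value)  // 103200
print(decodeVLQ([0x83, 0x5C]).value)        // 476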

How to generate hex sequence in Swift?

I am trying to create a tool to calculate how many 6-byte sequences it can generate within a certain time set by me, like 10 s or 1 min and so on. The sequence, for example, is: 4F B0 33 47 A3 BC.
So they are hex numbers, each 6-byte sequence must be unique, and they would run from 00 00 00 00 00 00 to FF FF FF FF FF FF.
The problem is that I can't figure out how to set up a counter that goes through all the possible combinations.
All I know is that it can't be done randomly, because that can generate duplicates during the process, and as I said each 6-byte sequence must be unique.
Does anyone have an idea how I could do that?
Six bytes can represent non-negative integers in the range from zero, inclusive, to 2⁴⁸, exclusive*. Each of these values can be uniquely converted to a sequence of six bytes. All these values fit in the UInt64 type, so if you would like to generate all possible combinations, start at zero and keep incrementing the counter until you reach 0xFFFFFFFFFFFF.
You can do the conversion to a hex sequence in many different ways; for example, you could use shifts and bitwise operators to "cut out" each byte and format it as a hex value, as in the sketch below.
* Other interpretations are possible, too, but non-negative integers in the range from 0 to 0xFFFFFFFFFFFF are good enough for the purposes of this task.
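A sketch of that approach in Swift (illustrative; the function name hexBytes and the early stop are made up for the example):

import Foundation

// Count from 00 00 00 00 00 00 up to FF FF FF FF FF FF and format each
// counter value as six hex byte pairs.
func hexBytes(_ value: UInt64) -> String {
    // "Cut out" each of the six bytes with shifts and a mask, most significant first.
    return (0..<6).reversed()
        .map { String(format: "%02X", UInt8((value >> ($0 * 8)) & 0xFF)) }
        .joined(separator: " ")
}

let limit: UInt64 = 0xFFFF_FFFF_FFFF   // 2^48 - 1
var counter: UInt64 = 0
while counter <= limit {
    print(hexBytes(counter))           // 00 00 00 00 00 00, 00 00 00 00 00 01, ...
    if counter == 3 { break }          // stop early here; the real tool would run to the limit
    counter += 1
}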

Online CRC-32 calculator result is incorrect, wrong polynomial?

I have to say that I don't really understand the mechanics of CRC-32, but I was hoping to be able to at least calculate the CRC of a chunk.
I have a PNG with the following properties: 2 px by 5 px, RGBA, no interlace.
The image header chunk results in:
00 00 00 0d = data is 13 bytes long
49 48 44 52 = ascii for IHDR (image header)
00 00 00 02 00 00 00 05 08 06 00 00 00 = data; dimensions, bit-depth, etc.
6f b3 3d 9c = CRC
I wanted to see if CRC could be easily calculated so I tried using:
http://depa.usst.edu.cn/chenjq/www2/wl/software/crc/CRC_Javascript/CRCcalculation.htm
The calculator's default polynomial for CRC-32 is 04C11DB7.
When I plug in "0000000d4948445200000002000000050806000000" I get 35F0A255.
I looked it up on Wikipedia and tried the other representations used by PNG (EDB88320 & 82608EDB). I also tried leaving off the length and chunk type with the various polynomials I used before, and including the PNG signature that precedes the chunk. I never got 6fb33d9c.
Any ideas on why I can't get the right CRC via calculator?
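For reference: PNG's CRC-32 is the standard reflected CRC (polynomial 0xEDB88320 in reflected form, initial value and final XOR of 0xFFFFFFFF), computed over the chunk type plus the chunk data only; the 4-byte length field is excluded. Many simple online calculators use the non-reflected variant or handle the input differently, which may be why the results never match. A small Swift sketch of the calculation (illustrative, not a verified statement about that particular calculator):

import Foundation

// Standard reflected CRC-32 as used by zlib and PNG.
func crc32(_ data: [UInt8]) -> UInt32 {
    var crc: UInt32 = 0xFFFF_FFFF
    for byte in data {
        crc ^= UInt32(byte)
        for _ in 0..<8 {
            crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB8_8320 : crc >> 1
        }
    }
    return crc ^ 0xFFFF_FFFF
}

// Chunk type and data only -- no length field.
let ihdr: [UInt8] = [
    0x49, 0x48, 0x44, 0x52,                          // "IHDR" chunk type
    0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x05,  // width 2, height 5
    0x08, 0x06, 0x00, 0x00, 0x00                     // bit depth 8, colour type 6 (RGBA), rest 0
]
print(String(format: "%08x", crc32(ihdr)))           // should print 6fb33d9c for an intact chunk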

Storing unicode code points, high-endian or low-endian mode?

In his famous blog post The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!), Joel said:
The earliest idea for Unicode encoding, which led to the myth about the two bytes, was, hey, let's just store those numbers in two bytes each. So Hello becomes
00 48 00 65 00 6C 00 6C 00 6F
Right? Not so fast! Couldn't it also be:
48 00 65 00 6C 00 6C 00 6F 00 ?
Is the second representation faster? Why? How does swapping the high and low bytes affect performance?
The sentence "Not so fast!" isn't about computing performance; it's a way of saying "hey, don't make assumptions so fast, here's another way to look at it".
The question is Mu.

typecast to single in MATLAB

What does this call to typecast do in MATLAB?
y=typecast(x,'single');
What does it mean? When I run typecast(3,'single') it gives 0 2.1250.
I don't understand what that is.
I am trying to convert this to Java; how can I do that?
From the MATLAB manual:
single - Convert to single precision
Syntax
B = single(A)
Description
B = single(A) converts the matrix A to single precision, returning that value in B. A can be any numeric object (such as a double). If A is already single precision, single has no effect. Single-precision quantities require less storage than double-precision quantities, but have less precision and a smaller range.
typecast reinterprets the bytes used to represent a value of one type as if those same bytes were representing a different type. For example, the constant 3 in MATLAB is an IEEE double-precision value, meaning it takes 8 bytes to store it. Those eight bytes in this case are
40 08 00 00 00 00 00 00
A value of type single in MATLAB is an IEEE single-precision value, meaning it takes only 4 bytes to store it. So the eight bytes of the double will map to two 4-byte singles, those being
40 08 00 00, and
00 00 00 00
It turns out that 40 08 00 00 is the single-precision representation of the value 2.125, and as you might guess, 00 00 00 00 is the single-precision representation of 0. I believe they come out in reverse order due to the endian-ness of the machine, and on a big-endian machine I think you'd get 2.125 0 instead.
In C++ this would be something like a reinterpret_cast. In Java, there doesn't appear to be as direct a mapping, but the answers to this Stack Overflow question discuss some alternatives such as Serialization.
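For illustration, the same reinterpretation can be sketched in Swift, where bitPattern exposes the raw IEEE bits (the variable names are made up):

// Reinterpret the 8 bytes of the double 3.0 as two 4-byte floats,
// the same operation MATLAB's typecast(3,'single') performs.
let bits = (3.0 as Double).bitPattern                                   // 0x4008000000000000

// On a little-endian machine the low half of the bit pattern comes first in memory,
// which is why MATLAB prints 0 before 2.1250.
let first  = Float(bitPattern: UInt32(truncatingIfNeeded: bits))        // 0.0
let second = Float(bitPattern: UInt32(truncatingIfNeeded: bits >> 32))  // 2.125

print(first, second)                                                    // 0.0 2.125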
From running help typecast it looks like it changes the datatype, but keeps the bit assignment the same, whereas single( ) keeps the number the same, but changes the bit arrangement.
If I understand it, you could think of it like this: you have two boxes, each containing up to 8 balls. Let's say box 1 is full, whilst box 2 contains 3 balls. We now typecast this into a system where a box holds 4 balls.
This system will need three boxes to hold our balls. So we have boxes 1 and 2 which are full. Box 3 contains 3 balls.
So you'd have [8,3] converted to [4,4,3].
Alternatively, if you converted the number into our new system in the same way as single( ) works (e.g. for changing an int8 to a single), you'd change the number of balls, not the container.