Why is the range of the timestamp type 4713 BC to 294276 AD? - postgresql

PostgreSQL has a timestamp datatype with a resolution of 1 microsecond and a range of 4713 BC to 294276 AD that takes up 8 bytes (see https://www.postgresql.org/docs/current/datatype-datetime.html).
I have calculated the total number of microseconds in that range as (294276 + 4713) × 365.25 × 24 × 60 × 60 × 1000000 = 9.435375266×10¹⁸. This is less than 2⁶⁴ = 1.844674407×10¹⁹, but also more than 2⁶³ = 9.223372037×10¹⁸.
I might be off by a few days due to calendar weirdness and leap years, but I don't think it's enough to push the number below 2⁶³.
So, why were the limits chosen like that? Why not use the full range available with 64 bits?
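As a quick sanity check of that arithmetic, here is the same calculation as a small standalone C program (it uses the same 365.25-day approximation as above, so it is only an estimate):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* (294276 + 4713) years of 365.25 days, expressed in microseconds */
    double years = 294276.0 + 4713.0;
    double range_us = years * 365.25 * 24 * 60 * 60 * 1e6;

    printf("range : %.9e microseconds\n", range_us);   /* ~9.435e18 */
    printf("2^63  : %.9e\n", (double)INT64_MAX);       /* ~9.223e18 */
    printf("2^64  : %.9e\n", (double)UINT64_MAX);      /* ~1.845e19 */
    return 0;
}
It agrees with the estimate above: the full 4713 BC to 294276 AD span needs more than 2^63 microseconds, but less than 2^64.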

The internal representation of timestamps is in microseconds since 2000-01-01 00:00:00, stored as an 8-byte integer. So the maximum possible year would be something like
SELECT (2::numeric^63 -1) / 365.24219 / 24 / 60 / 60 / 1000000 + 2000;
?column?
═════════════════════════
294277.2726976055146158
(1 row)
which explains the upper limit.
The minimum is explained by a comment in src/include/datatype/timestamp.h:
/*
* Range limits for dates and timestamps.
*
* We have traditionally allowed Julian day zero as a valid datetime value,
* so that is the lower bound for both dates and timestamps.
*
* The upper limit for dates is 5874897-12-31, which is a bit less than what
* the Julian-date code can allow. For timestamps, the upper limit is
* 294276-12-31. The int64 overflow limit would be a few days later; again,
* leaving some slop avoids worries about corner-case overflow, and provides
* a simpler user-visible definition.
*/
So the minimum is taken from the lower limit on Julian dates.
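For completeness, the upper-limit estimate can be reproduced outside SQL as well; here is a minimal C sketch of the same calculation, using the same 365.24219-day year as the query above:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Timestamps are microseconds since 2000-01-01 stored in a signed
       64-bit integer, so the last representable instant is INT64_MAX
       microseconds after that origin. */
    double usec_per_year = 365.24219 * 24 * 60 * 60 * 1e6;
    double max_year = (double)INT64_MAX / usec_per_year + 2000.0;

    printf("approximate overflow year: %.4f\n", max_year);   /* ~294277.27 */
    return 0;
}
Rounding that down to the last complete year, and leaving the "slop" the comment mentions, gives the documented limit of 294276 AD.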

Understand DispatchTime on M1 machines

In my iOS project we were able to replicate Combine's Schedulers implementation, and we have an extensive suite of tests. Everything was fine on Intel machines and all the tests were passing; now we got some M1 machines to see if there is a showstopper in our workflow.
Suddenly some of our library code started failing, and the weird thing is that even if we use Combine's implementation the tests still fail.
Our assumption is that we are misusing DispatchTime(uptimeNanoseconds:), as you can see in the following screenshot (Combine's implementation).
We know by now that initialising DispatchTime with an uptimeNanoseconds value doesn't mean it represents actual nanoseconds on M1 machines. According to the docs:
Creates a DispatchTime relative to the system clock that
ticks since boot.
- Parameters:
- uptimeNanoseconds: The number of nanoseconds since boot, excluding
time the system spent asleep
- Returns: A new `DispatchTime`
- Discussion: This clock is the same as the value returned by
`mach_absolute_time` when converted into nanoseconds.
On some platforms, the nanosecond value is rounded up to a
multiple of the Mach timebase, using the conversion factors
returned by `mach_timebase_info()`. The nanosecond equivalent
of the rounded result can be obtained by reading the
`uptimeNanoseconds` property.
Note that `DispatchTime(uptimeNanoseconds: 0)` is
equivalent to `DispatchTime.now()`, that is, its value
represents the number of nanoseconds since boot (excluding
system sleep time), not zero nanoseconds since boot.
So, is the test wrong, or should we not use DispatchTime like this?
We tried to follow Apple's suggestion and use this:
uint64_t MachTimeToNanoseconds(uint64_t machTime)
{
    uint64_t nanoseconds = 0;
    static mach_timebase_info_data_t sTimebase;
    if (sTimebase.denom == 0)
        (void)mach_timebase_info(&sTimebase);
    nanoseconds = ((machTime * sTimebase.numer) / sTimebase.denom);
    return nanoseconds;
}
It didn't help much.
Edit: Screenshot code:
func testSchedulerTimeTypeDistance() {
    let time1 = DispatchQueue.SchedulerTimeType(.init(uptimeNanoseconds: 10000))
    let time2 = DispatchQueue.SchedulerTimeType(.init(uptimeNanoseconds: 10431))
    let distantFuture = DispatchQueue.SchedulerTimeType(.distantFuture)
    let notSoDistantFuture = DispatchQueue.SchedulerTimeType(
        DispatchTime(
            uptimeNanoseconds: DispatchTime.distantFuture.uptimeNanoseconds - 1024
        )
    )
    XCTAssertEqual(time1.distance(to: time2), .nanoseconds(431))
    XCTAssertEqual(time2.distance(to: time1), .nanoseconds(-431))
    XCTAssertEqual(time1.distance(to: distantFuture), .nanoseconds(-10001))
    XCTAssertEqual(distantFuture.distance(to: time1), .nanoseconds(10001))
    XCTAssertEqual(time2.distance(to: distantFuture), .nanoseconds(-10432))
    XCTAssertEqual(distantFuture.distance(to: time2), .nanoseconds(10432))
    XCTAssertEqual(time1.distance(to: notSoDistantFuture), .nanoseconds(-11025))
    XCTAssertEqual(notSoDistantFuture.distance(to: time1), .nanoseconds(11025))
    XCTAssertEqual(time2.distance(to: notSoDistantFuture), .nanoseconds(-11456))
    XCTAssertEqual(notSoDistantFuture.distance(to: time2), .nanoseconds(11456))
    XCTAssertEqual(distantFuture.distance(to: distantFuture), .nanoseconds(0))
    XCTAssertEqual(notSoDistantFuture.distance(to: notSoDistantFuture),
                   .nanoseconds(0))
}
The difference between Intel and ARM code is precision.
With Intel code, DispatchTime internally works with nanoseconds. With ARM code, it works with nanoseconds * 3 / 125 (plus some integer rounding). The same applies to DispatchQueue.SchedulerTimeType.
DispatchTimeInterval and DispatchQueue.SchedulerTimeType.Stride internally use nanoseconds on both platforms.
So the ARM code uses lower precision for calculations but full precision when comparing distances. In addition, precision is lost when converting from nanoseconds to the internal unit.
The exact formulas for the DispatchTime conversions are (executed as integer operations):
rawValue = (nanoseconds * 3 + 124) / 125
nanoseconds = rawValue * 125 / 3
As an example, let's take this code:
let time1 = DispatchQueue.SchedulerTimeType(.init(uptimeNanoseconds: 10000))
let time2 = DispatchQueue.SchedulerTimeType(.init(uptimeNanoseconds: 10431))
XCTAssertEqual(time1.distance(to: time2), .nanoseconds(431))
It results in the calculation:
(10000 * 3 + 124) / 125 -> 240
(10431 * 3 + 124) / 125 -> 251
251 - 240 -> 11
11 * 125 / 3 -> 458
The resulting comparison between 458 and 431 then fails.
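To make the integer rounding visible, here is that calculation written out as a small standalone C program (a sketch of the formulas quoted above, not the actual Dispatch source):
#include <stdio.h>
#include <stdint.h>

/* nanoseconds -> internal unit and back, as described above */
static uint64_t to_raw(uint64_t ns)    { return (ns * 3 + 124) / 125; }
static uint64_t to_nanos(uint64_t raw) { return raw * 125 / 3; }

int main(void)
{
    uint64_t raw1 = to_raw(10000);           /* 240 */
    uint64_t raw2 = to_raw(10431);           /* 251 */
    uint64_t dist = to_nanos(raw2 - raw1);   /* 11 * 125 / 3 = 458 */

    printf("distance = %llu ns (the test expects 431)\n",
           (unsigned long long)dist);
    return 0;
}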
So the main fix would be to allow for small differences (I haven't verified if 42 is the maximum difference):
XCTAssertEqual(time1.distance(to: time2), .nanoseconds(431), accuracy: .nanoseconds(42))
XCTAssertEqual(time2.distance(to: time1), .nanoseconds(-431), accuracy: .nanoseconds(42))
And there are more surprises: unlike with Intel code, distantFuture and notSoDistantFuture are equal with ARM code. This has probably been implemented that way to protect against an overflow when multiplying by 3 (the actual calculation would be 0xFFFFFFFFFFFFFFFF * 3). And the conversion from the internal unit back to nanoseconds would result in 0xFFFFFFFFFFFFFFFF * 125 / 3, a value too big to be represented in 64 bits.
Furthermore, I think that you are relying on implementation-specific behavior when calculating the distance between timestamps at or close to 0 and timestamps at or close to distantFuture. The tests rely on the fact that distantFuture internally uses 0xFFFFFFFFFFFFFFFF and that the unsigned subtraction wraps around and produces a result as if the internal value were -1.
I think your issue lies in this line:
nanoseconds = ((machTime * sTimebase.numer) / sTimebase.denom)
... which is doing integer operations.
The actual ratio here for M1 is 125/3 (41.666...), so your conversion factor is truncating to 41. This is a ~1.6% error, which might explain the differences you're seeing.
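If you want to check what a given machine actually reports, you can print the timebase directly. Here is a small C sketch; the 125/3 value for Apple Silicon and 1/1 for Intel are what these machines are commonly reported to return, so treat them as an assumption rather than a guarantee:
#include <stdio.h>
#include <mach/mach_time.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    /* Reportedly 125/3 (one tick ≈ 41.67 ns) on M1, 1/1 on Intel Macs. */
    printf("numer = %u, denom = %u (%.4f ns per tick)\n",
           tb.numer, tb.denom, (double)tb.numer / tb.denom);
    return 0;
}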

What is the min and max value for a double field in MongoDB?

I need to find out what the min and max values are for a double field in MongoDB, including its precision.
I found this link: http://bsonspec.org/spec.html
Because MongoDB uses BSON, I'm looking for the BSON information. There it says:
double 8 bytes (64-bit IEEE 754-2008 binary floating point)
But how do I calculate the min and max number based on that?
I ended up at this link: https://en.wikipedia.org/wiki/Double-precision_floating-point_format
But I couldn't understand how to calculate the min and max values from it.
The Wikipedia link you included has the precise answer:
min: -2^1023 * (1 + (1 − 2^−52)) (approx: -1.7976931348623157 * 10^308)
max: 2^1023 * (1 + (1 − 2^−52)) (approx: 1.7976931348623157 * 10^308)
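If it helps to see the numbers directly, C's float.h exposes the same limits of a 64-bit IEEE 754 double; MongoDB just stores the 8 bytes, so this is only an illustration of the format:
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    printf("max             : %.17g\n", DBL_MAX);      /* 1.7976931348623157e+308 */
    printf("min             : %.17g\n", -DBL_MAX);     /* -1.7976931348623157e+308 */
    printf("smallest normal : %.17g\n", DBL_MIN);      /* 2.2250738585072014e-308 */
    printf("epsilon         : %.17g\n", DBL_EPSILON);  /* 2^-52, i.e. ~15-17 significant digits */

    /* The formula above: (1 + (1 - 2^-52)) * 2^1023 == (2 - 2^-52) * 2^1023 */
    double from_formula = ldexp(2.0 - ldexp(1.0, -52), 1023);
    printf("formula == DBL_MAX: %d\n", from_formula == DBL_MAX);
    return 0;
}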

How do I convert a decimal number representing time to actual time in Crystal Reports 2008? Ex: "1.5" represents 1:30

I'm trying to edit an existing Crystal Report that shows time allowances for work orders. Budgeted Time / Actual Time / Remaining Time type deal.
These fields show up as not properly converting time from the data field for the report. The person who made the report has some formula for it already, but I'm not sure what it's doing.
Formula: Standard Time
Stringvar array Std_Time := split(replace(cstr({WOMNT_CARD.STANDARD_HOURS_DURATION}),",",""),".");
val(Std_Time[1])*60+val(Std_Time[2])
The field used in the report is Sum of #Standard Time (Number).
How do I fix this so these numbers are properly converted?
The formula that you have posted does the following:
First it converts the {WOMNT_CARD.STANDARD_HOURS_DURATION} field to a string using the cstr function, strips the result of commas with the replace function, and splits the resulting string into an array using the dot character as the delimiter.
So, for the value 1.5 the Std_Time variable will hold the following
Std_Time[1] → 1
Std_Time[2] → 5
Finally it calculates the result by multiplying the value at the first index by 60 and adding the value at the second index to it. The value 1.5 becomes 1 * 60 + 5 = 65.
If the 1.5 must represent 1:30 then the last line must become
val(Std_Time[1]) * 60 + 60 * val(Std_Time[2]) / 10
because 60 * 5 / 10 = 30
or you can use for simplicity just the following
60 * val(replace(cstr({WOMNT_CARD.STANDARD_HOURS_DURATION}),",",""))
since 60 * 1.5 = 90
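For comparison, here is the same arithmetic written out in plain C rather than Crystal syntax (a sketch of the two interpretations, using a hard-coded 1.5 in place of the data field):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *duration = "1.5";   /* decimal hours, standing in for STANDARD_HOURS_DURATION */

    /* Split on the dot, as the original formula does: 1 * 60 + 5 = 65 */
    char buf[32];
    strcpy(buf, duration);
    char *whole = strtok(buf, ".");
    char *frac  = strtok(NULL, ".");
    int split_minutes = atoi(whole) * 60 + (frac ? atoi(frac) : 0);

    /* Treat the value as decimal hours: 1.5 * 60 = 90, i.e. 1:30 */
    double decimal_minutes = atof(duration) * 60.0;

    printf("split on '.'       : %d minutes\n", split_minutes);     /* 65 */
    printf("decimal hours * 60 : %.0f minutes\n", decimal_minutes); /* 90 */
    return 0;
}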

timestamp to milliseconds conversion around midnight getting messed up

I am using the following function to convert a timestamp in the format (e.g.) 02:49:02.506 to milliseconds in Perl.
sub to_millis {
    my( $hours, $minutes, $seconds, $millis) = split /:/, $_[0];
    $millis += 1000 * $seconds;
    $millis += 1000 * 60 * $minutes;
    $millis += 1000 * 60 * 60 * $hours;
    return $millis;
}
I am then using the milliseconds generated by the above routine to calculate the time difference between two timestamps in milliseconds. This works fine all day but gets messed up around midnight, when the timestamp rolls over to 00:00:00.000. So any logs generated in the hour between 12am and 1am give me negative values for the timestamp difference. Since my timestamp doesn't have a date in it, how do I fix this problem? I am trying to do this on a mobile device, which doesn't have many Perl modules installed, so I don't have the liberty of using all the Perl modules available.
If you know the ordering of your two timestamps, and you know that they're no more than 24 hours apart, then whenever you get a negative difference add 24 hours (86,400,000 milliseconds).
If you don't have that information, then you won't be able to distinguish between a 2-minute span and a span of 24 hours and 2 minutes.
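A minimal sketch of that rule, assuming both timestamps have already been converted to milliseconds since midnight (shown in C rather than Perl):
#include <stdint.h>

#define MS_PER_DAY (24LL * 60 * 60 * 1000)   /* 86,400,000 */

/* start_ms and end_ms are milliseconds since midnight; end is known to be
   no more than 24 hours after start. A negative raw difference therefore
   means the interval crossed midnight. */
int64_t elapsed_ms(int64_t start_ms, int64_t end_ms)
{
    int64_t diff = end_ms - start_ms;
    if (diff < 0)
        diff += MS_PER_DAY;
    return diff;
}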
I assume that your timestamps will never be more than 23 hours 59 minutes apart?
Let's take two time stamps A and B. I am assuming that A happens before B.
Normally, if A is less than B, I know I can get my time by subtracting A from B. However, in this case, A is bigger than B. I now have to assume that I've wrapped around midnight. What do I do?
I know that the difference between A and B is A going to midnight, PLUS B.
For example, A is 23:58:30 and B is 00:02:00.
I know that A will be 90 seconds before midnight, and B will add another 120 seconds to that time. Thus, the total difference will be 90 + 120 = 210 seconds.
Using your routine:
my $midnight = to_millis( "24:00:00:000" );   # Milliseconds in a full day, i.e. midnight
my $a_time = to_millis( $a_timestamp );
my $b_time = to_millis( $b_timestamp );
my $time_diff;
if ( $a_time < $b_time ) {                    # Normal case: no wraparound
    $time_diff = $b_time - $a_time;
}
else {                                        # We wrapped around midnight!
    my $first_part = $midnight - $a_time;     # Time from A to midnight
    $time_diff = $first_part + $b_time;       # Add the time from midnight to B
}
You have two timestamps, A and B. If B is always conceptually "after" A but the interval from A to B could cross a date boundary, then do
if (B < A) B+=86400000
and then do the subtraction. Or equivalently
diff = B - A
if (diff < 0) diff+=86400000
If, however, you are not guaranteed that B will always be "after" A, you have to decide what the acceptable range of positive and negative values for the difference is. If it's more than half a day you're out of luck: there's no way to solve the problem, as you cannot tell whether a negative interval represents a real negative interval or a positive one that happened to cross a day boundary.
To handle the wrap around at midnight:
$elapsed_ms = $t2_ms - $t1_ms;
if ($elapsed_ms < 0) { $elapsed_ms += (24 * 60 * 60 * 1000); }

53 * .01 = .531250

I'm converting a string date/time to a numerical time value. In my case I'm only using it to determine if something is newer/older than something else, so this little decimal problem is not a real problem. It doesn't need to be second-precise. But it still has me scratching my head and I'd like to know why.
My date comes in a string format of @"2010-09-08T17:33:53+0000". So I wrote this little method to return a time value. Before anyone jumps on how many seconds there are in months with 28 days or 31 days: I don't care. For my math it's fine to assume all months have 31 days and years have 31*12 days, because I don't need the difference between two points in time, only to know if one point in time is later than another.
-(float) uniqueTimeFromCreatedTime: (NSString *)created_time {
    float time;
    if ([created_time length] > 19) {
        time = ([[created_time substringWithRange:NSMakeRange(2, 2)] floatValue] - 10) * 535680; // max for 12 months is 535680.. uh oh y2100 bug!
        time = time + [[created_time substringWithRange:NSMakeRange(5, 2)] floatValue] * 44640; // to make it easy and since it doesn't matter we assume 31 days
        time = time + [[created_time substringWithRange:NSMakeRange(8, 2)] floatValue] * 1440;
        time = time + [[created_time substringWithRange:NSMakeRange(11, 2)] floatValue] * 60;
        time = time + [[created_time substringWithRange:NSMakeRange(14, 2)] floatValue];
        time = time + [[created_time substringWithRange:NSMakeRange(17, 2)] floatValue] * .01;
        return time;
    }
    else {
        //NSLog(@"error - time string not long enough");
        return 0.0;
    }
}
When passed that very string listed above the result should be 414333.53, but instead it is returning 414333.531250.
When I toss an NSLog in between each time= to track where it goes off I get this result:
time 0.000000
time 401760.000000
time 413280.000000
time 414300.000000
time 414333.000000
floatvalue 53.000000
time 414333.531250
Created Time: 2010-09-08T17:33:53+0000 414333.531250
So that last floatValue returned 53.0000 but when I multiply it by .01 it turns into .53125. I also tried intValue and it did the same thing.
Welcome to floating point rounding errors. If you want accuracy to a fixed number of decimal places, multiply by 100 (for 2 decimal places), then round() it and divide it by 100. As long as the number isn't obscenely large (occupies more than, I think, 57 bits) you should be fine and not have any rounding problems on the division back down.
EDIT: Regarding my note about 57 bits, I was assuming double; floats have far less precision. Do as another reader suggests and switch to double if possible.
IEEE floats only have 24 effective bits of mantissa (roughly between 7 and 8 decimal digits). 0.00125 is the 24th-bit rounding error between 414333.53 and the nearest float representation, since the exact number 414333.53 requires 8 decimal digits. 53 * 0.01 by itself will come out a lot more accurately before you add it to the bigger number and lose precision in the resulting sum. (This shows why addition/subtraction between numbers of very different sizes is not a good thing from a numerical point of view when calculating with floating point arithmetic.)
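The effect is easy to reproduce in a couple of lines of C; the same applies to float in Objective-C:
#include <stdio.h>

int main(void)
{
    /* 414333.53 needs more significant digits than a 24-bit float mantissa
       (roughly 7 decimal digits) can hold; a 53-bit double has plenty. */
    float  f = 414333.53f;
    double d = 414333.53;

    printf("float : %f\n", f);   /* 414333.531250 */
    printf("double: %f\n", d);   /* 414333.530000 */
    return 0;
}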
This is from a classic floating point error resulting from how the number is represented in bits. First, use double instead of float, as it is quite fast to use on modern machines. When the result really really matters, use the decimal type, which is 20x slower but 100% accurate.
You can create NSDate instances from those NSString dates using the +dateWithString: method. It takes strings formatted as YYYY-MM-DD HH:MM:SS ±HHMM, which is what you're dealing with. Once you have two NSDates, you can use the -compare: method to see which one is later in time.
You could try multiplying all your constants by 100 so you don't have to divide. The division is what's causing the problem, because dividing by 100 produces a repeating pattern in binary.