Why does this website show a currentMilliseconds value greater than Integer.MAX_VALUE?

The MAX_VALUE for a 32-bit Integer is 2_147_483_647, and I thought this was the upper limit for a time value (unless we switch to 64-bit integers).
But this website shows the current time in milliseconds as 1_423_079_895_486, and it shows the correct time.
How can the value be so much bigger than Integer.MAX_VALUE, the maximum milliseconds value in Unix time?
Am I missing something basic?

It's probably just using 64 bits to represent the time in milliseconds.
This is unremarkable. The system I'm typing this on has a 64-bit time_t type.
Are you perhaps assuming that the C types int and time_t have to be the same size? They don't. And a 32-bit number representing milliseconds can only span a duration of just under 50 days.
We don't even know how the web site is implemented; it could well be using some scripting language with support for variable-width integers.
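For example, in Java the current Unix time in milliseconds is returned as a 64-bit long, not a 32-bit int. A minimal sketch (just an illustration, not the code of that website):

// The current time in milliseconds needs a 64-bit long; it no longer fits in an int.
public class MillisDemo {
    public static void main(String[] args) {
        long nowMillis = System.currentTimeMillis();   // e.g. 1_423_079_895_486
        System.out.println("now (ms):          " + nowMillis);
        System.out.println("Integer.MAX_VALUE: " + Integer.MAX_VALUE);  // 2_147_483_647
        System.out.println("Long.MAX_VALUE:    " + Long.MAX_VALUE);     // 9_223_372_036_854_775_807
    }
}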

Related

How precise should I encode a Unix Time?

I came across this because I am working with time across multiple platforms, and it seems like they all differ a little from each other in how Unix time is implemented and/or handled in their systems. Thus the question.
Quoting Wikipedia page on Unix Time:
Unix has no tradition of directly representing non-integer Unix time numbers as binary fractions. Instead, times with sub-second precision are represented using composite data types that consist of two integers, the first being a time_t (the integral part of the Unix time), and the second being the fractional part of the time number in millionths (in struct timeval) or billionths (in struct timespec). These structures provide a decimal-based fixed-point data format, which is useful for some applications, and trivial to convert for others.
Which seems to be the implementation in Go (UnixNano). In practice, however, there are many languages/platforms which use milliseconds (Java?), some platforms use Float (to try to maintain some precision), and others mostly use Int.
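For comparison, Java's java.time.Instant also uses a composite representation: whole seconds plus a nanosecond fraction, much like struct timespec. A small sketch:

// Instant exposes the integral seconds and the fractional nanoseconds separately.
import java.time.Instant;

public class CompositeTimeDemo {
    public static void main(String[] args) {
        Instant now = Instant.now();
        long seconds = now.getEpochSecond();  // integral part, like time_t
        int nanos = now.getNano();            // fractional part in billionths, like tv_nsec
        System.out.println(seconds + " s + " + nanos + " ns");
    }
}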
So if I'm implementing a transport format and I only have exactly 64 bits available to store a time value and no more, my question is two-fold:
Should I encode it as an integer or a floating-point value? And
Should I use seconds, milliseconds or nanosecond precision?
The main goal being to try to be as accurate as possible across as many languages and platforms as possible (without resorting to custom code in every single platform, of course).
p.s. I know this is a little subjective but I believe it's still possible to make a good, objective answer. Feel free to close if that's not the case.
It depends on what the required precision of the time value is, and its maximal range.
When storing nanoseconds in an unsigned 64-bit integer, the range is about 584 years (2^64 ns), so it is already precise enough and spans long enough for any practical application.
Using a floating-point format has the advantage that both very small and very large values can be stored, with higher absolute precision for smaller values. But with 64 bits this is probably not a problem anyway.
If the time value is an absolute point in time instead of a duration, the transport format would also need to define what date/time the value 0 stands for (i.e. the Epoch).
Getting the current time on a UNIX-like system can be done using gettimeofday(), for example, which returns a struct with a seconds and microseconds value. This can then be converted into a single 64-bit integer giving a value in microseconds. The Epoch for UNIX time is 1 January 1970 00:00:00 UTC. (The clock() function does not measure real time, but instead the duration of time that the processor was active.)
When a time value for the same transport format is generated on another platform (for example Windows with GetSystemTime()), it would need to be converted to the same unit and epoch.
So the following things would need to be fixed for a transport protocol:
The unit of the time value (ms, us, ...), depending on required precision and range
If the time is a time point and not a duration, the Epoch (date and time of value 0)
Whether it is stored in an integer (unsigned or signed, if it is a duration that can be negative), or as a floating point
The endianness of the 64-bit value
If floating point is used, the format of the floating point value (normally IEEE 754)
Because different platforms have different APIs to get the current time, it will probably always take some code to properly convert the time value, but this is trivial.
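For illustration, here is roughly what that conversion could look like in Java, assuming the transport format were defined as a signed 64-bit count of microseconds since the Unix Epoch (the unit and epoch are example choices for this sketch, not a standard):

// Sketch only: encode/decode a time point as signed 64-bit microseconds since 1970-01-01T00:00:00Z.
import java.time.Instant;

public class TransportTime {
    static long toTransportMicros(Instant t) {
        // Convert the platform time to the agreed unit and epoch.
        return t.getEpochSecond() * 1_000_000L + t.getNano() / 1_000;
    }

    static Instant fromTransportMicros(long micros) {
        long seconds = Math.floorDiv(micros, 1_000_000L);
        long nanos = Math.floorMod(micros, 1_000_000L) * 1_000L;
        return Instant.ofEpochSecond(seconds, nanos);
    }

    public static void main(String[] args) {
        long wire = toTransportMicros(Instant.now());
        System.out.println(wire + " -> " + fromTransportMicros(wire));
    }
}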
For maximum portability and accuracy, you should probably go with a type specified by POSIX. That way, the code will be portable across all Unixes and other operating systems conforming to POSIX.
I suggest that you use clock_t and the clock() function for time. This has a variety of uses, including measuring the time elapsed between one point in a program and another. Just make sure to cast the result to a double and divide by CLOCKS_PER_SEC afterwards to convert that time into a human-readable format.
So, to answer your question:
Use both an integer and a floating-point value
Precision is unspecified (it depends on the number of clock ticks between calls), but it is accurate enough for all non-critical applications and some more important ones

snowflake: "left shift" made that result exceeds long.max value

((timestamp - 1288834974657) << 32)
I included some more bits of information; for example, a total of 32 bits is needed after the timestamp, so the timestamp needs to be left-shifted by 32 bits. But then the result exceeds Long.MAX_VALUE: it came out as a negative value, something like -7187691577906700288, which is wrong.
Hope I described my question correctly. Please help...
I don't know snowflake well (I assume it's a language?), and I also don't know what format that timestamp is in. If 1288834974657 is a Unix timestamp in seconds, it's in the year 42811.
The issue is that this particular timestamp is larger than 32 bits. Since you move it up another 32 bits, your number overflows. It looks like the long in your language is signed, which means that the maximum value is probably 2^63-1. If the long were unsigned, the maximum would be 2^64-1.
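To make the overflow concrete, here is a small Java sketch. The 22-bit shift at the end is only an example of a layout that still fits (it happens to be what Twitter's original snowflake uses), not necessarily what your variant needs:

// (timestamp - epoch) currently needs close to 40 bits; shifting it left by 32 pushes it past 63 bits.
public class SnowflakeShift {
    public static void main(String[] args) {
        long epoch = 1288834974657L;                 // custom epoch from the question
        long delta = System.currentTimeMillis() - epoch;

        long shifted32 = delta << 32;  // the top bits of delta are shifted out, so the value wraps
                                       // and can show up as a large negative number
        long shifted22 = delta << 22;  // still fits comfortably in a signed 64-bit long

        System.out.println("delta       = " + delta);
        System.out.println("delta << 32 = " + shifted32);
        System.out.println("delta << 22 = " + shifted22);
    }
}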

Why does my UILabel start counting backward at 2 billion?

I have a UILabel in my iPhone app simulator. It displays a coin count and I have an action that adds 1 hundred million to the count. I want the number to keep going up but for some reason once the count hits 2 billion, it adds a minus sign and starts counting down, then counts back up to 2 billion and back down again and so on.
I want to be able to display a much greater number of digits, i.e. trillions and so on... Does anyone know what's going on with this and how to fix it so the label digits will keep going up as high as I want?
I'm using Xcode and Interface Builder and running through the simulator. I'm storing the number in an int variable, if that matters.
You store your coin count in an int, that's the problem. A 4 byte int can't store numbers higher than 2,147,483,647. If you add 1 to 2,147,483,647 you will get −2,147,483,648, which is the smallest possible int.
If you want to store bigger numbers you have to use a long which can store numbers between −(2^63) and 2^63−1 (or −9,223,372,036,854,775,808 to 9,223,372,036,854,775,807).
See this for additional details.
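The same wraparound is easy to reproduce in any language whose int is a 32-bit two's-complement type; here is a small Java sketch of both the problem and the fix of switching the counter to a 64-bit type (in the question's Objective-C, a 64-bit type such as long long or int64_t):

// A 32-bit int wraps past 2,147,483,647; a 64-bit long keeps counting.
public class CoinCount {
    public static void main(String[] args) {
        int coinsInt = Integer.MAX_VALUE;    // 2,147,483,647
        coinsInt += 1;                       // wraps around to -2,147,483,648
        System.out.println("int after +1:     " + coinsInt);

        long coinsLong = Integer.MAX_VALUE;
        coinsLong += 100_000_000L;           // keeps going up, with room until about 9.2 * 10^18
        System.out.println("long after +100M: " + coinsLong);
    }
}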
This is occurring because, as @DrummerB pointed out, your int variable only has enough bits to store integral values in the range of -2,147,483,648 to 2,147,483,647. The reason this gets "reset" or "rolls over" back to a negative number has to do with how computers store data, which is in binary.
For example, an 8-bit integer (otherwise known as a byte) can store integral values from 0 to 255 if it is unsigned (meaning it can only store positive values) and -128 to 127 if it is signed (meaning it can store negative numbers). When an unsigned integer reaches its max value, it is represented in memory by all ones, as you see here with the unsigned value 255:
255 = 11111111
So the maximum number that can be stored in an 8-bit int (byte) is 255. If you add 1 to this value, you end up flipping all the 1s to zeroes, and since storing the value 256 would require a 9th bit, you lose that 9th bit entirely and the integer value appears to "roll over" to the minimum value.
Now, as I stated above, the result of the addition yields the value 256, but we only have 8 bits of storage in our integer, so the most significant bit (the 9th bit) is lost. You can picture it kinda like this, with the pipes | marking your storage area:
only 8 bits of storage total
v
255 = 0|11111111|
+ 1 = 0|00000001|
-------------------
256 = 1|00000000|
^
9th bit is lost
In a signed int, the same is true; however, the first bit is used to determine whether the value is negative, so you gain signing but lose 1 bit of magnitude, which for 8 bits leaves enough space to store the values -128 to 127.
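To connect this back to the 8-bit example, Java's byte type is a signed 8-bit integer and shows exactly this behaviour (just an illustration; the question's code is Objective-C):

// Signed 8-bit wraparound: 127 + 1 becomes -128, the 9th bit having nowhere to go.
public class ByteWrap {
    public static void main(String[] args) {
        byte b = 127;                  // 01111111, the signed 8-bit maximum
        b++;                           // 10000000, which is read back as -128
        System.out.println("127 + 1 as a signed byte: " + b);

        int asUnsigned = Byte.toUnsignedInt((byte) 0xFF);  // 11111111 read as unsigned = 255
        System.out.println("11111111 read as unsigned: " + asUnsigned);
    }
}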
Now that we understand what's going on, it should be noted that iOS is, at the time of this writing, a 32-bit operating system and while it can handle larger integers you probably don't want to use them all over the place as it's not optimized to work with these values.
If you just want to increase the range of values you can store in this variable, I would recommend changing it to an unsigned int, which can be done using the NSUInteger typedef.

Why do numbers with 17 or more digits turn EVEN automatically?

I'm testing a photo application for Facebook. I'm getting object IDs from the Facebook API, but I received some incorrect ones, which doesn't make sense - why would Facebook send wrong IDs? I investigated a bit and found out that numbers with 17 or more digits are automatically turning into even numbers!
For example, let's say the ID I'm supposed to receive from Facebook is 12345678912345679. In the debugger, I've noticed that Flash Player automatically turns it into 12345678912345678. And I even tried to manually set it back to an odd number, but it keeps changing back to even.
Is there any way to stop Flash Player from rounding the numbers? BTW the variable is defined as Object, I receive it like that from Facebook.
This is related to the implementation of data types:
int is a 32-bit number, with an even distribution of positive and negative values, including 0. So the maximum value is (2^32 / 2) - 1 == 2,147,483,647.
uint is also a 32-bit number, but it doesn't have negative values. So the maximum value is 2^32 - 1 == 4,294,967,295.
When you use a numerical value greater than the maximum value of int or uint, it is automatically cast to Number. From the Adobe Doc:
The Number data type is useful when you need to use floating-point values. Flash runtimes handle int and uint data types more efficiently than Number, but Number is useful in situations where the range of values required exceeds the valid range of the int and uint data types. The Number class can be used to represent integer values well beyond the valid range of the int and uint data types. The Number data type can use up to 53 bits to represent integer values, compared to the 32 bits available to int and uint.
53 bits have a maximum value of:
2^53 - 1 == 9,007,199,254,740,991 => 16 digits
So when you use any value greater than that, the inner workings of floating point numbers apply.
You can read about floating point numbers here, but in short, for any floating point value, the first couple of bits are used to specify a multiplication factor, which determines the location of the point. This allows for a greater range of values than are actually possible to represent with the number of bits available - at the cost of reduced precision.
When you have a value greater than the maximum integer a Number can represent exactly, the least significant bit (the one representing the 1s place) is cut off to allow a more significant bit (the one representing 2^53) to exist => hence, you lose the odd numbers.
There is a simple way to get around this: Keep all your IDs as Strings - they can have as many digits as your system has available bytes ;) It's unlikely you're going to do any calculations with them, anyway.
By the way, once a value exceeds 2^54, numbers get rounded to a multiple of 4; beyond 2^55, to a multiple of 8, and so on.
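You can see the 53-bit limit directly with a 64-bit IEEE 754 double; the Java sketch below behaves the same way as ActionScript's Number, which is also a 64-bit double:

// Above 2^53 a double can no longer represent every integer; the spacing between values becomes 2.
public class DoublePrecision {
    public static void main(String[] args) {
        long maxExact = 1L << 53;                        // 9,007,199,254,740,992
        System.out.println((double) maxExact == (double) (maxExact + 1)); // true: the odd neighbour is lost
        System.out.println(Math.ulp((double) maxExact)); // 2.0: representable values are now 2 apart
        System.out.println("1234567891234567891");       // IDs kept as strings keep every digit
    }
}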

In Voldemort, why does the hash ring only extend to 2^31-1?

On the project voldemort design page:
http://project-voldemort.com/design.php
It is stated that the hash ring covers the interval [0, 2^31-1].
Now, the interval [0, 2^31-1] represents 2^31 total numbers, and the largest number, 2^31-1, is just 31 bits all set to 1. (To convince yourself of this, consider 2^3-1: 2^3 = 8, which is binary 1000, and 2^3-1 = 7, which is binary 111.)
Thus, if a normal 32-bit address word is used to store the value, you have 1 bit free.
Thus, why is 2^31-1 the upper limit? Is that extra bit used for some kind of system bookkeeping?
(e.g. 1 extra bit would provide room for safely adding two valid hash addresses without overflow).
And finally, is this choice specific to voldemort, or is it seen in other consistent hashing schemes?
I think you only have 1 bit free, not 2. The -1 accounts for the fact that it starts with the number 0 instead of 1 (the same reason loops count from 0 to count-1). I would guess the reason they use 2^31 instead of 2^32 is that they're using a signed integer, so the last bit is the sign bit and is not usable.
Edit:
From the page you linked:
To visualize the consistent hashing method we can see the possible
integer hash values as a ring beginning with 0 and circling around to
2^31-1.
It specifies an integer so unless you want negative hash values you're stuck with 2^31 instead of 2^32.
Answering the question a little late, just by 4 years :)
The reason is that Voldemort is written in Java, and Java has no unsigned int. 2^31 is already very high for the number of partitions. Generally we recommend running with only a few thousand partitions.
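A hypothetical sketch of how that plays out in Java (an illustration of the signed-int bound, not Voldemort's actual code): a key's hash is a signed 32-bit int, and clearing the sign bit leaves exactly the interval [0, 2^31-1]:

// Illustration only: mapping a key onto a [0, 2^31-1] ring using Java's signed 32-bit int.
public class RingPosition {
    static int ringPosition(String key) {
        int h = key.hashCode();     // signed 32-bit hash, may be negative
        return h & 0x7FFFFFFF;      // clear the sign bit -> a value in [0, 2^31-1]
    }

    public static void main(String[] args) {
        System.out.println(ringPosition("some-key"));
        System.out.println("ring upper bound: " + Integer.MAX_VALUE);  // 2^31 - 1
    }
}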