What type to use for timeIntervalSince1970 in ms? - swift

I need the time since the epoch in ms for an API request, so I'm looking to have a function that converts my myUIDatePicker.date.timeIntervalSince1970 into milliseconds by multiplying by 1000. My question is: what should the return type be?
Right now I have
func setTimeSinceEpoch(datePicker: UIDatePicker) -> Int {
    return Int(datePicker.date.timeIntervalSince1970 * 1000)
}
Will this cause any issues? I need an integer, not a floating point, but will I have issues with overflow? I tested it out with print statements and it seems to work but I want to find the best way of doing this.

Looking at Apple Docs:
var NSTimeIntervalSince1970: Double { get }
There is a nice function called distantFuture. Even if you use this date in your func, the result will be smaller than the max Int.
let future = NSDate.distantFuture() // "Jan 1, 4001, 12:00 AM"
print((Int(future.timeIntervalSince1970) * 1000) < Int.max) // true
So, until 4001 you're good to go. It will work perfectly on 64-bit systems.
Note: if your app still supports the iPhone 5 (32-bit), it's going to overflow on pretty much any date you use, because Int on the iPhone 5 corresponds to Int32.
Returning an Int64 is a better approach.
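For example, a minimal sketch of an Int64-returning variant (the function name millisecondsSinceEpoch is just illustrative):
import UIKit

// Milliseconds since the Unix epoch, returned as Int64 so it also fits on 32-bit devices.
func millisecondsSinceEpoch(from datePicker: UIDatePicker) -> Int64 {
    return Int64(datePicker.date.timeIntervalSince1970 * 1000)
}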

Related

How to format the Duration object in Swift

Swift released a new Duration object that is "a representation of high precision time."
I'm using it like this:
let clock = ContinuousClock()
let duration = clock.measure {
    // Code or function call to measure here
}
print("Duration: \(duration)")
If the duration is really short, it prints out something like this:
8.2584e-05 seconds
Instead of scientific notation, I would like to always display as seconds: 0.000082584 seconds
Does anyone know how to always keep the format in seconds?
Just like Date, Duration supports the formatted method. You can give it either the TimeFormatStyle (time) or the UnitsFormatStyle (units). For your desired format, the latter looks more suitable. You basically want a fractionalPart with a very large allowed length.
Though from my experiments, it still rounds everything to nanosecond precision, even though Duration can support higher precision. This is perhaps because nanoseconds is the smallest supported unit in Duration.UnitsFormatStyle.Unit.
For example:
let duration: Duration = .nanoseconds(1234)
print(
    duration.formatted(.units(
        width: .wide,
        fractionalPart: .init(lengthLimits: 1...1000)
    ))
)
Output:
0.000001234 seconds
By default, this will also include hours and minutes if the duration is long enough. If you don't want that, pass allowed: [.seconds] as the first parameter:
duration.formatted(.units(
    allowed: [.seconds],
    width: .wide,
    fractionalPart: .init(lengthLimits: 1...1000)
))
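Putting it together with the measurement from the question, something like this should print plain seconds instead of scientific notation (a sketch; the exact number of fractional digits depends on the measured value):
import Foundation

let clock = ContinuousClock()
let duration = clock.measure {
    // Code or function call to measure here
}

// Formats as plain seconds, e.g. "0.000082584 seconds", instead of "8.2584e-05 seconds"
let formatted = duration.formatted(.units(
    allowed: [.seconds],
    width: .wide,
    fractionalPart: .init(lengthLimits: 1...1000)
))
print("Duration: \(formatted)")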

Why do I need to divide the timestamp by 1 billion?

I'm using this public Postgres DB of NEAR protocol: https://github.com/near/near-indexer-for-explorer#shared-public-access
There is a field called included_in_block_timestamp whose "data type" = "numeric", and "length/precision" = 20.
This code works:
to_char(TO_TIMESTAMP("public"."receipts"."included_in_block_timestamp"/1000000000), 'YYYY-MM-DD HH:mm') as moment,
and so does this:
function convertTimestampDecimalToDayjsMoment(timestampDecimal: Decimal) {
    const timestampNum = Number(timestampDecimal) / 1_000_000_000; // Why is this necessary?
    console.log({ timestampNum });
    const moment = dayjs.unix(timestampNum); // https://day.js.org/docs/en/parse/unix-timestamp
    return moment;
}
For example, sometimes included_in_block_timestamp = 1644261932960444221.
I've never seen a timestamp where I needed to divide by 1 billion. Figuring this out just now was a matter of trial and error.
What's going on here? Is this common practice? Does this level of precision even make sense?
Timestamps being measured in nanoseconds appears to be determined at the protocol level, as it shows up in the docs here: https://docs.near.org/develop/contracts/environment/#environment-variables
and here: https://nomicon.io/RuntimeSpec/Components/BindingsSpec/ContextAPI
So yes, do take this into account before date-time conversions.
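For illustration, the same conversion in Swift (a sketch; the variable names are illustrative): the value is nanoseconds since the Unix epoch, so divide by 1_000_000_000 to get seconds before building a Date.
import Foundation

// included_in_block_timestamp is nanoseconds since the Unix epoch.
let includedInBlockTimestamp: Int64 = 1_644_261_932_960_444_221
let seconds = Double(includedInBlockTimestamp) / 1_000_000_000
let date = Date(timeIntervalSince1970: seconds)
print(date) // around 2022-02-07 19:25:32 +0000 (sub-second precision is lost in the Double)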

Returning exact change in SWIFT

I'm struggling to make sense of how to return the changeDue for my assignment. I'm trying to revise my incorrect code for class in prep for my intro to programming final.
All I am trying to do is create a method called quarters(). When I pass any double value (changeDue) into the method, I want to know precisely how many quarters there are as well as the partial quarter change returned.
Original code:
func getChange(Quarters: Double) -> Double {
    var Change = Quarters
    return Change;
}
var Quarters = 0.72;
var ChangeDue = getChange(Quarters / .25);
print(ChangeDue)
Slightly revised code which I seem to have made worse:
class changeDue {
    var = quarters(.72)
    func changeDue(Quarters: Double) {
        var Change = Quarters
        changeDue = changeDue - (quarters*.25)
    }
    var ChangeDue = getChange(int / .25);
    print(changeDue)
}
notes/Feedback:
create a method called quarters(). When I pass any double value (ChangeDue) into the method, I want to know precisely how many quarters there are as well as the partial quarter change returned.
Create a class level variable, changeDue. This is where you will set your test input e.g. .78, 2.15.
In your method, calculate the number of quarters as the integer of changeDue/.25
Print the number of quarters.
Now you need the revised change after quarters are removed: changeDue = changeDue - (quarters * .25)
quarters = the integer of changeDue / .25
changeDue is now equal to the previous changeDue - (quarters times .25)
quarters(.72)
the integer of .72 / .25 = 2
changeDue = .72 - (2 x .25), or .72 - .50 = .22
Print changeDue.
Any help would be appreciated. I've been working on this for longer than I want to admit.
Hint 1: Do not work with Double or fractional amounts. Turn dollars into pennies by multiplying everything by 100 before you start. Now you can do everything with integer arithmetic. After you get the answer, you can always divide by 100 to turn it back into dollars, if desired.
Hint 2: Do you know about the % operator? It tells you the remainder after a division.
I don't want to write your code for you (it's you who are being tested, not me, after all), but I'll just demonstrate with a different example:
51 / 7 is 7, because integer division throws away the remainder.
Sure, 7x7 is 49, with something left over. But what?
Answer: 51 % 7 is 2. Do you see?
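A minimal Swift sketch of the pennies idea applied to the .72 example (the names are illustrative, and this isn't the class structure your assignment asks for):
// Work in whole pennies so everything is integer arithmetic.
let changeDue = 0.72
let pennies = Int((changeDue * 100).rounded()) // 72

let quarters = pennies / 25          // integer division: 2
let leftoverPennies = pennies % 25   // remainder: 22
let leftoverChange = Double(leftoverPennies) / 100.0

print(quarters)       // 2
print(leftoverChange) // 0.22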

Swift 3: func largestNumber

I do not have good skills at development, so don't laugh at me. I'm doing some tests to improve my coding skills, and I have a question. I can't find an answer in the Apple dev guide.
So, the Swift 3 task is: "Given an integer n, return the largest number that contains exactly n digits."
All I know is that it starts with
func largestNumber(n: Int) -> Int {
Example:
For n = 2, the output should be
largestNumber(n) = 99
Please tell me how to code it.
All the comments at the time of writing on your question have taken the view that you are asking a coding question - i.e. how do I write this in Swift - rather than an algorithm question - i.e. what is the mathematical/algorithmic solution to this problem?
As others have pointed out SO is not for getting things written for you - code or algorithms - but giving you some pointers to explore is another matter. So tackling the second problem, here are some things to explore:
If you don't know what a logarithm is find out.
Consider that log10(10) = 1, log10(100) = 2, etc. There is a pattern here. Can you relate those logarithm values to the solution to your problem in some way?
Having done that what might the log10 of the maximum Int value tell you?
Could you use what you've determined so far to test the parameter n of your largestNumber function to make sure you can produce a valid answer (i.e. one which is less than or equal to the maximum Int value)?
You state that largestNumber(2) = 99; can you write 99 as a formula containing a power of 10? How about 999? Spot a pattern?
Once you've an algorithm you can then code it in Swift (or Objective-C, Basic, Java, FORTRAN, Ada, Jovial, Go, Haskell, etc., etc., etc...)
HTH
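If you want to sanity-check whatever you come up with, here is one arithmetic sketch along those lines (the overflow guard and the names are illustrative, not part of the hints above):
// The largest number with exactly n digits is 10^n - 1, e.g. n = 2 gives 99.
// On a 64-bit platform Int can hold at most 18 nines, hence the precondition.
func largestNumber(n: Int) -> Int {
    precondition(n > 0 && n <= 18, "n must be between 1 and 18 to fit in Int")
    var result = 1
    for _ in 0..<n {
        result *= 10
    }
    return result - 1
}

print(largestNumber(n: 2)) // 99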
I think this would work for you
func largestNumber(digits: Int) -> Int {
    var numStr = "0"
    for _ in 0..<digits {
        // since the biggest digit in the decimal system is 9, we append that to our String
        numStr += "9"
    }
    // convert String to Int
    return Int(numStr) ?? 0
}
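For example, calling it like this should print 99:
print(largestNumber(digits: 2)) // 99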
Here is the answer:
function largestNumber(n) {
    let sum = "";
    for (let i = 0; i < n; i++) {
        sum += "9";
        console.log(sum);
    }
    return parseInt(sum);
}

Inaccurate division of doubles (Visual C++ 2008)

I have some code to convert a time value returned from QueryPerformanceCounter to a double value in milliseconds, as this is more convenient to count with.
The function looks like this:
#include <windows.h>

double timeGetExactTime() {
    LARGE_INTEGER timerPerformanceCounter, timerPerformanceFrequency;
    QueryPerformanceCounter(&timerPerformanceCounter);
    if (QueryPerformanceFrequency(&timerPerformanceFrequency)) {
        return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
    }
    return 0.0;
}
The problem I'm having recently (I don't think I had this problem before, and no changes have been made to the code) is that the result is not very accurate. The result does not contain any decimals, but it is even less accurate than 1 millisecond.
When I enter the expression in the debugger, the result is as accurate as I would expect.
I understand that a double cannot hold the accuracy of a 64-bit integer, but at this time, the PerformanceCounter only required 46 bits (and a double should be able to store 52 bits without loss)
Furthermore it seems odd that the debugger would use a different format to do the division.
Here are some results I got. The program was compiled in Debug mode, and the Floating Point Model in the C++ options was set to the default, Precise (/fp:precise).
timerPerformanceCounter.QuadPart: 30270310439445
timerPerformanceFrequency.QuadPart: 14318180
double perfCounter = (double)timerPerformanceCounter.QuadPart;
30270310439445.000
double perfFrequency = (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
14318.179687500000
double result = perfCounter / perfFrequency;
2114117248.0000000
return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
2114117248.0000000
Result with same expression in debugger:
2114117188.0396111
Result of perfTimerCount / perfTimerFreq in debugger:
2114117234.1810646
Result of 30270310439445 / 14318180 in calculator:
2114117188.0396111796331656677036
Does anyone know why the accuracy is different in the debugger's Watch compared to the result in my program?
Update: I tried subtracting 30270310439445 from timerPerformanceCounter.QuadPart before doing the conversion and division, and it does appear to be accurate in all cases now.
Maybe the reason I'm only seeing this behavior now is that my computer's uptime is now 16 days, so the value is larger than I'm used to?
So it does appear to be a division accuracy issue with large numbers, but that still doesn't explain why the division was still correct in the Watch window.
Does it use a higher-precision type than double for its results?
Adion,
If you don't mind the performance hit, cast your QuadPart numbers to decimal instead of double before performing the division. Then cast the resulting number back to double.
You are correct about the size of the numbers. It throws off the accuracy of the floating point calculations.
For more about this than you probably ever wanted to know, see:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
http://docs.sun.com/source/806-3568/ncg_goldberg.html
Thanks, using decimal would probably be a solution too.
For now I've taken a slightly different approach, which also works well, at least as long as my program doesn't run longer than a week or so without restarting.
I just remember the performance counter of when my program started, and subtract this from the current counter before converting to double and doing the division.
I'm not sure which solution would be fastest, I guess I'd have to benchmark that first.
bool perfTimerInitialized = false;
double timerPerformanceFrequencyDbl;
LARGE_INTEGER timerPerformanceFrequency;
LARGE_INTEGER timerPerformanceCounterStart;

double timeGetExactTime()
{
    if (!perfTimerInitialized) {
        QueryPerformanceFrequency(&timerPerformanceFrequency);
        timerPerformanceFrequencyDbl = ((double)timerPerformanceFrequency.QuadPart) / 1000.0;
        QueryPerformanceCounter(&timerPerformanceCounterStart);
        perfTimerInitialized = true;
    }

    LARGE_INTEGER timerPerformanceCounter;
    if (QueryPerformanceCounter(&timerPerformanceCounter)) {
        timerPerformanceCounter.QuadPart -= timerPerformanceCounterStart.QuadPart;
        return ((double)timerPerformanceCounter.QuadPart) / timerPerformanceFrequencyDbl;
    }

    return (double)timeGetTime();
}