What are the approaches for writing a simple clock application? - iphone

I am writing a small program to display the current time on the iPhone (learning :D), and I came across this confusion.
Is it good to get the current system time (e.g. via stringFromDate:) every second, parse it, and print the time on screen?
Or would it be more effective to call that routine once and then manually update the parsed seconds on every tick of the timer (say ++seconds, with some if blocks to adjust the minutes and hours)?
Will the second approach drift out of sync with the actual time if the processor load increases?
Considering all this, which is the best approach?

I doubt that the overhead of querying the system time will be noticeable in comparison to the CPU cycles used to update the display. Set up an NSTimer to fire however often you want to update the clock display, and update your display that way. Don't worry about optimizing it until you get the app working.
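For what it's worth, a minimal Swift sketch of that approach might look like the following (the question predates Swift, and timeLabel is a hypothetical outlet; the same idea applies to NSTimer in Objective-C):

import UIKit

class ClockViewController: UIViewController {
    @IBOutlet weak var timeLabel: UILabel!   // hypothetical label for the clock display

    private var clockTimer: Timer?
    private let formatter: DateFormatter = {
        let f = DateFormatter()
        f.dateFormat = "HH:mm:ss"
        return f
    }()

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Query the system time on every tick; the cost is negligible next to the redraw.
        clockTimer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            self.timeLabel.text = self.formatter.string(from: Date())
        }
        clockTimer?.fire()
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        clockTimer?.invalidate()   // stop ticking while off screen
        clockTimer = nil
    }
}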

I would drop the seconds entirely and just print the rest of the time; then you only have to parse it once a minute.
That's if you want a clock rather than a stopwatch. (Seriously, I can't remember the last time I looked at a clock without seconds and thought "Gosh, I don't know if it's 12:51:00 or 12:51:59. How will I make my next appointment?")
If you want to ensure you're relatively accurate in updating the minute, follow these steps (a rough sketch in code follows the list):
Get the full time (HHMMSS).
Display down to minute resolution (HH:MM).
Subtract SS from 61 and sleep for that many seconds.
Go back to that first step.
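A rough Swift sketch of that loop, assuming a hypothetical update closure that redraws the display (in a real app you would schedule a timer for the computed delay rather than block a thread):

import Foundation

func minuteClockLoop(update: (String) -> Void) {
    let formatter = DateFormatter()
    formatter.dateFormat = "HH:mm"          // minute resolution only
    while true {
        let now = Date()
        update(formatter.string(from: now)) // display HH:MM
        let seconds = Calendar.current.component(.second, from: now)
        // Sleep until just past the next minute boundary (61 - SS seconds).
        Thread.sleep(forTimeInterval: TimeInterval(61 - seconds))
    }
}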

Related

Swift Firebase first write incredibly slow

I have an app in production and am doing some updates. I use an old iPhone 5 for testing, and I've noticed that the write times for initial writes to the Firebase database are incredibly slow. When I say slow, I mean it's taking over a minute to do a single write to a single node (70-80 seconds).
Here is the write request (it's very simple):
// Update the current user's points; FB.ref, FB.members and FB.currentPoints
// are app-level constants defined elsewhere in the project.
FB.ref
    .child(FB.members)
    .child(currentUser.firstName)
    .updateChildValues([FB.currentPoints: currentUser.currentPoints])
The issue is, this only happens on the first write. Subsequent writes are much faster (a couple of seconds at the most), and usually instant.
What I've Tried
I've read the documentation, and also this post here on SO, but it's not helping me.
I also did tests on the simulator and found that the initial write takes about 7 seconds. That still seems long to me, but nowhere near as long as the 70-80 seconds on the physical device.
Simulator initial write: 7 seconds (slow). Subsequent writes: instant-ish
iPhone 5 initial write: 80 seconds (extremely slow!). Subsequent writes: instant-ish
Any ideas on how to speed up that initial write? It's very problematic. For example, if a user logs in to make a quick change and then closes the app, the write will never happen (tested), or the delay will cause other inaccuracies within the app (also tested). What to do?
Is there a setting I can change on my database? Or a rule to adjust in my database rules? Or some code to include in my app?

Most efficient way to check time difference

I want to check an item in my database every x minutes for y minutes after its creation.
I've come up with two ways to do this and I'm not sure which will result in better efficiency/speed.
The first is to store a Date field in the model and do something like
Model.find({time_created > current_time - y})
inside of a cron job every x minutes.
The second is to keep a times_to_check field that keeps track of how many more times, based on x and y, the object should be checked.
Model.find({times_to_check > 0})
My thought on why these two might be comparable: the Date comparison in the first approach would take longer, but the second approach requires a write to the database after each check.
Either way you are going to have to check the database continuously to see whether it is time to query your collection. Your "second solution" does not describe how the background process would run; it only covers how you determine your collection delta.
Stick with running your Unix cron job, but make sure it is fault tolerant and has controls ensuring it is actually running when your application is up. Below is a pretty good answer for how to handle that:
How do I write a bash script to restart a process if it dies?
Based on that, I would ask: how does your application react if the cron job has not run for x minutes, hours, or days? How will your application recover if that happens?

Writing an Apple Watch Complication that predicts future values and displays time sensitive data

I am in the process of writing an Apple Watch Complication for WatchOS 2. The particular data I am trying to show is given (via web request) in intervals of time ranging from 3-6 minutes. I have a predictive algorithm that can predict what the data values will look like. This presents a problem to me.
Because I want to display the data my predictive algorithm has to offer in time travel, I would like to use getTimelineEntriesForComplication (the version that asks for data after a certain date) to supply the future values that my algorithm believes will be true to the timeline. However, when time moves forward (as it tends to do) and we reach the time that one of these predicted data points was set to occur at, the predicted value is no longer accurate.
For instance, let's say it is 12:00 PM, and I currently have an (accurate) data value of A. The predictive algorithm might predict the following data values for the next two hours:
12:30 PM | B
1:00 PM | C
1:30 PM | D
2:00 PM | E
However, when 12:30 PM actually comes around, the actual data value might be F. In addition, the algorithm will generate a new set of predictions all the way to 2:30 PM. I understand I can use updateTimelineForComplication to indicate that the timeline has to be rebuilt, but I have two problems with this method:
I fear I will exceed the execution time limit rather quickly
updateTimelineForComplication flushes the entire timeline, which seems wasteful to me considering that all the past data is perfectly valid; it's simply the next 4 or so values that need to be updated.
Is there a better way to handle this problem?
At present, there's no way to alter a specific timeline entry, without reloading the entire timeline. You could submit a feature request to Apple.
Summary
To summarize the details that follow, even though your server updates its predictions every 3-6 minutes, the complication server will only update itself at 10 minute intervals, starting at the top of an hour. Reloading the timeline is your only option, as it will guarantee that all your predictions are updated and accurate within 10 minutes.
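For reference, the reload itself is a one-liner per complication. A hedged sketch of what you would call after fetching a fresh set of predictions:

import ClockKit

// Ask ClockKit to rebuild the timeline for every active complication.
// Call this after new predictions arrive, and budget how often it runs.
func reloadComplicationTimelines() {
    let server = CLKComplicationServer.sharedInstance()
    for complication in server.activeComplications ?? [] {
        server.reloadTimeline(for: complication)
    }
}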
Specific findings
What I've found in past tests involving extendTimelineForComplication:, using the minimum 10-minute update interval, is that the dataSource is asked for 100 entries before and after a sliding window based on the current time.
The sliding window isn't centered on the current time. For watchOS 2.0.1, it appears to be skewed to ask for more recent future entries (after ~14-27 minutes in the future), and less recent past entries (before ~100 minutes in the past).
Reloading is the only way to update any entries that fall within the ~two hour sliding window.
Issues
In my experience, extendTimelineForComplication has been less reliable than reloading the timeline, as a timeline that is never reloaded needs to be trimmed to discard entries. The fewer entries per hour, the less frequently this occurs, but once the timeline cache grows large enough, the SDK appears to aggressively discard entries from the head and tail of the cache, even if those entries fall within the 72-hour time-travel window. The worst I've seen is only being able to time-travel forward 30 entries, instead of 100.
Having provided those details, I wouldn't suggest that anyone try to take advantage of any behaviors that may change in the future.
Daily budget and battery life
As for the daily budget, it sounds more ominous than it is; I think you'd have to do some intense calculations before the complication server cuts you off. Even with ten-minute updates, I never exceeded the budget. The real issue is battery use. You'll find that frequent updates can drain your battery before the day is over. This is probably the most significant reason for Apple's recommendation:
Complications should provide as much data as possible during each update cycle, so specify a date as far into the future as you can manage. Do not ask the system to update your complication within minutes. Provide data to last for many hours or for an entire day.
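To illustrate that recommendation, a future-entries handler can hand back a long run of predicted values in one pass. A sketch, where predictedValue is a stand-in for the asker's prediction algorithm and the template choice is arbitrary:

import ClockKit

// Build up to `limit` predicted entries after `date`, spaced 30 minutes apart
// and covering roughly a day; return these from
// getTimelineEntries(for:after:limit:withHandler:) in the data source.
func futureEntries(after date: Date, limit: Int,
                   predictedValue: (Date) -> String) -> [CLKComplicationTimelineEntry] {
    var entries: [CLKComplicationTimelineEntry] = []
    var entryDate = date
    let horizon = date.addingTimeInterval(24 * 60 * 60)
    while entries.count < limit && entryDate < horizon {
        entryDate = entryDate.addingTimeInterval(30 * 60)
        let template = CLKComplicationTemplateModularSmallSimpleText()
        template.textProvider = CLKSimpleTextProvider(text: predictedValue(entryDate))
        entries.append(CLKComplicationTimelineEntry(date: entryDate, complicationTemplate: template))
    }
    return entries
}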

Persistent increment counter

I have a concept for a special albeit simple kind of clock that would display the number of seconds since a certain point in time (which would never change). What would be the best way of storing, incrementing and displaying this persistent value?
If the start point never changes, you only need to save that. Concept wise, that would be the same as Unix time.
Simply get the current system time and calculate the difference to the beginning of your epoch.
This was done via the Unix epoch. You would just need to create your own version of the epoch, perhaps BobeEpoch. You could store this value somewhere that your application can retrieve it, then get the current system time. Once you had the current system time, you would subtract BobeEpoch from it and display the result to the user.
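A minimal Swift sketch of that idea (the epoch date here is only an example):

import Foundation

// The fixed starting point ("BobeEpoch"); it never changes, so it can be
// hard-coded or stored once and read back at launch.
let bobeEpoch = ISO8601DateFormatter().date(from: "2020-01-01T00:00:00Z")!

// Elapsed whole seconds since the epoch; recompute each time you redraw,
// rather than persisting and incrementing a counter.
func secondsSinceEpoch(now: Date = Date()) -> Int {
    return Int(now.timeIntervalSince(bobeEpoch))
}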

Random Lookup Methodology

I have a postgres database with a table that contains rows I want to look up at pseudorandom intervals. Some I want to look up once an hour, some once a day, and some once a week. I would like the lookups to be at pseudorandom intervals inside their time window. So, the look up I want to do once a day should happen at a different time each time that it runs.
I suspect there is an easier way to do this, but here's the rough plan I have:
Have a settings column for each lookup item. When the script starts, it randomizes the epoch time for each lookup and stores it in the settings column, identifying the time of the next lookup. I then run a continuous loop with a wait 1 to see if the epoch time matches any of the requested lookups. Upon running a lookup, it recalculates when the next lookup should be.
My questions:
Even in the design phase, this looks like it's going to be a duct tape and twine routine. What's the right way to do this?
If by chance, my idea is the right way to do this, is my idea of repeating the loop with a wait 1 the right way to go? If I had 2 lookups back to back, there's a chance I could miss one but I can live with that.
Thanks for your help!
Add a column to the table for NextCheckTime. You could use either a timestamp or just an integer with the raw epoch time. Add a (non-unique) index on NextCheckTime.
When you add a row to the database, populate NextCheckTime by taking the current time, adding the base interval, and adding/subtracting a random factor (maybe 25% of the base interval, or whatever is appropriate for your situation). For example:
my $interval = 3600; # 1 hour in seconds
my $next_check = time + int($interval * (0.75 + rand 0.5)); # base interval +/- 25%
Then in your loop, just SELECT * FROM table ORDER BY NextCheckTime LIMIT 1. Then sleep until the NextCheckTime returned by that (assuming it's not already in the past), perform the lookup, and update NextCheckTime as described above.
If you need to handle rows newly added by some other process, you might put a limit on the sleep. If the NextCheckTime is more than 10 minutes in the future, then sleep 10 minutes and repeat the SELECT to see if any new rows have been added. (Again, the exact limit depends on your situation.)
How big is your data set? If it's a few thousand rows, then just randomizing the whole list and grabbing the first x rows is OK. As the size of your set grows, this becomes less and less scalable; the performance drops off at a non-linear rate. But if you only need to run this once an hour at most, then it's no big deal if it takes a minute or two, as long as it doesn't kill other processes on the same box.
If you have a gapless sequence, whether there from the beginning or added on, then you can use indexes with something like:
$i = random(0, sizeofset - 1);
SELECT * FROM table WHERE seqid = $i;
and get good scalability to millions and millions of rows.