Writing an Apple Watch Complication that predicts future values and displays time-sensitive data

I am in the process of writing an Apple Watch complication for watchOS 2. The particular data I am trying to show arrives (via web request) at intervals ranging from 3 to 6 minutes. I have a predictive algorithm that can predict what the data values will look like. This presents a problem.
Because I want to display the data my predictive algorithm has to offer in time travel, I would like to use getTimelineEntriesForComplication (the variant that asks for data after a certain date) to supply the timeline with the future values my algorithm believes will be true. However, when time moves forward (as it tends to do) and we reach the moment one of these predicted data points was set for, the predicted value is no longer accurate.
For instance, let's say it is 12:00 PM, and I currently have an (accurate) data value of A. The predictive algorithm might predict the following data values for the next two hours:
12:30 PM | B
1:00 PM | C
1:30 PM | D
2:00 PM | E
However, when 12:30 PM actually comes around, the actual data value might be F. In addition, the algorithm will generate a new set of predictions all the way out to 2:30 PM. I understand I can use reloadTimelineForComplication to indicate that the timeline has to be rebuilt, but I have two problems with this method:
I fear I will exceed the execution time limit rather quickly
reloadTimelineForComplication flushes the entire timeline, which seems wasteful to me considering that all the past data is perfectly valid; it's only the next 4 or so values that need to be updated.
Is there a better way to handle this problem?
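For reference, here is a minimal sketch of how such predicted values could be fed to the timeline through the data source. This assumes Swift 3+ API naming; Predictor and Prediction are hypothetical stand-ins for the predictive algorithm, and the other required CLKComplicationDataSource methods are omitted:

    import ClockKit

    // Hypothetical stand-ins for the predictive algorithm.
    struct Prediction {
        let date: Date
        let value: String
    }

    struct Predictor {
        // Dummy half-hourly series so the sketch is self-contained.
        func predictions(after date: Date, limit: Int) -> [Prediction] {
            (1...limit).map { i in
                Prediction(date: date.addingTimeInterval(Double(i) * 1800), value: "B")
            }
        }
    }

    class ComplicationController: NSObject, CLKComplicationDataSource {
        let predictor = Predictor()

        // ClockKit calls this when it wants future (time-travel) entries.
        func getTimelineEntries(for complication: CLKComplication,
                                after date: Date, limit: Int,
                                withHandler handler: @escaping ([CLKComplicationTimelineEntry]?) -> Void) {
            let entries = predictor.predictions(after: date, limit: limit).map { p -> CLKComplicationTimelineEntry in
                let template = CLKComplicationTemplateModularSmallSimpleText()
                template.textProvider = CLKSimpleTextProvider(text: p.value)
                return CLKComplicationTimelineEntry(date: p.date, complicationTemplate: template)
            }
            handler(entries)
        }

        // (getCurrentTimelineEntry(for:withHandler:), getSupportedTimeTravelDirections(for:withHandler:),
        // and the other data-source methods are omitted for brevity.)
    }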

At present, there's no way to alter a specific timeline entry without reloading the entire timeline. You could submit a feature request to Apple.
Summary
To summarize the details that follow: even though your server updates its predictions every 3-6 minutes, the complication server will only update itself at 10-minute intervals, starting at the top of an hour. Reloading the timeline is your only option, as it will guarantee that all your predictions are updated and accurate within 10 minutes.
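For reference, a reload from the watch extension looks roughly like this; CLKComplicationServer's reloadTimeline(for:) is the reloading discussed above:

    import ClockKit

    // When fresh data invalidates the predictions, reload every active
    // complication. This flushes and rebuilds the whole timeline, but it is
    // currently the only way to replace entries the server has already cached.
    func refreshComplications() {
        let server = CLKComplicationServer.sharedInstance()
        for complication in server.activeComplications ?? [] {
            server.reloadTimeline(for: complication)
        }
    }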
Specific findings
What I've found in past tests involving extendTimelineForComplication:, using the minimum 10-minute update interval, is that the dataSource is asked for 100 entries before and after a sliding window based on the current time.
The sliding window isn't centered on the current time. For watchOS 2.0.1, it appears to be skewed toward the recent past: the data source is only asked for future entries beyond ~14-27 minutes ahead, and for past entries beyond ~100 minutes back.
Reloading is the only way to update any entries that fall within that ~two-hour sliding window.
Issues
In my experience, extendTimelineForComplication has been less reliable than reloading the timeline, because a timeline that is never reloaded needs to be trimmed to discard entries. The fewer entries per hour, the less frequently this occurs, but once the timeline cache grows large enough, the SDK appears to aggressively discard entries from the head and tail of the cache, even when those entries fall within the 72-hour time-travel window. The worst I've seen is only being able to time-travel forward 30 entries, instead of 100.
Having provided those details, I wouldn't suggest that anyone try to take advantage of these behaviors, as they may change in the future.
Daily budget and battery life
As for the daily budget, it sounds more ominous than it is; I think you'd have to do some intense calculations before the complication server cut you off. Even with ten-minute updates, I never exceeded the budget. The real issue is battery use. You'll find that frequent updates can drain your battery before the day is over. This is probably the most significant reason for Apple's recommendation:
Complications should provide as much data as possible during each update cycle, so specify a date as far into the future as you can manage. Do not ask the system to update your complication within minutes. Provide data to last for many hours or for an entire day.
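One way to follow that advice on watchOS 2 is to push the next scheduled update as far out as your data allows. A sketch, extending the hypothetical ComplicationController from the question above (getNextRequestedUpdateDate is the watchOS 2 scheduling hook; it was deprecated in later releases):

    import ClockKit

    extension ComplicationController {
        // Ask the complication server not to come back for two hours; a
        // longer interval conserves both the daily budget and the battery.
        func getNextRequestedUpdateDate(handler: @escaping (Date?) -> Void) {
            handler(Date(timeIntervalSinceNow: 2 * 60 * 60))
        }
    }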

Related

PostgreSQL delete and aggregate data periodically

I'm developing a sensor monitoring application using Thingsboard CE and PostgreSQL.
Context:
We collect data every second, so that we can have a real-time view of the sensors' measurements.
This, however, is very heavy on storage, and it isn't a requirement beyond enabling real-time monitoring. For example, there is no need to check measurements made last week at such granularity (1-second intervals), hence no need to keep such large volumes of data occupying resources. The average value for every 5 minutes would be perfectly fine when consulting the history of values from previous days.
Question:
This poses the question of how to delete existing rows from the database while aggregating the data being deleted and inserting a new row that averages the deleted data over a given interval. For example, I would like to keep raw data (measurements every second) for the present day, aggregated data (an average every 5 minutes) for the present month, etc.
What would be the best course of action to tackle this problem?
I checked to see if PostgreSQL had anything resembling this functionality but didn't find anything. My main idea is to use a cron job to periodically perform the aggregations/deletions from raw data to aggregated data. Can anyone think of a better option? I very much welcome any suggestions and input.
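For what it's worth, the arithmetic of the rollup itself (independently of the SQL the cron job would run) can be pinned down in a small sketch; written here in Swift, with a hypothetical Sample shape:

    import Foundation

    // One raw sample per second (hypothetical shape).
    struct Sample {
        let timestamp: Date
        let value: Double
    }

    // Bucket raw samples into 5-minute averages: the same transform the cron
    // job's SQL would perform before deleting the raw rows it aggregated.
    func rollUp(_ samples: [Sample], bucketSeconds: TimeInterval = 300) -> [Sample] {
        let grouped = Dictionary(grouping: samples) { sample in
            // Truncate each timestamp down to the start of its bucket.
            floor(sample.timestamp.timeIntervalSince1970 / bucketSeconds) * bucketSeconds
        }
        return grouped.map { bucketStart, bucket in
            Sample(timestamp: Date(timeIntervalSince1970: bucketStart),
                   value: bucket.reduce(0) { $0 + $1.value } / Double(bucket.count))
        }
        .sorted { $0.timestamp < $1.timestamp }
    }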

Given a fixed (but non-linear) calendar of future equipment usage, how do I calculate the life total at a given date?

I have a challenge that I'm really struggling with.
I have a table which contains a 'scenario' that a user has defined, describing the 'usage' they will consume - for example, how many hours a machine will be turned on.
In month 1 they will use 300 hours (stored as an integer in minutes), in month 2, 100; in month 3, 450; and so on.
I then have a list of tasks which need to be performed at specific intervals of usage. Given the above scenario, how could I forecast the date when each of these tasks will be due? I also need to show repeat accomplishments and their dates.
The task record contains the number of consumed hours at the last point of accomplishment, the interval between accomplishments, and the expected life total for the next time it is due (Last Done + Interval = Next Due).
I've tried SO many different options; ideally I want this to be compiled at run time (i.e. the only things that are saved into a permanent table are the forecast and the list of tasks). I have 700-800 scenarios, and given the number of pieces of equipment, there are 12,000 tasks to be carried out. I need to show at least the next 100 years of tasks.
At the moment, I take the scenario and cross apply a list of dates between now and the year 2118. I then (in the where clause) filter out rows where the period number (the month number of the year) doesn't match the date, and divide the period usage by the number of days in that period. That gives me a day-by-day usage over the next 100 years (~36,000 rows). When I join on the 12,000 tasks and then try to filter where the due-at value matches my dates table, I can't return ANY rows - even just the date column - and the query runs for 7-8 minutes.
We measure more than just hours of usage, too; there are 25 different measurement points, all of which are specified in the scenario.
I can't use linear regression because, let's say, we shut down over the summer and don't utilize the equipment; an average usage over the year would then mean tasks come due even while the scenario says we're shut down.
Are there any strategies out there that I could apply? I'm not looking for a 'Here's the SQL' answer; I'm just running out of strategies to form a solid query that can deal with the volume of data I'm dealing with.
I can get a query running perfectly with one task, but it just doesn't scale... at all...
If SQL isn't the answer, then I'm open to suggestions.
Thanks,
Harry
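For reference, the core calculation being asked for - spread each month's usage over its days, accumulate a running life total, and emit a date each time the total crosses a task's next-due threshold (Last Done + Interval) - can be sketched outside SQL. All type and field names here are hypothetical, and intervals are assumed to be positive:

    import Foundation

    struct MonthUsage {
        let monthStart: Date   // first day of the month
        let minutes: Double    // usage planned for that month, in minutes
    }

    struct MaintenanceTask {
        let name: String
        let lastDoneTotal: Double   // life total at the last accomplishment
        let interval: Double        // minutes between accomplishments (> 0)
    }

    // Walk forward a day at a time, spreading each month's usage evenly over
    // its days. A shutdown month (zero usage) simply never advances the
    // total, which is exactly what a linear average gets wrong.
    func forecast(months: [MonthUsage],
                  tasks: [MaintenanceTask],
                  startingTotal: Double,
                  calendar: Calendar = .current) -> [(task: String, due: Date)] {
        var nextDue = Dictionary(uniqueKeysWithValues:
            tasks.map { ($0.name, $0.lastDoneTotal + $0.interval) })
        var total = startingTotal
        var results: [(task: String, due: Date)] = []

        for month in months.sorted(by: { $0.monthStart < $1.monthStart }) {
            let days = calendar.range(of: .day, in: .month, for: month.monthStart)!.count
            let perDay = month.minutes / Double(days)
            for dayOffset in 0..<days {
                total += perDay
                let date = calendar.date(byAdding: .day, value: dayOffset, to: month.monthStart)!
                for task in tasks {
                    // The while loop emits repeat accomplishments, including
                    // several that land on the same day.
                    while total >= nextDue[task.name]! {
                        results.append((task.name, date))
                        nextDue[task.name]! += task.interval
                    }
                }
            }
        }
        return results
    }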

How to investigate very slow setting of a property in Swift Core Data

I have a Core Data model with an Entity TrainingDiary which contains Day entities. I have set up a couple of test Training Diaries: one with 32 Days in it, and the other a full import of a live Training Diary which has 5,103 Days in it. Days have relationships to each other for yesterday and tomorrow.
I have a derived property on Day which takes a value from yesterday and a value from today to calculate a value for today. This worked, but scrolling through the table of days was relatively slow. Since it would be very rare for this value to change after a day has passed, I decided it would be better to store the value and calculate it once.
Thus I have a calculation that can be done on a selected set of days. When a diary is first imported it would be performed on all days, but after that it would, in the vast majority of cases, only be calculated on the most recent Day.
When I check the performance of this on my test data I get the following:
Calculating all 32 days in the 32-day Training Diary takes 0.4916 seconds, of which 0.4866 seconds is spent setting the values on the Core Data model.
Calculating 32 days in the 5,103-day Training Diary takes 47.9568 seconds, of which 47.9560 seconds is spent setting the values on the Core Data model.
The Core Data model is not being saved. I wondered whether there were loads of notifications going on, so I removed the observer I had set on NSNotification.Name.NSManagedObjectContextObjectsDidChange, and I removed my implementation of keyPathsForValuesAffectingValue(forKey:), but it made no difference.
I can import the whole diary from JSON in far less time - that involves creating all the objects and setting every value.
I am at a loss as to why this is. Does anyone have any suggestions on what it could be, or on steps to investigate it?
[I'm using Xcode 9 and Swift 4]
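One way to narrow this down is to time the setters directly: if bypassing KVO with setPrimitiveValue is dramatically faster than setValue, the cost is in change notifications (dependent keys, inverse relationships) rather than in Core Data storage itself. A rough sketch, where "computedValue" is a stand-in for the real attribute name:

    import CoreData

    // Rough timing harness; `days` is a fetched slice of Day objects.
    func timeSetters(days: [NSManagedObject]) {
        var start = CFAbsoluteTimeGetCurrent()
        for day in days {
            day.setValue(42.0, forKey: "computedValue")  // full KVO + change tracking
        }
        print("setValue:          \(CFAbsoluteTimeGetCurrent() - start)s")

        start = CFAbsoluteTimeGetCurrent()
        for day in days {
            // Skips the KVO machinery apart from the explicit will/did pair;
            // a large gap versus the loop above points at notification
            // overhead rather than the store.
            day.willChangeValue(forKey: "computedValue")
            day.setPrimitiveValue(42.0, forKey: "computedValue")
            day.didChangeValue(forKey: "computedValue")
        }
        print("setPrimitiveValue: \(CFAbsoluteTimeGetCurrent() - start)s")
    }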

Reporting of workflow and times

I have to start moving transactional data into a reporting database, but would like to move towards more of a warehouse/data mart design, eventually leveraging SQL Server Analytics.
The thing being measured is the time between points of a workflow on a piece of work. How would you model that when the things that can happen do not have a specific order? Also, some work won't have all the actions, or might have the same action multiple times.
It makes me want to put the data into a typical relational design, with one table keyed on the piece of work and a table that has all the actions and times. Is that wrong? The business is going to try to use Tableau for report writing, and I know it can handle all kinds of sources, but again, I would like to move away from transactions and toward warehousing.
Is the work the dimension, and the actions and times the facts?
Are there any other good online resources for modeling questions?
Thanks
It may seem like splitting hairs, but you don't want to measure the time between points in a workflow; you need to measure time within a point of a workflow. If you change your perspective, it can become much easier to model.
Your OLTP system will likely capture the timestamp of when the event occurred. When you convert that to OLAP, you should turn it into a start and stop time for each event. While you're at it, calculate the duration, in seconds or minutes, and the occurrence number for the event. If the task was sent to "Design" three times, you should have three design events, numbered 1, 2, 3.
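A small sketch of that transform, assuming the events of a single work item arrive as (state, timestamp) pairs; the type names are made up:

    import Foundation

    // One raw OLTP event: the state a work item entered, and when.
    struct WorkflowEvent {
        let state: String
        let enteredAt: Date
    }

    // One OLAP fact row: an interval spent in a state.
    struct StateFact {
        let state: String
        let start: Date
        let stop: Date?              // nil while the item is still in this state
        let durationSeconds: Double? // nil for the open interval
        let occurrence: Int          // 1, 2, 3... per state
    }

    // Pair each event with the next event's timestamp to get start/stop,
    // compute the duration, and number repeat visits to the same state.
    func toFacts(_ events: [WorkflowEvent]) -> [StateFact] {
        let sorted = events.sorted { $0.enteredAt < $1.enteredAt }
        var visits: [String: Int] = [:]
        return sorted.enumerated().map { index, event in
            visits[event.state, default: 0] += 1
            let stop = index + 1 < sorted.count ? sorted[index + 1].enteredAt : nil
            return StateFact(state: event.state,
                             start: event.enteredAt,
                             stop: stop,
                             durationSeconds: stop.map { $0.timeIntervalSince(event.enteredAt) },
                             occurrence: visits[event.state]!)
        }
    }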
If you want to know how much time a task spent in design, the cube will sum the duration of all three design events to present a total time. You can also add calculated measures to determine first time in and last time out.
Having the start and stop times of the task allows you to, for example, find all tasks that finished design in January.
If you're looking for an average above the event grain - for example, the average time in design across all tasks - you'll need a new calculated measure using total time in design / # of tasks (not events).
Assuming you have more granular states, it is a good idea to define parent states for use in executive reporting. In my company, the operational teams have workflows with 60+ states, but management wanted them rolled up into five summary states. The rollup hierarchy should be part of your workflow states dimension.
Hope that helps.

What are the approaches for writing a simple clock application?

I am writing a small program to display the current time on an iPhone (learning :D). I came across this confusion.
Is calling for the current system time (e.g. stringFromDate:) every second, parsing it, and printing the time on screen a good approach?
Would it be more effective to call the above routine once and then manually update the parsed seconds on every tick of your timer (say, ++seconds, with some if blocks to adjust minutes and hours)?
Will the second approach drift out of sync with the actual time if the processor load increases, or the like?
Considering all this, which would be the best approach?
I doubt that the overhead of querying the system time will be noticeable in comparison to the CPU cycles used to update the display. Set up an NSTimer to fire however often you want to update the clock display, and update your display that way. Don't worry about optimizing it until you get the app working.
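A minimal sketch of that approach, using the modern Swift Timer API (the label and date format are placeholders):

    import UIKit

    class ClockViewController: UIViewController {
        let label = UILabel()
        let formatter = DateFormatter()
        var timer: Timer?

        override func viewDidLoad() {
            super.viewDidLoad()
            label.frame = view.bounds
            view.addSubview(label)
            formatter.dateFormat = "HH:mm:ss"

            // Re-read the system clock on every tick; the query is trivially
            // cheap next to the display update, and there is no drift to manage.
            timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
                guard let self = self else { return }
                self.label.text = self.formatter.string(from: Date())
            }
        }
    }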
I would drop the seconds entirely and just print the rest of the time; then you only have to parse it once a minute.
That's if you want a clock rather than a stopwatch. (Seriously, I can't remember the last time I looked at a clock without seconds and thought, "Gosh, I don't know if it's 12:51:00 or 12:51:59. How will I make my next appointment?")
If you want to ensure you're reasonably accurate in updating the minute, follow these steps (a sketch follows the list):
Get the full time (HHMMSS).
Display down to minute resolution (HH:MM).
Subtract SS from 61 and sleep for that many seconds.
Go back to that first step.
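A sketch of those steps, using a scheduled block rather than a blocking sleep; the display closure is a stand-in for whatever updates your label:

    import Foundation

    let formatter = DateFormatter()

    func updateClock(display: @escaping (String) -> Void) {
        // Steps 1 and 2: read the full time, display minute resolution.
        formatter.dateFormat = "HH:mm"
        display(formatter.string(from: Date()))

        // Step 3: 61 - SS lands one second past the minute boundary, as
        // suggested above, so the next read is guaranteed to show the new minute.
        let seconds = Calendar.current.component(.second, from: Date())
        DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(61 - seconds)) {
            updateClock(display: display)  // step 4: go around again
        }
    }

    // Usage: updateClock { text in label.text = text }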