I am migrating from the deprecated setMinimumBackgroundFetchInterval() to the new BGAppRefreshTask, but every example uses an absolute time, like this:
request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60) // Fetch no earlier than 15 minutes from now
but my current source uses backgroundFetchIntervalMinimum, like this:
UIApplication.shared.setMinimumBackgroundFetchInterval(UIApplication.backgroundFetchIntervalMinimum)
So how can I get the equivalent of backgroundFetchIntervalMinimum with BGAppRefreshTask?
Setting the request’s earliestBeginDate is the equivalent of the old minimum fetch interval. Just use earliestBeginDate as you have seen in those other examples. Then, when your app runs a background refresh, schedule the next refresh with its own earliestBeginDate at that point. That is how BGAppRefreshTask works.
If you are asking what is equivalent to UIApplication.backgroundFetchIntervalMinimum, you can just set earliestBeginDate to nil. The documentation tells us that:
Specify nil for no start delay.
Setting the property indicates that the background task shouldn’t start any earlier than this date. However, the system doesn’t guarantee launching the task at the specified date, but only that it won’t begin sooner.
The latter caveat applies to backgroundFetchIntervalMinimum, too. Background fetch, regardless of which mechanism you use, is at the discretion of the OS. It attempts to balance the app’s desire to have current data ready for the user the next time they launch the app with other considerations (such as preserving the device battery, avoiding excessive use of the cellular data plan, etc.).
The earliestBeginDate (like the older backgroundFetchIntervalMinimum) is merely a mechanism whereby you can give the OS additional hints to avoid redundant fetches. E.g., if your backend only updates a resource every twelve hours, there is no point in performing backend refreshes much more frequently than that.
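Putting that together, here is a minimal sketch of the reschedule-on-each-run pattern. The identifier "com.example.refresh" is hypothetical and must be listed under BGTaskSchedulerPermittedIdentifiers in your Info.plist:

import BackgroundTasks

let refreshTaskIdentifier = "com.example.refresh" // hypothetical identifier

func scheduleAppRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: refreshTaskIdentifier)
    // nil is the equivalent of backgroundFetchIntervalMinimum:
    // no start delay; the OS decides when (and whether) to run the task.
    request.earliestBeginDate = nil
    do {
        try BGTaskScheduler.shared.submit(request)
    } catch {
        print("Could not schedule app refresh: \(error)")
    }
}

// In application(_:didFinishLaunchingWithOptions:):
_ = BGTaskScheduler.shared.register(forTaskWithIdentifier: refreshTaskIdentifier, using: nil) { task in
    scheduleAppRefresh() // schedule the next refresh before doing the work
    // ... perform the fetch, then:
    task.setTaskCompleted(success: true)
}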
I am new to Flutter / mobile development and am using the shared_preferences package in my app together with android_alarm_manager_plus. My problem is that different functions report different states of the shared preferences and thus cannot cooperate as intended.
The app lets users fulfill certain tasks with task-specific frequencies (daily, weekly, ...). Users may choose a time of day (e.g., 11 am) when they want to be reminded of current tasks. An AndroidAlarmManager.periodic with a duration of 24 hours is set to that time of day and calls a schedule() function. The schedule() function checks whether it is time to schedule again and which of the tasks are due, then adds the due ones to a list of scheduled tasks, which the task widget uses to serve them. When a task is fulfilled, it removes itself from this list.
There are several values in shared preferences being used for this:
dateOfLastScheduling: the last day scheduling was performed (used to prevent scheduling multiple times per day)
last_time_<Task> for each task the app knows: the last day the task was fulfilled
scheduledTasks: list of current tasks to fulfill
My shared preferences live in one variable in a file named storage.dart (which handles persistence in general) and are initialized in the app's first/landing page's initState() like this:
SharedPreferences.getInstance().then((loadedPrefs) async {
  storage.sharedPrefs = loadedPrefs;
  ...
});
Whenever I read or write those preferences in any of my files, I use storage.sharedPrefs. I do this because I read here that holding shared preferences in a "singleton" like this is preferable to loading them locally everywhere - which I did before, with the same problems.
Now, for testing purposes, I introduced a button on another route that just prints out my current prefs after converting them to a Map, and another that actually sets values, such as dateOfLastScheduling, so I can test scheduling.
Test-printing the shared preferences with this button regularly yields different results than printing them in the schedule() function. Additionally, when I change, e.g., the date of last scheduling here, the function does not see these changes but instead works with an older version of the preferences, and thus does not schedule. When it does schedule, the tasks are added to scheduledTasks (according to a local print), but these changes, in turn, are not reflected anywhere else, and the widget serving tasks does not know about them either. Sometimes, hitting the print button several times eventually leads to an update.
It seems like every function is using its own (cached?) version of the shared preferences, which is sort of the opposite of shared. Even calling storage.sharedPrefs.reload() does not really seem to change anything.
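For reference, this is how I would expect reload() to be used - it returns a Future, so it is awaited so that the cached copy is refreshed from disk before the read (the helper name is made up for illustration):

// Await reload() so this isolate's cached copy is refreshed from disk
// before reading.
Future<List<String>> loadScheduledTasks() async {
  await storage.sharedPrefs.reload();
  return storage.sharedPrefs.getStringList('scheduledTasks') ?? [];
}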
Am I missing something obvious here? How do I have to use shared preferences so that they are the same everywhere and updated immediately?
I am new to Drools / Fusion (7.x) and am not sure how to solve this requirement. Assume I have event objects like Event{long: timestamp, id: string}, where id identifies a physical asset (like a tractor) and timestamp represents the time the event fired relative to that asset. In my scenario these events do not arrive in my system in 'real time'; they can be seconds, minutes, or even days late. And my rules system needs to monitor multiple assets. Given this, when rules are evaluated the clock needs to be relative to the asset being monitored; it can't be a clock that spans assets.
I'm aware of the pseudo clock - is there a way to assign a pseudo clock per asset?
My assumption is that a clock must always progress forward or temporal functions will not work properly. Take the following scenario, for example:
Fact A for Asset 1 arrives at 1:00; it is inserted into memory and rules are fired. Then Fact B arrives for the same Asset 1 at 2:00. It too is inserted and rules are fired. Now Fact Z arrives for Asset 2 at 1:30 (30 minutes behind the clock). I'm assuming I shouldn't simply move the clock backwards and evaluate; furthermore, I'd want to set the clock back to 2:00 afterwards, since that was the "latest" data I received. Now assume I am monitoring thousands of assets, all sending data at different times...
The best way I can think of to address this is to keep a clock per asset and then save the engine state each time an asset's data is evaluated. Can individual KieSessions have different clocks, or is the clock at the container level?
Sample rule: When Fact 1 arrives after Fact 2 for the same Asset.
You're approaching the problem incorrectly. Regardless of whether you're using a real-time or pseudo clock, you're using a single clock. You can't say "Fact #1 uses clock A, and Fact #2 uses clock B."
Instead you should be leveraging the metadata tags for events, specifically the @timestamp tag. This tag indicates to Drools that a specific field inside the event is the actual timestamp of the event, rather than the time the fact entered working memory.
For example:
import com.example.SampleEvent

declare SampleEvent
    @role( event )
    // this field is actually in the object; it's not the time the fact was inserted
    @timestamp( createdDateTime )
end
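With that declaration in place, the temporal operators compare the events' own timestamps rather than their insertion times. As a sketch of the sample rule from the question (assuming SampleEvent also carries the asset id):

rule "Event follows an earlier event for the same asset"
when
    $first : SampleEvent( $id : id )
    $second : SampleEvent( id == $id, this after $first )
then
    // react to the ordered pair of events here
end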
Not knowing anything about what your rules are actually doing, the major issue I can foresee is that if your rules rely on the temporal operators or define an expiry (@expires), they're not going to work and you'll need to redesign them. Especially for expirations: once an event expires, it is removed from working memory, so by the time your out-of-band events come in, any previously expired events are already gone and can't be matched against.
Of course that concern applies regardless of whether you use @timestamp or your original "different pseudo clocks" plan. Either way you're going to have to manage the fact that events cannot live forever in working memory -- you will eventually run out of resources and your system will crash. Events must be evicted at some point, so you'll need to design around that in both your models and your rules.
I currently have a Core Data entity called CalorieProgress, and I would like to reset all of its attributes (calorieProgress, fatProgress) to 0 every day.
I am still quite new to SwiftUI, and the only method I have thought of so far is to add a creation-date attribute called created to this entity and, when the user opens the app, check whether that date was yesterday. If so, set all values to 0, and so on.
Is there a more efficient way to do this?
Thanks
Your design is good and simple, and a reasonable choice if you're getting started.
It can have trouble, however, when people move between time zones. It is even possible for people to move back to a previous day this way (most dramatically when they cross the date line). There is no single right answer to this. Your app has to decide what it means by "today" when strange clock events happen. (Users also sometimes change their clock, and you want to behave reasonably in those cases, even if it means the data is wrong.)
Having built several of these, my suggestion is to store raw, immutable data records and work out things like "resetting values" at query time. For example, working out how many calories someone has burned today doesn't require setting any value to zero. You can just query for the records that occur after some time and sum their calories (you can even do this with aggregate queries directly in Core Data).
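A minimal sketch of such an aggregate query, assuming a hypothetical raw-record entity CalorieEntry with calories and timestamp attributes and an NSManagedObjectContext named context (none of these names come from your model):

import CoreData

let startOfToday = Calendar.current.startOfDay(for: Date())

// Fetch dictionaries rather than managed objects so Core Data can
// compute the sum in the store instead of loading every record.
let request = NSFetchRequest<NSDictionary>(entityName: "CalorieEntry")
request.resultType = .dictionaryResultType
request.predicate = NSPredicate(format: "timestamp >= %@", startOfToday as NSDate)

let sum = NSExpressionDescription()
sum.name = "totalCalories"
sum.expression = NSExpression(forFunction: "sum:",
                              arguments: [NSExpression(forKeyPath: "calories")])
sum.expressionResultType = .doubleAttributeType
request.propertiesToFetch = [sum]

do {
    if let row = try context.fetch(request).first,
       let total = row["totalCalories"] as? Double {
        print("Calories so far today: \(total)")
    }
} catch {
    print("Aggregate fetch failed: \(error)")
}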
Core Data can be very fast, but if these queries become too slow, you can store daily aggregation records in Core Data as well. Keeping the original raw data means those aggregates are really just caches, and you can throw them away and recompute them any time you need to.
Assuming that a new day starts at midnight (I've worked on apps where days started "when the user wakes up in the morning," which is much more complicated...), you should also be aware of significantTimeChangeNotification, which is posted at midnight (and at a few other times). You can't use it to launch your app or do processing in the background, but it's very nice for updating your UI if the user has the app open.
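For example, a small sketch of observing it to refresh the UI (where you hook this up depends on your app structure):

import UIKit

// Keep the returned token around so the observer can be removed later.
let timeChangeObserver = NotificationCenter.default.addObserver(
    forName: UIApplication.significantTimeChangeNotification,
    object: nil,
    queue: .main
) { _ in
    // Re-run the start-of-day query and update the UI here.
}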
I have a regional object store. I would like to be able to tell a particular object that I want it deleted 5 days from now.
How do you suggest I implement this?
I don't really want to keep track of the object in a database and send delete commands from a separate, time-based process. Is there any tag that could be set to have deletion occur at a later time (measured from now, not from some time in the past)?
There's no functionality built into Google Cloud Storage to do this.
You can configure Object Lifecycle Management to delete objects according to a number of criteria (including age), but deleting at a particular date in the future isn't one of the supported conditions, and in fact there's no guarantee that a lifecycle action will run on the same day its condition becomes true. Instead you would have to implement this functionality yourself (e.g., in a Compute Engine or App Engine implementation).
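For completeness, the closest built-in approximation is an age-based rule, which deletes objects a fixed number of days after their creation time, on a best-effort schedule:

{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 5}
    }
  ]
}

You would apply it with, e.g., gsutil lifecycle set lifecycle.json gs://your-bucket (the bucket name is a placeholder).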
What do "level-based" and "edge-based" mean in general?
I read "In other words, the system's behavior is level-based rather than edge-based" in the Kubernetes documentation:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api-conventions.md
With Google, I only found:
http://www.keil.com/forum/9423/edge-based-vs-level-based-interrupt/
Thank you.
It also has a more general definition (at least the way we tend to use it in the documentation). A piece of logic is "level based" if it only depends on the current state. A piece of logic is "edge-based" if it depends on history/transitions in addition to the current state.
"Level based" components are more resilient because if they crash, they can come back up and just look at the current state. "Edge-based" components must store the history they rely on (or depend on some other component that stores it), so that when they come back up they can look at the current state and the history. Also, if there is some kind of temporary network partition and an edge-based component misses some of the updates, then it will compute the wrong output.
However, "level based" components are usually less efficient, because they may need to scan a lot of state in order to compute an output, rather than just reading deltas.
Many components are a mixture of the two.
Simple example: You want to build a component that reports the number of pods in the READY state. A level-based implementation would fetch all the pods from etcd (or the API server) and count them. An edge-based implementation would do that once at startup and then just watch for pods entering and exiting the READY state.
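A minimal sketch of that difference in code (the types are illustrative, not the Kubernetes API):

type Pod struct {
    Name  string
    Ready bool
}

// Level-based: recompute the answer from the full current state on every call.
// Trivially correct after a crash or a missed update, but it rescans everything.
func readyCount(pods []Pod) int {
    n := 0
    for _, p := range pods {
        if p.Ready {
            n++
        }
    }
    return n
}

// Edge-based: maintain a counter from observed transitions (deltas).
// Efficient, but a single missed event leaves the counter wrong until
// it is rebuilt from the full state.
func applyTransition(counter *int, wasReady, isReady bool) {
    switch {
    case !wasReady && isReady:
        *counter++
    case wasReady && !isReady:
        *counter--
    }
}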
I'd say they explained it pretty well on the site:
When a new version of an object is POSTed or PUT, the "spec" is updated and available immediately. Over time the system will work to bring the "status" into line with the "spec". The system will drive toward the most recent "spec" regardless of previous versions of that stanza. In other words, if a value is changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system is not required to 'touch base' at 5 before changing the "status" to 3.
So from that statement, we know that "level-based" means a PUT request does not need to be satisfied if the primary goal does not require it; the system is free to skip intermediate PUT values when it sees fit.
This makes me assume that "edge-based" systems require every PUT request to be satisfied, even if some could be skipped without altering the final result, since that would be the alternative to skipping requests.
I'm no RESTful developer (you can see that by my account activity). I could not find any other source of information on these terms, so I'm going by the explanation they gave, which seems pretty straightforward.
The Kubernetes API doesn't store a history of all the changes made to an object. The controller that is responsible for that object cannot reliably observe each change; it may only observe the current state of the object.
The term "level" means the current state of the object, and the term "edge" means a transition to a new state. Controllers are "level-based" because they cannot reliably observe the transitions.
From the API conventions:
When a new version of an object is POSTed or PUT, the spec is updated and available immediately. Over time the system will work to bring the status into line with the spec. The system will drive toward the most recent spec regardless of previous versions of that stanza. For example, if a value is changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system is not required to 'touch base' at 5 before changing the status to 3. In other words, the system's behavior is level-based rather than edge-based. This enables robust behavior in the presence of missed intermediate state changes.