I am new to Flutter / mobile development and am using the shared_preferences package in my app together with android_alarm_manager_plus. My problem is that different functions report different states of shared preferences and thus cannot cooperate as intended.
The app lets users fulfill certain tasks with task-specific frequencies (daily, weekly, ...). Users may choose a time of day (e.g., 11 am) when they want to be reminded of current tasks. An AndroidAlarmManager.periodic with a duration of 24 hours is set to that time of day and calls a schedule() function. The schedule() function checks whether it is time to schedule again and which of the tasks are due, then adds those to a list of due tasks, which the task widget uses as its source for serving them. When a task is fulfilled, it removes itself from this list.
There are several values in shared preferences being used for this:
dateOfLastScheduling: the last day scheduling was performed (used to prevent scheduling multiple times per day)
last_time_<Task> for each task the app knows: the last day the task was fulfilled
scheduledTasks: list of current tasks to fulfill
My shared preferences are a single variable in a file named storage.dart (which handles persistence in general) and are initialized in the app's first/landing page's initState() like this:
SharedPreferences.getInstance().then((loadedPrefs) async {
storage.sharedPrefs = loadedPrefs;
...
Whenever I read or write those preferences in any of my files, I use storage.sharedPrefs. I do this because I read here that having shared preferences as a "singleton" like this is preferable to loading them locally everywhere - which I did before, with the same problems.
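For illustration, the storage.dart side of this boils down to something like the following (simplified sketch; only the field name matches my real code):
// storage.dart
import 'package:shared_preferences/shared_preferences.dart';

class Storage {
  // Assigned once in the landing page's initState(), then used everywhere.
  late SharedPreferences sharedPrefs;
}

final storage = Storage();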
Now, for testing purposes, I introduced a button on another Route that just prints out my current prefs after converting them to a Map, and another that actually sets values, such as dateOfLastScheduling, so I can test scheduling.
Test-printing shared preferences with this button regularly yields different results than printing them in the schedule() function. Additionally, when I change, e.g., the date of last scheduling here, the function does not see these changes and instead works with an older version of shared preferences, and thus does not schedule. When it does schedule, the tasks are added to scheduledTasks, according to a local print, but these changes, in turn, are not reflected anywhere else, and the widget serving tasks does not know of them either. Sometimes, hitting the print button several times eventually leads to an update.
It seems like every function is using its own (cached?) version of shared preferences, which is sort of the opposite of shared. Even calling storage.sharedPrefs.reload() does not really seem to change anything.
Am I missing something obvious here? How do I have to use shared preferences, so that they are the same everywhere and updated immediately?
Related
I am changing my deprecated setMinimumBackgroundFetchInterval() call to the new BGAppRefreshTask, but every example uses an actual time interval like this:
request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60) // Fetch no earlier than 15 minutes from now
but my current source uses backgroundFetchIntervalMinimum, like this:
UIApplication.shared.setMinimumBackgroundFetchInterval(UIApplication.backgroundFetchIntervalMinimum)
so how can I apply backgroundFetchIntervalMinimum with BGAppRefreshTask?
Setting the request’s earliestBeginDate is equivalent to the old minimum fetch interval. Just use earliestBeginDate like you have seen in those other questions. And when your app runs a background fetch, you just schedule the next background fetch with its own earliestBeginDate at that point. That is how BGAppRefreshTask works.
If you are asking what is equivalent to UIApplication.backgroundFetchIntervalMinimum, you can just set earliestBeginDate to nil. The documentation tells us that:
Specify nil for no start delay.
Setting the property indicates that the background task shouldn’t start any earlier than this date. However, the system doesn’t guarantee launching the task at the specified date, but only that it won’t begin sooner.
The latter caveat applies to backgroundFetchIntervalMinimum, too. Background fetch, regardless of which mechanism you use, is at the discretion of the OS. It attempts to balance the app’s desire to have current data ready for the user the next time they launch the app with other considerations (such as preserving the device battery, avoiding excessive use of the cellular data plan, etc.).
The earliestBeginDate (like the older backgroundFetchIntervalMinimum) is merely a mechanism whereby you can give the OS additional hints to avoid redundant fetches. E.g., if your backend only updates a resource every twelve hours, there is no point in performing backend refreshes much more frequently than that.
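Putting that together, a minimal sketch might look like the following (the task identifier is hypothetical and must appear under BGTaskSchedulerPermittedIdentifiers in your Info.plist):
import BackgroundTasks

let refreshTaskId = "com.example.app.refresh" // hypothetical identifier

// In application(_:didFinishLaunchingWithOptions:):
BGTaskScheduler.shared.register(forTaskWithIdentifier: refreshTaskId, using: nil) { task in
    handleAppRefresh(task as! BGAppRefreshTask)
}

func scheduleAppRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: refreshTaskId)
    request.earliestBeginDate = nil // no start delay, i.e. the old backgroundFetchIntervalMinimum
    try? BGTaskScheduler.shared.submit(request)
}

func handleAppRefresh(_ task: BGAppRefreshTask) {
    scheduleAppRefresh() // schedule the next refresh before doing the work
    // ... perform the fetch, then report completion:
    task.setTaskCompleted(success: true)
}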
I am writing a cross-platform to-do app with Flutter and Firestore for learning purposes. Currently, I have the following design, and I would like to know if there are better alternatives.
One of the main screens of the app shows a list of all tasks. It does this by subscribing to the corresponding Firestore collection, which we'll say is /tasks for simplicity.
FirebaseFirestore.instance.collection("tasks").snapshots()
Each tile in the ListView of tasks can be clicked. Clicking a tile opens a new screen (with Navigator.push) showing details about that specific task.
Importantly, this screen also needs to update in real-time, so it is not enough to just pass it the (local, immutable) task object from the main screen. Instead, this screen subscribes to the individual Firestore document corresponding to that task.
FirebaseFirestore.instance.collection("tasks").doc(taskId).snapshots()
This makes sense to me logically: the details page only needs to know about that specific document, so it only subscribes to it to avoid receiving unnecessary updates.
The problem is that since the collection-wide subscription for the main screen is still alive while the details screen is open, if the document /tasks/{taskId} gets updated, both listeners will trigger. According to the answers in this, this and this question, this means I will get charged for two (duplicate) reads for any single update to that document.
Furthermore, each task can have subtasks. This is reflected in Firestore as a tasks subcollection for each task. For example, a nested task could have the path: /tasks/abc123/tasks/efg875/tasks/aay789. The main page could show all tasks regardless of nesting by using a collection group query on "tasks". The aforementioned details page also shows the task's subtasks by listening to the subcollection. This allows making complex queries on subtasks (filtering, ordering, etc.), but again the disadvantage is getting duplicate reads for every update to a subtask.
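For reference, the collection group subscription mentioned above would be:
FirebaseFirestore.instance.collectionGroup("tasks").snapshots()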
The alternative designs that occur to me are:
Only keep a single app-wide subscription to the entire set of tasks (be it a flat collection or a collection group query) and do any and all selection, filtering, etc. on the client. For example, the details page of a task would use the same collection-wide subscription and select the appropriate task out of the set every time. Any filtering and ordering of tasks/subtasks would be done on the client.
Advantages: no duplicate reads, minimizes the Firestore cost.
Disadvantages: might be more battery intensive for the client, and code would become more complex as I'd have to select the appropriate data out of the entire set of tasks in every situation.
Cancel the collection-wide subscription when opening the details page and re-start it when going back to the main screen. This means when the details page is open, only updates to that specific task will be received, and without being duplicated as two reads.
Advantages: no duplicate reads.
Disadvantages: re-starting the subscription when going back to the main screen means reading all of the documents in the first snapshot, i.e. one read per task, which might actually make the problem worse. Also, it could be quite complicated to code.
Do any of these designs seem the best? Is there another better alternative I'm missing?
Create a TaskService or something similar that handles listening to the FirebaseFirestore.instance.collection("tasks").snapshots() call, then have the rest of your app subscribe to updates from that service rather than from Firestore itself (you can expose two Stream objects, one for global updates and one for specific updates).
Then you have only one listener reading from your Firestore collection. Everything else is handled app-side.
Pseudo-code:
import 'dart:async';

import 'package:cloud_firestore/cloud_firestore.dart';

class TaskService {
  final List<Task> _tasks = [];
  final StreamController<List<Task>> _signalOnTasks = StreamController.broadcast();
  final StreamController<Task> _signalOnTask = StreamController.broadcast();

  List<Task> get allTasks => _tasks;
  Stream<List<Task>> get onTasks => _signalOnTasks.stream;
  Stream<Task> get onTask => _signalOnTask.stream;

  void init() {
    FirebaseFirestore.instance.collection("tasks").snapshots().listen(_onData);
  }

  void _onData(QuerySnapshot snapshot) {
    // Get/update our tasks (maybe check for duplicates or whatever).
    // Task.fromDoc is a placeholder for however you map a document
    // onto your model class.
    _tasks
      ..clear()
      ..addAll(snapshot.docs.map(Task.fromDoc));

    // Dispatch our signal streams.
    _signalOnTasks.add(List.unmodifiable(_tasks));
    for (final task in _tasks) {
      _signalOnTask.add(task);
    }
  }
}
You can make TaskService an InheritedWidget to get access to it wherever you need it (or use the provider package), then add your listeners to whatever stream you're interested in. You'll just need to check in your listener to onTask that it's the correct task before doing anything with it.
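For example, the details page could subscribe like this (assuming your Task model exposes the document id; remember to cancel the subscription in dispose()):
final subscription = taskService.onTask
    .where((task) => task.id == taskId)
    .listen((task) {
  // Rebuild the details view with the fresh task data.
});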
I have a regional object store. I would like to be able to tell a particular object that I want it deleted 5 days from now.
How would you suggest I implement this?
I don't really want to keep track of the object in a database and, based on time, send delete commands as a separate process. Is there any tag that could be set to make deletion occur at a later time (relative to now, not a specific time in the past)?
There's no functionality built into Google Cloud Storage to do this.
You can configure Lifecycle Management to delete objects according to a number of criteria (including age) - but deleting at a particular date in the future isn't one of the supported conditions and in fact there's no guarantee that a lifecycle condition will run the same day the condition becomes true. Instead you would have to implement this functionality yourself (e.g., in a Compute Engine or App Engine implementation).
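For reference, an age-based rule looks like the following; note that it applies to every object in the bucket once it reaches the given age, not to one specific object, and would be applied with gsutil lifecycle set lifecycle.json gs://your-bucket:
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "Delete"},
        "condition": {"age": 5}
      }
    ]
  }
}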
Is watchman capable of telling the configured command why it's sending a file to that command?
For example:
a file that is new to a folder would possibly send a FILE_CREATE flag;
a file that is deleted would send the FILE_DELETE flag to the command;
a file that's modified would send a FILE_MOD flag, etc.
Perhaps even a folder being deleted (and therefore the files thereunder) would send a FOLDER_DELETE parameter naming the folder, as well as a FILE_DELETE for the files thereunder / a FOLDER_DELETE for the folders thereunder.
Is there such a thing?
No, it can't do that. The reasons why are pretty fundamental to its design.
The TL;DR is that it is a lot more complicated than you might think for a client to correctly process those individual events and in almost all cases you don't really want them.
Most file watching systems are abstractions that simply translate the system-specific notification information into some common form. They don't deal, either very well or at all, with the notification queue overflowing, and don't provide their clients with a way to reliably respond to that situation.
In addition to this, the filesystem can be subject to many and varied changes in a very short amount of time, and from multiple concurrent threads or processes. This makes this area extremely prone to TOCTOU issues that are difficult to manage. For example, creating and writing to a file typically results in a series of notifications about the file and its containing directory. If the file is removed immediately after this sequence (perhaps it was an intermediate file in a build step), by the time you see the notifications about the file creation there is a good chance that it has already been deleted.
Watchman takes the input stream of notifications and feeds it into its internal model of the filesystem: an ordered list of observed files. Each time a notification is received watchman treats it as a signal that it should go and look at the file that was reported as changed and then move the entry for that file to the most recent end of the ordered list.
When you ask Watchman for information about the filesystem it is possible or even likely that there may be pending notifications still due from the kernel. To minimize TOCTOU and ensure that its state is current, watchman generates a synchronization cookie and waits for that notification to be visible before it responds to your query.
The combination of the two things above means that watchman result data has two important properties:
You are guaranteed to have observed all notifications that happened before your query
You receive the most recent information for any given file only once in your query results (the change results are coalesced together)
Let's talk about the overflow case. If your system is unable to keep up with the rate at which files are changing (e.g., you have a big project and are very quickly creating and deleting files and the system is heavily loaded), the OS can't fit all of the pending notifications in the buffer resources allocated to the watches. When that happens, it blows those buffers and sends an overflow signal. What that means is that the client of the watching API has missed some number of events and is no longer in synchronization with the state of the filesystem. If that client maintains state about the filesystem, that state is no longer valid.
Watchman addresses this situation by re-examining the watched tree and synthetically marking all of the files as being changed. This causes the next query from the client to see everything in the tree. We call this a fresh instance result set because it is the same view you'd get when you are querying for the first time. We set a flag in the result so that the client knows that this has happened and can take appropriate steps to repair its own state. You can configure this behavior through query parameters.
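For example, the result of a since query carries an is_fresh_instance field that clients can check (shape sketched here; values are illustrative):
{
  "clock": "c:1234:56",
  "is_fresh_instance": true,
  "files": [ ... ]
}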
In these fresh instance result sets, we don't know whether any given file really changed or not (it's possible that it changed in a way we can't detect via lstat), and even if we can see that its metadata changed, we don't know the cause of that change.
There can be multiple events that contribute to why a given file appears in the results delivered by watchman. We don't record them individually because we can't track them with unbounded history; imagine a file that is incrementally being written to once every second all day long. Do we keep 86400 change entries for it per day on hand and deliver those to our clients? What if there are hundreds of thousands of files like this? We'd have to truncate that data, and at that point the loss in the data reduces how well you can reason about it.
At the end of all of this, it is very rare for a client to do much more than try to read a file or look at its metadata, and generally speaking, they want to do that only when the file has stopped changing. For this use case, watchman-wait, watchman-make and trigger all have the concept of a settle period that causes the change notifications to be delayed in delivery until after the filesystem has stopped changing.
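For triggers, that settle period can be tuned with the settle option (in milliseconds) in the watched root's .watchmanconfig; the value below is just an example:
{
  "settle": 1000
}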
I have a relatively large spreadsheet (300 rows, 30 columns) that I color based on the values in the spreadsheet. I'm accessing the API minimally, using only two calls:
getValues(...) to access all the values of the data range.
setBackgrounds(...) to set all the backgrounds of the data range.
This runs in about half a second or less. However, it gets in the way if I make it run on every edit using onEdit(), but I also don't want it updating at regular time intervals when I'm not editing - that seems like a waste. Is there a good way to make the script run in a "delayed" way, updating at regular intervals only while I'm editing?
Firstly, I would say you should look at Google Sheets' conditional formatting (Format > Conditional formatting menu item in Sheets) -- you may be able to do much of what you need without involving Apps Script at all.
Failing that, you can set up a regular time-based trigger to check for edits and change the backgrounds appropriately. You can support this trigger with a separate onEdit() trigger to record what has changed internally. The flow goes like this (a code sketch follows the list):
A change is made and onEdit() triggers
The onEdit() trigger only records the changed cell locations to a local variable or Cache
A time-based trigger fires every minute/hour/whenever
The time-based trigger checks the cache for edited cells, alters their backgrounds, then clears them from the cache
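A minimal sketch of that flow, assuming a hypothetical recolorCells() helper that wraps your existing getValues()/setBackgrounds() logic:
function onEdit(e) {
  // Record the edited range; the document cache survives between executions.
  const cache = CacheService.getDocumentCache();
  const edited = JSON.parse(cache.get('editedRanges') || '[]');
  edited.push(e.range.getA1Notation());
  cache.put('editedRanges', JSON.stringify(edited), 21600); // keep for up to 6 hours
}

// Install this function on a time-driven trigger (e.g., every minute).
function processEdits() {
  const cache = CacheService.getDocumentCache();
  const edited = JSON.parse(cache.get('editedRanges') || '[]');
  if (edited.length === 0) return; // nothing changed since the last run
  cache.remove('editedRanges');
  recolorCells(edited); // your existing coloring logic
}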
That said, depending on your workflow this approach may not be much better than simply using a time trigger to change the cells directly.