Mobile Services Offline PullAsync only retrieves data where updatedAt date > the latest record - azure-mobile-services

I am using offline data sync in Mobile Services and the following function only retrieves data where UpdatedAt > the largest updatedAt in the userTable:
await userTable.PullAsync(userTable.Where(a => a.UserName == userName));
The first time this function is executed, it retrieves my data correctly. The second time the function executes, with a different username, it only retrieves data where UpdatedAt is greater than the greatest UpdatedAt datetime already present in my SQLite db. If I change an UpdatedAt field in the backend (by setting it to DateTime.Now), that record is retrieved. Is this by design, or am I missing something?

For anybody else having issues with this: I have started another thread here where you will find the complete answer
Basically what it comes down to is this:
This will retrieve all records from the backend where username is donna:
await itemTable.PullAsync(itemTable.Where(a => a.UserName == "donna"));
This will retrieve all records where username is "donna" the first time, and after that only updated records. (incremental)
await itemTable.PullAsync("itemtabledonna", itemTable.Where(a => a.UserName == "donna"));
The first parameter is the queryKey. This is used to track your requests to the backend. A very important thing to know is that there is a restriction on this queryKey:
^[a-zA-Z][a-zA-Z0-9]{0,24}$
Meaning: alphanumeric characters only, starting with a letter, at most 25 characters long. So no hyphens either (at the time of this writing). If your queryKey does not match this regex, no records will be returned. Currently no exception is thrown, and this restriction is not documented.
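Since no exception is thrown, it can help to validate the key yourself before handing it to PullAsync. A minimal sketch of the rule (in Python, purely illustrative; the example keys are made up):

```python
import re

# The queryKey restriction quoted above: starts with a letter,
# then up to 24 more alphanumeric characters (25 characters max).
QUERY_KEY_RE = re.compile(r"^[a-zA-Z][a-zA-Z0-9]{0,24}$")

def is_valid_query_key(key):
    """Return True if `key` satisfies the Mobile Services queryKey rule."""
    return QUERY_KEY_RE.fullmatch(key) is not None

print(is_valid_query_key("itemtabledonna"))    # starts with a letter, alphanumeric
print(is_valid_query_key("item-table-donna"))  # hyphens are rejected
print(is_valid_query_key("a" * 26))            # 26 characters is one too many
```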

PullAsync() is supposed to use an incremental sync (fetching only records with a newer updatedAt than the last record it retrieved) when you pass in a query key. If not, it should execute your query and pull down all matching records.
It sounds like a bug is occurring if you are getting that behavior without passing in a query key.
Also, in the incremental sync case, it is not the latest updatedAt in the SQLite DB that is used, but a value cached from the last time PullAsync() was run (cached under the given query key).
Your updatedAt column also has, by default, a trigger that updates its timestamp whenever the row is modified, so you shouldn't have to take any additional action when using incremental sync.
If the above is not what you are seeing, I recommend submitting a github bug against azure-mobile-services so it can be reviewed.
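The incremental behaviour described above can be pictured as a per-query-key cache of the highest updatedAt pulled so far. A simplified model in Python (not the actual SDK implementation, just the mechanism):

```python
# Simplified model of incremental sync: each query key remembers the
# highest updatedAt it has pulled; later pulls only fetch newer rows.
sync_cache = {}  # query_key -> highest updatedAt seen so far

def pull(query_key, rows):
    """Return rows newer than the cached watermark, then advance it."""
    if query_key is None:
        return rows  # no key: a plain pull, every matching row comes back
    watermark = sync_cache.get(query_key)
    fresh = [r for r in rows if watermark is None or r["updatedAt"] > watermark]
    if fresh:
        sync_cache[query_key] = max(r["updatedAt"] for r in fresh)
    return fresh

rows = [{"id": 1, "updatedAt": 10}, {"id": 2, "updatedAt": 20}]
pull("itemtabledonna", rows)  # first pull: both rows, nothing cached yet
pull("itemtabledonna", rows)  # second pull: empty, nothing newer than 20
```

This is why touching UpdatedAt in the backend makes a record reappear: it moves the record past the cached watermark.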

Related

How to handle future-dated records in PostgreSQL using EF Core

I am working on a microservices architecture for a payroll application.
ORM: EF Core
I have an Employee table in PostgreSQL, where employee details (firstname, lastname, department, etc.) are stored in a jsonb column.
One of the use cases is that I may receive requests for future-dated changes. Example: an employee's designation changes next month, but I receive the request for that change in the current month.
I have two approaches to handle this scenario.
Approach 1 :
When I get a future-dated record (effective date > current date), I will store it in a separate table, not in the employee master table.
I will create a console application that runs every day (cron), picks up the records that are due (effective date == current date), and updates the employee master table.
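The daily job in Approach 1 boils down to moving due rows from the staging table into the master table. A rough sketch of that step (all table and field names here are assumptions, and plain dicts stand in for the database):

```python
from datetime import date

# Staging "table": future-dated changes waiting to be applied.
pending_changes = [
    {"employee_id": 1, "effective_date": date(2024, 1, 1),
     "details": {"designation": "Manager"}},
    {"employee_id": 2, "effective_date": date(2024, 2, 1),
     "details": {"designation": "Lead"}},
]
# Master "table": the current employee details.
employee_master = {1: {"designation": "Engineer"}, 2: {"designation": "Engineer"}}

def apply_due_changes(today):
    """Apply staged changes whose effective date has arrived, then drop them."""
    due = [c for c in pending_changes if c["effective_date"] <= today]
    for change in due:
        employee_master[change["employee_id"]].update(change["details"])
        pending_changes.remove(change)

apply_due_changes(date(2024, 1, 15))
# employee 1 is now a Manager; employee 2's change is still pending
```

Using `<= today` rather than `== today` makes the job self-healing if a cron run is missed.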
Approach 2:
Almost the same as approach 1, but instead of using a separate table for storing future-dated records, I will update the record in the employee master table directly.
If I go with approach 2:
I need to delete the existing record when the effective date becomes the current date.
When I do a GET request, I should get only the current record, not the future one. To achieve this, I need to add a condition checking the effective date. Since all employee details are stored in a jsonb column, I would have to fetch the entire record set (current and future-dated) and filter out everything but the current records.
I feel approach 1 is better. Please help me with this; I would also like to know of other approaches that may fit this use case.
Thanks,
Revathi

Ordering Firebase posts Chronologically Swift

I have added posts to firebase and I am wondering how I can pull the posts chronologically based on when the user has posted them.
My Database is set up like below
The first node after comments is the User ID, and then the posts are underneath that. Obviously, these posts are in order; however, if a new user posts something in between "posting" and "another 1", for example, how would I pull it so that it shows up in between?
Is there a way to remove the autoID and just use the userID as a key? The problem I run into is that the previous post is then overwritten.
I am accepting this answer as it is the most thorough. What I did to solve my problem was create the unique key as the first node, then use the UID and the comment as children. I then pull the unique keys (which are in order) and find the comment associated with each UID.
The other answers all have merit but a more complete solution includes timestamping the post and denormalizing your data so it can be queried (assuming it would be queried at some point). In Firebase, flatter is better.
posts
  post_0
    title: "Posts And Posting"
    msg: "I think there should be more posts about posting"
    by_uid: "uid_0"
    timestamp: "20171030105500"
    inv_timestamp: "-20171030105500"
    uid_time: "uid_0_20171030105500"
    uid_inv_time: "uid_0_-20171030105500"
comments
  comment_0
    for_post: "post_0"
    text: "Yeah, posts about posting are informative"
    by_uid: "uid_1"
    timestamp: "20171030105700"
    inv_timestamp: "-20171030105700"
    uid_time: "uid_1_20171030105700"
    uid_inv_time: "uid_1_-20171030105700"
  comment_1
    for_post: "post_0"
    text: "Noooo mooooore posts, please"
    by_uid: "uid_2"
    timestamp: "20171030110300"
    inv_timestamp: "-20171030110300"
    uid_time: "uid_2_20171030110300"
    uid_inv_time: "uid_2_-20171030110300"
With this structure we can:
- get posts and their comments, ordered ascending or descending
- query for all posts within the last week
- get all comments or posts made by a user
- get all comments or posts made by a user within a date range (tricky, huh)
I threw a couple of other key: value pairs in there to round it out a bit: compound values, querying ascending and descending, a timestamp.
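The compound values above can be generated at write time. A sketch of that (in Python, purely illustrative; the field names follow the structure shown above):

```python
def post_keys(uid, timestamp):
    """Build the denormalized timestamp fields used in the structure above.

    `timestamp` is a yyyymmddhhmmss value as an integer; the inverted
    form is simply its negation.
    """
    return {
        "timestamp": str(timestamp),
        "inv_timestamp": str(-timestamp),
        "uid_time": "{}_{}".format(uid, timestamp),
        "uid_inv_time": "{}_-{}".format(uid, timestamp),
    }

post_keys("uid_1", 20171030105700)
```

When the inverted field is stored as a number, ordering by it ascending yields newest-first; the uid_time compounds allow the "per-user within a date range" queries via startAt/endAt on string bounds.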
You cannot use the userID as the key instead of the autoID, because keys must be unique; that's why Firebase just updates the value instead of adding another entry with the same key. Firebase nodes are normally ordered chronologically by default, so if you pull the values, they should come back in the right order. If you want to make sure, you can add a timestamp value set to the server timestamp, and after pulling the data you can order it by that timestamp (I believe Firebase also saves a timestamp automatically that you can access somehow, but you would need to look that up in the documentation). If I understood correctly, to accomplish what you want you need to change the structure of your database; for example, you could use the autoID as the key and save the userID you wanted to use as a value. Hope I got your idea right; if not, be more precise and I will try to help.
Firebase keys are chronological by default - it's built into their key generation algorithm. I think you need to restructure/rethink your data.
Your POSTS database should (possibly) have the comments listed with each post, and you can then duplicate them on the user record if faster retrieval by user is needed. So something like:
POSTS
- post (unique key)
  - title (text)
  - date (timestamp)
  - comments
    - comment (unique key)
      - text (text)
      - user_id (user key)
      - date (timestamp)
When you pull the comments, you shouldn't be pulling them from a bunch of different users; that could result in a lot of queries and a long load time. Instead, the comments can be added (chronologically, of course) to the post object itself, and also to the user if you want to keep a reference there. Unlike in MySQL, NoSQL databases can tolerate quite a bit of this data duplication.
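One way to picture that suggestion: comments live under the post, and the user record keeps only lightweight references. A sketch (field names are assumptions, plain dicts stand in for the database):

```python
# Post with comments embedded, so one read returns everything to render.
posts = {
    "post_0": {
        "title": "Posts And Posting",
        "date": 20171030105500,
        "comments": {
            "comment_0": {"text": "Informative", "user_id": "uid_1",
                          "date": 20171030105700},
            "comment_1": {"text": "No more", "user_id": "uid_2",
                          "date": 20171030110300},
        },
    }
}
# Duplicate a minimal reference on the user record for fast "my comments" lookups.
users = {"uid_1": {"comment_refs": [("post_0", "comment_0")]}}

def comments_in_order(post_id):
    """Return a post's comments sorted chronologically by their date field."""
    return sorted(posts[post_id]["comments"].values(), key=lambda c: c["date"])

[c["text"] for c in comments_in_order("post_0")]
```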

MongoDB - Getting first set of $lt

I'm using MongoDB to store data and when retrieving some, I need a subset which I'm uncertain how to obtain.
The situation is this: items are created in batches, with about a month between batches. When a new batch is added, the previous batch gets a deleted_on date set.
Now, depending on when a customer was created, they can always retrieve the current (not deleted) set of items, plus all items in the one batch that hadn't been deleted yet when they registered.
Thus, I want to retrieve records whose deleted_on is either null, or the closest date in the future from the customer's added_on date.
In all of my solutions, I run into one of the below problems:
I can get all items that were deleted before the customer was created - but they include all batches - not just the latest one.
I can get the first item that was deleted after the customer was created, but nothing else from the same batch.
I can get all items, but I have to modify the result set afterwards to remove all items that don't apply.
Having to modify the result afterwards is fine, I guess, but undesirable. What's the best way to handle this?
Thanks!
PS. The added_on field (on the customer) and deleted_on (on the items) are indexed.
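One way to express the requirement is two queries: first find the closest deleted_on after added_on, then fetch items whose deleted_on is null or exactly that date. Sketched here against an in-memory list (with a real driver this would be a sort/limit followed by an $or query; all field names follow the question):

```python
def items_for_customer(items, added_on):
    """Current items plus the one batch that was live when the customer joined.

    Step 1: find the closest deleted_on after added_on (the batch that was
    still live at registration time). Step 2: keep items whose deleted_on
    is either null or exactly that date.
    """
    future_deletes = sorted(d for d in (i["deleted_on"] for i in items)
                            if d is not None and d > added_on)
    batch_date = future_deletes[0] if future_deletes else None
    return [i for i in items
            if i["deleted_on"] is None or i["deleted_on"] == batch_date]

items = [
    {"id": 1, "deleted_on": None},  # current batch
    {"id": 2, "deleted_on": 200},   # batch live when the customer joined
    {"id": 3, "deleted_on": 300},   # a later batch's deletion date
    {"id": 4, "deleted_on": 50},    # deleted before the customer joined
]
items_for_customer(items, added_on=100)  # ids 1 and 2
```

Both queries hit the indexed fields, so no post-filtering of the full result set is needed.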

How to delete only a subset of posted data in firebase

Posting data to Firebase generates new items similar to the following example:
log
-K4JVL1PZUpMc0r0xYcw
-K4jVRhOeL7fH6CoNNI8
-K4Jw0Uo0gUcxZ74MWBO
I'm struggling to find how to, e.g., delete entries that are older than x days, preferably with the REST API. Suggestions appreciated.
You can do a range query.
This technique requires you to have a timestamp property for each record.
Using orderBy and endAt you can retrieve all of the items before a specified date.
curl 'https://<my-firebase-app>.firebaseio.com/category/.json?orderBy="timestamp"&endAt=1449754918067'
Then with the result, you can delete each individual item.
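For the "older than x days" case, the endAt value is just a millisecond epoch timestamp from x days ago. A sketch of building the query URL and the per-item delete URLs (in Python; the app name, `/log` path, and `timestamp` field are placeholders matching the examples above):

```python
from datetime import datetime, timedelta

def cutoff_ms(days, now=None):
    """Millisecond epoch timestamp for `days` days before `now`."""
    now = now or datetime.utcnow()
    return int((now - timedelta(days=days)).timestamp() * 1000)

def range_query_url(base, days):
    # The orderBy value must be wrapped in quotes in the actual request.
    return '{}/log.json?orderBy="timestamp"&endAt={}'.format(base, cutoff_ms(days))

def delete_url(base, key):
    # Each key returned by the range query is then deleted individually.
    return "{}/log/{}.json".format(base, key)

range_query_url("https://<my-firebase-app>.firebaseio.com", 30)
delete_url("https://<my-firebase-app>.firebaseio.com", "-K4JVL1PZUpMc0r0xYcw")
```

Note that the range query only works if `timestamp` is indexed via an `.indexOn` rule on `/log`.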

Select new or updated records from a table in PostgreSQL

Is it possible to get all the new or updated records from one table in PostgreSQL for a specified date?
Something like this:
SELECT * FROM anyTable WHERE dt_insert = '2015-01-01' OR dt_update = '2015-01-01'
Thanks
You can only do this if you added a trigger-maintained field that keeps track of the last change time.
There is no internal row timestamp in PostgreSQL, so in the absence of a trigger-maintained timestamp for the row, there's no way to find rows changed/added after a certain time.
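What a trigger-maintained column buys you, modelled here in Python: every write stamps the row, so the question's query becomes a plain filter on that column (column and table names are assumptions; in PostgreSQL the stamping would be done by a BEFORE INSERT OR UPDATE trigger, not application code):

```python
from datetime import datetime

table = {}  # id -> row; stands in for anyTable

def write(row_id, values, now):
    """Insert or update a row, stamping dt_change the way a trigger would."""
    row = table.setdefault(row_id, {})
    row.update(values)
    row["dt_change"] = now

def changed_since(since):
    """Equivalent of: SELECT * FROM anyTable WHERE dt_change >= since"""
    return [r for r in table.values() if r["dt_change"] >= since]

write(1, {"name": "a"}, datetime(2015, 1, 1))
write(2, {"name": "b"}, datetime(2015, 2, 1))
write(1, {"name": "a2"}, datetime(2015, 3, 1))  # update re-stamps row 1
changed_since(datetime(2015, 2, 1))  # rows 1 and 2
```

Note that only the latest change per row is visible this way; if you need the full change history, the trigger would have to append to an audit table instead.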
PostgreSQL does have internal information on the transaction ID that wrote a row, stored in the xmin hidden column. There's no record of which transaction ID committed when, though, until PostgreSQL 9.5, which keeps track of this if and only if the new track_commit_timestamp setting is turned on. Additionally, PostgreSQL eventually clears the creator transaction ID information from a tuple because it re-uses transaction IDs, so this only works for fairly recent transactions.
In other words: it's kind-of possible in a rough way, if you understand the innards of the database, but should really only be used for forensic and recovery purposes.