CloudKit record exists (added days ago), is not returned by a query, but can be fetched by ID

I have a situation where a CloudKit record is not returned by a query (and also doesn't appear in the CloudKit dashboard), but it exists when fetching it by its ID.
The record was added days ago, so it really should have been indexed.
The record ID metadata index has always been set for the record type.
The query I use is fetching all records for a particular record type in a custom zone.
Strangely, when I search for the record in the dashboard and select it, the record name field is blank on the detail panel.
It's very peculiar, and very concerning!
Anyone have any ideas?

Related

How to handle future-dated records in Postgres using EF Core

I am working on a microservices architecture for a payroll application.
ORM: EF Core
I have an Employee table in Postgres, where employee details (firstname, lastname, department, etc.) are stored in a jsonb column.
One of the use cases is that I may receive requests for future-dated changes. Example: an employee's designation changes next month, but I receive the request for that change in the current month.
I have two approaches to handle this scenario.
Approach 1 :
When I get a future-dated record (effective date > current date), I will store it in a separate table, not in the employee master table.
I will create a console application that runs every day (cron), picks up the records that are due (effectivedate == currentdate), and updates the employee master table.
Approach 2:
Almost the same as approach 1, except that instead of using a separate table to store future-dated records, I will keep them in the employee master table.
If I go with approach 2:
I need to delete the existing record when its effective date becomes the current date.
When I handle a GET request, I should return only the current record, not future ones. To achieve this, I need to add a condition checking the effective date. Since all employee details are stored in a jsonb column, I have to fetch the full set of current and future-dated records and filter out only the current one.
I feel approach 1 is better. Please help me with this; I would also like to know of other approaches that may fit this use case.
Thanks,
Revathi
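For illustration, approach 1 could be sketched roughly as follows in EF Core; the entity, context, and property names (PendingEmployeeChange, PayrollDbContext, DetailsJson) are hypothetical, not from the question:
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Staging row for a future-dated change (hypothetical shape).
public class PendingEmployeeChange
{
    public int Id { get; set; }
    public int EmployeeId { get; set; }
    public DateTime EffectiveDate { get; set; }
    public string DetailsJson { get; set; } = ""; // replacement jsonb payload
}

// PayrollDbContext is assumed to expose DbSet<Employee> Employees (the master
// table with the jsonb details column) and DbSet<PendingEmployeeChange>
// PendingEmployeeChanges (the staging table).
public static class FutureChangeApplier
{
    // Called once a day by the cron-driven console application.
    public static async Task ApplyDueChangesAsync(PayrollDbContext db)
    {
        var today = DateTime.UtcNow.Date;

        // <= rather than == so a change is not lost if a daily run is missed.
        var due = await db.PendingEmployeeChanges
            .Where(c => c.EffectiveDate <= today)
            .ToListAsync();

        foreach (var change in due)
        {
            var employee = await db.Employees.FindAsync(change.EmployeeId);
            if (employee is not null)
                employee.DetailsJson = change.DetailsJson; // overwrite the jsonb column
            db.PendingEmployeeChanges.Remove(change);      // staging row consumed
        }

        await db.SaveChangesAsync();
    }
}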

Update and Insert vs Upsert in PostgreSQL

Say, I am building a camera app. Every time the user clicks a photo, the image is stored on the cloud. Because I want to restrict how many images are stored on the cloud, the app gets 10 URLs in an array called listURLs when it is initialised.
The first 10 clicks get PUT into the cloud, exhausting listURLs. Then, every time a click happens, a coin toss determines whether the latest click replaces an existing click on the cloud. Typical numbers would be 50 clicks, first 10 clicks get assigned a URL, and of the remaining 40 clicks, 20 of them overwrite an existing URL.
I store records of each app session in a Postgres DB. Each session will have an ID and all instances of clicks (which may or may not have a corresponding url). I also need to know the url corresponding to each click, if one exists. So, if there are 30 clicks, I will need to know which 10 of these have a corresponding url.
I can think of two ways of storing this data.
Case 1: a single table tblClicksURLs with click_id, url, and url_active as its fields. Every time a click_id and a non-null url need to be inserted, update all other records with the same url to have url_active set to false.
Case 2: two tables, tblClicks and tblURLs, where tblURLs has a click_id foreign key. Every time a click_id and a non-null url need to be inserted, the click_id gets inserted into tblClicks, and the click_id and url get upserted into tblURLs. The upsert is keyed on whether the url already exists in tblURLs, so for a given url there will only ever be one click_id in tblURLs.
So, in Case 1, I will have an UPDATE of url_active followed by INSERT on the same table. In Case 2, I will have an INSERT into one table and an UPSERT into another. I will need indexing on click_id, but not on url.
If you are looking at writes of > 10k rows per second, maybe even more, which of these two would be more efficient? Assume that the numbers per session are similar to the one quoted above (50 clicks, etc.)
I could also register a created_at datetime for each record in Case 1, and just use the first non-null url ordered reverse-chronologically. But, I am trying to avoid this, unless the performance benefits are enormous.
After thinking about it for a while, I decided to use UPSERT along with a Unique Constraint on tblClicksURLs. The constraint is on (url, url_active). Every time a new (click_id, url, url_active) record needs to be added, this record is UPSERT'ed. On conflict url_active is set to null. So, all records with this url will have their url_active set to null.
I then use
RETURNING (xmax = 0) AS inserted
as discussed here to check whether the record was inserted or updated. If the record was updated (the url already existed), I then set the url_active field of the latest record to true.
In this way, if a url is being inserted for the first time, it happens in a single transaction. If the url exists, it happens via two transactions.
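For illustration, here is a minimal sketch of that upsert via Npgsql; the connection string and sample values are assumptions, while the (url, url_active) unique constraint and the xmax trick follow the description above:
using Npgsql;

var clickId = 42;                              // hypothetical sample values
var url = "https://example.com/img/1.jpg";

await using var conn = new NpgsqlConnection("Host=localhost;Database=app");
await conn.OpenAsync();

const string upsert = @"
    INSERT INTO tblClicksURLs (click_id, url, url_active)
    VALUES (@clickId, @url, TRUE)
    ON CONFLICT (url, url_active) DO UPDATE SET url_active = NULL
    RETURNING (xmax = 0) AS inserted;";

await using var cmd = new NpgsqlCommand(upsert, conn);
cmd.Parameters.AddWithValue("clickId", clickId);
cmd.Parameters.AddWithValue("url", url);

// xmax = 0 means the row was freshly inserted; otherwise the conflict path
// ran and the existing row's url_active was set to NULL.
var inserted = (bool)(await cmd.ExecuteScalarAsync() ?? false);
if (!inserted)
{
    // Second step described above: set url_active of the latest record to true.
}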

Oracle Forms "you cannot update this record"

I have a procedure in which I get values from different tables and calculate a certain decimal number. After that, I try to post it to a form text field which is a database item (update and insert are allowed in the settings of both the block and the item). Everything else works fine, but the result won't show on the item and won't save to the database field. I get the error
"you cannot update this record".
Can someone help? I have been working on it for two days now and can't find anything.
Did you check if your user has update access on the table?
Also check whether there are database triggers on the table that prevent you from updating the record.

Mobile Services Offline PullAsync only retrieves data where updatedAt date > the latest record

I am using offline data sync in Mobile Services and the following function only retrieves data where UpdatedAt > the largest updatedAt in the userTable:
await userTable.PullAsync(userTable.Where(a => a.UserName == userName));
The first time this function is executed, it retrieves my data correctly. The second time the function executes, with a different username, it will only retrieve data where UpdatedAt is greater than the greatest UpdatedAt datetime already present in my SQLite db. If I change an UpdatedAt field in the backend (by setting it to DateTime.Now), this record is retrieved. Is this by design, or am I missing something?
For anybody else having issues with this: I have started another thread here where you will find the complete answer
Basically what it comes down to is this:
This will retrieve all records from the backend where username is donna:
await itemTable.PullAsync(itemTable.Where(a => a.UserName == "donna"));
This will retrieve all records where username is "donna" the first time, and after that only updated records (incremental sync):
await itemTable.PullAsync("itemtabledonna", itemTable.Where(a => a.UserName == "donna"));
The first parameter is the queryKey. This is used to track your requests to the backend. A very important thing to know is that there is a restriction on this queryKey:
^[a-zA-Z][a-zA-Z0-9]{0,24}$
Meaning: alphanumeric characters, max 25 characters long. So no hyphens either (at the time of this writing). If your queryKey does not match this regex, no records will be returned. There is currently no exception thrown, and no documentation on this.
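Since no exception is thrown, it may be worth validating the queryKey up front; a tiny sketch using the regex quoted above:
using System.Text.RegularExpressions;

static bool IsValidQueryKey(string queryKey) =>
    Regex.IsMatch(queryKey, "^[a-zA-Z][a-zA-Z0-9]{0,24}$");

// IsValidQueryKey("itemtabledonna")   == true
// IsValidQueryKey("item-table-donna") == false (hyphens are rejected)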
PullAsync() is supposed to use an incremental sync (getting only records that have a newer date than the last record it retrieved) when you pass in a query key. If not, it should execute your query and pull down all matching records.
It sounds like a bug is occurring if you are getting that behavior without passing in a query key.
Also, in the incremental sync case, it is not the latest UpdatedAt in the SQLite DB that is used, but a version cached the last time PullAsync() was run (cached under the given query key).
Your updatedAt column will also, by default, have a trigger that updates its timestamp whenever the row is modified, so you shouldn't have to take any additional action when using incremental sync.
If the above is not what you are seeing, I recommend submitting a github bug against azure-mobile-services so it can be reviewed.

Insert record in table if it does not exist in iPhone app

I am obtaining a json array from a url and inserting the data into a table. Since the contents of the url are subject to change, I want to make a second connection to the url, check for updates, and insert new records into my table using sqlite3.
The issues that I face are:
1) My table doesn't have a primary key
2) The url lists the changes made on the same day. Hence, if I run my app multiple times, I get duplicate entries when I insert values into my database. I want a check so that same-day duplicate entries are removed. The problem could be solved by adding a constraint, but since the url itself has duplicated values, I find that difficult.
The only way I can see to do it, if you have no primary key or anything unique to each record, is to go through the new entries as they come in and, for each one, check whether the exact same data already exists in the database. If it doesn't, you add it; if it does, you skip over it.
You could even create a unique key yourself for each entry, built from a concatenation of each column of the table. That way you can quickly check whether the entry already exists in the database.
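For illustration, one way to build such a synthetic key is to hash the concatenated columns (the column values passed in are placeholders):
using System.Security.Cryptography;
using System.Text;

// Joins the columns with a separator unlikely to appear in the data, then
// hashes, so the key is stable and fixed-length for the duplicate check.
static string RowKey(params string[] columns) =>
    Convert.ToHexString(
        SHA256.HashData(Encoding.UTF8.GetBytes(string.Join("\u001f", columns))));

// e.g. RowKey(title, date, amount) before inserting the row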
I see two possibilities depending on your setup:
You have a column set up as UNIQUE (whether through a PRIMARY KEY or not). In this case, you can use the ON CONFLICT clause:
http://www.sqlite.org/lang_conflict.html
If you find this construct a little confusing, you can instead use "INSERT OR REPLACE" or "INSERT OR IGNORE" as described here:
http://www.sqlite.org/lang_insert.html
You do not have a column set up as UNIQUE. In this case, you will need to SELECT first to check for duplicate data and, based on the result, INSERT, UPDATE, or do nothing.
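A minimal sketch of the first possibility; the question targets the iPhone sqlite3 C API, but the SQL is identical, shown here via Microsoft.Data.Sqlite with an assumed items table that has a UNIQUE row_key column:
using Microsoft.Data.Sqlite;

var rowKey = "abc123";              // hypothetical values
var payload = "{\"name\":\"x\"}";

using var conn = new SqliteConnection("Data Source=app.db");
conn.Open();

var cmd = conn.CreateCommand();
cmd.CommandText =
    "INSERT OR IGNORE INTO items (row_key, payload) VALUES ($key, $payload)";
cmd.Parameters.AddWithValue("$key", rowKey);
cmd.Parameters.AddWithValue("$payload", payload);

// Returns 1 when a new row was inserted, 0 when the UNIQUE constraint
// matched an existing row and the insert was ignored.
int rowsInserted = cmd.ExecuteNonQuery();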
A more common and robust way to handle this is to associate a timestamp with each data item on the server. When your app interrogates the server, it provides the timestamp corresponding to the last time it synced. The server then queries its database and returns all values timestamped later than the timestamp the app provided. It also returns a new timestamp value for the app to store and use on the next sync.
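A rough server-side sketch of that scheme; SyncResponse, AppDbContext, Item, and UpdatedAt are hypothetical names:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public record SyncResponse(List<Item> Changed, DateTime ServerTimestamp);

// AppDbContext is assumed to expose DbSet<Item> Items, where each Item
// carries an UpdatedAt timestamp maintained by the server on every write.
public static class SyncService
{
    public static async Task<SyncResponse> GetChangesAsync(
        AppDbContext db, DateTime lastSync)
    {
        var now = DateTime.UtcNow; // returned for the app to store for the next sync
        var changed = await db.Items
            .Where(i => i.UpdatedAt > lastSync) // only rows modified since the last sync
            .ToListAsync();
        return new SyncResponse(changed, now);
    }
}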