Fetch large set using CKQueryOperation and CKReference - cloudkit

I have a CKReference attached to 4000 records linked to one CKRecord. They have no action (CKReferenceActionNone).
I want to fetch all of these records with CKQueryOperation, using an NSPredicate as follows:
CKReference* recordToMatch = [[CKReference alloc] initWithRecordID:backupRecord.recordID action:CKReferenceActionNone];
NSPredicate* predicate = [NSPredicate predicateWithFormat:@"parentRecord == %@", recordToMatch];
Normally when I use CKQueryOperation I can loop with the returned CKQueryCursor to fetch more than 100 records. But when querying by CKReference, it works for the first 100 and then the next 100 (200 total), but after that no cursor is returned.
Is there any way to fetch 4000 records using CKReference?
Thanks a lot!

I’m unsure if this is the exact cause of your cursor issue, but there is a limit of 750 references per record.
There is a hard limit to the number of references any one record can have. This limit is set to 750 references. Any attempt to exceed this limit will result in an error from the server.
You can see that documented here: https://developer.apple.com/documentation/cloudkit/ckrecord/reference
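Independently of the 750-reference limit, the usual way to drain a large result set is to keep re-running the query from the returned cursor until no cursor comes back. Here is a minimal sketch of that loop in Python, with `fetch_page` as a hypothetical stand-in for one CKQueryOperation execution (returning a page of records plus the next cursor, or None when done):

```python
# Generic cursor-pagination loop, the same shape as CKQueryOperation's
# queryCompletionBlock/cursor handoff. `fetch_page` is a hypothetical
# stand-in for one query execution: it takes the previous cursor (None on
# the first call) and returns (records, next_cursor); next_cursor is None
# when there are no more pages.
def fetch_all(fetch_page):
    records = []
    cursor = None
    while True:
        page, cursor = fetch_page(cursor)
        records.extend(page)
        if cursor is None:
            return records

# Stub demonstrating the loop over three fake pages of record IDs.
def make_stub(pages):
    def fetch_page(cursor):
        i = 0 if cursor is None else cursor
        next_cursor = i + 1 if i + 1 < len(pages) else None
        return pages[i], next_cursor
    return fetch_page

print(fetch_all(make_stub([[1, 2], [3, 4], [5]])))  # [1, 2, 3, 4, 5]
```

In CloudKit terms: when the completion block hands you a non-nil cursor, create a new CKQueryOperation initialized with that cursor and add it to the database again.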

Related

Couchbase N1QL Query getting distinct on the basis of particular fields

I have a document structure which looks something like this:
{
...
"groupedFieldKey": "groupedFieldVal",
"otherFieldKey": "otherFieldVal",
"filterFieldKey": "filterFieldVal"
...
}
I am trying to fetch all documents which are unique with respect to groupedFieldKey. I also want to fetch otherFieldKey from ANY of these documents. The otherFieldKey value has minor changes from one document to another, but I am comfortable with getting ANY of these values.
SELECT DISTINCT groupedFieldKey, otherFieldKey
FROM bucket
WHERE filterFieldKey = "filterFieldVal";
This query fetches all the documents because of the minor variations.
SELECT groupedFieldKey, maxOtherFieldKey
FROM bucket
WHERE filterFieldKey = "filterFieldVal"
GROUP BY groupedFieldKey
LETTING maxOtherFieldKey = MAX(otherFieldKey);
This query works as expected, but is taking a long time due to the GROUP BY step. As this query is used to show products in the UI, this is not desirable. I have tried adding indexes, but they have not sped up the query.
Actual details of the records:
Number of records = 100,000
Size per record = Approx 10 KB
Time taken to load the first 10 records: 3s
Is there a better way to do this? A way of getting DISTINCT on only particular fields would be good.
EDIT 1:
You can follow this discussion thread in Couchbase forum: https://forums.couchbase.com/t/getting-distinct-on-the-basis-of-a-field-with-other-fields/26458
GROUP BY must materialize all the matching documents. You can try a covering index:
CREATE INDEX ix1 ON bucket(filterFieldKey, groupedFieldKey, otherFieldKey);

How to fetch all documents from a collection using Firestore API?

https://firebase.google.com/docs/firestore/use-rest-api#making_rest_calls
Hi,
I want to fetch all the documents from my collection using REST for reporting purposes.
I tried using the list method in the API explorer, but I am only getting a maximum of 30 documents at a time, and for the next page I have to use the nextPageToken.
I have even tried setting pageSize to 100, but it still returns only 30 documents, even though pageSize is supposed to set the maximum number of documents to return. Is there any way I can fetch all documents?
I have around 3-4k simple documents.
The example here works for me: https://stackoverflow.com/a/48889822/2441655
Example:
https://firestore.googleapis.com/v1/projects/YOUR_PROJECT/databases/(default)/documents/YOUR_DOC_PATH?pageSize=300
You can use paging by finding the "nextPageToken" at the end of the json, then inserting it like so:
https://firestore.googleapis.com/v1/projects/YOUR_PROJECT/databases/(default)/documents/YOUR_DOC_PATH?pageSize=300&pageToken=NEXT_PAGE_TOKEN_HERE
However, it still limits the maximum pageSize to 300 for me. (It's odd that it limits it to 30 for you.)
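The URL construction above is mechanical, so here is a small Python sketch that builds those same list URLs (YOUR_PROJECT, YOUR_DOC_PATH, and the token value are placeholders, as in the examples):

```python
# Build Firestore REST "list documents" URLs with pageSize and an optional
# pageToken, matching the URL shape shown above.
from urllib.parse import urlencode

def list_url(project, collection, page_size, page_token=None):
    base = (f"https://firestore.googleapis.com/v1/projects/{project}"
            f"/databases/(default)/documents/{collection}")
    params = {"pageSize": page_size}
    if page_token:
        params["pageToken"] = page_token
    return base + "?" + urlencode(params)

print(list_url("YOUR_PROJECT", "YOUR_DOC_PATH", 300))
print(list_url("YOUR_PROJECT", "YOUR_DOC_PATH", 300, "NEXT_PAGE_TOKEN_HERE"))
```

To drain the whole collection, request the first page, read nextPageToken from the JSON response, and keep requesting with that token until the response no longer contains one.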

Get records from db within a limit range

Assume that my database returns 1000 records based on a query that I have.
What I wish to do is using the same query, get the first 100 records, then get the next 100 and so on until I have all the 1000.
That is, I do not want all 1000 records in one go. I need them in batches of 100.
So something like this perhaps:
query = {
'$from': 0,
'$to': 100
}
with the first request followed by
query = {
'$from': 100,
'$to': 200
}
for the next request and so on.
I don't want all 1000 results at once. I wish to be able to specify the start and end counts so that I get the result in batches - is this possible in mongodb?
You could use skip and limit for your queries.
For example..
db.myCollection.find().limit(100) //Get the first 100 records
db.myCollection.find().skip(100).limit(100) //Get the next 100 records
This is an expensive method, though; I would much rather get all 1000 and "separate" them client-side.
Here are links to both methods' docs:
http://docs.mongodb.org/manual/reference/method/cursor.skip/
http://docs.mongodb.org/manual/reference/method/cursor.limit/
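The skip/limit pattern generalizes to a simple loop: keep advancing the offset by the batch size until a query comes back empty. This sketch uses an in-memory list as a stand-in for the collection so it runs without a MongoDB server; with pymongo you would call `collection.find().skip(offset).limit(batch_size)` instead of the stub:

```python
# Batched fetching with skip/limit. `find_batch(skip, limit)` mimics
# db.myCollection.find().skip(skip).limit(limit); here it slices an
# in-memory list so the pattern is runnable standalone.
def fetch_in_batches(find_batch, batch_size=100):
    offset = 0
    while True:
        batch = find_batch(offset, batch_size)
        if not batch:
            return
        yield batch
        offset += batch_size

docs = [{"_id": i} for i in range(1000)]
find_batch = lambda skip, limit: docs[skip:skip + limit]

batches = list(fetch_in_batches(find_batch))
print(len(batches))  # 10 batches of 100
```

One caveat with real queries: without a stable sort order, documents can shift between successive skip/limit queries, so you normally add an explicit sort (e.g. on _id) when paging this way.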

How to obtain the records adjacent to a given record

For example, I want to obtain the posts immediately before and after a given record, based on the created time field.
I tried to use the following statements to obtain the articles:
# created is the creation time of the current article
# previous post
prev_post = db.Post.find({'created': {'$lt': created}}, sort=[('created', -1)], limit=1)
# next post
next_post = db.Post.find({'created': {'$gt': created}}, sort=[('created', 1)], limit=1)
The results turn out to be discontinuous; sometimes several records are skipped. I don't know why. Maybe I misunderstand find?
Help please.
It may seem strange behaviour indeed, but MongoDB does not guarantee the order of stored records, unless you're querying an array (in which records are kept in insertion order). I believe what MongoDB does is reach the first document that matches your query and return it.
Bottom line: if the logic requires neighbour records, use arrays.
I think the problem is your created field. If two records have the same time, the system doesn't know which one to choose. You can use _id as a tiebreaker; give it a try.
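The tiebreaker idea can be shown without a database: order posts by the compound key (created, _id) so that equal created times still give each post a deterministic previous and next neighbour. This pure-Python sketch stands in for a compound sort=[('created', 1), ('_id', 1)] query:

```python
# Find the previous and next post around `current` under a total order on
# (created, _id). Ties on created are broken by _id, so no neighbour is
# ever ambiguous or skipped.
def neighbours(posts, current):
    key = lambda p: (p["created"], p["_id"])
    ordered = sorted(posts, key=key)
    i = ordered.index(current)
    prev_post = ordered[i - 1] if i > 0 else None
    next_post = ordered[i + 1] if i + 1 < len(ordered) else None
    return prev_post, next_post

# Two posts share created == 10; the _id tiebreaker keeps them ordered.
posts = [{"_id": 1, "created": 10},
         {"_id": 2, "created": 10},
         {"_id": 3, "created": 20}]
prev_post, next_post = neighbours(posts, posts[1])
print(prev_post["_id"], next_post["_id"])  # 1 3
```

In the real query you would mirror this by sorting on both fields and using a compound comparison (created less than the current created, or equal created with a smaller _id) rather than comparing created alone.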

Possible to limit results returned by NSArray filteredArrayUsingPredicate

I'm using [NSArray filteredArrayUsingPredicate] to get an array of objects that match certain conditions within an original array of 20,000 objects. Is it possible to limit the number of results that this method returns in order to speed it up?
This would be similar to the LIMIT clause in SQL. Thanks.
No, not with predicates alone. But you could add a category on NSArray that filters and limits the result for you:
- (NSArray*)filteredArrayUsingPredicate:(NSPredicate*)predicate
resultLimit:(NSUInteger)limit;