DropboxServerException - Invalid cursor - dropbox-api

I'm getting the following back when fetching updates for one of our users.
DropboxServerException (nginx): 400 Bad Request (Invalid "cursor"
parameter: this cursor is for a different user.)
How can this be remedied? I don't understand how the cursor is tied to a particular user. Can the cursor be refetched/refreshed to be valid?

Related

Using github.com/icza/minquery to directly query page 3 value

I want to confirm the right way to get the values after skip(3) using minquery: 1. iterate page by page, fetching the data for pages 1, 2 and 3, and return the third page's values, or 2. get the cursor for the skip(3) position directly. If the second option is right, how do I get the cursor for the skip(3) page? Thanks.
You can't skip documents directly using github.com/icza/minquery. The purpose of minquery is to not have to use Query.Skip() (because that becomes less efficient when the number of "skippable" documents grows). The only way to skip 3 documents is to query for more than 3, and throw away the first 3.
minquery is for cases where you don't have to skip the initial documents. minquery requires you to iterate over the documents, and acquire the cursor that encodes the index entry of the last returned document (this cursor is returned to you by MinQuery.All()). When you need the next page, you have to use the cursor you acquired in the previous query, and then it can list subsequent documents without having to skip anything, because the encoded index entry can be used to jump right where the last query finished listing documents.
Think of GMail: you can always jump just to the next (and previous) page of emails, but you have no way of "magically" jumping to the 10th or 100th page: GMail uses the same mechanism under the hood.
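The same idea can be sketched outside of Go; below is a rough pymongo analogue (not minquery itself, and the client/collection names are made up): the "cursor" is simply the sort key of the last document returned, and the next page filters on it instead of skipping.
from pymongo import MongoClient, ASCENDING

client = MongoClient()                       # hypothetical connection
coll = client.test_db.items                  # hypothetical collection

def next_page(cursor=None, page_size=10):
    # cursor is the _id of the last document seen on the previous page
    query = {"_id": {"$gt": cursor}} if cursor is not None else {}
    docs = list(coll.find(query).sort("_id", ASCENDING).limit(page_size))
    new_cursor = docs[-1]["_id"] if docs else cursor
    return docs, new_cursor

# Page 1, then page 2 picks up exactly where page 1 stopped, without skipping:
page1, cur = next_page()
page2, cur = next_page(cur)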

How to get item without hitting db twice iqueryable

For jQuery DataTables, I need to provide a JSON like this:
{
  results: [],
  count: 100
}
Before actually hitting the db, I have an IQueryable. Because I only get a specific number of items per page, I have to use Skip and Take to get the results (one request) and use another request to get the item count for pagination.
How can I avoid the second request?

Pymongo : insert_many + unique index

I want to insert_many() documents into my collection. Some of them may have the same key/value pair (screen_name in my example) as documents already in the collection. I have a unique index set on this key, so I get an error.
my_collection.create_index("screen_name", unique = True)
my_collection.insert_one({"screen_name":"user1", "foobar":"lalala"})
# no problem
to_insert = [
    {"screen_name": "user1", "foobar": "foo"},
    {"screen_name": "user2", "foobar": "bar"}
]
my_collection.insert_many(to_insert)
# error :
# File "C:\Program Files\Python\Anaconda3\lib\site-packages\pymongo\bulk.py", line 331, in execute_command
# raise BulkWriteError(full_result)
#
# BulkWriteError: batch op errors occurred
I'd like to :
Not get an error
Not change the already existing documents (here {"screen_name":"user1", "foobar":"lalala"})
Insert all the non-already existing documents (here, {"screen_name":"user2", "foobar":"bar"})
Edit: As someone said in a comment, "this question is asking how to do a bulk insert and ignore unique-index errors, while still inserting the successful records. Thus it's not a duplicate of the question of how to do a bulk insert." Please reopen it.
One solution could be to use the ordered parameter of insert_many and set it to False (default is True):
my_collection.insert_many(to_insert, ordered=False)
From the PyMongo documentation:
ordered (optional): If True (the default) documents will be inserted on the server serially, in the order provided. If an error occurs all remaining inserts are aborted. If False, documents will be inserted on the server in arbitrary order, possibly in parallel, and all document inserts will be attempted.
You would still have to handle the exception raised when not all of the documents could be inserted, though.
Depending on your use-case, you could decide to either pass, log a warning, or inspect the exception.
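For example, here is a minimal sketch of that handling, assuming you only want to ignore duplicate-key failures (MongoDB reports them with error code 11000) and re-raise anything else; the client/collection names are made up:
from pymongo import MongoClient
from pymongo.errors import BulkWriteError

client = MongoClient()
my_collection = client.test_db.users        # hypothetical database/collection

to_insert = [
    {"screen_name": "user1", "foobar": "foo"},   # duplicate, will be skipped
    {"screen_name": "user2", "foobar": "bar"},   # new, will be inserted
]

try:
    my_collection.insert_many(to_insert, ordered=False)
except BulkWriteError as bwe:
    # 11000 is MongoDB's duplicate-key error code; anything else is unexpected.
    unexpected = [e for e in bwe.details["writeErrors"] if e["code"] != 11000]
    if unexpected:
        raise
    # Otherwise the duplicates were simply skipped and the rest were inserted.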

How can a mongodb client save the operation id sent to database to kill it afterwards

I want to cancel an operation sent to mongodb:
I've tried to add a $comment to the query and then find it with:
db.currentOp({"query.filter.$comment" : "127.0.0.1"})
Example of currentOp output:
(...)
"query" : {
    "find" : "collectionName",
    "filter" : {
        "$comment" : "127.0.0.1",
        "field1" : "example of field1 value"
    }
(...)
But if the query is too large, with many fields or big strings, db.currentOp() doesn't record it:
"query" : {
"$msg" : "query not recording (too large)"
}
I know that slow queries can be killed, but what I want to do is kill operations issued by a particular user, and for that I need to save on the client side the opIds of the operations sent to MongoDB.
I stumbled upon your question as I'm trying to figure out the same thing. After some research, it seems that this is basically impossible to do easily.
Taking a look at the Wire Protocol, you can see that when executing a query the only response the client will get is an OP_REPLY. However, this does not contain the opId (and is only sent after the query has been processed anyway).
The fields contained in OP_REPLY do include the standard message header, which has a responseTo property. The value of this property is set to the requestID value which the client originally sent to the server. As this value is up to the client to generate (the NodeJS driver just uses an incrementing integer), it is different from the opId you are looking for.
The only way it seems to be possible to identify the opId for a query you executed might be to do some correlation using the output of $currentOp.
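For what it's worth, a rough sketch of that correlation with pymongo follows. It assumes MongoDB 3.6+ (for the $currentOp aggregation stage), a reasonably recent pymongo (where Database.aggregate and Cursor.comment exist), and made-up collection names; the exact path of the comment in the currentOp output also varies by server version, so treat the $match below as an assumption.
from pymongo import MongoClient

client = MongoClient()
coll = client.test_db.collectionName        # hypothetical collection

# Tag the query with a comment identifying the client/user that issued it.
# (The query only actually runs when the cursor is iterated.)
cursor = coll.find({"field1": "example of field1 value"}).comment("127.0.0.1")

# From another connection, correlate the comment with a running operation.
# $currentOp must be run against the admin database (MongoDB 3.6+).
ops = client.admin.aggregate([
    {"$currentOp": {}},
    {"$match": {"command.comment": "127.0.0.1"}},   # field path may differ by version
])

# Kill every matching operation by its server-side opid.
for op in ops:
    client.admin.command({"killOp": 1, "op": op["opid"]})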

How to insert new document only if it doesn't already exist in MongoDB

I have a collection of users with the following schema:
{
    _id: ObjectId("123...."),
    name: "user_name",
    field1: "field1 value",
    field2: "field2 value",
    etc...
}
The users are looked up by user.name, which must be unique. When a new user is added, I first perform a search, and if no such user is found, I add the new user document to the collection. The search and the insert are not atomic, so when multiple application servers are connected to the DB server, two add_user requests with the same user name can arrive at the same time; neither request finds an existing user, and the result is two documents with the same user.name. In fact this happened (due to a bug on the client) with just a single app server running NodeJS and using the Async library.
I was thinking of using findAndModify, but that doesn't work, since I'm not simply updating a field (that may or may not exist) of a document that already exists, where I could use upsert; I want to insert a new document only if the search finds nothing. I can't make the query match documents whose name is not equal to the new user.name, since that would match other users.
First of all, you should maintain a unique index on the name field of the users collection. This can be specified in the schema if you are using Mongoose or by using the statement:
collection.ensureIndex('name', {unique: true}, callback);
This will make sure that the name field remains unique and will solve the concurrent-request problem you described. You do not need the preliminary search once this index is in place.
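If you are not using Mongoose, the same approach can be sketched with pymongo (the database/collection names below are made up): create the unique index once, then just insert and treat a duplicate-key error as "user already exists".
from pymongo import MongoClient, ASCENDING
from pymongo.errors import DuplicateKeyError

client = MongoClient()
users = client.test_db.users               # hypothetical database/collection

# Create the unique index once (the call is idempotent).
users.create_index([("name", ASCENDING)], unique=True)

def add_user(doc):
    # No prior search needed: the index makes concurrent duplicate inserts fail.
    try:
        users.insert_one(doc)
        return True                        # new user created
    except DuplicateKeyError:
        return False                       # a user with this name already exists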