I started using the new MongoDB text search, but then I realized it only supports "limit"; there is no "skip".
Is there any way to get "skip" behavior?
I think as of v2.4.0 you will need to implement this in your application code rather than trying to get MongoDB to do it as part of a query request. My understanding is that the text search does not return a cursor.
Check the Jira ticket for this issue: https://jira.mongodb.org/browse/SERVER-9063
One solution, although not very efficient, is to increase the limit for each new page and then change where you start reading accordingly.
The basic idea:
request 1: query.limit(20), read results 1 to 20
request 2: query.limit(40), discard the first 20 locally and start reading from position 21
Not very elegant or efficient, I'm afraid.
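For what it's worth, a minimal mongo-shell sketch of that workaround against the 2.4 text command (the collection and search term are placeholders):

// Page `page` (0-based) of size `pageSize`: request everything up to the end
// of the page, then throw away the rows already shown. Wasteful, but the 2.4
// text command has no skip.
function textSearchPage(coll, term, page, pageSize) {
  var res = coll.runCommand("text", {
    search: term,
    limit: (page + 1) * pageSize  // grow the limit with every page
  });
  // each entry in res.results is { score: ..., obj: <matching document> }
  return res.results.slice(page * pageSize);
}

var page2 = textSearchPage(db.articles, "coffee", 1, 20); // results 21 to 40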
I have installed Solr as a Docker image and can create cores and upload some test JSON.
I then want to search on this, but I can't. In this case I want to find the document whose id matches: I enter the q filter text in the admin UI and hit Execute, but it still lists all the JSON results. What am I missing?
I can query it directly via a URL, though, so it seems like the front end is messed up somehow?
I believe the issue was in my docker-compose.yml file; I had these extra lines, which seem to cause the problem. After recreating the instance without them, I can happily query the data:
command:
  - solr-precreate
  - gettingstarted
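For reference, a minimal sketch of a service definition without that command: override (the image tag and port mapping are assumptions, not from my original file):

version: "3"
services:
  solr:
    image: solr          # assumed image; pin a specific version in practice
    ports:
      - "8983:8983"      # Solr admin UI and query API
    # no command: override, so the container starts Solr with its defaults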
We are using the plugin https://goodies.pixabay.com/jquery/tag-editor/demo.html for our autocomplete feature. We load the source with 3500 items. The performance gets very bad when the user starts typing: the autocomplete loads the filtered results only after 6 to 8 seconds.
What alternative approaches can we take for up to 4000 items for autocomplete?
Appreciate your response!
Are you using the minLength option of autocomplete?
On their homepage, they have something like this:
$('#my_textarea').tagEditor({ autocomplete: { 'source': '/url/', minLength: 3 } });
This effectively means that the user has to enter at least 3 characters before autocomplete is used. Doing so will usually reduce the number of results from the autocomplete to a more sane count (maybe 20-30).
However, this might not necessarily be your problem. First you should figure out whether it's your server that is slow to respond (you can use your browser's developer tools to see how long the request takes to complete).
If the request takes 6-8 seconds, then you will have to optimize your server's code. On the other hand, if the response is quick but tagEditor needs a long time to build the suggestion list, then the plugin probably isn't optimized for that many suggestions. In that case, the ultimate solution would be to rewrite the autocompletion module yourself, or patch the existing one to scale better to your needs.
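If it turns out to be the latter, one stopgap, assuming tagEditor forwards its autocomplete options to jQuery UI's autocomplete (as the demo suggests), is a source callback that caps the suggestion count itself:

// Filter the ~3500 items in our own code and hand the widget at most 30
// matches, so it never has to render thousands of list entries.
var items = ["alpha", "beta" /* ... ~3500 more ... */];
$('#my_textarea').tagEditor({
  autocomplete: {
    minLength: 3,
    source: function (request, response) {
      var term = request.term.toLowerCase();
      var matches = [];
      for (var i = 0; i < items.length && matches.length < 30; i++) {
        if (items[i].toLowerCase().indexOf(term) !== -1) {
          matches.push(items[i]);
        }
      }
      response(matches);
    }
  }
});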
Do you go back to the server every time the user types in something to get the matching results?
I am using Spring with Ehcache, which loads all the items from the database into the server-side cache when the server starts. Whenever the user types, the cached data is used, which returns results within a few milliseconds. Someone else recommended this to me. Below is an example of it:
http://www.mkyong.com/ehcache/ehcache-hello-world-example/
I am using the jQuery autocomplete feature with 2500 items without any issue.
Here is a link where it is being used: http://www.all4sportsonline.com
On the official site of Elastic Watcher, they say:
Watcher is a plugin for Elasticsearch that provides alerting and notification based on changes in your data
The relevant data or changes in data can be identified with a periodic Elasticsearch query
What I want is something like a MySQL trigger: when a record is updated, an action is triggered.
But I didn't find an example or documentation addressing this use case. Can anybody tell me how to do this?
You define an input of type search, where (mainly) indices defines which indices to look at and body defines the actual query. If you need other settings, there are many more things to configure. After this, you define a condition and an action to complete the flow. Note that a watch runs on a schedule and polls with your query periodically; it does not fire per record like a MySQL trigger, so the query typically needs to select the documents that changed since the last run (e.g. by a timestamp field).
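As a rough sketch of the shape of such a watch (the index name, interval, query and action here are placeholder assumptions, not working code for your data):

PUT _watcher/watch/error_watch
{
  "trigger": { "schedule": { "interval": "10m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["logs"],
        "body": { "query": { "match": { "status": "error" } } }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 0 } }
  },
  "actions": {
    "log_hits": {
      "logging": { "text": "Found {{ctx.payload.hits.total}} matching documents" }
    }
  }
}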
Make an attempt and create a watch. If you run into difficulties, provide details of what you tried in a different SO post (your current post is not appropriate for SO, since you ask for complete code without trying anything yourself).
I am using the mgo driver for MongoDB under Go.
My application asks for a task (with just a record select in Mongo from a collection called "jobs") and then registers itself as an assignee to complete that task (an update to that same "job" record, setting itself as assignee).
The program will be running on several machines, all talking to the same Mongo. By the time my program lists the available tasks and picks one, another instance might already have obtained that assignment, and the current assignment would fail.
How can I be sure that the record I read and then update does (or does not) have a certain value (in this case, an assignee) at the time it is updated?
I am trying to get one assignment, no matter which one, so I think I should first select a pending task and try to assign it, keeping it only if the update was successful.
So, my query should be something like:
"From all records on collection 'jobs', update just one that has assignee=null, setting my ID as the assignee. Then, give me that record so I could run the job."
How could I express that with mgo driver for Go?
This is an old question, but just in case someone is still watching at home, this is nicely supported via the Query.Apply method. It does run the findAndModify command as indicated in another answer, but it's conveniently hidden behind Go goodness.
The example in the documentation matches pretty much exactly the question here:
change := mgo.Change{
    Update:    bson.M{"$inc": bson.M{"n": 1}},
    ReturnNew: true, // return the document as it looks after the update
}
info, err = col.Find(bson.M{"_id": id}).Apply(change, &doc)
fmt.Println(doc.N)
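Adapted to the job queue in the question, a sketch could look like this (the collection name, field names and worker ID are assumptions on my part):

package main

import (
    "fmt"

    "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

// Job mirrors a document in the "jobs" collection (field names assumed).
type Job struct {
    ID       bson.ObjectId `bson:"_id"`
    Assignee string        `bson:"assignee"`
}

// claimJob atomically picks one unassigned job and marks it as ours.
// Apply issues a findAndModify, so the match on assignee == null and the
// update happen as one atomic operation: two workers can never claim the
// same job.
func claimJob(jobs *mgo.Collection, workerID string) (*Job, error) {
    change := mgo.Change{
        Update:    bson.M{"$set": bson.M{"assignee": workerID}},
        ReturnNew: true, // get the job back with the assignee already set
    }
    var job Job
    _, err := jobs.Find(bson.M{"assignee": nil}).Apply(change, &job)
    if err != nil {
        return nil, err // mgo.ErrNotFound means there is no pending job
    }
    return &job, nil
}

func main() {
    session, err := mgo.Dial("localhost")
    if err != nil {
        panic(err)
    }
    defer session.Close()

    job, err := claimJob(session.DB("mydb").C("jobs"), "worker1.example.com")
    if err == mgo.ErrNotFound {
        fmt.Println("no pending jobs")
        return
    } else if err != nil {
        panic(err)
    }
    fmt.Println("claimed job", job.ID.Hex())
}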
I hope you saw the comments on the answer you selected, but that approach is incorrect. Doing a select and then an update requires two round trips, and two machines could fetch the same job before either of them updates the assignee. You need to use the findAndModify command instead: http://www.mongodb.org/display/DOCS/findAndModify+Command
The MongoDB guys describe a similar scenario in the official documentation: http://www.mongodb.org/display/DOCS/Atomic+Operations
Basically, all you have to do is fetch any job with assignee=null. Suppose you get the job with _id=42 back. You can then modify the document locally by setting assignee="worker1.example.com", and call Collection.Update() with the selector {_id: 42, assignee: null} and your updated document. If the database can still find a document that matches this selector, it will replace the document atomically. Otherwise you will get ErrNotFound, indicating that another thread has already claimed the task. If that's the case, try again.
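In mgo, that conditional update could be sketched like this (using $set on the one field instead of sending a whole replacement document, which comes to the same thing here; field names are assumptions):

// The selector insists that assignee is still null, so when several workers
// race for job 42, the update succeeds for exactly one of them.
err := jobs.Update(
    bson.M{"_id": 42, "assignee": nil},
    bson.M{"$set": bson.M{"assignee": "worker1.example.com"}},
)
if err == mgo.ErrNotFound {
    // someone else claimed job 42 first: pick another job and retry
}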
I'm using Meteor. I'm wondering if there's a shorthand way to do batch updates before the DOM is updated.
For instance, I want to update several records at once:
Collection.update(id1,{..})
Collection.update(id2,{..})
Collection.update(id3,{..})
The problem is that the 3 items are updated separately, so in my case the DOM was redrawn 3 times instead of once (with all 3 updated records).
Is there a way to hold off the UI update until all of them are done?
Mongo's update can modify more than one document at a time. Just give it a selector that matches more than one document, and set the multi option. In your case, that's just a list of IDs, but you can use any selector.
Collection.update({_id: {$in: [id1, id2, id3]}}, {...}, {multi:true});
This will run a single DB update and a single redraw.
Execute them on the server instead; that way they can be done synchronously, making them less likely to cause multiple DOM updates on the client.
See the first two and last interesting code bits, which explain how to protect your clients from messing with the database as well as how to define methods on the server and call them from the client.
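For illustration, a minimal sketch of that pattern (the method name and update payload are my own placeholders):

// On the server: one method performs all three updates in a single call,
// so the client sees one change instead of three.
Meteor.methods({
  markAllDone: function (ids) {
    Collection.update(
      { _id: { $in: ids } },
      { $set: { done: true } },  // placeholder modifier
      { multi: true }
    );
  }
});

// On the client: a single round trip instead of three separate updates.
Meteor.call('markAllDone', [id1, id2, id3]);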