Lucene.net search range by order - paging - lucene.net

I have a Lucene.NET index with a bunch of documents. I pull these with an MVC request and return them to the client as JSON. I want to return only the top N documents, starting from an index I specify. I need that to minimize the data flow between server and client.
What I need is something like:
1) First query - get the top 20 docs
2) Second query - get the next 20 docs, starting from 20 - that would be 21 - 40
3) ... and so on
Lucene allows me to set the number of top items, but it only counts those from the beginning of the index. Is there a built-in way to set a start index for that? Probably some advanced indexer I am missing in Lucene.NET, or something...
Thanks!

Take a look at this blog that explains pagination in lucene.
The crux of it is this:
int start = 20;
int pageSize = 20;
int maxNumberOfResults = start + pageSize; // fetch enough hits to cover the requested page
Query query = qp.parse(searchTerm);
TopDocs hits = searcher.search(query, maxNumberOfResults);
// walk only the slice of hits that makes up the requested page
for (int i = start; i < start + pageSize && i < hits.scoreDocs.length; i++) {
    int docId = hits.scoreDocs[i].doc;
    // use searcher.doc(docId) to load the stored fields for this hit
}


Script is taking 11 - 20 seconds to look up an item in an 18,000 row data set

I have two Google sheets workbooks.
One is the "master" source of lookup data with a key based on manufacturer item #, which could be anything from 1234 to A-01/234-Name_1. This sheet, referenced via SpreadsheetApp.openByUrl, has 18,000 rows and 13 columns. The key column has been converted to plain text and the sheet is sorted by this column.
The second is the "template" where people enter item #s that they need to look up against the master, typically 20 - 1500 items at a time.
The script is in the template. It is very slow and routinely times out after 30 minutes. It was written by someone else and I am new to Apps Script, but I think I've managed to understand what the script is doing and where the bottleneck is occurring.
It does a bunch of stuff, but this is the meat of the lookup:
var numrows = master.getDataRange().getNumRows();
var masterdata = master.getDataRange().getValues();
var itemnumberlist = template.getDataRange().getValues();
var retreiveddata = [];
// iterate through the manf item number list to find all matches in the
// master and return those matches to another sheet
for (i = 1; i < template.getDataRange().getValues().length; i++) {
  for (j = 0; j < numrows; j++) {
    if (masterdata[j][1].toString() === itemnumberlist[i][1].toString()) {
      retreiveddata.push(data[j]);
      anothersheet.appendRow(data[j]);
    }
  }
}
I used Logger.log() to determine that each time through the i loop is taking 11 - 19 seconds, which just seems insane.
I've been doing some google searching and I've tried a couple of different things...
First I tried moving the writing of found data out of the for loop so the script would be doing all of its reading first and then writing in one big chunk, but I couldn't get it exactly right. My two attempts are below.
var mycounter = 0;
for (i = 0; i < template.getDataRange().getValues().length; i++) {
  for (j = 0; j < numrows; j++) {
    if (masterdata[j][0].toString() === itemnumberlist[i][0].toString()) {
      retreiveddata.push(masterdata[j]);
      mycounter = mycounter + 1;
    }
  }
}
// Attempt 1
// var myrange = retreiveddata.length;
// for (k = 0; k < myrange; k++) {
//   anothersheet.appendRow(retreiveddata.pop([k]);
// }
// Attempt 2
var myotherrange = anothersheet.getRange(2, 1, myothercounter, 13)
myotherrange.setValues(retreiveddata);
I can't remember for sure, because this was on Friday, but I think both attempts resulted in the script trying to write the entire master file into "anothersheet".
So I temporarily set this aside and decided to try something else. I was trying to recreate the issue in a couple of sample spreadsheets, but I was unable to do so. The same script is getting through my 15,000 row sample "master" file in less than 1 second per lookup. The only thing I can think of is that I used a random number as my key instead of a weird text string.
That led me to think that maybe I could use a hash algorithm on both the master data and the values to be looked up, but this is presenting a whole other set of issues.
I borrowed these functions from another forum post:
function GetMD5Hash(value) {
  var rawHash = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5, value);
  var txtHash = '';
  for (j = 0; j < rawHash.length; j++) {
    var hashVal = rawHash[j];
    if (hashVal < 0)
      hashVal += 256;
    if (hashVal.toString(16).length == 1)
      txtHash += "0";
    txtHash += hashVal.toString(16);
    Utilities.sleep(100);
  }
  return txtHash;
}
function RangeGetMD5Hash(input) {
  if (input.map) {                  // Test whether input is an array.
    return input.map(GetMD5Hash);   // Recurse over array if so.
    Utilities.sleep(100);
  } else {
    return GetMD5Hash(input)
  }
}
It literally took me all day to get the hash value for all 18,000 item #s in my master spreadsheet. Neither GetMD5Hash nor RangeGetMD5Hash will return a value consistently. I can only do a few rows at a time. Sometimes I get "Loading..." indefinitely. Sometimes I get "#Name" with a message about GetMD5Hash being undefined (despite the fact that it worked on the previous row). And sometimes I get "#Error" with a message about an internal error.
This method actually reduces the lookup time of each item to 2 - 3 seconds (much better, but not great). However, I can't get the hash function to consistently work on the input data.
At this point I'm so frustrated and behind on my other work that I thought I'd reach out to the smart people on these forums and hope for some sort of miracle response.
To summarize, I'm looking for suggestions on these three items:
What am I doing wrong in my attempt to move the write out of the for loop?
Is there a way to get my hash value faster or utilize a different method to accomplish the same goal?
What else can I try to help speed up the script?
Any suggestions you can offer would be greatly appreciated!
-Mandy
It sounds like you hit on the right approach with attempting to move the appendRow() call out of the loop. Anytime you are reading or writing to a spreadsheet you can expect the individual call to take 1 to 2 seconds, so this will eat up a lot of time when you get matches. Storing the matches in an array and writing them all at once is the way to go.
Another thing I notice is that your script calls getValues() in the actual for loop condition statement. The condition statement is executed each time on each iteration of the loop, so this is potentially wasting a lot of time even when you don't have matches.
A final tweak that may be helpful depending on your desired behaviour. You can stop the inner for loop after it finds the first match, which, if you only care about the first match or know there will only be one match, will save you a lot of iterations. To do this, put "break" immediately after the retreiveddata.push(masterdata[j]); line.
To fix the getValues issue, change:
for (i = 1; i < template.getDataRange().getValues().length; i++) {
To:
for (i = 1; i < itemnumberlist.length; i++) {
Putting that fix together with the appendRow fix, and including the break call:
for (i = 1; i < itemnumberlist.length; i++) {
  for (j = 0; j < numrows; j++) {
    if (masterdata[j][0].toString() === itemnumberlist[i][0].toString()) {
      retreiveddata.push(masterdata[j]);
      break; // stop searching after first match, move on to next item
    }
  }
}
// make sure you have data to write before trying to write it
if (retreiveddata.length > 0) {
  var myotherrange = anothersheet.getRange(2, 1, retreiveddata.length, retreiveddata[0].length);
  myotherrange.setValues(retreiveddata);
}
If you are re-using the same sheet for "anothersheet" on each execution, you may also want to call anothersheet.clear() to erase any existing data before you write your fresh results.
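A minimal sketch of that clean-up step (this assumes anothersheet keeps a header in row 1, which is why the code above starts writing at row 2; calling Sheet.clear() instead would also wipe the header):
// clear any previous results below the header row before writing the new batch
var lastRow = anothersheet.getLastRow();
if (lastRow > 1) {
  anothersheet.getRange(2, 1, lastRow - 1, anothersheet.getLastColumn()).clearContent();
}
if (retreiveddata.length > 0) {
  anothersheet.getRange(2, 1, retreiveddata.length, retreiveddata[0].length).setValues(retreiveddata);
}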
I would pass on the hashing approach altogether; comparing strings is comparing strings, so whether they are hashes or actual part numbers, I wouldn't expect a significant difference.

MongoDB : Slow text search when searching a very frequent term

I have a collection of about 1 million documents (movies mainly), and I created a text index on a field. Everything works fine for almost all searches: less than 20 ms to get a result. The exception is when someone searches for a very frequent term; that can take up to 3000 ms!
For example,
if I search for 'pulp' in the collection (only 40 documents have it), it takes 1 ms
if I search for 'movie' (750 000 documents have it), it takes 3000 ms.
When profiling the request, explain('executionStats') shows that all the 'movie' documents are scanned. I tried many indexing, sorting + limiting and hinting strategies, but all 750 000 documents are still scanned and the result is still slow to come...
Is there a strategy for searching very frequent terms in a database faster?
I ended up building my own stop-words list by coding something like this:
import pymongo
from bson.code import Code

# NB: max occurrences of a word in the collection, after which it is considered a stop word.
NB_MAX_COUNT = 20000
STOP_WORDS_FILE = 'stop_words.py'

db = ...  # connection to the database goes here, e.g. pymongo.MongoClient()['your_db']

mapfn = Code("""function() {
    var words = this.field_that_is_text_indexed;
    if (words) {
        // quick lowercase to normalize per your requirements
        words = words.toLowerCase().split(/[ \/]/);
        for (var i = words.length - 1; i >= 0; i--) {
            // might want to remove punctuation, etc. here
            if (words[i]) {         // make sure there's something
                emit(words[i], 1);  // store a 1 for each word
            }
        }
    }
};""")

reducefn = Code("""function(key, values) {
    var count = 0;
    values.forEach(function(v) {
        count += v;
    });
    return count;
};""")

with open(STOP_WORDS_FILE, 'w') as fh:
    fh.write('# -*- coding: utf-8 -*-\n'
             'stop_words = [\n')
    result = db.mycollection.map_reduce(mapfn, reducefn, 'words_count')
    for doc in result.find({'value': {'$gt': NB_MAX_COUNT}}):
        fh.write("'%s',\n" % doc['_id'])
    fh.write(']\n')

Office.js Word Add-In: Performance Issue with Updating Values in Large Tables

Summary:
Updating values in large Word tables (larger than 10 by 10) is very slow.
Performance gets exponentially worse with table size.
I'm using myTable.values = arrNewValues. I've also tried myTable.addRows("end", rows, arrNewValues), where arrNewValues is a 2D array.
I've also tried updating via getOoxml() and insertOoxml(), which has good performance, but I ran into other issues with it that I haven't been able to resolve.
Slow performance seems to be caused by "ScreenUpdating" (same issue exists in VBA and is solved via ScreenUpdating=false). I believe it is critically important to add the ability to temporarily turn off ScreenUpdating.
Is there another way to improve table updating performance?
Background:
My add-in (https://analysisplace.com/Solutions/Document-Automation) performs document automation (updates content in a variety of Word docs). Many customers want to be able to update text in largish tables. Some documents have dozens of tables (appendices). I have run into the issue where updating these documents is unacceptably slow (well over a minute) due to the table updates.
Update time by table size:
2 rows by 10 columns: 0.33 seconds
4 rows by 10 columns: 0.52 seconds
8 rows by 10 columns: 1.5 seconds
16 rows by 10 columns: 5.5 seconds
32 rows by 10 columns: 20.8 seconds
64 rows by 10 columns: 88 seconds
Sample Office.js Code (Script Lab):
function updateTableCells() {
  Word.run(function (context) {
    var arrValues = context.document.body.tables.getFirst().load("values");
    return context.sync().then(
      function () {
        var rows = arrValues.values.length;
        var cols = arrValues.values[0].length;
        console.log(getTimeElapsed() + "rows " + rows + "cols " + cols);
        var arrNewValues = [];
        for (var row = 0; row < rows; row++) {
          arrNewValues[row] = [];
          for (var col = 0; col < cols; col++) {
            arrNewValues[row][col] = 'r' + row + ':c' + col;
          }
        }
        console.log(getTimeElapsed() + 'Before setValues ');
        context.document.body.tables.getFirst().values = arrNewValues;
        return context.sync().then(
          function () {
            console.log(getTimeElapsed() + "Done");
          });
      });
  })
  .catch(OfficeHelpers.Utilities.log);
}
Sample Word VBA Code:
VBA performance is similar to the Office.js performance without ScreenUpdating = False. With ScreenUpdating = False, performance is instant.
Sub PopulateTable()
    Application.ScreenUpdating = False
    Dim nrRow As Long, nrCol As Long
    Dim tbl As Word.Table
    Set tbl = ThisDocument.Tables(1)
    For nrRow = 1 To 32
        For nrCol = 1 To 10
            tbl.Cell(nrRow, nrCol).Range.Text = "c" & nrRow & ":" & nrCol
        Next nrCol
    Next nrRow
End Sub
Article explaining slow performance: see "Improving Performance When Automating Tables": https://msdn.microsoft.com/en-us/library/aa537149(v=office.11).aspx?cs-save-lang=1&cs-lang=vb#code-snippet-3
Posts indicating there is no "ScreenUpdating = False" in Office.js: ScreenUpdating Office-js taskpane and Equivalent to Application.ScreenUpdating Property in office-js Excel add-in
Sounds like we won't see it any time soon.
Post related to the updating tables via getOoxml() and insertOoxml(): Word Office.js: issues with updating tables in ContentControls using getOoxml() and insertOoxml()
This is probably not the answer you're looking for, but I have been working with a Word add-in for validation of software, and we are talking about updating 500-1000 rows with lots of little formatting changes.
Anyway, one thing I found that helped is to scroll somewhere else in the document before you make the changes to the table. Just the act of looking at it will slow things down 10-20x. It's not always instant, but nearly.
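I haven't verified that moving the selection programmatically has the same effect as scrolling by hand, but a sketch of that idea applied to the Script Lab snippet above might look like this (the getRange("Start").select() call and the function name are my own additions, not part of the original answer):
function updateTableCellsSelectionAway() {
  Word.run(function (context) {
    var table = context.document.body.tables.getFirst();
    table.load("values");
    // move the selection/viewport to the start of the body so the table being
    // rewritten is hopefully not the part of the document Word is rendering
    context.document.body.getRange("Start").select();
    return context.sync().then(function () {
      var rows = table.values.length;
      var cols = table.values[0].length;
      var arrNewValues = [];
      for (var row = 0; row < rows; row++) {
        arrNewValues[row] = [];
        for (var col = 0; col < cols; col++) {
          arrNewValues[row][col] = 'r' + row + ':c' + col;
        }
      }
      table.values = arrNewValues;
      return context.sync();
    });
  })
  .catch(OfficeHelpers.Utilities.log);
}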

search in limited number of record MongoDB

I want to search within the first 1000 records of my collection, which is named CityDB. I used the following code:
db.CityDB.find({'index.2':"London"}).limit(1000)
but it does not work: it returns the first 1000 results of the search, whereas I want to search only within the first 1000 records, not all records. Could you please help me?
Thanks,
Amir
Note that there is no guarantee that your documents are returned in any particular order by a query as long as you don't sort explicitly. Documents in a new collection are usually returned in insertion order, but various things can cause that order to change unexpectedly, so don't rely on it. By the way: auto-generated _id's start with a timestamp, so when you sort by _id, the objects are returned by creation date.
Now about your actual question. When you first want to limit the documents and then perform a filter operation on this limited set, you can use the aggregation pipeline. It allows you to use the $limit operator first and then the $match operator on the remaining documents.
db.CityDB.aggregate([
  // { $sort: { _id: 1 } }, // <- uncomment when you want the first 1000 by creation time
  { $limit: 1000 },
  { $match: { 'index.2': "London" } }
])
I can think of two ways to achieve this:
1) You have a global counter, and every time you insert data into your collection you add a field count = currentCounter and increase currentCounter by 1. When you need to select your first k elements, you find them this way:
db.CityDB.find({
  'index.2': "London",
  count: {
    '$gte': currentCounter - k
  }
})
This is not atomic and might sometimes give you more than k elements on a heavily loaded system (but it can support indexes).
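The insert side of that counter scheme isn't shown above; a minimal shell sketch of it could look like the following (the counters helper collection and the seq field name are made up for illustration):
// keep a global counter in a helper collection and stamp each new document with it
function insertWithCounter(doc) {
  var c = db.counters.findAndModify({
    query: { _id: "CityDB" },
    update: { $inc: { seq: 1 } },
    new: true,
    upsert: true
  });
  doc.count = c.seq;
  db.CityDB.insert(doc);
}
db.CityDB.ensureIndex({ count: 1 }); // lets the count range filter above use an index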
Here is another approach which works nicely in the shell:
2) Create your dummy data:
var k = 100;
for (var i = 1; i < k; i++) {
  db.a.insert({
    _id: i,
    z: Math.floor(1 + Math.random() * 10)
  })
}
output = [];
And now find in the first k records where z == 3
k = 10;
db.a.find().sort({ $natural: -1 }).limit(k).forEach(function(el) {
  if (el.z == 3) {
    output.push(el)
  }
})
As you see, your output has the correct elements:
output
I think it is pretty straightforward to modify my example for your needs.
P.S. Also take a look at the aggregation framework; there might be a way to achieve what you need with it.

Random Sampling from Mongo

I have a mongo collection with documents. There is one field in every document which is 0 or 1. I need to randomly sample 1000 records from the database and count the number of documents that have that field set to 1. I need to do this sampling 1000 times. How do I do it?
For people coming to this answer: you should now use the new $sample aggregation stage, introduced in 3.2.
https://docs.mongodb.org/manual/reference/operator/aggregation/sample/
db.collection_of_things.aggregate(
[ { $sample: { size: 15 } } ]
)
Then add another step to count up the 0s and 1s using $group to get the count. Here is an example from the MongoDB docs.
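The $group step itself isn't spelled out above; a minimal sketch of what it might look like (assuming the 0/1 field is called thefield, as in the shell example further down) is:
// sample 1000 documents, then count how many of them have thefield == 1
db.collection_of_things.aggregate([
  { $sample: { size: 1000 } },
  { $group: { _id: null, ones: { $sum: "$thefield" } } }
])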
For MongoDB 3.0 and before, I use an old trick from SQL days (which I think Wikipedia uses for their random page feature). I store a random number between 0 and 1 in every object I need to randomize; let's call that field "r". You then add an index on "r":
db.coll.ensureIndex({ r: 1 });
Now, to get x random objects, you use:
var startVal = Math.random();
db.coll.find({r: {$gt: startVal}}).sort({r: 1}).limit(x);
This gives you random objects in a single find query. Depending on your needs, this may be overkill, but if you are going to be doing lots of sampling over time, this is a very efficient way without putting load on your backend.
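Putting that together with the question (counting how many sampled documents have the 0/1 field set to 1), a sketch might look like this; the wrap-around branch is my own addition to cover draws of startVal close to 1, and thefield is the hypothetical 0/1 field name:
// draw one sample of x documents via the "r" index and count how many have thefield == 1
function sampleAndCountOnes(x) {
  var startVal = Math.random();
  var docs = db.coll.find({ r: { $gt: startVal } }).sort({ r: 1 }).limit(x).toArray();
  // near the top of the 0..1 range a draw can come up short, so wrap around
  if (docs.length < x) {
    docs = docs.concat(db.coll.find({ r: { $lte: startVal } }).sort({ r: -1 }).limit(x - docs.length).toArray());
  }
  var ones = 0;
  docs.forEach(function (d) { if (d.thefield == 1) ones++; });
  return ones;
}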
Here's an example in the mongo shell, assuming a collection named collname and a value of interest in thefield:
var total = db.collname.count();
var count = 0;
var numSamples = 1000;
for (i = 0; i < numSamples; i++) {
  var random = Math.floor(Math.random() * total);
  var doc = db.collname.find().skip(random).limit(1).next();
  if (doc.thefield) {
    count += (doc.thefield == 1);
  }
}
I was going to edit my comment on @Stennie's answer with this, but you could also use a separate auto-incrementing ID index here as an alternative if you were to skip over huge amounts of records (talking huge here).
I wrote another answer to another question a lot like this one, where someone was trying to find the nth record of a collection:
php mongodb find nth entry in collection
The second half of my answer basically describes one potential method by which you could approach this problem. You would still need to loop 1000 times to get the random rows, of course.
If you are using mongoengine, you can use a SequenceField to generate an incremental counter.
class User(db.DynamicDocument):
    counter = db.SequenceField(collection_name="user.counters")
Then to fetch a random list of, say, 100, do the following:
def get_random_users(number_requested):
    users_to_fetch = random.sample(range(1, User.objects.count() + 1), min(number_requested, User.objects.count()))
    return User.objects(counter__in=users_to_fetch)
where you would call
get_random_users(100)