GtkTreeView insert/update performance penalty because of sorting - gtk

I'm having performance problems when inserting many rows into a GTK treeview (using PyGTK) - or when modifying many rows. The problem is that the model seems to get resorted after each change (insert/modification). This causes the GUI to hang for multiple seconds. Leaving the model unsorted by commenting out model.set_sort_column_id(SOME_ROW_INDEX, gtk.SORT_ASCENDING) eliminates these problems.
Therefore, I would like to disable the sorting while I'm inserting or modifying the model, and re-enable it afterwards. Unfortunately, sorting can't be disabled with model.set_sort_column_id(-1, gtk.SORT_ASCENDING). Using the freeze/thaw functions doesn't work either:
treeview.freeze_child_notify()
try:
    for row in model:
        pass  # ... change something in row ...
finally:
    treeview.thaw_child_notify()
So, how can I disable the sorting? Or is there a better method for bulk inserts/modifications?
Solution
Thanks to the information and links bobince provided in his answer, I checked out some of the alternatives:
1) Dummy sorting
tv.freeze_child_notify()
sortSettings = model.get_sort_column_id()
model.set_default_sort_func(lambda *unused: 0) # <-- can also use None but that is slower!
# model.set_default_sort_func(lambda *unused: 1) <-- slow
# model.set_default_sort_func(lambda *unused: -1) <-- crash (access violation in gtk_tree_store_move_after?!)
model.set_sort_column_id(-1, gtk.SORT_ASCENDING)
# change rows
model.set_sort_column_id(*sortSettings)
tv.thaw_child_notify()
This brought the time down from about 11 seconds to 2 seconds. Wow! But it could be better; this was only for 1000 rows.
2) Removing model while updating
tv.set_model(None)
# change rows
tv.set_model(model)
No noticeable difference: still 11 seconds.
3) Dummy sorting and the cool generator trick from the PyGTK FAQ
def gen():
    tv.freeze_child_notify()
    sortSettings = model.get_sort_column_id()
    model.set_default_sort_func(lambda *unused: 0)
    model.set_sort_column_id(-1, gtk.SORT_ASCENDING)
    i = 0
    for row in rowsToChange:
        i += 1
        # change something
        if i % 200 == 0:
            # freeze/thaw not really necessary here as sorting is wrong because of the
            # default sort function
            yield True
    model.set_sort_column_id(*sortSettings)
    tv.thaw_child_notify()
    yield False

g = gen()
if g.next():  # run once now, remaining iterations when idle
    gobject.idle_add(g.next)
The result: The same estimated 2 seconds as in solution 1), but the GUI reacts during this time. I prefer this method. The modulo 200 can be tweaked to make the GUI more or less reactive if needed.
Maybe it's even possible to subclass gtk.TreeStore to get better results? Haven't tried that yet, but option 3) is good enough for now.

Sounds like you're nearly there. See the FAQ for further notes. In particular, you should also set the default sort function (you can now use None as well as the dummy compare lambda in that example, for better performance) to ensure there is no sorting, and remove the model from the treeview for the duration of the operations.
If it's a lot of changes you may be better off creating and setting a complete new model.
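For illustration, a minimal sketch of that new-model approach (my own code, not from the answer; the two-column str/int layout and the row format are assumptions):

import gtk

def rebuild_model(treeview, rows):
    new_model = gtk.ListStore(str, int)   # assumed column layout
    for name, value in rows:              # bulk-fill while detached from any view
        new_model.append((name, value))
    new_model.set_sort_column_id(0, gtk.SORT_ASCENDING)  # sort once, at the end
    treeview.set_model(new_model)         # one swap, one redraw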

Related

Seeking understanding of the positional operator in the case of a single update of non-periodic time series segment

I'm experimenting with a schema for recording some non-periodic time series data using pymongo. There's a decent amount of stuff out there about periodic time series, but not too much for non-periodic. Taking a leaf from this post.
In this schema, the collection documents each contain fixed count of data/time points. I'm initializing a segment with something like:
from datetime import datetime

nonTime = datetime.fromordinal(1)  # create a time we can use as an "invalid" time entry
segment = {
    'times': [nonTime] * 64,
    'values': [float('nan')] * 64}
collection.insert(segment)
Armed with a beginner's understanding of the $ (positional) operator, I concocted the following update:
collection.update({'times': nonTime}, {'$set': {'times.$': datetime.now(), 'values.$': newValue}})
This works like a charm. I'm not sure how expensive the linear seek for the match is, but my data isn't so steady that I'm terribly worried about that part. It keeps the segments small and nice.
What I would like to do though is avoid adding newValue when the previous element is the same value. For example, if my segment looks like
{
    'times': [t1, t2, t3, nonTime, ... nonTime],
    'values': [42.0, 13.0, 42.0, nan, ... nan]}
I want to add a new t4/newValue only if newValue is not equal to that last 42.0 at slot 2 (zero indexed).
I tried the following largish expression on a lark:
collection.update({'$and': [{'times': nonTime}, {'values.$-1': {'$ne': newValue}}]}, {same_set_as_above})
Amazingly, it didn't blow up. But then... it didn't work either. It appended the duplicate 42.0 anyway. Am I just expecting too much of the positional operator? Is there a different way to do this? I was trying to keep it as a single transaction.
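For what it's worth, one sketch of a workaround (my own illustration, not from this thread): since an update filter cannot reference "the element before the matched one", read the segment first and only issue the update when the last filled value differs. Note this gives up the single-statement atomicity the question is aiming for; there is a read/update race between the two calls.

from datetime import datetime

def append_if_changed(collection, nonTime, newValue):
    seg = collection.find_one({'times': nonTime})
    if seg is None:
        return False                        # no segment with a free slot
    slot = seg['times'].index(nonTime)      # first unused position
    if slot > 0 and seg['values'][slot - 1] == newValue:
        return False                        # same as previous value: skip the write
    collection.update(
        {'_id': seg['_id'], 'times': nonTime},
        {'$set': {'times.$': datetime.now(), 'values.$': newValue}})
    return True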

Spark caching strategy

I have a Spark driver that goes like this:
EDIT - earlier version of the code was different & didn't work
var totalResult = ... // RDD[(key, value)]
var stageResult = totalResult

do {
    stageResult = stageResult.flatMap(
        // Some code that returns zero or more outputs per input,
        // and updates `acc` to number of outputs
        ...
    ).reduceByKey((x, y) => x.sum(y))
    totalResult = totalResult.union(stageResult)
} while(stageResult.count() > 0)
I know from properties of my data that this will eventually terminate (I'm essentially aggregating up the nodes in a DAG).
I'm not sure of a reasonable caching strategy here - should I cache stageResult each time through the loop? Am I setting up a horrible tower of recursion, since each totalResult depends on all previous incarnations of itself? Or will Spark figure that out for me? Or should I put each RDD result in an array and take one big union at the end?
Suggestions will be welcome here, thanks.
I would rewrite this as follows:
do {
    stageResult = stageResult.flatMap(
        // Some code that returns zero or more outputs per input
    ).reduceByKey(_ + _).cache
    totalResult = totalResult.union(stageResult)
} while(stageResult.count > 0)
I am fairly certain (95%) that the stageResult DAG used in the union will be the correct reference (especially since count should trigger it), but this might need to be double-checked.
Then when you call totalResult.ACTION, it will put all of the cached data together.
ANSWER BASED ON UPDATED QUESTION
As long as you have the memory space, I would indeed cache everything along the way, as it stores the data of each stageResult, unioning all of those data points at the end. In fact, each union does not rely on the past, as that is not the semantics of RDD.union; it merely puts them together at the end. You could just as easily change your code to use a val due to RDD immutability.
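As a rough sketch of that strategy in PySpark (my own translation, not the OP's code; the expansion step and initial data are placeholders):

from operator import add
from pyspark import SparkContext

sc = SparkContext(appName="StagedAggregation")

def expand(kv):
    # placeholder: return zero or more (key, value) outputs per input
    return []

total_result = sc.parallelize([("a", 1), ("b", 2)])  # stand-in initial RDD
stage_result = total_result

while True:
    # cache each stage so the final union does not recompute earlier stages
    stage_result = stage_result.flatMap(expand).reduceByKey(add).cache()
    total_result = total_result.union(stage_result)
    if stage_result.count() == 0:  # count() also materializes the cached stage
        break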
As a final note, maybe the DAG visualization will help you understand why there would not be recursive ramifications.

Get all top level entry values from NotesView

What is the easiest way to get all top level entry values from a Notes view? I have found the property "TopLevelEntryCount", which returns the number of all categories, but I want the values, too.
Iterating over the view entries and reading the column values takes too much time:
Set ecl = view.Allentries
Set ve = ecl.Getfirstentry()
While Not(ve Is Nothing)
    If IsArray(ve.Columnvalues(0)) Then
        If flag = "" Then
            arr = ve.Columnvalues(0)
        Else
            arr = ArrayUnique(ArrayAppend(arr, ve.Columnvalues(0)))
        End If
    Else
        'error if arr is not already an array
        arr = ArrayUnique(ArrayAppend(arr, ve.Columnvalues(0)))
    End If
    flag = "1"
    Set ve = ecl.Getnextentry(ve)
Wend
Does anyone know a faster way? It doesn't have to be written in LotusScript; actually, I would prefer Java, JS or SSJS.
Idea 1: You can use the NotesViewNavigator class, and call the getFirst() and getNextCategory() methods. This should be faster than walking all the entries.
Idea 2: You can use NotesSession.Evaluate() with a formula that does an @Unique on @DbColumn for the first column of the view. That should bring you back an array, and you can get the UBound of the array. Formulas tend to be very fast, but evaluate() has to compile them first, so I don't know if this will be faster or not. The disadvantage of this approach is that for very large views, this could exceed formula language's size limits. But if it does prove to be faster, you could catch the exception and fall back to the slower method of iterating.
Richard's response is perfect!
For a Java version it's almost the same:
See the Domino help for getNextCategory; the example given under ViewNavigator really answers your need: link here
The evaluate method is also available on the Java Session class.

COBOL add 0 to a Variable in COMPUTE

I ran into a strange statement when working on a COBOL program from $WORK.
We have a paragraph that opens a cursor (from DB2) and then loops over it until it hits an EOT (in pseudocode):
... working storage ...
01  I       PIC S9(9) COMP VALUE ZEROS.
01  WS-SUB  PIC S9(4) COMP VALUE 0.

... code area ...
PARA-ONE.
    PERFORM OPEN-CURSOR
    PERFORM FETCH-CURSOR
    PERFORM VARYING I FROM 1 BY 1 UNTIL SQLCODE = DB2EOT
        do stuff here...
    END-PERFORM
    COMPUTE WS-SUB = I + 0
    PERFORM CLOSE-CURSOR
... do another loop using WS-SUB ...
I'm wondering why that COMPUTE WS-SUB = I + 0 line is there. My understanding is that I will always be at least 1, because of the PERFORM block above it (i.e., even if there is an EOT to start with, I will be set to one on that initial iteration).
Is that COMPUTE line even needed? Is it doing some implicit casting that I'm not aware of? Why would it be there? Why wouldn't you just MOVE I TO WS-SUB?
Call it stupid, but with some compilers (with the correct options in effect), given
01 SIGNED-NUMBER PIC S99 COMP-5 VALUE -1.
01 UNSIGNED-NUMBER PIC 99 COMP-5.
...
MOVE SIGNED-NUMBER TO UNSIGNED-NUMBER
DISPLAY UNSIGNED-NUMBER
results in: 255. But...
COMPUTE UNSIGNED-NUMBER = SIGNED-NUMBER + ZERO
results in: 1 (unsigned)
So to answer your question, this could be classified as a technique used to cast signed numbers into unsigned numbers. However, in the code example you gave, it makes no sense at all.
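To see why the two results differ, here is a loose illustration in Python of the bit-level story (an 8-bit cell is assumed purely for readability; this is not COBOL semantics, just one way to mimic the results described above):

value = -1

# MOVE-like behaviour: the two's-complement bit pattern is copied as-is
# and then read back as an unsigned value.
move_result = value & 0xFF       # -1 -> 0xFF -> 255

# COMPUTE-like behaviour: the value goes through arithmetic, and the store
# into the unsigned field keeps the decimal magnitude.
compute_result = abs(value + 0)  # -> 1

print(move_result, compute_result)  # 255 1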
Note that the definition of "I" was (likely) coded by one programmer and of WS-SUB by another (naming is different, VALUE clause is different for same purpose).
Programmer 2 looks like "old school": PIC S9(4), signed and taking up all the digits which "fit" in a half-word. The S9(9) is probably "far over the top" as per range of possible values, but such things concern Programmer 1 not at all.
Probably Programmer 2 had concerns about using an S9(9) COMP for something requiring (perhaps many) fewer than 9999 "things". "I'll be 'efficient' without changing the existing code". It seems to me unlikely that the field was ever defined as unsigned.
A COMP/COMP-4 with nine digits does have a performance penalty when used for calculations. Try "ADD 1" to a 9(9) and a 9(8) and a 9(10) and compare the generated code. If you can have nine digits, define with 9(10), otherwise 9(8), if you need a fullword.
Programmer 2 knows something of this.
The COMPUTE with + 0 is probably deliberate. Why did Programmer 2 use the COMPUTE like that (the original question)?
Now it is going to get complicated.
There are two "types" of "binary" fields on the Mainframe: those which will contain values limited by the PICture clause (USAGE BINARY, COMP and COMP-4); those which contain values limited by the field size (USAGE COMP-5).
With BINARY/COMP/COMP-4, the size of the field is determined from the PICture, and so are the values that can be held. PIC 9(4) is a halfword, with a maximum value of 9999. PIC S9(4) is a halfword with values -9999 through +9999.
With COMP-5 (Native Binary), the PICture just determines the size of the field; all the bits of the field are relevant for the value of the field. PIC 9(1) to 9(4) define halfwords, PIC 9(5) to 9(9) define fullwords, and 9(10) to 9(18) define doublewords. PIC 9(1) can hold a maximum of 65535, S9(1) -32,768 through +32,767.
All well and good. Then there is compiler option TRUNC. This has three settings: STD (the default), BIN and OPT.
BIN can be considered to have the most far-reaching effect. BIN makes BINARY/COMP/COMP-4 behave like COMP-5. Everything becomes, in effect, COMP-5. PICtures for binary fields are ignored, except in determining the size of the field (and, curiously, with ON SIZE ERROR, which "errors" when the maxima according to the PICture are exceeded). Native Binary, in IBM Enterprise COBOL, generates, in the main, though not exclusively, the "slowest" code. Truncation is to field size (halfword, fullword, doubleword).
STD, the default, is "standard" truncation. This truncates to "PICture". It is therefore a "decimal" truncation.
OPT is for "performance". With OPT, the compiler truncates in whatever way is the most "performant" for a particular "code sequence". This can mean intermediate values and final values may have "bits set" which are "outside of the range" of the PICture. However, when used as a source, a binary field will always only reflect the value specified by the PICture, even if there are "excess" bits set.
It is important when using OPT that all binary fields "conform to PICture" meaning that code must never rely on bits which are set outside the PICture definition.
Note: Even though OPT has been used, the OPTimizer (OPT(STD) or OPT(FULL)) can still provide further optimisations.
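As a rough illustration of the difference between the two truncation styles (Python used purely for the arithmetic; the value is an arbitrary example, not from the question):

value = 123456  # too big for PIC 9(4)

# TRUNC(STD): truncate to the PICture, i.e. decimal truncation to 4 digits.
std_result = value % 10 ** 4   # -> 3456

# TRUNC(BIN) / COMP-5: truncate to the field size, i.e. binary truncation
# to a 16-bit halfword.
bin_result = value % 2 ** 16   # -> 57920

print(std_result, bin_result)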
This is all well and good.
However, a "pickle" can readily ensue if you "mix" TRUNC options, or if the binary definition in a CALLing program is not the same as in the CALLed program. The "mix" can occur if modules within the same run-unit are compiled with different TRUNC options, or if a binary field on a file is written with one TRUNC option and later read with another.
Now, I suspect Programmer 2 encountered something like this: Either, with TRUNC(OPT) they noticed "excess bits" in a field and thought there was a need to deal with them, or, through the "mix" of options in a run-unit or "across file usage" they noticed "excess bits" where there would be a need to do something about it (which was to "remove the mix").
Programmer 2 developed the COMPUTE A = B + 0 to "deal" with a particular problem (perceived or actual) and then applied it generally to their work.
This is a "guess", or, better, a "rationalisation" which works with the known information.
It is a "fake" fix. There was either no problem (the normal way that TRUNC(OPT) works) or the correct resolution was "normalisation" of the TRUNC option across modules/file use.
I do not want loads of people now rushing off and putting COMPUTE A = B + 0 in their code. For a start, they don't know why they are doing it. For a continuation it is the wrong thing to do.
Of course, do not just remove the "+ 0" from any of these that you find. If there is a "mix" of TRUNCs, a program may stop "working".
There is one situation in which I have used "ADD ZERO" for a BINARY/COMP/COMP-4. This is in a "Mickey Mouse" program, a program with no purpose but to try something out. Here I've used it as a method to "trick" the optimizer, as otherwise the optimizer could see unchanging values so would generate code to use literal results as all values were known at compile time. (A perhaps "neater" and more flexible way to do this which I picked up from PhilinOxford, is to use ACCEPT for the field). This is not the case, for certain, with the code in question.
I wonder if a testing version of the sources ever had
COMPUTE WS-SUB = I + 0
    ON SIZE ERROR
        DISPLAY "WS-SUB overflow"
        STOP RUN
END-COMPUTE
with the range test discarded when the developer was satisfied and cleaning up? MOVE doesn't allow declarative SIZE statements. That's as much of a reason as I can see. Or perhaps it was developer habit to use COMPUTE to move, as a subtle reminder to question the need for defensive code at every step? And perhaps not knowing, as Joe pointed out, that the SIZE clause would be just as effective without the + 0? Or a maintainer struggled with off-by-one errors and there was a corrective change from 1 to 0 after testing?

Sum of DOM elements using XPath

I am using MSXML v3.0 in a VB 6.0 application. The application calculates the sum of an attribute of all nodes using a For Each loop, as shown below:
Set subNodes = docXML.selectNodes("//Transaction")
For Each subNode In subNodes
    total = total + Val(subNode.selectSingleNode("Amount").nodeTypedValue)
Next
This loop takes too much time; sometimes it takes 15-20 minutes for 60 thousand nodes.
I am looking for an XPath/DOM solution to eliminate this loop, something like
docXML.selectNodes("//Transaction").Sum("Amount")
or
docXML.selectNodes("Sum(//Transaction/Amount)")
Any suggestion to get this sum faster is welcome.
// Open the XML.
XPathDocument docNav = new XPathDocument(@"c:\books.xml");

// Create a navigator to query with XPath.
XPathNavigator nav = docNav.CreateNavigator();

// Find the sum.
// This expression uses standard XPath syntax.
string strExpression = "sum(/bookstore/book/price)";

// Use the Evaluate method to return the evaluated expression.
Console.WriteLine("The price sum of the books is {0}", nav.Evaluate(strExpression));
source: http://support.microsoft.com/kb/308333
Any solution that uses the XPath // pseudo-operator on an XML document with 60000+ nodes is going to be quite slow, because //x causes a complete traversal of the tree starting at the root of the document.
The solution can be sped up significantly if a more exact XPath expression is used that doesn't include the // pseudo-operator.
If you know the structure of the XML document, always use a specific chain of location steps -- never //.
If you provide a small example, showing the specific structure of the document, then many people will be able to provide a faster solution than any solution that uses //.
For example, if it is known that all Transaction elements can be selected using this XPath expression:
/x/y/Transaction
then the evaluation of
sum(/x/y/Transaction/Amount)
is likely to be significantly faster than sum(//Transaction/Amount).
Update:
The OP has revealed in a comment that the structure of the XML file is quite simple.
Accordingly, I tried the following against an XML document with 60,000 Transaction nodes:
sum(/*/*/Amount)
With .NET XslCompiledTransform (yes, I used XSLT as the host for the XPath engine) this took 220 ms, i.e. 0.22 seconds, to produce the sum.
With MSXML3 it takes 334 seconds.
With MSXML6 it takes 76 seconds -- still quite slow.
Conclusion: This is a bug in MSXML3 -- try to upgrade to another XPath engine, such as the one offered by .NET.
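For instance, a quick sketch with lxml in Python (my own example; the file name and the /x/y/Transaction structure are assumptions carried over from above):

from lxml import etree

doc = etree.parse("transactions.xml")
# sum() is evaluated inside the XPath engine; no per-node loop in script code
total = doc.xpath("sum(/x/y/Transaction/Amount)")
print(total)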