CP-SAT is slow when creating the objective function - or-tools

Could you explain what CP-SAT does when it creates the objective function? With big models this step is very slow.
The function is the following:
sum(sum(self.suggested_decisions[r][d]*(
((int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_1'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CA_S1'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FA']*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_1'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_1'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CB_S1'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FB']*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_1'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_1'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CC_S1'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FC']*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_1'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_1'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CD_S1'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FD']*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_1'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_1'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CE_S1'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FE']*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_1'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_2'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CA_S2'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FA']*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_2'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_2'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CB_S2'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FB']*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_2'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_2'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CC_S2'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FC']*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_2'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_2'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CD_S2'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FD']*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_2'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_2'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CE_S2'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FE']*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_2'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_3'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CA_S3'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FA']*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_3'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_3'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CB_S3'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FB']*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_3'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_3'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CC_S3'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FC']*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_3'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_3'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CD_S3'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FD']*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_3'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_3'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CE_S3'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FE']*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_3'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_4'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CA_S4'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FA']*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_4'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_4'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CB_S4'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FB']*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_4'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_4'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CC_S4'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FC']*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_4'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_4'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CD_S4'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FD']*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_4'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_4'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CE_S4'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FE']*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_4'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_5'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CA_S5'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FA']*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_5'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_5'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CB_S5'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FB']*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_5'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_5'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CC_S5'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FC']*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_5'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_5'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CD_S5'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FD']*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_5'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_5'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CE_S5'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FE']*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_5'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_6'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CA_S6'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FA']*self.scenario.dataset['BandA'][r]*self.scenario.dataset['SCORE_CLASS_6'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_6'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CB_S6'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FB']*self.scenario.dataset['BandB'][r]*self.scenario.dataset['SCORE_CLASS_6'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_6'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CC_S6'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FC']*self.scenario.dataset['BandC'][r]*self.scenario.dataset['SCORE_CLASS_6'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_6'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CD_S6'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FD']*self.scenario.dataset['BandD'][r]*self.scenario.dataset['SCORE_CLASS_6'][r]))+
(int(self.scenario.dataset['CAPITAL'][r])*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_6'][r]*int(self.scenario.constants['scaling_factor']*self.scenario.constants['recovery_rate_CE_S6'])*self.scenario.decisions[d].attributes['max_capital_recoverable']-int(self.scenario.constants['scaling_factor']*self.scenario.decisions[d].attributes['FE']*self.scenario.dataset['BandE'][r]*self.scenario.dataset['SCORE_CLASS_6'][r])))
*(1*self.scenario.constants['scaling_factor']-int(self.scenario.constants['scaling_factor']*self.scenario.dataset['SCORE'][r]))
)for r in self.all_records) for d in self.all_decisions)
where len(self.all_records) = 60,000 and len(self.all_decisions) = 7.
The language is Python.

The Python sum() built-in creates an unbalanced expression tree that is very expensive to parse for large objectives.
I suggest using cp_model.LinearExpr.ScalProd(variable_list, coefficient_list) instead (see the documentation entry).
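As a rough, self-contained sketch of that pattern (toy data only; the placeholder coefficients stand in for the scaled capital-recovery arithmetic in the question, which would be precomputed as plain Python ints before touching the model):

from ortools.sat.python import cp_model

model = cp_model.CpModel()

# Toy stand-ins for self.suggested_decisions[r][d]: one Boolean variable per (record, decision).
num_records, num_decisions = 4, 2
x = [[model.NewBoolVar(f'x_{r}_{d}') for d in range(num_decisions)] for r in range(num_records)]

# Precompute every coefficient as a plain Python int, outside the model.
# r + 10 * d is only a placeholder for the real per-record, per-decision coefficient.
variables = [x[r][d] for d in range(num_decisions) for r in range(num_records)]
coefficients = [r + 10 * d for d in range(num_decisions) for r in range(num_records)]

# One flat scalar product instead of nested Python sum() generators.
# Newer or-tools releases expose the same operation as LinearExpr.WeightedSum / weighted_sum.
model.Maximize(cp_model.LinearExpr.ScalProd(variables, coefficients))

Building the coefficients in plain Python keeps the model-side expression flat, which is what avoids the deep expression tree.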

Related

Try to MatchAll a string in Power Apps

I've been programming in Power Apps for some time now (but I'm not a specialist). I'm trying to pass an array from Power Automate to Power Apps.
I was able to do that with a string and after a few more steps I have the following text:
[{"Messung":1;"Tiegel leer":"16;5469";"Tiegel Probe":"27;6659";"Tiegel Asche":"17;6549"}]";"[{"Messung":2;"Tiegel leer":"16;9654";"Tiegel Probe":"26;5476";"Tiegel Asche":"17;6549"}]";"[{"Messung":3;"Tiegel leer":"17;0256";"Tiegel Probe":"27;9862";"Tiegel Asche":"18;2235"}"]
I would like to convert this back to a data table with a MatchAll command; unfortunately, I do not understand the Microsoft explanation for this command at all. I can convert the semicolons in the values back into commas later, but I needed the semicolons (language dependency, as I'm programming in German).
My plan was to do the conversion using a ClearCollect(MatchAll....) command.
I seem to be able to get these individually for my individual columns
(<Messung>; <Tiegel_leer>; <Tiegel_Probe> & <Tiegel_Asche>).
ClearCollect(ResultAG;MatchAll(Label40_1.Text;"(""Messung"":(?[^""]))"));;
ClearCollect(ResultAG;MatchAll(Label40_1.Text;"(""Tiegel Probe"":(?<Tiegel_Probe>[^""]))"));;
ClearCollect(ResultAG;MatchAll(Label40_1.Text;"(""Tiegel leer"":(?<Tiegel_leer>[^""]))"));;
ClearCollect(ResultAG;MatchAll(Label40_1.Text;"(""Tiegel Asche"":(?<Tiegel_Asche>[^""]))"))
But I can't connect these lines with each other and I still don't really understand it. These lines were also created more by trial and error than by understanding.
I would be glad if someone could help me with this.
Thanks for helping
Using MatchAll is an old and complex approach for parsing the array of objects (data) sent from Power Automate to Power Apps.
I would suggest using the recently released ParseJSON function in Power Apps instead of MatchAll for this.
For detailed information and a tutorial, check:
Power Fx: Introducing ParseJSON
ParseJSON function in Power Apps (experimental)
ParseJSON function in Power Apps - Video

API for getting language server info from extension (bracket pairs, function begin/end, ...)

I'm currently writing an extension for VS Code that needs good knowledge of the code currently shown in the editor. I'm wondering whether there is an API available that can give me the needed information (e.g. from the current language server), or whether I have to do the heavy lifting myself by implementing all the needed code parsing, etc.
What I need in detail is the following:
Given is a position in code (line + col no)
What I'd like to know about the given position:
Is pos inside a function and if so, where does the function start & end?
Is pos inside a string and if so, where does the string start & end?
The extension is going to provide some kind of "vim selection light".
You can have only half of that via VS Code APIs.
Is pos inside a function and if so, where does the function start & end?
Using the vscode.executeDocumentSymbolProvider command, you can gather all functions from a file and check if the current position is inside one of the functions.
Something like this to retrieve the functions:
import { commands, window, DocumentSymbol, SymbolKind } from 'vscode';

// Symbol kinds treated as "functions".
const symbolsToFind = [SymbolKind.Function, SymbolKind.Method, SymbolKind.Constructor];

// Ask the document's language extension for its symbol tree.
const docSymbols = await commands.executeCommand(
    'vscode.executeDocumentSymbolProvider',
    window.activeTextEditor.document.uri
) as DocumentSymbol[];

// Keep only the function-like symbols.
const docSymbolsFunctionsMethods = docSymbols
    ? docSymbols.filter(symbol => symbolsToFind.includes(symbol.kind))
    : undefined;
Each Symbol provides you with a Range, which defines the start and end of the function declaration and body.
Be aware that you will probably need a recursive approach (each Symbol can contain other Symbols). A complete sample is available on my Separators extension (https://github.com/alefragnani/vscode-separators/blob/b6d515847bbaccf6395b24f9fdf82c373cb24fd7/src/symbols.ts#L51)
Is pos inside a string and if so, where does the string start & end?
Unfortunately, there is no API for that, as VS Code does not expose language tokens or the AST. So, you will have to deal with it yourself, maybe using regex.
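If it helps, the check itself is simple once you have the line's text. A naive sketch of the idea (shown in Python only for brevity; a TypeScript version inside the extension would have the same shape, and real code would also need to handle escaped quotes):

import re

def string_span_at(line: str, col: int):
    # Return (start, end) of the quoted literal containing col, or None.
    # Naive: matches "..." or '...' without escape handling.
    for match in re.finditer(r'"[^"]*"|\'[^\']*\'', line):
        if match.start() <= col < match.end():
            return match.start(), match.end()
    return None

print(string_span_at('const s = "hello world";', 13))  # -> (10, 23)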
Hope this helps

Exporting the output of MATLAB's methodsview

MATLAB's methodsview tool is handy when exploring the API provided by external classes (Java, COM, etc.). Below is an example of how this function works:
myApp = actxserver('Excel.Application');
methodsview(myApp)
I want to keep the information in this window for future reference, by exporting it to a table, a cell array of strings, a .csv or another similar format, preferably without using external tools.
Some things I tried:
This window allows selecting one line at a time and doing "Ctrl+c Ctrl+v" on it, which results in a tab-separated text that looks like this:
Variant GetCustomListContents (handle, int32)
Such a strategy can work when there are only a few methods, but it is not viable for the usually-encountered long lists.
I could not find a way to access the table data via the figure handle (without using external tools like findjobj or uiinspect), as findall(0,'Type','Figure') does not "see" the methodsview window/figure at all.
My MATLAB version is R2015a.
Fortunately, the methodsview.m file is accessible, which gives some insight into how the function works. Inside is the following comment:
%// Internal use only: option is optional and if present and equal to
%// 'noUI' this function returns methods information without displaying
%// the table.
After some trial and error, I saw that the following works:
[titles,data] = methodsview(myApp,'noui');
... and returns two arrays of type java.lang.String[][].
From there I found a couple of ways to present the data in a meaningful way:
Table:
dataTable = cell2table(cell(data));
dataTable.Properties.VariableNames = matlab.lang.makeValidName(cell(titles));
Cell array:
dataCell = [cell(titles).'; cell(data)];
Important note: In the table case, the "Return Type" column title gets renamed to ReturnType, since table titles have to be valid MATLAB identifiers, as mentioned in the docs.

How to get triggers from .nxe files with FieldTrip Toolbox

I'm trying to analyse TMS-EEG data from Nexstim with the FieldTrip Toolbox. I want to make a trial matrix from my raw .nxe data, but how do I know which triggers to assign to cfg.trialdef.eventvalue, when cfg is the output variable? I'm trying to mimic the same kind of code as in the tutorial: http://www.fieldtriptoolbox.org/tutorial/tms-eeg
I came up with a solution to the problem. With the command event = ft_read_event('filename.nxe') I got a struct with the fields type, value, sample, duration and offset, which is all I need.

Sum of DOM elements using XPath

I am using MSXML v3.0 in a VB 6.0 application. The application calculates the sum of a child node's value across all Transaction nodes using a For Each loop, as shown below:
Set subNodes = docXML.selectNodes("//Transaction")
For Each subNode In subNodes
total = total + Val(subNode.selectSingleNode("Amount").nodeTypedValue)
Next
This loop is taking too much time; sometimes it takes 15-20 minutes for 60,000 nodes.
I am looking for an XPath/DOM solution to eliminate this loop, something like
docXML.selectNodes("//Transaction").Sum("Amount")
or
docXML.selectNodes("Sum(//Transaction/Amount)")
Any suggestion to get this sum faster is welcome.
using System;
using System.Xml.XPath;

// Open the XML.
XPathDocument docNav = new XPathDocument(@"c:\books.xml");
// Create a navigator to query with XPath.
XPathNavigator nav = docNav.CreateNavigator();
// Find the sum.
// This expression uses standard XPath syntax.
string strExpression = "sum(/bookstore/book/price)";
// Use the Evaluate method to return the evaluated expression.
Console.WriteLine("The price sum of the books is {0}", nav.Evaluate(strExpression));
source: http://support.microsoft.com/kb/308333
Any solution that uses the XPath // pseudo-operator on an XML document with 60000+ nodes is going to be quite slow, because //x causes a complete traversal of the tree starting at the root of the document.
The solution can be sped up significantly if a more specific XPath expression is used that doesn't include the // pseudo-operator.
If you know the structure of the XML document, always use a specific chain of location steps -- never //.
If you provide a small example, showing the specific structure of the document, then many people will be able to provide a faster solution than any solution that uses //.
For example, if it is known that all Transaction elements can be selected using this XPath expression:
/x/y/Transaction
then the evaluation of
sum(/x/y/Transaction/Amount)
is likely to be significantly faster than Sum(//Transaction/Amount)
Update:
The OP has revealed in a comment that the structure of the XML file is quite simple.
Accordingly, with an XML document containing 60,000 Transaction nodes, I tried the following:
/*/*/Amount
With .NET's XslCompiledTransform (yes, I used XSLT as the host for the XPath engine), producing the sum took 220 ms, i.e. 0.22 seconds.
With MSXML3 it takes 334 seconds.
With MSXML6 it takes 76 seconds -- still quite slow.
Conclusion: This is a bug in MSXML3 -- try to upgrade to another XPath engine, such as the one offered by .NET.
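For example, letting the XPath engine compute the aggregate itself takes only a few lines; the snippet below uses Python's lxml purely as an illustration of that idea (the library choice and file name are my assumptions, not something from the question):

from lxml import etree

# Parse once, then let the XPath engine evaluate the aggregate in native code.
doc = etree.parse('transactions.xml')               # hypothetical file name
total = doc.xpath('sum(/x/y/Transaction/Amount)')   # returns a Python float
print(total)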