I would like to find out how long a query took to execute. I plan to log this for audit and support purposes.
I have found this in the documentation:
q)\t log til 100000 / milliseconds for log of first 100000 numbers
3
But the above method evaluates the query again just to tell us the time; it doesn't return the query's results. So if I use this, every query will effectively run twice: once to get the results and once to get the execution time.
Is there any other method someone is aware of?
You could also capture the time before/after the query runs to figure out the execution time.
Execute on one line:
q)start:.z.p;result:log til 100000;exectime:.z.p-start
q)exectime
0D00:00:00.297268000
q)result
-0w 0 0.6931472 1.098612 1.386294 ...
This method will give you nanosecond precision, but it can easily be adapted to return the same value as \t:
q)res:system"t a:{st:.z.p;log til 10000000;.z.p-st}[]"
q)`long$`time$a / convert to ms
297
q)res
297
You can use a "system t" call (the equivalent of \t) to store the result and the time in one go.
b:system"t a:log til 100000"
It's not very general or functional though so it mightn't suit your needs to have the commands inside a string.
Expanding on Connor's idea, you can wrap this in a function that will return the value and print the time taken to stdout:
time:{ t0:.z.t; r:eval x; 0N!.z.t-t0; :r }
And then send the parse tree of your function as the argument:
q)a:time (log;til 100000)
00:00:00.003
q)a
-0w 0 0.6931472 1.098612 1.386294 1.609438 1.791759 1.94591 2.079442 2.197225..
I don't know if I'm posting to the right place or what but I was wondering if someone could help me with a Nightbot command I want to make.
I have the !uptime command and I have a !rage command which only retrieves one value. Now I'd like to combine the two into a command that would return one of 5 or 6 different values (stages of rage, in this case) depending on what value !uptime retrieves. So basically, if I have been streaming for an hour, !rage would say minimum, but if for 5 hours it would say critical or something.
How is this possible? Someone pls help
Go to https://nightbot.tv/commands/custom and simply add these two commands:
command : message : alias
!hour : $(eval var index = $(1); const options=["min","avg","max","babyRage"]; options[index] )
!rage : $(urlfetch https://beta.decapi.me/twitch/uptime/itsashawe) : !hour
NOTE: Remember to use your channel name instead of mine (itsashawe). Make sure you add !hour as the alias in the !rage command.
P.S.
I've used an API to make it simpler. The !rage command fetches the uptime and, via the alias, passes it to !hour, which receives the hours, minutes and seconds. !hour uses $(1), i.e. the first argument (the hour count), as an index into options, which you can change according to your use.
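One caveat: if the stream has been live longer than there are stages, $(1) will index past the end of options and the command will output nothing. A hedged, untested variant of !hour that clamps the index (same format as above):
!hour : $(eval var index = Math.min(parseInt('$(1)', 10) || 0, 3); const options=["min","avg","max","babyRage"]; options[index])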
With the code below, I was attempting to retrieve 250 observations but got only 177. The gap is due to the fact that the call only returns trading days, which is fine by me.
s = 'SX5E INDEX';
f = 'LAST_PRICE';
t = datestr(today()-250,'mm/dd/yy');
T = datestr(today(),'mm/dd/yy');
[dt,~] = history(con,s,f,t,T);
However, is there a way of retrieving the last 250 observations from today(), whatever the starting date t is?
Best
EDIT
@Daniel: Based on your suggestion, and going forward with the while loop, I've ended up with the workaround below, which is free from any MATLAB default calendar setting. Thanks
t = today()-250;              % keep t numeric so it can be decremented
p = 250;                      % target number of observations
[dt,~] = history(con,s,f,t,T);
l = length(dt);
while l ~= p
    n = p-l;                  % observations still missing
    t = t-n;                  % push the start date back by that many days
    [dt,~] = history(con,s,f,t,T);
    l = length(dt);
end
Maybe my idea to use isbusday wasn't well explained in the comments. Here is what I would try:
n = 250;                                    % calendar days to look back
m = n;                                      % business days required
while sum(isbusday(today()-n:today())) < m
    missing = m - sum(isbusday(today()-n:today()));
    n = n + missing;                        % widen the window and check again
end
Count the number of missing days, add them, and check again (in case the days you added contain another holiday).
You should end up with n, the total number of calendar days you have to query.
(Lacking the toolbox, I was unable to test the code)
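Once n is settled, it would plug back into the question's original call, along the lines of (same untested caveat):
t = datestr(today()-n,'mm/dd/yy');
[dt,~] = history(con,s,f,t,T);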
I'm trying to create an RHQ plugin to gather some measurements. It seems relatively easy to create a plugin that returns a value for the present moment. However, I need to collect these measurements from files. These files are created on a schedule, for example one per hour, but they contain much finer-grained measurements, for example one measurement for every minute. A file may look something like this:
18:00 20
18:01 42
18:02 39
...
18:58 12
18:59 15
Is it possible to create an RHQ plugin that can return many values, with timestamps, for a measurement?
I think that within org.rhq.core.pluginapi.measurement.MeasurementFacet#getValues you can return as many values as you want within the MeasurementReport.
So basically: open the file, seek to the last known position (if the file is always appended to), read from there, and for each line do
MeasurementData data = new MeasurementDataNumeric(timeInFile, request, valueFromFile);
report.addData(data);
Of course alerting on this (historical) data is sort of questionable, as if you only read the file one hour later, the alert can not be retroactively fired at the time the bad value happened :->
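For illustration, here is a fuller, untested sketch of that approach. The file path, the line parsing, and the assumption that all readings belong to the current day are mine, not RHQ API; the imports are as I recall them from the RHQ plugin API, so verify them against your plugin container version.
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.util.Calendar;
import java.util.Set;
import org.rhq.core.domain.measurement.MeasurementDataNumeric;
import org.rhq.core.domain.measurement.MeasurementReport;
import org.rhq.core.domain.measurement.MeasurementScheduleRequest;

@Override
public void getValues(MeasurementReport report, Set<MeasurementScheduleRequest> metrics) throws Exception {
    for (MeasurementScheduleRequest request : metrics) {
        // Hypothetical location of the hourly file; adjust to your component.
        File file = new File("/var/log/app/measurements.txt");
        BufferedReader reader = new BufferedReader(new FileReader(file));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                // Each line looks like "18:01 42": an HH:mm timestamp and a value.
                String[] parts = line.trim().split("\\s+");
                if (parts.length != 2) {
                    continue;
                }
                long timeInFile = toEpochMillis(parts[0]);
                Double valueFromFile = Double.valueOf(parts[1]);
                report.addData(new MeasurementDataNumeric(timeInFile, request, valueFromFile));
            }
        } finally {
            reader.close();
        }
    }
}

// Assumes the readings are from the current day; converts "HH:mm" to epoch millis.
private static long toEpochMillis(String hhmm) {
    String[] p = hhmm.split(":");
    Calendar cal = Calendar.getInstance();
    cal.set(Calendar.HOUR_OF_DAY, Integer.parseInt(p[0]));
    cal.set(Calendar.MINUTE, Integer.parseInt(p[1]));
    cal.set(Calendar.SECOND, 0);
    cal.set(Calendar.MILLISECOND, 0);
    return cal.getTimeInMillis();
}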
Yes, it is surely possible.
@Override
public void getValues(MeasurementReport report, Set<MeasurementScheduleRequest> metrics) throws Exception {
    for (MeasurementScheduleRequest request : metrics) {
        Double result = SomeReadUtilClass.readValueFromFile();
        MeasurementData data = new MeasurementDataNumeric(request, result);
        report.addData(data);
    }
}
SomeReadUtilClass is a utility class to read the file, and readValueFromFile is the function where you can write your logic to read the value from the file.
The important part is the Double result: you can calculate it from a database or read it from a file, and you then hand it to the MeasurementDataNumeric constructor as MeasurementDataNumeric(request, result).
When fields are nested, there is a problem.
foreach (Word.Field field in this.Application.ActiveDocument.Fields)
{
    field.Update();
    text = field.Result.Text;
}
The above code does not work.
The process starts but winds up in an endless loop or some other state that hangs the system.
Thinking about it, I can surmise that when you update a field, it might have an effect on the fields collection - thus, the loop fails.
Does anyone have any ideas on implementing this?
P.S. I know there is a Document.UpdateFields() method to update ALL fields. However, there are reasons why I cannot use this and need to only update specific field types.
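If the surmise about the collection changing underneath the loop is right, snapshotting the fields before touching them might sidestep it. A rough, untested sketch (using the same Word interop alias as above and System.Collections.Generic; note that if an update deletes and recreates fields, even snapshotted references can go stale):
var fields = new List<Word.Field>();
foreach (Word.Field f in this.Application.ActiveDocument.Fields)
{
    fields.Add(f);
}
foreach (Word.Field field in fields)
{
    // Only update the field types you actually need, e.g. merge fields.
    if (field.Type == Word.WdFieldType.wdFieldMergeField)
    {
        field.Update();
        string text = field.Result.Text;
    }
}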
My apologies! I was going to give an example of a nested field but wanted to test some more before sending anyone (Jack) on a wild-goose chase.
I waited and waited and waited, and after a good 2 or 3 minutes, it finished. After the last field, it crashed with this message:
Object has been deleted.
The error was generated from the following line inside the loop:
string text = field.Code.Text;
The template is being tested with merge fields that are not found, because I am testing without database connectivity. It would be odd, but explainable, if it went through all the fields and the very OUTER IF field's result ended up as "Error! Reference source not found." But I still don't get why this could happen.
Nor do I understand why looping takes 3 minutes while a call to document.Fields.Update() will do the same thing in about 1 second and NOT result in the error described above.
Again, my apologies. I never considered that updating inside a loop would be vastly slower than a call to doc.Fields.Update().
I am using MSXML v3.0 in a VB 6.0 application. The application calculates the sum of an Amount value across all Transaction nodes using a For Each loop, as shown below:
Set subNodes = docXML.selectNodes("//Transaction")
For Each subNode In subNodes
    total = total + Val(subNode.selectSingleNode("Amount").nodeTypedValue)
Next
This loop is taking too much time; sometimes it takes 15-20 minutes for 60,000 nodes.
I am looking for an XPath/DOM solution to eliminate this loop, perhaps something like
docXML.selectNodes("//Transaction").Sum("Amount")
or
docXML.selectNodes("Sum(//Transaction/Amount)")
Any suggestion to get this sum faster is welcome.
// Open the XML.
XPathDocument docNav = new XPathDocument(@"c:\books.xml");
// Create a navigator to query with XPath.
XPathNavigator nav = docNav.CreateNavigator();
// Find the sum.
// This expression uses standard XPath syntax.
string strExpression = "sum(/bookstore/book/price)";
// Use the Evaluate method to return the evaluated expression.
Console.WriteLine("The price sum of the books is {0}", nav.Evaluate(strExpression));
source: http://support.microsoft.com/kb/308333
Any solution that uses the XPath // pseudo-operator on an XML document with 60000+ nodes is going to be quite slow, because //x causes a complete traversal of the tree starting at the root of the document.
The solution can be sped up significantly if a more exact XPath expression is used that doesn't include the // pseudo-operator.
If you know the structure of the XML document, always use a specific chain of location steps -- never //.
If you provide a small example showing the specific structure of the document, then many people will be able to provide a faster solution than anything that uses //.
For example, if it is known that all Transaction elements can be selected using this XPath expression:
/x/y/Transaction
then the evaluation of
sum(/x/y/Transaction/Amount)
is likely to be significantly faster than sum(//Transaction/Amount).
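Applied to the question's VB 6 loop, the advice would look something like this untested sketch, where /x/y again stands in for the document's real structure (on MSXML3 the SelectionLanguage property must be set, because the default query language is the older XSLPattern):
docXML.setProperty "SelectionLanguage", "XPath"
Set subNodes = docXML.selectNodes("/x/y/Transaction/Amount")
For Each subNode In subNodes
    total = total + Val(subNode.nodeTypedValue)
Next
Selecting the Amount elements directly also removes the per-node selectSingleNode call, which is a second cost hidden in the original loop.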
Update:
The OP has revealed in a comment that the structure of the XML file is quite simple.
Accordingly, with an XML document of 60,000 Transaction nodes, I tried the following:
/*/*/Amount
With .NET's XslCompiledTransform (yes, I used XSLT as the host for the XPath engine) this took 220 ms, i.e. 0.22 seconds, to produce the sum.
With MSXML3 it takes 334 seconds.
With MSXML6 it takes 76 seconds -- still quite slow.
Conclusion: This is a bug in MSXML3 -- try to upgrade to another XPath engine, such as the one offered by .NET.