Is it possible to create an RHQ plugin that collects historic measurements from files?

I'm trying to create an RHQ plugin to gather some measurements. It seems relatively easy to create a plugin that returns a value for the present moment. However, I need to collect these measurements from files. The files are created on a schedule, for example one per hour, but they contain much finer-grained measurements, for example one per minute. A file may look something like this:
18:00 20
18:01 42
18:02 39
...
18:58 12
18:59 15
Is it possible to create an RHQ plugin that can return many values, each with its own timestamp, for a measurement?

I think you can, within org.rhq.core.pluginapi.measurement.MeasurementFacet#getValues, return as many values as you want within the MeasurementReport.
So basically: open the file, seek to the last known position (if the file is always appended to), read from there, and for each line do
MeasurementDataNumeric data = new MeasurementDataNumeric(timeInFile, request, valueFromFile);
report.addData(data);
Of course, alerting on this (historical) data is somewhat questionable: if you only read the file an hour later, the alert cannot retroactively fire at the time the bad value actually happened :->
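A minimal sketch of that approach, assuming the HH:mm file format from the question and the three-argument MeasurementDataNumeric constructor referred to above; the file path, the lastOffset field, and the parseLineTimestamp helper are illustrative, not part of the RHQ API:
import java.io.RandomAccessFile;
import java.util.Set;

import org.rhq.core.domain.measurement.MeasurementDataNumeric;
import org.rhq.core.domain.measurement.MeasurementReport;
import org.rhq.core.domain.measurement.MeasurementScheduleRequest;
import org.rhq.core.pluginapi.measurement.MeasurementFacet;

public class FileMetricsComponent implements MeasurementFacet {

    private long lastOffset; // byte position after the last line already reported

    @Override
    public void getValues(MeasurementReport report, Set<MeasurementScheduleRequest> metrics) throws Exception {
        for (MeasurementScheduleRequest request : metrics) {
            RandomAccessFile raf = new RandomAccessFile("/var/log/measurements.log", "r");
            try {
                raf.seek(lastOffset); // skip everything reported on earlier runs
                String line;
                while ((line = raf.readLine()) != null) {
                    String[] parts = line.trim().split("\\s+"); // e.g. "18:01 42"
                    if (parts.length != 2) {
                        continue; // skip malformed lines
                    }
                    long timestamp = parseLineTimestamp(parts[0]);
                    Double value = Double.valueOf(parts[1]);
                    // one datum per line, carrying its historical timestamp
                    report.addData(new MeasurementDataNumeric(timestamp, request, value));
                }
                lastOffset = raf.getFilePointer();
            } finally {
                raf.close();
            }
        }
    }

    // Illustrative helper: interpret "HH:mm" as a time earlier today, in epoch millis.
    private long parseLineTimestamp(String hhmm) {
        String[] hm = hhmm.split(":");
        java.util.Calendar cal = java.util.Calendar.getInstance();
        cal.set(java.util.Calendar.HOUR_OF_DAY, Integer.parseInt(hm[0]));
        cal.set(java.util.Calendar.MINUTE, Integer.parseInt(hm[1]));
        cal.set(java.util.Calendar.SECOND, 0);
        cal.set(java.util.Calendar.MILLISECOND, 0);
        return cal.getTimeInMillis();
    }
}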

Yes, it is certainly possible.
@Override
public void getValues(MeasurementReport report, Set<MeasurementScheduleRequest> metrics) throws Exception {
    for (MeasurementScheduleRequest request : metrics) {
        Double result = SomeReadUtilClass.readValueFromFile();
        MeasurementDataNumeric data = new MeasurementDataNumeric(request, result);
        report.addData(data);
    }
}
SomeReadUtilClass is a utility class that reads the file, and readValueFromFile is the function in which you can write your logic for reading the value from the file.
The Double result is the important part: you can compute this value from a database or read it from a file, and then pass it to the MeasurementDataNumeric constructor as MeasurementDataNumeric(request, result).

Related

How to use DISP=SHR on an fopen

With code like:
fopen("DD:LOGLIBY(L1234567)", "w");
and JCL like:
//LOGTEST EXEC PGM=LOGTEST
//LOGLIBY DD DSN=MYUSER.LOG.LIBY,DISP=SHR
I can create PDS(E) members while at the same time browsing the PDS(E) to look at existing members, as expected with DISP=SHR.
If instead I code:
fopen("//'MYUSER.LOG.LIBY(L1234567)'", "w");
The fopen fails if I am browsing the PDS(E) at the time, or the browse of the PDS(E) fails while I have the file open. In other words there is no DISP=SHR. According to the fopen() documentation, DISP=SHR is the default when using file modes of "r" etc, but not "w".
How can I provide DISP=SHR in the second example?
There are two possibilities...
For anyone not familiar with the internal structure of partitioned datasets (that is, PDS or PDS/E), these datasets are logically divided into two parts: a "directory" having pointers to all the individual members, and a "data" area containing the actual records for the individual members:
PDS: <DIRECTORY BLOCKS>
<MEMBER1>: ADDRESS OF DATA FOR MEMBER1 (xxx)
<MEMBER2>: ADDRESS OF DATA FOR MEMBER2 (yyy)
...
<DIRECTORY FREESPACE>
...
<EOF - END OF THE PDS DIRECTORY>
<DATA PORTION>
+xxx = DATA FOR MEMBER1
...
<EOF - END OF MEMBER1>
+yyy = DATA FOR MEMBER2
...
<EOF - END OF MEMBER2>
...
FREE SPACE (ALLOCATED, BUT UNUSED)
...
END OF PDS
Throughout the next few paragraphs, keep in mind that you can either open the entire PDS/PDSE, which enables you to read/write whatever members you like, or you can allocate and open a single member, which then gets processed like any other sequential file.
First, if you actually have a DD statement coded as you show in the question, then you may simply need to change your open from fopen(dsn,...) to fopen("DD:ddname",...). If you're running under the UNIX shell, or you do something that results in your process running in a different address space (such as fork()), then this might not work, but it may be worth a try. If you do this with the JCL you show, the challenge will be managing the PDS/E directory - you'd need to issue your own "STOW" when you create/update a member, since the JCL allocates the entire dataset, not just a single member. The sequence would be:
Open the DD for output.
Write your data.
Update the PDS or PDS/E directory with new member information (this is where the STOW function comes in - it updates the directory of the PDS/PDSE to reflect the member you created or updated).
Close the file.
If you also need to read members, you'd need to issue FIND (or BLDL/POINT - which can be fseek() in C) to point to the correct member, then read the member. I'm sure it sounds like a hassle, but the advantage of this approach is that you can allocate/open the file once, and process as many individual members as you like.
A second workaround might be to dynamically allocate the file yourself, and then open it using DD:ddname syntax...if you only infrequently access the file, this is probably easier to code. The gory details of dynamic allocation are fully described here: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.4.0/com.ibm.zos.v2r4.ieaa800/reqsvc.htm.
There are several ways to invoke dynamic allocation: you can write a small assembler program, you can use the z/OS UNIX Services BPXWDYN callable service, or you can use the C runtime "dynalloc()" or "svc99()" functions. The dynalloc() function is easy to use, but it only exposes a subset of what dynamic allocation can do...svc99() is more cumbersome to use, but it exposes more functionality.
However you do it, dynamic allocation takes "text units" that roughly correspond to the parameters you find on JCL DD statements. What you're describing sounds like you'd just need to pass DSN and DISP text units, and maybe DDNAME (you can either pass your own DDNAME, or let the system generate one for you).
The C runtime functions make all this easy, but be aware that there are a few oddities, such as the need to pad the parameters to their maximum length. For example, a DSN needs to be 44 characters and padded on the right with blanks - not a C-style null-terminated string.
Here's a small code snippet as an example:
#include <dynit.h>
. . .
int allocate(char *ddn, char *dsn, char *mem)
{
    __dyn_t ip;                   // Parameters to dynalloc()
    . . .
    // Prepare the parameters to dynalloc()
    dyninit(&ip);                 // Initialize the parameters
    ip.__ddname = ddn;            // 8-char blank-padded
    ip.__dsname = dsn;            // 44-char blank-padded
    ip.__status = __DISP_SHR;     // DISP=(SHR)
    ip.__normdisp = __DISP_KEEP;  // DISP=(...,KEEP)
    ip.__misc_flags = __CLOSE;    // FREE=CLOSE
    if (*mem)                     // Optional PDS, PDS/E member
        ip.__member = mem;        // 8-char blank-padded
    // Now we can call dynalloc()...
    if (dynalloc(&ip))            // 0: Success, else error
    {
        // On error, the errcode/infocode explain why - values
        // are detailed in z/OS Authorized Services Reference
        printf("SVC99: Can't allocate %s - RC 0x%x, Info 0x%x\n",
               dsn, ip.__errcode, ip.__infocode);
        return FALSE;
    }
    // If dynalloc works, you can open the file with fopen("DD:ddname",...)
}
Don't forget that when you're done with the file, you generally need to deallocate it.
The code snippet above uses "FREE=CLOSE" - this means that when the file is closed, z/OS will automatically free the allocation...if you only open and process the dataset once, this is a convenient approach. If you need to repeatedly open and close the file, then you wouldn't use FREE=CLOSE, but instead call dynamic allocation a second time after you're done with your processing and want to free the file.
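For illustration only, a hedged sketch of a caller (the DD, dataset, and member names are made up, and it assumes allocate() above is completed to return TRUE on success); note the fixed-width, blank-padded names that dynamic allocation expects:
#include <stdio.h>

extern int allocate(char *ddn, char *dsn, char *mem); // the sketch above

int main(void)
{
    char ddn[9], dsn[45], mem[9];
    sprintf(ddn, "%-8s", "LOGLIB");           // 8 chars, blank-padded
    sprintf(dsn, "%-44s", "MYUSER.LOG.LIBY"); // 44 chars, blank-padded
    sprintf(mem, "%-8s", "L1234567");         // 8 chars, blank-padded
    if (!allocate(ddn, dsn, mem))
        return 1;
    // The allocation carried DISP=SHR, so this open won't lock out browsers
    FILE *fp = fopen("DD:LOGLIB", "w");
    if (fp != NULL) {
        fputs("log record\n", fp);
        fclose(fp);                           // FREE=CLOSE frees the allocation here
    }
    return 0;
}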
If you need to concurrently access multiple files, beware that you'll need to generate multiple unique DDNAMEs. You can either do this in your own code, or you can use the form of dynamic allocation that automatically builds and returns a usable DDNAME (of the form "SYSnnnnn").
Also, don't forget that updating a dataset under DISP=SHR can be dangerous in some situations, especially if the dataset involved can be a conventional PDS as well as a PDS/E. The big danger is that two applications open the dataset for output concurrently...both will write data into the same place, and the result will likely be a damaged PDS directory.
There are some other oddities in the UNIX Services environment, particularly if you use fork() or exec() and expect file handles to work in subprocesses since allocations are generally tied to a particular z/OS address space. Services like spawn() can let the child process run in the same address space, so this is one possibility.

Scala Gatling: keep injecting users until end of feed file is reached

I'm trying to inject users into a scenario in such a way that Gatling keeps injecting users until every single entry in the feed file has been used, since the feed file contains login information and I would like every user in it to log in. Right now, I can only think of two possible approaches.
Here I inject at once as many users as there are entries in the feed file:
scenario("Verified_Login")
.exec(LoginScenario.scn)
.inject(atOnceUsers(number_of_entries_in_feedfile))
Here I inject users over a very long duration, for example 100 seconds, and make the feed file circular:
scenario("Verified_Login")
.exec(LoginScenario.scn)
.inject(atOnceUsers(1), constantUsersPerSec(1) during (100 seconds))
The problem with the first approach is that I have to find the number of entries in the feed file, which can be tedious as there could be thousands. The problem with the second is that entries could, and probably will, be repeated. So is there a way to keep injecting users until the feed file runs out of entries?
According to this source from last year, Stéphane Landelle - the lead contributor of Gatling - says that you must provide enough data for a simulation to complete when using this method.
The post I linked from Stéphane does suggest simply reading the length of the file and using that to drive the number of users, as you already mentioned in your question.
I suggest you read the post, as it gives an alternate way of achieving what you want. It seems to be as close as you will ever get, unless things have changed.
Here is their code (lightly reconstructed, since the snippet was garbled in transit; the idea is that each of the injected users repeats enough times for the whole feed to be consumed exactly once):
val systemsIdentifier = jdbcFeeder(databaseUrl, databaseUser, databasePassword, sql_systemsIdentifier)
val recordCount = systemsIdentifier.records.size
val userCount = 10 // however many concurrent users you want
val comScn = scenario("My scenario")
  .repeat(recordCount / userCount) {
    feed(systemsIdentifier)
      .exec(performActionsChain)
  }
setUp(comScn.inject(rampUsers(userCount) over (60 seconds))).protocols(httpConf)
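Applied to the login case in the question, a minimal sketch of the same idea (the feed file name is made up; .records is the Gatling 2 API, newer versions spell it readRecords):
val loginFeeder = csv("logins.csv")       // one login per row
val entryCount = loginFeeder.records.size // no manual counting needed

val scn = scenario("Verified_Login")
  .feed(loginFeeder)
  .exec(LoginScenario.scn)

setUp(scn.inject(atOnceUsers(entryCount))).protocols(httpConf)
Since each virtual user consumes exactly one feeder record, injecting entryCount users logs in every entry exactly once.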

Updating existing data on a Google spreadsheet using a form?

I want to build a kind of automatic system to update some race results for a championship. I have an automated spreadsheet where all the results are shown, but it takes me a long time to update all of them, so I was wondering if it would be possible to make a form in order to update them more easily.
In the form I will enter the driver's name and the number of points he won in a race. The championship has 4 races each month, so my question is whether you know a way to update existing data (stored in a spreadsheet) using a form. Let's say that in the first race driver 'X' won 10 points. I insert this data in a form and then pull it into the spreadsheet to display it; so far so good. The problem comes when I want to update the second race results and so on. If driver 'X' gets 12 points in the second race, is there a way to update his previous 10 points and put 22 points instead? Or can I add the second race result to the first one automatically? I mean, if I insert the second race results in the form, can it look for the driver 'X' entry and add these points to the ones he previously had? I don't know if this is possible or not.
Maybe I can do it in another way. Any help will be much appreciated!
Thanks.
Maybe I missed something in your question but I don't really understand Harold's answer...
Here is code that does strictly what you asked for: it keeps a cumulative total of 4 numbers entered in a form and shows it in a spreadsheet.
I called the 4 questions "race number 1", "race number 2", etc., and the result goes in row 2 so you can set up headers.
I stripped out any non-numeric characters so you can type responses more freely; only numbers are retained.
form here and SS here (raw results in sheet1 and count in Sheet2)
The script goes in the spreadsheet and is triggered by an onFormSubmit trigger.
function onFormSubmit(e) {
  var sh = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sheet2');
  var responses = [];
  responses[0] = Number(e.namedValues['race number 1'].toString().replace(/\D/g,''));
  responses[1] = Number(e.namedValues['race number 2'].toString().replace(/\D/g,''));
  responses[2] = Number(e.namedValues['race number 3'].toString().replace(/\D/g,''));
  responses[3] = Number(e.namedValues['race number 4'].toString().replace(/\D/g,''));
  var totals = sh.getRange(2,1,1,responses.length).getValues();
  for(var n in responses){
    totals[0][n] += responses[n];
  }
  sh.getRange(2,1,1,responses.length).setValues(totals);
}
Edit: I changed the code so you can easily change the number of responses... the range will update automatically.
Edit 2: a version that accepts empty responses, using an "if" (ternary) condition on each result:
function onFormSubmit(e) {
  var sh = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sheet2');
  var responses = [];
  responses[0] = Number((e.namedValues['race number 1']==null ? 0 : e.namedValues['race number 1']).toString().replace(/\D/g,''));
  responses[1] = Number((e.namedValues['race number 2']==null ? 0 : e.namedValues['race number 2']).toString().replace(/\D/g,''));
  responses[2] = Number((e.namedValues['race number 3']==null ? 0 : e.namedValues['race number 3']).toString().replace(/\D/g,''));
  responses[3] = Number((e.namedValues['race number 4']==null ? 0 : e.namedValues['race number 4']).toString().replace(/\D/g,''));
  var totals = sh.getRange(2,1,1,responses.length).getValues();
  for(var n in responses){
    totals[0][n] += responses[n];
  }
  sh.getRange(2,1,1,responses.length).setValues(totals);
}
I believe you can find everything you want here.
It's a form URL; when you answer the form, you'll get the URL of the spreadsheet where the data are stored. One of the pieces of information stored is the URL for modifying your response; if you follow that link, it will open the form again and update the spreadsheet accordingly. The code for this trick is in the second sheet of the spreadsheet.
It's Google Apps Script code that needs to be associated with the form and triggered by an onFormSubmit trigger.
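A hedged sketch of that trick with a script bound to the form itself (SPREADSHEET_ID and the sheet name are placeholders), triggered by onFormSubmit; it logs each submission's edit URL so a result can be corrected later:
function onFormSubmit(e) {
  // For form-bound triggers, e.response is the FormResponse just submitted
  var editUrl = e.response.getEditResponseUrl(); // reopening this URL edits the same response
  var sh = SpreadsheetApp.openById('SPREADSHEET_ID').getSheetByName('Sheet1');
  sh.appendRow([new Date(), editUrl]); // keep the link next to the data
}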
It may be too late now, but I believe you need a few things (I have not tried it; see the sketch after the list):
A unique key to map each submitted response, such as User's ID or email.
Two Google Forms:
a. To request the unique key
b. To retrieve relevant data with that unique key
Create a pre-filled URL (See http://www.cagrimmett.com/til/2016/07/07/autofill-google-forms.html)
Open the URL from your form (See Google Apps Script to open a URL)
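A rough Apps Script sketch of step 3 (FORM_ID is a placeholder, and it assumes the first text item in the form is the unique-key question):
function buildPrefilledUrl(uniqueKey) {
  var form = FormApp.openById('FORM_ID');
  var textItems = form.getItems(FormApp.ItemType.TEXT);
  var response = form.createResponse()
      .withItemResponse(textItems[0].asTextItem().createResponse(uniqueKey));
  // Opening this URL shows the form with the unique key already filled in
  return response.toPrefilledUrl();
}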

Manipulating form input values after submission causes multiple instances

I'm building a form with Yii that updates two models at once.
The form takes the inputs for each model as $modelA and $modelB and then handles them separately as described here http://www.yiiframework.com/wiki/19/how-to-use-a-single-form-to-collect-data-for-two-or-more-models/
This is all good. The difference from the example is that $modelA (documents) has to be saved and its ID retrieved, and then $modelB has to be saved including the ID from $modelA, as they are related.
There's an additional twist that $modelB has a file which needs to be saved.
My action code is as follows:
if (isset($_POST['Documents'], $_POST['DocumentVersions']))
{
    $modelA->attributes = $_POST['Documents'];
    $modelB->attributes = $_POST['DocumentVersions'];
    $valid = $modelA->validate();
    $valid = $modelB->validate() && $valid;
    if ($valid)
    {
        $modelA->save(false); // don't validate as we validated above.
        $newdoc = $modelA->primaryKey; // get the ID of the document just created
        $modelB->document_id = $newdoc; // set the document_id of the DocumentVersions to be $newdoc
        // todo: set the filename to some long hash
        $modelB->file = CUploadedFile::getInstance($modelB, 'file');
        // finish set filename
        if ($modelB->save(false)) {
            $modelB->file->saveAs(Yii::getPathOfAlias('webroot').'/uploads/'.$modelB->file);
        }
        $this->redirect(array('projects/myprojects', 'id' => $_POST['project_id']));
    }
}
else {
    $this->render('create', array(
        'modelA' => $modelA,
        'modelB' => $modelB,
        'parent' => $id,
        'userid' => $userid,
        'categories' => $categoriesList,
    ));
}
You can see that I push the new values for 'file' and 'document_id' into $modelB. This all works, but... each time I push one of these values into $modelB, I seem to get a new instance of $modelA. The net result: I get 3 new documents and 1 new version. The new version is linked up correctly, but the other two documents are just straight duplicates.
I've tested removing the $modelB update steps, and sure enough, for each one removed, a copy of $modelA (or at least the resulting database entry) is removed.
I've no idea how to prevent this.
UPDATE....
As I put in a comment below, further testing shows the number of instances of $modelA depends on how many times the form has been submitted. Even if other pages/views are accessed in the meantime, if the form is resubmitted within a short period of time, each submission adds an extra entry to the database. If this were due to some form of persistence, I'd expect to get an extra copy of the PREVIOUS model, not multiples of the current one. So I suspect something in the way it's saving, like some counter that's incrementing, but I've no idea where to look for this, or how to zero it each time.
Some help would be much appreciated.
thanks
JMB
OK, I had Ajax validation set to true. This was calling the create action and inserting entries. I don't fully understand this, or how I could use Ajax validation without this effect if I really wanted to, but... at least the two-model insert with the relationship works.
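For what it's worth, the usual Yii 1.1 guard against exactly this (a sketch; 'documents-form' stands in for whatever ID your CActiveForm widget uses) is to answer Ajax validation requests at the top of the action, before any save logic runs:
// Early in actionCreate(), before the save code above:
if (isset($_POST['ajax']) && $_POST['ajax'] === 'documents-form')
{
    echo CActiveForm::validate(array($modelA, $modelB)); // validate both models
    Yii::app()->end(); // stop here so the Ajax request never reaches save()
}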
Thanks for the comments.
cheers
JMB

Sum of DOM elements using XPath

I am using MSXML v3.0 in a VB 6.0 application. The application calculates the sum of a value across all nodes, using a For Each loop as shown below:
Set subNodes = docXML.selectNodes("//Transaction")
For Each subNode In subNodes
total = total + Val(subNode.selectSingleNode("Amount").nodeTypedValue)
Next
This loop takes too much time, sometimes 15-20 minutes for 60 thousand nodes.
I am looking for an XPath/DOM solution to eliminate this loop, probably something like:
docXML.selectNodes("//Transaction").Sum("Amount")
or
docXML.selectNodes("Sum(//Transaction/Amount)")
Any suggestion for computing this sum faster is welcome.
// Open the XML.
XPathDocument docNav = new XPathDocument(@"c:\books.xml");
// Create a navigator to query with XPath.
XPathNavigator nav = docNav.CreateNavigator();
// Find the sum.
// This expression uses standard XPath syntax.
string strExpression = "sum(/bookstore/book/price)";
// Use the Evaluate method to return the evaluated expression.
Console.WriteLine("The price sum of the books is {0}", nav.Evaluate(strExpression));
source: http://support.microsoft.com/kb/308333
Any solution that uses the XPath // pseudo-operator on an XML document with 60000+ nodes is going to be quite slow, because //x causes a complete traversal of the tree starting at the root of the document.
The evaluation can be sped up significantly if a more exact XPath expression is used that doesn't include the // pseudo-operator.
If you know the structure of the XML document, always use a specific chain of location steps -- never //.
If you provide a small example, showing the specific structure of the document, then many people will be able to provide a faster solution than any solution that uses //.
For example, if it is known that all Transaction elements can be selected using this XPath expression:
/x/y/Transaction
then the evaluation of
sum(/x/y/Transaction/Amount)
is likely to be significantly faster than sum(//Transaction/Amount).
Update:
The OP has revealed in a comment that the structure of the XML file is quite simple.
Accordingly, I tried an XML document with 60000 Transaction nodes, summing the amounts with:
sum(/*/*/Amount)
With .NET's XslCompiledTransform (yes, I used XSLT as the host for the XPath engine), this took 220ms (milliseconds), that is 0.22 seconds, to produce the sum.
With MSXML3 it takes 334 seconds.
With MSXML6 it takes 76 seconds -- still quite slow.
Conclusion: This is a bug in MSXML3 -- try to upgrade to another XPath engine, such as the one offered by .NET.
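If you must stay on MSXML from VB 6.0, one workaround (a sketch, untested; the file name is made up, and the /x/y/Transaction structure follows the example above) is to host the sum() expression in a tiny XSLT stylesheet and let transformNode evaluate it, which keeps the 60000-iteration loop out of VB itself:
Dim docXML As New MSXML2.DOMDocument30
Dim xslt As New MSXML2.DOMDocument30
Dim total As Double

docXML.async = False
docXML.Load "c:\transactions.xml"

' A minimal stylesheet whose entire text output is the sum
xslt.async = False
xslt.loadXML "<xsl:stylesheet version=""1.0"" xmlns:xsl=""http://www.w3.org/1999/XSL/Transform"">" & _
             "<xsl:output method=""text""/>" & _
             "<xsl:template match=""/"">" & _
             "<xsl:value-of select=""sum(/x/y/Transaction/Amount)""/>" & _
             "</xsl:template>" & _
             "</xsl:stylesheet>"

' transformNode returns the stylesheet's text output as a String
total = Val(docXML.transformNode(xslt))
Whether this beats upgrading the XPath engine depends on MSXML's XSLT implementation, so measure before committing to it.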