As I understand it, in the log we can only see discrete values; we cannot see the table of values in a series when using aggregation functions:
batch
    |query('''SELECT sum("gauge") *** ''')
    ***
    |mean('sum_gauge')
    |log()
Here log() returns a Kapacitor point with a value. But without the aggregation:
batch
    |query('''SELECT sum("gauge") *** ''')
        .period(1h)
        .every(10s)
        .align()
        .groupBy(time(15m), 'host')
        .fill(0)
    |log()
it shows only:
2018-05-10T13:19:20.084Z  kapacitor  begin batch
2018-05-10T13:19:20.084Z  kapacitor  batch point
2018-05-10T13:19:20.084Z  kapacitor  batch point
2018-05-10T13:19:20.084Z  kapacitor  batch point
According to https://github.com/influxdata/chronograf/blob/1.4.4.2/ui/src/kapacitor/components/LogsTableRow.js#L44, only the "msg" field is displayed in the Chronograf UI, but the log has more info (you can see it using kapacitor watch <task_id>), for example:
ts=2018-05-10T14:50:40.011Z lvl=info msg="batch point" service=kapacitor task_master=main task=14860f8d-8b6d-48d4-a7fc-b5cbea717b37 node=log3 prefix= name=cpu group=host=*** tag_host=*** field_*=*** time=2018-05-10T14:50:00Z
Does anyone know a method or tool for debugging such (batch) queries? In other monitoring stacks it is possible to graph the preprocessed points of an alert, for example in Bosun.
The "Alert rule builder" in Chronograf looks like what I need, but it has very limited functionality and you cannot create sophisticated alerts (e.g. with joins).
Chronograf >= 1.5.0 supports |log() for batch queries:
https://github.com/influxdata/chronograf/pull/3423
I am working on a side project where I use a set of views to identify record contention within a set of iSeries physical files.
What I would like to do, once a lock is identified, is pull the user profile locking the record, and then send an informational break message to their terminal.
What I have found is the QEZSNDMG API. It is simple enough to use interactively, but I'm trying to put together a command that would be used in conjunction with the QCMDEXC API to issue the call to QEZSNDMG and alert the user that they are locking a record.
Reviewing the IBM documentation of the QEZSNDMG API, I see that there are two sets of optional parameters, but none marked as required (which seems odd to me, but that's another topic for another day). Still, I continue to receive the error "Parameters passed on CALL do not match those required."
Here are some examples that I have tried from the command line so far:
CALL PGM(QEZSNDMG) PARM('*INFO' '*BREAK' 'TEST' '4' 'DOUGLAS' '1' '1' '-4')
CALL PGM(QEZSNDMG) PARM('*INFO' '*BREAK' 'TEST' '4' 'DOUGLAS')
CALL PGM(QEZSNDMG) PARM('*INFO' '*BREAK' 'TEST' '4' 'DOUGLAS' '1')
Note: I would like to avoid using a CL or RPG program if possible, but I understand it may come to that, using one of the many examples I found before posting. I just want to exhaust this option before going down that road.
Update
While logged in, I used WRKMSGQ to see the message queues assigned to my station. There were two: QSYS/DOUGLAS and QUSRSYS/DOUGLAS. I then issued SNDBRKMSG with no effect on my workstation (i.e., the message didn't break my session):
SNDBRKMSG MSG(TESTING) TOMSGQ(QSYS/DOUGLAS)
SNDBRKMSG MSG(TESTING) TOMSGQ(QUSRSYS/DOUGLAS)
I realized that if I provide the workstation session name in the TOMSGQ parameter, it works:
SNDBRKMSG MSG(TESTING) TOMSGQ(*LIBL/QPADEV0003)
Using SNDBRKMSG was what I was looking for.
Some nudging in the right direction led me to realize that the workstation session ID is located within QSYS2.RCD_LOCK, in the JOB_NAME field (job number/user name/workstation).
Extracting the workstation ID allowed me to build a correctly formatted SNDBRKMSG command for QCMDEXC and alert the user that they are locking a record needed by another process.
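A minimal SQL sketch of that last step: JOB_NAME is formatted number/user/workstation, so the text after the second slash is the session ID. The library and file names (MYLIB/MYFILE) are placeholders, and the column names other than JOB_NAME are assumptions about the QSYS2.RCD_LOCK view:

-- find the workstation(s) holding a lock on the file
SELECT SUBSTR(JOB_NAME,
              LOCATE('/', JOB_NAME, LOCATE('/', JOB_NAME) + 1) + 1)
         AS WORKSTATION
  FROM QSYS2.RCD_LOCK
 WHERE TABLE_SCHEMA = 'MYLIB'      -- placeholder library
   AND TABLE_NAME   = 'MYFILE';    -- placeholder file

-- then, per workstation returned, send the break message via QCMDEXC
CALL QSYS2.QCMDEXC('SNDBRKMSG MSG(''You are locking a record needed by another process'') TOMSGQ(*LIBL/QPADEV0003)');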
I am looking to trigger a series of processes, and I want to tell if each one succeeds or fails before starting the subsequent ones.
I am using tSSH (on Talend 6.4.1) to trigger a process, and I only want the job to continue if it succeeds. The tSSH component doesn't appear to fail if it receives a non-zero return code, so I have tried using an assert. However, even if the assert fails, it doesn't prevent the component and subjob from being "OK", which is a bit odd, so I can't use on-(component|subjob)-ok to link to the next job.
I don't seem to be able to find any conditional evaluation components which will allow me to stop the continuation of the job or subjob based on the evaluation result.
The only way I can find is to have

tSSH1 --IF globalMap.get("tSSH_1_EXIT_CODE").equals(0)--> tSSH2...
      --IF !globalMap.get("tSSH_1_EXIT_CODE").equals(0)--> (failure logging subjob)

which means coding the test twice, with negation.
Am I missing something, or are there no such conditional components?
You can put an If condition on the tSSH component for success/failure using the component's global variables, i.e.
((String)globalMap.get("tSSH_1_STDERR")) and ((String)globalMap.get("tSSH_1_STDOUT")).
The condition you can check is: if ((String)globalMap.get("tSSH_1_STDERR")) != null, call the error log; else call tSSH2.
Hope this helps...
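For reference, a sketch of the two Run If trigger expressions this implies; whether tSSH leaves STDERR null or merely empty on success is an assumption to verify on your version:

// Run If from tSSH_1 to tSSH_2 (success path): nothing on stderr
((String)globalMap.get("tSSH_1_STDERR")) == null
    || ((String)globalMap.get("tSSH_1_STDERR")).trim().isEmpty()

// Run If from tSSH_1 to the failure-logging subjob
((String)globalMap.get("tSSH_1_STDERR")) != null
    && !((String)globalMap.get("tSSH_1_STDERR")).trim().isEmpty()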
I have a RESTful API I am trying to consume using Talend.
In order to get data, two API calls are needed: the first generates an ID for your report, which you then use in a consecutive API call to get your data results.
The issue is that if the requested report in the 2nd API call has not yet finished, it returns
[{data:{string:"Requested report ### has not finished processing yet, please try again later"}}]
So I put in a tJava with Thread.sleep(5000) to stagger the 1st API call (tRestClient2) from the 2nd API call (tRestClient1), but I can foresee this being an issue.
What I want to do is evaluate the result of the 2nd tRest request (tFileOutputJSON_3), and if it equals "Requested report...", requeue the 2nd tRest request until the data is ready.
Here is a screenshot of my job.
As always, there are tons of solutions. But you are not far from what you want; the following design should meet your expectations.
I kept your components, as I don't really know what you are doing inside them (I removed the tLogRow to keep things concise), but I reorganized the scheduling links.
tJava
   |onSubjobOk
tRestClient --> tFileOutputJSON
   |onSubjobOk
tFileInputJSON --> tExtractJSONFields --> tJavaRow
   |onSubjobOk
tSetGlobalVar (1)
   |onSubjobOk
tLoop (2) --iterate (order 1)--> tRestClient --> tHashOutput (3)
   |--iterate (order 2)--> tHashInput (4) --> tJavaRow (5)
   |--iterate (order 3)--> tSleep (6)
   |onSubjobOk
tHashInput (7) --> tFileOutputJSON
1: Use a variable to manage the loop.
2: Use a While loop. Leave declaration and iteration blank (""), and put your condition using the previously initialized variable.
3: Do not use Append, as you want to fetch a new result at each loop.
4: Link it to your tHashOutput and do not clear the cache.
5: Do your work here. Do not forget to update the global variable.
6: Can be placed before the call if most calls require some time before the report is ready.
7: Link it to your tHashOutput too; you will be able to fetch the data that put an end to the loop.
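To make notes 1, 2 and 5 concrete, here is a sketch of the loop plumbing; the column name message and the global key reportReady are hypothetical:

// tSetGlobalVar (1): initialize the flag before the loop,
//   key "reportReady", value Boolean.FALSE

// tLoop (2), type While, condition:
//   !((Boolean)globalMap.get("reportReady"))

// tJavaRow (5): inspect the latest response and update the flag
String msg = input_row.message;                    // hypothetical field
if (msg != null && msg.startsWith("Requested report")) {
    globalMap.put("reportReady", Boolean.FALSE);   // not ready, loop again
} else {
    globalMap.put("reportReady", Boolean.TRUE);    // ready, loop ends
}
output_row.message = msg;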
I am using REXX to invoke JOBTRAC programmatically, which works; however, I am unable to pass JOBNAME arguments using this approach. Can this be done using REXX?
The idea is to find the history of a job run using the program JOBTRAC. We use JOBTRAC's schedule to find the history of when job runs happened. We invoke JOBTRAC using 'TSO JOBTRAC' and supply the history command 'H XXXXXX' on the command line (XXXXXX = job name).
I was thinking of routing the JOBTRAC info to a flat file and parsing it, so that I can do some reporting in real time during the job run. The above problem is also linked to the following scenario:
If I give DSLIST 'A.B.C.*' in the ISPF panel, it gives the series of datasets:
A.B.C.A
A.B.C.D
A.B.C.E
When I give "SAVE ORANGE", it stores this list under MYUSERID.ORANGE.DATASETS.
I know this can be automated programmatically and I have seen that done, but I don't have the code base to do it right now. This is much like the JOBTRAC issue I have.
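For the DSLIST part, a minimal REXX sketch using the ISPF library-management services LMDINIT/LMDLIST/LMDFREE (it must run in an ISPF environment; the level mask A.B.C.* is the one from the example above):

/* REXX: list datasets matching a level mask via ISPF services */
Address ISPEXEC
"LMDINIT LISTID(lid) LEVEL(A.B.C.*)"
do forever
  "LMDLIST LISTID("lid") OPTION(LIST) DATASET(DSN)"
  if rc <> 0 then leave          /* RC=8 means end of the list */
  say dsn                        /* or write each name to a flat file */
end
"LMDLIST LISTID("lid") OPTION(FREE)"
"LMDFREE LISTID("lid")"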
Here is some REXX code to aid understanding. I know this code is wrong; we cannot use outtrap for this, as it is used to capture console output.
say 'No. of month end jobs considered for history:' jobnames.0
if jobnames.0 > 0 then do
  do i = 1 to jobnames.0
    say jobnames.i
    jobname = Word(jobnames.i, 1)
    say 'jobname under consideration is' jobname
    tsocmd = "JOBTRAC;ADDLOC=000;H " || strip(jobname)
    say 'tso command is' tsocmd
    y = outtrap('jobdetails.')
    Address TSO "'tsocmd'"       /* wrong... I believe I have to use ISPEXEC */
    say 'job details are' jobdetails.6
  end
end
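Incidentally, as written the Address TSO line executes the literal string 'tsocmd' rather than the variable's value. A sketch of the usual outtrap pattern, under the assumption that JOBTRAC writes through normal TSO command output:

y = outtrap('jobdetails.')
Address TSO tsocmd               /* pass the variable's value, unquoted */
z = outtrap('off')               /* stop trapping */
say 'lines trapped:' jobdetails.0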
I have a task whose command in 'run' is the same except for a single value, and that value would come from a list of potential values. What I would like to do is create tasks from this list of values, using each value in the command defined in 'run'. The point is to avoid repeating nearly identical task definitions for each value.
For example: I want a task that will get the status of a single program from a list of programs I have defined in an array. I would like to define the task to be something like this:
set programs = %w["postfix", "nginx", "pgpool"]

programs.each do |program|
  desc "#{program} status"
  task :#{program} do
    run "/etc/init.d/#{program} status"
  end
end
This obviously doesn't work, but hopefully it shows what I am attempting here.
Thoughts?
Well, I answered my own question... with a little trial and error. I also did the same thing with namespaces, so the control of services is nice and elegant. It works quite nicely!
set :programs, %w[postfix nginx pgpool]
set :init_commands, %w[status start stop]

# init.d service control
init_commands.each do |init_command|
  namespace :"#{init_command}" do
    programs.each do |program|
      desc "#{program} #{init_command}"
      task :"#{program}" do
        run "/etc/init.d/#{program} #{init_command}"
      end
    end
  end
end
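With those definitions in place, each init command becomes a namespace and each program a task inside it, so (assuming a standard Capistrano 2 setup) invocation looks like:

cap status:nginx   # runs /etc/init.d/nginx status
cap stop:pgpool    # runs /etc/init.d/pgpool stop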