How to get a list of workflows based on some filtering, like timeout/workflowType - cadence-workflow

From https://github.com/uber/cadence/issues/3820
It's common that people want to get a list of workflows based on some filtering.
For example, to get a list of workflows to reset or to run some other operation on.

In the Web UI, you can use the status dropdown to select "timeout" workflows.
With the CLI, you can use the command:
./cadence --domain sample-domain wf list --status timeout
You can add filtering on workflow type:
./cadence --domain sample-domain wf list --wt "MyWorkflow" --status timeout
If you have advanced visibility enabled, you can run a more customized query:
./cadence --domain sample-domain wf list --query "CloseStatus=5 AND SOME_OTHER_FILTERS"
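If you want to feed the list into another operation (for example, a bulk reset), one option is to call the CLI from a small script with --pjson and parse the output. Below is a minimal untested sketch in Python; the JSON field names ("execution", "workflowId") are assumptions, so inspect the --pjson output of your CLI version before relying on them:
import json
import subprocess

# Hypothetical helper: collect the IDs of timed-out workflows of one type.
# Assumes the cadence CLI is on PATH; the JSON field names are a guess.
def list_timed_out(domain, workflow_type):
    cmd = [
        "cadence", "--domain", domain, "workflow", "list",
        "--wt", workflow_type, "--status", "timeout", "--pjson",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return [w["execution"]["workflowId"] for w in json.loads(out)]

print(list_timed_out("sample-domain", "MyWorkflow"))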
Feel free to explore more options of the list command with --help:
./cadence --domain sample-domain wf list --help
NAME:
cadence workflow list - list open or closed workflow executions
USAGE:
cadence workflow list [command options] [arguments...]
DESCRIPTION:
list one page (default size 10 items) by default, use flag --pagesize to change page size
OPTIONS:
--print_raw_time, --prt Print raw timestamp
--print_datetime, --pdt Print full date time in '2006-01-02T15:04:05Z07:00' format
--print_memo, --pme Print memo
--print_search_attr, --psa Print search attributes
--print_full, --pf Print full message without table format
--print_json, --pjson Print in raw json format
--open, --op List for open workflow executions, default is to list for closed ones
--earliest_time value, --et value EarliestTime of start time, supported formats are '2006-01-02T15:04:05+07:00', raw UnixNano and time range (N<duration>), where 0 < N < 1000000 and duration (full-notation/short-notation) can be second/s, minute/m, hour/h, day/d, week/w, month/M or year/y. For example, '15minute' or '15m' implies last 15 minutes.
--latest_time value, --lt value LatestTime of start time, supported formats are '2006-01-02T15:04:05+07:00', raw UnixNano and time range (N<duration>), where 0 < N < 1000000 and duration (in full-notation/short-notation) can be second/s, minute/m, hour/h, day/d, week/w, month/M or year/y. For example, '15minute' or '15m' implies last 15 minutes
--workflow_id value, --wid value, -w value WorkflowID
--workflow_type value, --wt value WorkflowTypeName
--status value, -s value Closed workflow status [completed, failed, canceled, terminated, continuedasnew, timedout]
--query value, -q value Optional SQL like query for use of search attributes. NOTE: using query will ignore all other filter flags including: [open, earliest_time, latest_time, workflow_id, workflow_type]
--more, -m List more pages, default is to list one page of default page size 10
--pagesize value, --ps value Result page size (default: 10)


How to set up kdb RDB to only subscribe to certain tables in tickerplant

I want to set up 2 RDB instances using the same script, where one instance subscribes to 2 tables and the other instance subscribes to a separate table in the tickerplant. I am trying to manipulate .u.sub, but with no success.
if[system"p"=RDB_INSTANCE_1;
.u.rep .(hopen `$":",.u.x 0)"({.u.sub[x;`]} each `trade`quote;`.u `i`L)";
];
if[system"p"=RDB_INSTANCE_2;
.u.rep .(hopen `$":",.u.x 0)"(.u.sub[`aggTradeStats;`];`.u `i`L)";
];
or
?[system"p"=RDB_INSTANCE_1;..u.rep .(hopen `$":",.u.x 0)"({.u.sub[x;`]} each `trade`quote;`.u `i`L)"; .u.rep .(hopen `$":",.u.x 0)"(.u.sub[`aggTradeStats;`];`.u `i`L)"];
Any idea how I can achieve this?
The vanilla tick scripts aren't that flexible when it comes to subscription options, but without going too far down the rabbit hole, the few quick changes below should allow you to do what you asked (I haven't tested the below further than initiating the instances).
.u.sub changes to the below to allow subscription to all tables (`), a list of tables, or a single table:
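/ ` subscribes to all tables, a symbol list recurses over each table, and a single symbol falls through to the original one-table logic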
{$[x~`;.z.s[;y]each t;1<count x;.z.s[;y]each x;[if[not x in t;'x];del[x].z.w;add[x;y]]]};
r.q changes to the below, where the required tables are passed in via a command-line flag. I've removed references to .z.x and parsed the various flags. The last line builds out the list of tables in string format (not very elegant, but it's a quick solution).
/q tick/r.q [host]:port[:usr:pwd] [host]:port[:usr:pwd]
/2008.09.09 .k ->.q
if[not "w"=first string .z.o;system "sleep 1"];
upd:insert;
args:.Q.opt .z.x;
/ get the ticker plant and history ports, defaults are 5010,5012
/.u.x:.z.x[0],(count .z.x 0)_(":5010";":5012");
/ end of day: save, clear, hdb reload
.u.end:{t:tables`.;t#:where `g=attr each t#\:`sym;.Q.hdpf[`$"::",first args`hdb;`:.;x;`sym];#[;`sym;`g#] each t;};
/ init schema and sync up from log file;cd to hdb(so client save can run)
.u.rep:{if[0>type first x;x:enlist x];(.[;();:;].)each x;if[null first y;:()];-11!y;system "cd ",1_-10_string first reverse y};
/ HARDCODE \cd if other than logdir/db
/ connect to ticker plant for (schema;(logcount;log))
.u.rep .(hopen `$"::",first args`tp)"(.u.sub[`",("`" sv args`tabs),";`];`.u `i`L)";
Then start up the RDBs as follows:
ec2-user#/home/ec2-user $ ## RDB 1 table
ec2-user#/home/ec2-user $ q tick/r.q -tp 5010 -hdb 6000 -tabs trade -q
tables[]
,`trade
ec2-user#/home/ec2-user $ # RDB 2 tables
ec2-user#/home/ec2-user $ q tick/r.q -tp 5010 -hdb 6000 -tabs trade quote -q
tables[]
`quote`trade
ec2-user#/home/ec2-user $ ## RDB all tables
ec2-user#/home/ec2-user $ q tick/r.q -tp 5010 -hdb 6000 -q
tables[]
`s#`other`quote`trade
Check the subscription dict back on the tickerplant:
q).u.w
other| ,(9i;`)
quote| ((8i;`);(9i;`))
trade| ((7i;`);(8i;`);(9i;`))
Hope this helps; as I said, I haven't tested beyond this point.
Jason

Redis Hashes store with new line key value

I want to store data in Redis Hashes. Data is as below (Key = Value):
30.2.25=REF_IP
30.2.24=MY_HOST_IP
30.2.32=PEER_IP
30.2.32=IM_USER_MY_HOST
30.2.2=23992
The easy way to store this info in Redis is below:
hmset info 30.2.25 REF_IP 30.2.24 MY_HOST_IP 30.2.32 PEER_IP 30.2.32 IM_USER_MY_HOST 30.2.2 23992
Considering I have thousands of key-value pairs and want to change a few (actually quite a lot of) values in one go, searching and editing values in the above command is too painful.
I want some way to execute the command in the manner below, that is, a nicely formatted command with a new line after every key-value pair:
hmset info
30.2.25 REF_IP
30.2.24 MY_HOST_IP
30.2.32 PEER_IP
30.2.32 IM_USER_MY_HOST
30.2.2 23992
Is it possible to do so?
Currently, when I copy and paste the above formatted command, it ignores the text after the new line and gives the error below, which is expected because the arguments are wrong due to the new line:
hmset info
(error) ERR wrong number of arguments for 'hmset' command
Can anyone help, please? Thanks.
Assuming you are talking about using redis-cli, there is no way to support this at the moment. There is an open issue for this. See https://github.com/antirez/redis/issues/3474
As of Redis 4.0.0, HMSET is considered deprecated. You should use HSET instead. https://redis.io/commands/hset
You can use a transaction if you want to ensure all HSETs are done at the same time, and still enter them one line at a time.
MULTI
HSET info 30.2.25 REF_IP
HSET info 30.2.24 MY_HOST_IP
...
EXEC
The commands will be sent to the server one line at a time, but they are queued and only executed at the EXEC command.
You may use another client, say in Python, and then do something fancier as well to condense your field-value hsets into one command.
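For example, here is a minimal sketch with the redis-py client (an assumption; any client will do), assuming Redis runs locally on the default port, which sends all field-value pairs in a single HSET call:
import redis

# Minimal sketch using redis-py; assumes a local Redis on the default port 6379.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# One HSET with a mapping writes every field-value pair at once.
r.hset("info", mapping={
    "30.2.25": "REF_IP",
    "30.2.24": "MY_HOST_IP",
    "30.2.32": "IM_USER_MY_HOST",  # 30.2.32 appears twice in the data; a hash keeps only the last value
    "30.2.2": "23992",
})

print(r.hgetall("info"))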

Get Maximum and Minimum value from rrd file generated by Cacti

I have an rrd file in which the traffic_in and traffic_out stats of interfaces are stored.
What I want is the max and min values over a certain time period.
I'm trying this command, but it is giving me the error ERROR: invalid rpn expression in: v,MAX:
rrdtool graph -s 1537466100 -e 1537552237 DEF:v=lhr-spndc-7609_traffic_in_612.rrd:traffic_in:MAX CDEF:vm=v,MAX PRINT:vm:%lf
Can you please help with the correct command for achieving the desired functionality?
You should be using VDEF for the definition of vm, not CDEF.
A CDEF transforms one or more data series created by a DEF or another CDEF into another series, ready for graphing or summarising.
A VDEF transforms a single data series into a single value via a consolidation function, such as to get the maximum value of a series over the entire graph. This is different from the function specified in a DEF, which only specifies how to consolidate a higher-granularity series into a lower-granularity series.
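For example, a corrected (untested) version of the command from the question could look like this. Note that in a VDEF the function is spelled MAXIMUM/MINIMUM, and rrdtool graph still expects an output filename (here /dev/null) even when you only want the PRINT output:
rrdtool graph /dev/null -s 1537466100 -e 1537552237 DEF:v=lhr-spndc-7609_traffic_in_612.rrd:traffic_in:MAX VDEF:vmax=v,MAXIMUM PRINT:vmax:%lf VDEF:vmin=v,MINIMUM PRINT:vmin:%lf
Keep in mind that the DEF above consolidates with MAX, so the MINIMUM here is the minimum of the per-step maxima; if the RRD also stores a MIN RRA, add a second DEF with :MIN and take MINIMUM of that series for the true minimum.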

HTCondor job submission tags

I want to run different batches of jobs on our HTCondor pool. Let's say 10 jobs of Type1, 20 jobs of Type2, and so on. Each of these job types should get new jobs when the current jobs are finished.
With just one type, I use a simple query to check whether all jobs are finished or whether the time limit for the whole job batch has passed. If one of these requirements is fulfilled, the next iteration of x jobs is submitted to the cluster.
This is done by a small function (written in Lua, which is not really important for the question):
function WaitForSims(CheckupDelay)
    while io.popen([[condor_q -format "%d\n" clusterid]]):read('*all'):len() ~= 0 do
        os.execute("echo Checkup timestamp: " .. os.date("%x %X"))
        os.execute(string.format("timeout %d 1>nul", CheckupDelay))
    end
end
Is there a possibility to separate the jobs of Type1, Type2, and Type3 and check them independently? Currently it checks all jobs under my current user.
Adding a tag or something to the jobs would be ideal, as I could simply change the checkup call. In the documentation I couldn't find anything that is easy to add. I could remember the JobIDs, but then I'd have to store those, adding more complexity.
Linked Answer
The solution can be found in another answer; I didn't find where it is described in the documentation, though.
In the job.sub file add:
+YourCustomVarName = 1
+YourCustomStringName = "String"
To check against it, use:
condor_q -constraint 'YourCustomVarName == 1' -f "%s" JobStatus
or
condor_q -constraint "YourCustomStringName == \"String\"" -f "%s" JobStatus
(handling of quotations could vary)
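Tying this back to the checkup function from the question, an untested sketch that takes the constraint as a parameter could look like this; the constraint string is whatever custom attribute you put in the submit file:
function WaitForSims(CheckupDelay, Constraint)
    -- Constraint filters the query to one job type, e.g. "YourCustomVarName == 1"
    local cmd = string.format([[condor_q -constraint "%s" -format "%%d\n" clusterid]], Constraint)
    while io.popen(cmd):read('*all'):len() ~= 0 do
        os.execute("echo Checkup timestamp: " .. os.date("%x %X"))
        os.execute(string.format("timeout %d 1>nul", CheckupDelay))
    end
end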

What is the best way to ensure the correctness of data returned by a SNMP query?

I am working on code which uses the snmp->get_bulk_request() method to make SNMP queries to get interface table details from a network device.
The problem I am facing is that sometimes the data I receive from the query is missing some details. This is a transient issue.
I believe that placing a set number of retries will reduce the probability of error. But as I go through the documentation for snmp->get_bulk_request(), I find a parameter called maxrepetitions. It is not clear to me from the documentation what this parameter does.
I am trying to figure out what effect the maxrepetitions parameter has when used with the get_bulk_request method. I have gone through the documentation in "get_bulk_request() - send a SNMP get-bulk-request to the remote agent" and found this:
$result = $session->get_bulk_request(
    [-callback        => sub {},]      # non-blocking
    [-delay           => $seconds,]    # non-blocking
    [-contextengineid => $engine_id,]  # v3
    [-contextname     => $name,]       # v3
    [-nonrepeaters    => $non_reps,]
    [-maxrepetitions  => $max_reps,]
    -varbindlist      => \@oids,
);
The default value for get-bulk-request -maxrepetitions is 0. The maxrepetitions value specifies the number of successors to be returned for the remaining variables in the variable-bindings list.
Specifically, my questions are:
Is adding maxrepetitions equivalent to adding retries for the query?
Is retrying the right way to ensure the data is most probably correct?
If not, what is the best method to ensure the probability error is low in data returned by SNMP query?
From the man page:
Set the max-repetitions field in the GETBULK PDU. This specifies the maximum number of iterations over the repeating variables.
Example:
snmpbulkget -v2c -Cn1 -Cr5 -Os -c public zeus system ifTable
will retrieve the variable system.sysDescr.0 (which is the lexicographically next object to system) and the first 5 objects in the ifTable:
sysDescr.0 = STRING: "SunOS zeus.net.cmu.edu 4.1.3_U1 1 sun4m"
ifIndex.1 = INTEGER: 1
ifIndex.2 = INTEGER: 2
ifDescr.1 = STRING: "lo0"
et cetera.
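In short, -maxrepetitions is not a retry mechanism: it only controls how many successor values the agent packs into each get-bulk response. Transient failures are better handled by the session-level -retries and -timeout options of Net::SNMP. A minimal untested sketch combining both (hostname and community are placeholders):
use strict;
use warnings;
use Net::SNMP;

# -retries/-timeout deal with lost or slow responses at the transport level,
# while -maxrepetitions only sizes each get-bulk response.
my ($session, $error) = Net::SNMP->session(
    -hostname  => 'netdevice.example.com',   # placeholder
    -community => 'public',                  # placeholder
    -version   => 'snmpv2c',
    -retries   => 3,
    -timeout   => 5,
);
die "Session error: $error\n" unless defined $session;

my $result = $session->get_bulk_request(
    -maxrepetitions => 10,
    -varbindlist    => ['1.3.6.1.2.1.2.2'],  # ifTable
);
die "Request error: " . $session->error() . "\n" unless defined $result;

printf "%s => %s\n", $_, $result->{$_} for sort keys %{$result};
$session->close();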