I have a set of tasks whose command in 'run' is identical except for a single value, and that value comes from a list of potential values. What I would like to do is define the tasks from this list, using each value both in the task's name and in the command passed to 'run'. The point is that it would be great to define the tasks in such a way that I don't have to repeat nearly identical task definitions for each value.
For example: I want a task that will get the status of a single program from a list of programs that I have defined in an array. I would like to define the task to be something like this:
set programs = %w["postfix", "nginx", "pgpool"]

programs.each do |program|
  desc "#{program} status"
  task :#{program} do
    run "/etc/init.d/#{program} status"
  end
end
This obviously doesn't work, but hopefully it shows what I am attempting here.
Thoughts?
Well, I answered my own question... with a little trial and error. I also did the same thing with namespaces, so the control of services is nice and elegant. It works quite nicely!
set :programs, %w[postfix nginx pgpool]
set :init_commands, %w[status start stop]

# init.d service control
init_commands.each do |init_command|
  namespace :"#{init_command}" do
    programs.each do |program|
      desc "#{program} #{init_command}"
      task :"#{program}" do
        run "/etc/init.d/#{program} #{init_command}"
      end
    end
  end
end
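With these definitions in place, you should end up with invocations like cap status:postfix or cap stop:nginx (assuming the usual namespace:task naming for Capistrano tasks).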
So I have created a huge screen that essentially just shows the robot status for every robot in this factory (individually)… At the very end of the project, they decided they want one object on the screen that blinks if any of the 300 robots faults. I am trying to think of a way to make this work. Maybe a global script of some kind? The problem is, I don't do much scripting in Cimplicity, so any help is appreciated.
All the points currently used on this screen to indicate a fault have very similar names: the beginning is the same, and only the end changes a little each time. So I was thinking of a script that could recognize whether a bit is high based on part of its point name, matching the common prefix and ignoring the rest. If the end has to be hard-coded, that's fine.
You can use a Python script in Cimplicity.
I will not go into detail on the use of Python in Cimplicity, which is well described in the documentation indicated above.
Here's an example of what can be done. Note that I don't have a way to test it, and of course it will only work if the names of your robots follow the format Robot_1, Robot_2, Robot_3 ... Robot_300. It also depends on the name and type of the fault variable; as you didn't define it, I imagine it can be an integer, with ZERO indicating no error. If you use something other than that, you can easily change it.
import cimplicity

(...)

OneRobotWithFault = False

# Here you get the values and check for a fault,
# stopping at the first faulted robot
for i in range(1, 301):  # Robot_1 .. Robot_300
    pointName = f'MyFactory.Robot_{i}.FaultCode'
    robotFaultCode = cimplicity.point_get(pointName)
    if robotFaultCode > 0:
        OneRobotWithFault = True
        break

# Set the status to the point "WeHaveRobotWithFault"
cimplicity.point_set("WeHaveRobotWithFault", OneRobotWithFault)
I'm doing some experimentation with Kubeflow Pipelines and I'm interested in retrieving the run id to save along with some metadata about the pipeline execution. Is there any way I can do so from a component like a ContainerOp?
You can use kfp.dsl.EXECUTION_ID_PLACEHOLDER and kfp.dsl.RUN_ID_PLACEHOLDER as arguments for your component. At runtime they will be replaced with the actual values.
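For illustration, here is a minimal sketch of wiring the placeholder into a component (the component and pipeline names are made up for the example):

from kfp import dsl
from kfp.components import func_to_container_op

# Hypothetical component that just echoes the run ID it receives
@func_to_container_op
def echo_run_id(run_id: str):
    print(f'run_id = {run_id}')

@dsl.pipeline(name='run-id-demo')
def run_id_demo():
    # The placeholder is substituted with the real run ID at pipeline runtime
    echo_run_id(dsl.RUN_ID_PLACEHOLDER)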
I tried to do this using the Python DSL, but it seems that isn't possible right now.
The only option I found is the method used in this sample code: you declare a string containing {{workflow.uid}}, and it will be replaced with the actual value during execution.
You can also do this to get the pod name; the placeholder is {{pod.name}}.
Since Kubeflow Pipelines relies on Argo, you can use Argo workflow variables to get what you want.
For example,
from kfp import dsl
from kfp.components import func_to_container_op

@func_to_container_op
def dummy(run_id: str, run_name: str) -> str:
    return f'{run_id} {run_name}'

@dsl.pipeline(
    name='test_pipeline',
)
def test_pipeline():
    dummy('{{workflow.labels.pipeline/runid}}',
          '{{workflow.annotations.pipelines.kubeflow.org/run_name}}')
You will find that the placeholders will be replaced with the correct run_id and run_name.
For more argo variables: https://github.com/argoproj/argo-workflows/blob/master/docs/variables.md
To see what is recorded in the labels and annotations of a Kubeflow Pipelines run, just get the corresponding workflow from k8s:
kubectl get workflow/XXX -oyaml
If you start the run from code, create_run_from_pipeline_func returns a RunPipelineResult, which has a run_id attribute:
client = kfp.Client(host)
result = client.create_run_from_pipeline_func(…)
result.run_id
Your component's container should have an environment variable called HOSTNAME that is set to its unique pod name, from which you derive all necessary metadata.
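If you go that route, here is a minimal sketch of reading the variable inside the component; mapping the pod name back to run metadata is left to you:

import os

# Kubernetes sets a pod's hostname to the pod name by default,
# so HOSTNAME identifies the pod this component is running in.
pod_name = os.environ.get('HOSTNAME')
print(f'Running in pod: {pod_name}')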
I am looking to trigger a series of processes, and I want to tell if each one succeeds or fails before starting the subsequent ones.
I am using tSSH (on Talend 6.4.1) to trigger a process and I only want the job to continue if it is a success. The tSSH "component" doesn't appear to fail if it receives a non-zero return code, so I have tried using an assert. However, even if the assert fails, it doesn't appear to prevent the component and subjob being "OK" which is a bit odd, so I can't use on-(component|subjob)-ok to link to the next job.
I don't seem to be able to find any conditional evaluation components which will allow me to stop the continuation of the job or subjob based on the evaluation result.
The only way I can find is to have
tSSH1 --IF globalMap.get("tSSH_1_EXIT_CODE").equals(0)--> tSSH2...
--IF !globalMap.get("tSSH_1_EXIT_CODE").equals(0)--> (failure logging subjob)
which means coding the test twice with negation.
Am I missing something, or are there no such conditional components?
You can put an If condition on the tSSH component for success/failure using the component's global variables, i.e.
((String)globalMap.get("tSSH_1_STDERR")) and ((String)globalMap.get("tSSH_1_STDOUT")).
The condition you can check is:
if ((String)globalMap.get("tSSH_1_STDERR")) != null, then call the error-logging subjob;
else call tSSH2.
Hope this helps...
I'm writing a Chef recipe as shown below. I want the recipe to stop executing the resources that follow it, but without raising an exception.
Do you have any ideas about this other than exit(0)?
ruby_block "verify #{current_container_name}" do
block do
require "docker"
begin
container = Docker::Container.get(current_container_name)
rescue Docker::Error::NotFoundError => exception
container = nil
end
if container.nil?
exit(0)
end
end
end
You could use ignore_failure true in this ruby block instead of handling the exception. That way it would still output the error messages, but wouldn't treat it as a failure so would continue to execute subsequent resources.
If you want to abort a Chef run under a special circumstance, like the current Docker container not being available, that is not possible. The solution is to rethink your problem: you want some code to run only when a special condition is met.
You do this by either leaving the recipe early (with a return true), encapsulating your configuration steps in a conditional clause (like if my_container.nil? then ... end), or using node attributes to step through conditions.
Let's say your cookbook x relies on three recipes: 1, 2 and 3. If you'd like 2 and 3 to run only if 1 was successful, you can write the state of the 1st recipe into the node attributes (e.g. node.normal['recipe1'] = 'successful').
In the other recipes you'll then define an entry gate like:
return true if node['recipe1'] != 'successful'
But be aware: if you're using node attributes, you'll (mostly) need to use the ruby_block resource at the end of your first recipe, because bare Ruby code is evaluated and run during resource compilation, which takes place before the converge run.
I am using REXX to invoke JOBTRAC programmatically, which works; however, I am unable to pass JOBNAME arguments using this approach. Can this be done using REXX?
The idea is to find the history of the job run using the program JOBTRAC. We use JOBTRAC's schedule to find the history of when job runs happened. We invoke JOBTRAC using 'TSO JOBTRAC' and supply the history command 'H XXXXXX' in the command line (XXXXXX = jobname).
I was thinking of routing the JOBTRAC info to a flat file and parsing it so that I can do some reporting in real time during the job run. The above problem is also linked to the following scenario:
If I give 'DSLIST A.B.C.*' in the ISPF panel, it gives the series of datasets:
A.B.C.A
A.B.C.D
A.B.C.E
When I give
SAVE ORANGE
it stores this list under
MYUSERID.ORANGE.DATASETS
I know this can be automated programmatically and I have seen that done, but I don't have the code base to do it right now. This is much like the JOBTRAC issue I have.
Here is some REXX code to help with understanding. I know this code is wrong… we cannot use OUTTRAP for this, as it is used to get console output.
say 'No. of month end jobs considered for history :' jobnames.0
if jobnames.0 > 0 then do
  do i = 1 to jobnames.0
    say jobnames.i
    jobname = Word(jobnames.i, 1);
    say 'jobname under consideration is ' jobname;
    tsocmd = "JOBTRAC;ADDLOC=000;H " || strip(jobname);
    say 'tso command is ' tsocmd;
    y = outtrap(jobdetails.)
    Address TSO "'tsocmd'"   /* wrong… I believe I have to use ispexec */
    say 'job details are ' jobdetails.6;
  end;
end;