How can I use cpdctl dsjob for DaaS (DataStage as a Service)? - datastage

I would like to execute "cpdctl dsjob" as explained in the documentation at the URL below.
https://dataplatform.cloud.ibm.com/docs/content/dstage/dsnav/topics/cli.html
I downloaded cpdctl from the URL below, but cannot use the dsjob option.
https://github.com/IBM/cpdctl/releases/
Is there an additional procedure for using the dsjob option?
I set up the configuration and can run "cpdctl job run", but the "cpdctl dsjob" option does not appear.
# ./cpdctl dsjob
Error: unknown command "dsjob" for "cpdctl"
Did you mean this?
        job
Run 'cpdctl --help' for usage.
unknown command "dsjob" for "cpdctl"
Did you mean this?
        job

I found that the CPDCTL_ENABLE_DSJOB environment variable is required to use the dsjob option:
export CPDCTL_ENABLE_DSJOB=1
Without the variable set:
# ./cpdctl dsjob
Error: unknown command "dsjob" for "cpdctl"
Did you mean this?
        job
Run 'cpdctl --help' for usage.
unknown command "dsjob" for "cpdctl"
Did you mean this?
        job
With the variable set, the command group appears:
# export CPDCTL_ENABLE_DSJOB=1
# ./cpdctl dsjob
The IBM DataStage service provides APIs to manage jobs, this command provides some of legacy dsjob commandline functionality.

Usage:
  cpdctl dsjob [command]

Available Commands:
  lprojects      List Projects.
  ljobs          List Jobs.
  run            Manage Job Runs.
  logdetail      Print DataStage job run logs.
  logsum         Print Summary of Datastage job run logs.
  lognewest      Returns newest event id from DataStage job run logs.
  jobinfo        List Job Information.
  migrate        Migrate a exported legacy isx file into nextgen Datastage project.
  lflows         List DataStage flows.
  compile        Compile DataStage flows.
  lenvs          List Environments.
  lhws           List Hardware Specifications.
  env-create     Create Environment.
  hwspec-create  Create Hardware Specification.

Flags:
  -h, --help   help for dsjob

Global Flags:
      --context string
      --cpdconfig string
      --output-path string
      --raw-output

Use "cpdctl dsjob [command] --help" for more information about a command.
#
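Putting it together, a minimal session might look like the sketch below; the project name is a placeholder, and I'm assuming ljobs accepts a --project flag, so check cpdctl dsjob ljobs --help for the exact options.

# enable the dsjob command group (required in every new shell; add the
# export to ~/.bashrc or your CI environment to make it stick)
export CPDCTL_ENABLE_DSJOB=1

# list the projects visible to the configured profile
./cpdctl dsjob lprojects

# list the jobs in one project (MyProject is a placeholder)
./cpdctl dsjob ljobs --project=MyProject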

Related

How to use the Ultimate Thread Group plugin in JMeter when it is executed from an Azure DevOps pipeline?

I have an ADO pipeline which starts a JMeter script execution on a VM (load generator) using a Command Line task. The command line statement is:
C:\JMeter\apache-jmeter-5.2.1\bin\jmeter -n -f -t "$(Build.SourcesDirectory)\JMeterTestScripts\$(Module)\$(TestName).jmx" -l "$(Build.ArtifactStagingDirectory)\$(TestName).jtl"
I am using the Ultimate Thread Group for one of my test scripts.
The JMeter folder on the VM has all the necessary plugin-related JARs present in the lib/ext folder. But for some reason, while trying to run this pipeline, it throws the following error:
An error occurred: Error in NonGUIDriver Problem loading XML from: 'C:\agent\_work\1\s\Jmeter_Script_Folder\ScriptName.jmx' Cause:
CannotResolveClassException: kg.apc.jmeter.threads.UltimateThreadGroup
However, when I try to execute the same command from the VM (load generator) manually using the Windows cmd, it successfully triggers the test execution.
Any idea what could possibly be causing this odd behaviour? Any pointers are much appreciated.
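One way to narrow this down is a Command Line task placed before the JMeter step, confirming which jmeter the agent resolves and whether the plugin JAR is visible to the agent account. A minimal sketch, with the install path taken from the question and the JAR name pattern an assumption:

rem show which jmeter executable the agent's PATH resolves to
where jmeter

rem check that the Ultimate Thread Group JAR sits in the lib/ext folder
rem of the JMeter install the pipeline actually invokes
dir "C:\JMeter\apache-jmeter-5.2.1\lib\ext" | findstr /i "plugins"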

run id from az ml cli

How to pass the run id of an experiment as tag information of a model?
I want to run an experiment and register the model with the experiment's run id as tag information, using the az ml CLI in an Azure DevOps build pipeline.
Run the experiment:
az ml run submit-script -e test -d myenv.yml train.py
Register the model:
az ml model register -n mymodel -p sklearn_regression_model.pkl --tag "run id"= ????
I can't figure out how to get the run id from the experiment run via the az ml CLI and pass it to the --tag argument. Any idea?
Thank you all.
My requirements have changed, and I was able to code it in an Azure DevOps pipeline.
With the option -t run.json, the experiment run information is stored in run.json:
az ml run submit-script -e $(experiment) -d myenv.yml -t run.json train-titanic.py
I want to register the model outside the experiment run using run.json:
az ml model register --name mlops-model --experiment-name $(experiment) -f run.json -t ../release-pipeline/model.json --asset-path outputs/decision_tree.pkl
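For the original question, tagging the model with the run id yourself, one approach is to read the id out of run.json. A sketch, assuming the file produced by -t contains a top-level runId field:

# extract the run id from the run.json written by az ml run submit-script -t
RUN_ID=$(python -c "import json; print(json.load(open('run.json'))['runId'])")
# tag the model with it at registration time
az ml model register -n mymodel -p sklearn_regression_model.pkl --tag run_id=$RUN_ID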
The run ID information is passed along automatically if you register a model from a run. You do not need to manually tag it.
az ml run list --experiment-name experiment
This command returns a list of details about the runs for this experiment; the run id should also be included.
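Because the list comes back as JSON, a JMESPath query can pull an id directly; a sketch, assuming the run id key is named runId in the output:

az ml run list --experiment-name myexperiment --query "[0].runId" -o tsv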
To add or update a tag, use the following command:
az ml run update -r runid --add-tag quality='fantastic run'
For details, please refer to the docs.

Run a specific Talend component using the shell executable

My question is similar to this one: Execute only one Talend component. Except instead of using Talend Open Studio, I want to be able to run a specific component from the shell executable I get from building the job.
I have set up my job in such a way that if a component succeeds, the OnComponentOk trigger is used to run the next component. To run the job I run sudo bash NAME_OF_THE_JOB.sh. Is it possible to run only one component, perhaps by passing arguments to the bash shell file?
A Talend job is a single Java program. Components are translated to Java functions. There are no arguments that allow you to execute a single component.
What you could do is write your job in such a way as to execute a component whose name is passed via a context variable, but it's not pretty.
You can test the component name passed via a context variable named componentName using Run If triggers:
                              tRunJob_1
                             /
      if("tRunJob_1".equals(context.componentName))
                           /
                          /
Start ---if("tJava_2".equals(context.componentName))-- tJava_2
                          \
                           \
      if("tRest_1".equals(context.componentName))
                             \
                              tRest_1
As you can see, this can get very cumbersome, and requires you to know the component's name in order to run it.
You can then launch your job by passing the component name as an argument:
sudo bash NAME_OF_THE_JOB.sh --context_param componentName=tJava_2

Informatica Windows job scheduling

Need help: Informatica is installed on Windows, so pmcmd will not work to schedule an Informatica workflow.
Is there any patch or utility to schedule an Informatica workflow (on Windows) through Unix (pmcmd)?
Any other solution to schedule an Informatica workflow (on Windows)?
Please Google the error you have: "no gateway connectivity is provided for domain". To resolve the issue, add the environment variable INFA_DOMAINS_FILE.
More can be found, for example, on this page.
We can run the pmcmd command from the Windows command line. In command-line mode we have to give every bit of information (domain name, integration service name, username, and password) in each command. Below is the syntax:
pmcmd startworkflow -sv Myintservice -d Mydomain -u username -p password -f foldername workflowname
Please look up the help of the pmcmd command just to make sure I have not misstated anything.
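As an illustration, with hypothetical service, folder, and workflow names filled in, the same call could be wired into the Windows Task Scheduler to get native scheduling on Windows; everything below is a sketch, not taken from the question:

rem run one workflow immediately (all names are placeholders)
pmcmd startworkflow -sv IS_PROD -d Mydomain -u admin -p secret -f SalesFolder wf_load_sales

rem schedule the same call nightly at 02:00 with the Windows Task Scheduler
schtasks /create /tn "wf_load_sales_nightly" /sc daily /st 02:00 /tr "pmcmd startworkflow -sv IS_PROD -d Mydomain -u admin -p secret -f SalesFolder wf_load_sales"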

Where should the TeamCity service messages be written?

I'm a beginner with TeamCity, so forgive my dumb question.
For some reason the coverage reporting for my solution is not working. So, to run the tests I run nunit-console in a command line step and then use the XML output file in a build feature of type [XML report processing]. Test results appear in the TeamCity GUI, but no coverage statistics.
It seems that there is a way to configure the coverage reporting manually https://confluence.jetbrains.com/display/TCD8/Manually+Configuring+Reporting+Coverage but I don't know where to put these service messages:
teamcity[dotNetCoverage <key>='<value>' ...]
Just write them to standard output. It is captured by TeamCity, and service messages in it will be processed.
Pay attention, however, to the syntax: a service message should begin with ##.
As Oleg already stated, you can dump them to standard output:
Console.WriteLine(...) from C#
echo from the command prompt or PowerShell,
...
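For instance, from a command-prompt build step it might look like the line below; the importData message type is documented by TeamCity, but the tool and path values here are assumptions, so check the docs for the keys your coverage tool needs:

echo ##teamcity[importData type='dotNetCoverage' tool='ncover' path='coverage.xml']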
Here is an example: http://log.ld.si/2014/10/20/build-log-in-teamcity-using-psake
There is a psake helper module, https://github.com/psake/psake-contrib/wiki/teamcity.psm1, and the source is available at https://github.com/psake/psake-contrib/blob/master/teamcity.psm1 (you can freely use it from PowerShell as well).
It already implements a lot of service messages.