Rundeck REST endpoint for fetching project details like jobs and nodes

I am using Rundeck 1.6 and want to know whether there is a REST endpoint, in 1.6 or any later version, that meets the requirement below: if I pass a project name, it returns all the jobs created under that project, along with the node names on which they are configured to run.
Thanks
-Sam

There is. The following script uses token authentication; you can also use password authentication.
#!/bin/bash
RUNDECK_URL='localhost:4440'                    # <-- change to your Rundeck URL
API_TOKEN='OyFXX1q4UzhTUe7deOUIPJKkrUnEwZlo'    # <-- change to your API token

# Get all project names:
PROJECTS=$(curl -H "Accept: application/json" "http://$RUNDECK_URL/api/1/projects?authtoken=$API_TOKEN" \
  | tr "}" "\n" | tr "," "\n" | grep name | cut -d":" -f2 | tr -d "\"")

for proj in $PROJECTS; do
  # Get all jobs in the project:
  echo "Project: $proj"
  PROJECT_OUTPUT=$(curl -sS "http://$RUNDECK_URL/api/1/jobs?authtoken=$API_TOKEN&project=${proj}")
  # Get each job definition and parse it:
  JOB_IDS=$(echo "$PROJECT_OUTPUT" | grep -oP "(?<=<job id=')[^']+")
  for id in $JOB_IDS; do
    echo "$id"                                          # job id
    JOB_OUTPUT=$(curl -sS "http://$RUNDECK_URL/api/1/job/$id?authtoken=$API_TOKEN")
    echo "$JOB_OUTPUT" | grep -oP "(?<=<name>)[^<]+"    # job name
    echo "$JOB_OUTPUT" | grep -oP "(?<=<filter>)[^<]+"  # job node filter
  done
done
Output:
$ sh rundeck_test.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 98 0 98 0 0 6292 0 --:--:-- --:--:-- --:--:-- 6533
Project: TestProject
02a41aaa-eb50-4831-8762-80b798468cbe <-------- job id
TestJob <-------- job name; this job has no node filter, so it runs on the Rundeck server
9b2ac9e9-0350-4494-a463-b43ba1e458ab
TestJob2
node1.exmple.com <-------- node filter value
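As a side note, the tr/grep pipeline above is fragile against any change in JSON formatting. A sketch of the same project-name extraction done with jq instead (assuming jq is installed; the heredoc is an illustrative stand-in for the /api/1/projects response, whose exact shape may differ by API version):

```shell
#!/bin/bash
# Stand-in for:
#   curl -H "Accept: application/json" "http://$RUNDECK_URL/api/1/projects?authtoken=$API_TOKEN"
# The JSON shape here is an assumption for illustration, not a captured response.
sample_projects_json() {
  cat <<'EOF'
{"projects":[{"name":"TestProject"},{"name":"OtherProject"}]}
EOF
}

# jq walks the JSON structure instead of relying on tr/grep text munging:
PROJECTS=$(sample_projects_json | jq -r '.projects[].name')
echo "$PROJECTS"
```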

Related

initContainer issue on deployment

I am facing an issue in an initContainer where I am unable to pull the value from curl and store it in a variable.
I tried multiple methods, such as passing only the value,
and passing the curl command directly.
NOTE: the condition is that when the cluster size is 3 the script should exit the loop, and when the cluster size is not equal to 3 it should enter the loop and print "waiting for pod".
Below is the script:
echo "running below scripts"
curl -g -s -H "Content-Type: application/json" "http://hazelcast-service.bjy:8711/hazelcast/health" | jq -r '.clusterSize' > i;
cat i;
n=`cat i`;
while [ $n -ne 3 ];
do
  echo "waiting for pod";
  sleep 5;
done
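For reference, one likely culprit in the script above is that the cluster size is read only once, before the loop, so the loop condition never changes and the loop can never exit. A sketch (the polling interval and function names are illustrative; the URL and jq filter are taken from the question) that re-queries the endpoint on every iteration:

```shell
#!/bin/bash
# Query the current cluster size; this wraps the curl call from the question.
cluster_size() {
  curl -g -s -H "Content-Type: application/json" \
    "http://hazelcast-service.bjy:8711/hazelcast/health" | jq -r '.clusterSize'
}

wait_for_cluster() {
  # $1: expected cluster size, $2: seconds between polls (default 5)
  local expected="$1" interval="${2:-5}" n
  n=$(cluster_size)
  # Re-query on every iteration so the loop can exit once the cluster is up:
  while [ "$n" != "$expected" ]; do
    echo "waiting for pod (cluster size: ${n:-unknown})"
    sleep "$interval"
    n=$(cluster_size)
  done
  echo "cluster reached size $expected"
}

# In the initContainer this would be invoked as:
# wait_for_cluster 3 5
```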

How to get the current Nodes where the Job is running

I'm developing a job where the user can choose which nodes it runs on, so the node filter is left open to the user's convenience.
When the job starts I need to do a calculation based on the number of nodes chosen by the user. Is there a way to get this number?
Regards,
Alejandro L
By design, that information is only available after the execution. So, a good approach is to call the job via the API (in a script step); with the execution ID number (available in the API call output) you can list and count the nodes, e.g.:
#!/bin/bash
# List the nodes the execution ran on, then count them:
nodes=$(curl -s -X GET "http://localhost:4440/api/41/execution/16" \
  --header "Accept: application/json" \
  --header "X-Rundeck-Auth-Token: your_user_token" \
  | jq -r '.successfulNodes[]')
number_of_nodes=$(echo "$nodes" | wc -w)
echo "Number of nodes: $number_of_nodes"
This example needs jq to extract the nodes from the API response.
Anyway, your request sounds good for an enhancement request, please suggest that here.
A workaround would be to use the job.filter variable.
If you use @job.filter@,
it returns a string with the list of nodes, like us-east-1-0,us-east-1-1,us-east-1-2.
If you save it as a string and then split the string on ',', you get an array of nodes:
IFS=',' read -r -a array <<< "$string"
and then you can get the number of nodes with:
echo ${#array[@]}
Note
as @MegaDrive68k mentioned, this won't work if you select all nodes with the .* filter.
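Putting those two lines together, a minimal sketch (the node list is a hard-coded stand-in for the value @job.filter@ would expand to inside the job):

```shell
#!/bin/bash
# Stand-in for the expanded @job.filter@ value:
string="us-east-1-0,us-east-1-1,us-east-1-2"

# Split the comma-separated list into a bash array:
IFS=',' read -r -a array <<< "$string"

# The array length is the node count:
echo "Number of nodes: ${#array[@]}"   # -> Number of nodes: 3
```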

Extract details for past jobs in SLURM

In PBS, one can query a specific job with qstat -f and obtain (all?) info and details to reproduce the job:
# qstat -f 1234
Job Id: 1234.login
Job_Name = job_name_here
Job_Owner = user@pbsmaster
...
Resource_List.select = 1:ncpus=24:mpiprocs=24
Resource_List.walltime = 23:59:59
...
Variable_List = PBS_O_HOME=/home/user,PBS_O_LANG=en_US.UTF-8,
PBS_O_LOGNAME=user,...
etime = Mon Apr 20 16:38:27 2020
Submit_arguments = run_script_here --with-these flags
How may I extract the same information from SLURM?
scontrol show job %j only works for currently running jobs or those terminated up to 5 minutes ago.
Edit: I'm currently using the following to obtain some information, but it's not as complete as a qstat -f:
sacct -u $USER \
-S 2020-05-13 \
-E 2020-05-15 \
--format "Account,JobID%15,JobName%20,State,ExitCode,Submit,CPUTime,MaxRSS,ReqMem,MaxVMSize,AllocCPUs,ReqTres%25"
.. usually piped into |(head -n 2; grep -v COMPLETED) |sort -k12 to inspect only failed runs.
You can get a list of all jobs that started after a certain date like so:
sacct --starttime 2020-01-01
Then pick the job you are interested in (e.g. job 1234) and print its details with sacct:
sacct -j 1234 --format=User,JobID,Jobname,partition,state,time,start,end,elapsed,MaxRss,MaxVMSize,nnodes,ncpus,nodelist
See here under --helpformat for a complete list of available fields.
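The `grep -v COMPLETED` filtering above can also be done field-wise. A sketch using sacct's pipe-delimited --parsable2 output (the heredoc is an illustrative stand-in for `sacct -u "$USER" --parsable2 --format=JobID,JobName,State,ExitCode`):

```shell
#!/bin/bash
# Stand-in sample for sacct --parsable2 output (pipe-delimited, no padding):
sample_sacct() {
  cat <<'EOF'
JobID|JobName|State|ExitCode
1234|run_a|COMPLETED|0:0
1235|run_b|FAILED|1:0
1236|run_c|TIMEOUT|0:0
EOF
}

# Keep the header plus any row whose State field is not COMPLETED:
failed=$(sample_sacct | awk -F'|' 'NR == 1 || $3 != "COMPLETED"')
echo "$failed"
```

Matching on the State field avoids false positives when a job name happens to contain the word COMPLETED.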

Initialise and pull terraform public modules using GitHub SSH private key

Context:
I have GitLab runners executing the terraform init command, which pulls all the necessary Terraform modules. Recently I started hitting GitHub throttling limits (60 calls to the GitHub API per hour), so I am trying to reconfigure my pipeline to use a GitHub user's private key.
Currently I have the following in my pipeline, but it still doesn't seem to work and the private key isn't used to pull the Terraform modules.
- GITHUB_SECRET=$(aws --region ${REGION} ssm get-parameters-by-path --path /github/umotifdev --with-decryption --query 'Parameters[*].{Name:Name,Value:Value}' --output json);
- PRIVATE_KEY=$(echo "${GITHUB_SECRET}" | jq -r '.[] | select(.Name == "/github/umotifdev/private_key").Value' | base64 -d);
- PUBLIC_KEY=$(echo "${GITHUB_SECRET}" | jq -r '.[] | select(.Name == "/github/umotifdev/public_key").Value' | base64 -d);
- mkdir -p ~/.ssh;
- echo "${PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/id_rsa;
- chmod 700 ~/.ssh/id_rsa;
- eval $(ssh-agent -s);
- ssh-add ~/.ssh/id_rsa;
- ssh-keyscan -H 'github.com' >> ~/.ssh/known_hosts;
- ssh-keyscan github.com | sort -u - ~/.ssh/known_hosts -o ~/.ssh/known_host;
- echo -e "Host github.com\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config;
- echo ${PUBLIC_KEY} >> ~/.ssh/authorized_keys
The error I am seeing in my pipeline is something like this (which is basically throttling from GitHub):
Error: Failed to download module
Could not download module "vpc" (vpc.tf:17) source code from
"https://api.github.com/repos/terraform-aws-modules/terraform-aws-vpc/tarball/v2.21.0//*?archive=tar.gz":
bad response code: 403.
Can anyone advise how to resolve the issue where the private key isn't used to pull the Terraform modules?
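Not from the original thread, but a commonly used workaround when modules are referenced with git-over-HTTPS sources is to tell git to rewrite HTTPS GitHub URLs to SSH, so terraform init clones with the SSH key instead of making unauthenticated API calls. Note this only applies to git-sourced modules; registry-style sources that download tarballs via api.github.com (as in the error above) are not affected by git configuration:

```shell
# Rewrite HTTPS GitHub URLs to SSH for every git operation in this job.
# Assumes the SSH key has already been loaded into the agent, as in the
# pipeline above.
git config --global url."ssh://git@github.com/".insteadOf "https://github.com/"
```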

Run PDI Jobs using Web Services

I have a job created using Spoon and imported into the DI repository.
How can I run a PDI job on a Data Integration Server using REST web services, without scheduling it with the PDI job scheduler, so that I can call it whenever I want?
Before beginning these steps, please make sure that your Carte server (or Carte server embedded in the DI server) is configured to connect to the repository for REST calls. The process and description can be found on the wiki page. Note that the repositories.xml needs to be defined and in the appropriate location for the DI Server as well.
Method 1 : (Run Job and continue, no status checks):
Start a PDI Job (/home/admin/Job 1):
curl -L "http://admin:password@localhost:9080/pentaho-di/kettle/runJob?job=/home/admin/Job%201" 2> /dev/null | xmllint --format -
Method 2 : (Run Job and poll job status regularly):
Generate a login cookie:
curl -d "j_username=admin&j_password=password&locale=en_US" -c cookies.txt http://localhost:9080/pentaho-di/j_spring_security_check
Check DI Server status:
curl -L -b cookies.txt http://localhost:9080/pentaho-di/kettle/status?xml=Y | xmllint --format -
Result:
<?xml version="1.0" encoding="UTF-8"?>
<serverstatus>
  <statusdesc>Online</statusdesc>
  <memory_free>850268568</memory_free>
  <memory_total>1310720000</memory_total>
  <cpu_cores>4</cpu_cores>
  <cpu_process_time>22822946300</cpu_process_time>
  <uptime>100204</uptime>
  <thread_count>59</thread_count>
  <load_avg>-1.0</load_avg>
  <os_name>Windows 7</os_name>
  <os_version>6.1</os_version>
  <os_arch>amd64</os_arch>
  <transstatuslist>
    <transstatus>
      <transname>Row generator test</transname>
      <id>de44a94e-3bf7-4369-9db1-1630640e97e2</id>
      <status_desc>Waiting</status_desc>
      <error_desc/>
      <paused>N</paused>
      <stepstatuslist>
      </stepstatuslist>
      <first_log_line_nr>0</first_log_line_nr>
      <last_log_line_nr>0</last_log_line_nr>
      <logging_string><![CDATA[]]></logging_string>
    </transstatus>
  </transstatuslist>
  <jobstatuslist>
  </jobstatuslist>
</serverstatus>
Start a PDI Job (/home/admin/Job 1):
curl -L -b cookies.txt "http://localhost:9080/pentaho-di/kettle/runJob?job=/home/admin/Job%201" | xmllint --format -
Result:
<webresult>
  <result>OK</result>
  <message>Job started</message>
  <id>dd419628-3547-423f-9468-2cb5ffd826b2</id>
</webresult>
Check the job's status:
curl -L -b cookies.txt "http://localhost:9080/pentaho-di/kettle/jobStatus?name=/home/admin/Job%201&id=dd419628-3547-423f-9468-2cb5ffd826b2&xml=Y" | xmllint --format -
Result:
<?xml version="1.0" encoding="UTF-8"?>
<jobstatus>
  <jobname>Job 1</jobname>
  <id>dd419628-3547-423f-9468-2cb5ffd826b2</id>
  <status_desc>Finished</status_desc>
  <error_desc/>
  <logging_string><![CDATA[H4sIAAAAAAAAADMyMDTRNzDUNzJSMDSxMjawMrZQ0FXwyk9SMATSwSWJRSUK+WkKWUCB1IrU5NKSzPw8LiPCmjLz0hVS80qKKhWiXUJ9fSNjSdQUXJqcnFpcTEibW2ZeZnFGagrEgahaFTSKUotLc0pso0uKSlNjNckwCuJ0Eg3yQg4rhTSosVwABykpF2oBAAA=]]></logging_string>
  <first_log_line_nr>0</first_log_line_nr>
  <last_log_line_nr>13</last_log_line_nr>
  <result>
    <lines_input>0</lines_input>
    <lines_output>0</lines_output>
    <lines_read>0</lines_read>
    <lines_written>0</lines_written>
    <lines_updated>0</lines_updated>
    <lines_rejected>0</lines_rejected>
    <lines_deleted>0</lines_deleted>
    <nr_errors>0</nr_errors>
    <nr_files_retrieved>0</nr_files_retrieved>
    <entry_nr>0</entry_nr>
    <result>Y</result>
    <exit_status>0</exit_status>
    <is_stopped>N</is_stopped>
    <log_channel_id/>
    <log_text>null</log_text>
    <result-file/>
    <result-rows/>
  </result>
</jobstatus>
Get the status description from the jobStatus API:
curl -L -b cookies.txt "http://localhost:9080/pentaho-di/kettle/jobStatus?name=/home/admin/Job%201&id=dd419628-3547-423f-9468-2cb5ffd826b2&xml=Y" 2> /dev/null | xmllint --xpath "string(/jobstatus/status_desc)" -
Result:
Finished
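The status check above can be wrapped in a simple poll loop. A sketch (the curl call is stubbed with a trimmed sample response so the parsing is self-contained; in practice it would be the jobStatus curl shown above):

```shell
#!/bin/bash
# Stand-in for:
#   curl -s -L -b cookies.txt "http://localhost:9080/pentaho-di/kettle/jobStatus?name=...&id=...&xml=Y"
fetch_status_xml() {
  cat <<'EOF'
<jobstatus><jobname>Job 1</jobname><status_desc>Finished</status_desc></jobstatus>
EOF
}

# Pull the text of <status_desc> out of the response:
get_status_desc() {
  fetch_status_xml | sed -n 's:.*<status_desc>\([^<]*\)</status_desc>.*:\1:p'
}

status=$(get_status_desc)
# Poll until the job leaves its transient states:
while [ "$status" = "Running" ] || [ "$status" = "Waiting" ]; do
  sleep 5
  status=$(get_status_desc)
done
echo "Job ended with status: $status"
```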
PS: curl and libxml2-utils were installed via apt-get. The libxml2-utils package is optional and is used solely for formatting the XML output from the DI Server. This shows how to start a PDI job using a Bash shell.
Supported in version 5.3 and later.