Getting most recent execution by job via RunDeck API - rundeck

Is there a more efficient way to get the most recent execution of every job in a RunDeck project than (1) querying for the job list and then (2) querying for the `max: 1` execution list of each job in serial?

The most efficient way to get each job's last execution is to fetch the job IDs, collect them in a list, and then call the execution endpoint for each one, based on this answer.
Here is a working bash example using jq, take a look:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="localhost"
rdeck_port="4440"
rdeck_api="40"
rdeck_token="RQvvsGODsP8YhGUw6JARriXOAn6s6OQR"
# project name
project="ProjectEXAMPLE"
# first get all jobs (of ProjectEXAMPLE project)
jobs=$(curl -s --location "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/project/$project/jobs" --header "Accept: application/json" --header "X-Rundeck-Auth-Token: $rdeck_token" --header "Content-Type: application/json" | jq -r .[].id)
# just for debug, print all job IDs
echo "$jobs"
# then iterate over the job IDs, extracting the last succeeded execution of each
for z in ${jobs}; do
curl -s --location "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/job/$z/executions?status=succeeded&max=1" --header "Accept: application/json" --header "X-Rundeck-Auth-Token: $rdeck_token" --header "Content-Type: application/json" | jq
done
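If the full execution JSON per job is too verbose, a jq filter can reduce each response to just the fields of interest. A minimal sketch, run here against a hard-coded sample payload shaped like the executions response (the job name and dates are made-up placeholders):

```shell
#!/bin/sh
# sample payload shaped like the /job/{id}/executions response (trimmed for illustration)
response='{"executions":[{"id":42,"job":{"name":"nightly-backup"},"status":"succeeded","date-ended":{"date":"2023-01-01T00:10:00Z"}}]}'
# keep only the job name, status, and end date of the most recent execution
echo "$response" | jq -r '.executions[0] | "\(.job.name) \(.status) \(."date-ended".date)"'
```

Piping each curl response in the loop through this filter instead of plain `jq` yields one summary line per job.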
Sadly, there's no direct way to do that in a single call.


ADO Linking requirement & test case work items with Rest API issue

I'm trying to link an ADO Requirement work item to an ADO Test Case work item. I'm making this call:
curl -u :********** -X PATCH -H "Content-Type: application/json-patch+json" -H "Accept: application/json-patch+json" -d "[{{\"op\": \"test\", \"path\": \"/rev\",\"value\": 3 },{\"op\": \"add\", \"path\": \"/relations/-\", \"value\":\"{\"rel\": \"System.LinkTypes.Dependency-forward\",\"url\": \"https://***.***.com/{Organisation}/_apis/wit/workItems/{ID}\",\"attributes\": {\"comment\": \"Making a new link for the dependency\"}}}}]" https://***.***.com/{Organisation}/{Project}/_apis/wit/workItems/{ID}?api-version=6.0
as per: https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work-items/update?view=azure-devops-rest-7.1#add-a-link
But I'm having this error:
{"$id":"1","innerException":null,"message":"You must pass a valid patch document in the body of the request.","typeName":"Microsoft.VisualStudio.Services.Common.VssPropertyValidationException, Microsoft.VisualStudio.Services.Common","typeKey":"VssPropertyValidationException","errorCode":0,"eventId":3000}
I found my answer: the JSON was badly formed. I used an online JSON linter to fix it: https://jsonlint.com/
curl -u :********** -X PATCH -H "Content-Type: application/json-patch+json" -H "Accept: application/json-patch+json" -d "[{\"op\": \"add\", \"path\": \"/relations/-\", \"value\":{\"rel\": \"Microsoft.VSTS.Common.TestedBy-Forward\",\"url\": \"https://***.***.com/{Organisation}/_apis/wit/workItems/{ID}\",\"attributes\": {\"comment\": \"Making a new link for the dependency\"}}}]" https://***.***.com/{Organisation}/{Project}/_apis/wit/workItems/{ID}?api-version=6.0
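Quoting a JSON Patch document inline with `-d "..."` is what made the escaping go wrong in the first place; writing the document to a file and passing it with `-d @file` sidesteps shell escaping entirely. A minimal sketch (the URLs and work item IDs are placeholders, and the curl call is left commented out since it needs a real ADO instance):

```shell
#!/bin/sh
# write the JSON Patch document with a quoted heredoc: no shell escaping needed
cat > patch.json <<'EOF'
[
  {
    "op": "add",
    "path": "/relations/-",
    "value": {
      "rel": "Microsoft.VSTS.Common.TestedBy-Forward",
      "url": "https://dev.example.com/org/_apis/wit/workItems/123",
      "attributes": { "comment": "Making a new link for the dependency" }
    }
  }
]
EOF
# validate locally before sending; json.tool exits non-zero on malformed JSON
python3 -m json.tool patch.json > /dev/null && echo "patch.json is valid JSON"
# then pass the file with @ so curl sends it verbatim as the request body:
# curl -u :$PAT -X PATCH -H "Content-Type: application/json-patch+json" \
#      -d @patch.json "https://dev.example.com/org/proj/_apis/wit/workItems/123?api-version=6.0"
```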

Rundeck Job List View plugin installation issue

I'm trying out and exploring the plugins available in Rundeck. I installed the Job List View plugin because I want to see the statistics of my jobs, but after installing I still can't see the job list view. Whenever I restart the Rundeck service and go to Plugin Repositories, the plugin needs to be installed again, even though I clearly installed it before. I can't see any errors in service.log.
How can I fix this issue? Thanks!
My Rundeck version is 3.3.5.
That's a bug reported here (by the question author). In the meantime, you can get the job info via the API; here are some examples using jq to prettify the output:
To get all jobs from a project:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="your_rundeck_node"
rdeck_port="4440"
rdeck_api="36"
rdeck_token="cqgfZlrSF84oUoC2ZzRwiltiyefjZx9R"
# specific api call info
rdeck_project="YourProject"
# get the job list from a project
curl -s --location --request GET "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/project/$rdeck_project/jobs" \
--header "Accept: application/json" \
--header "X-Rundeck-Auth-Token: $rdeck_token" | jq
Get all job metadata:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="your_rundeck_node"
rdeck_port="4440"
rdeck_api="36"
rdeck_token="cqgfZlrSF84oUoC2ZzRwiltiyefjZx9R"
# specific api call info
rdeck_job="5dc08e08-0e28-4a74-9ef0-4ec0c8e3f55e"
# get the job metadata
curl -s --location --request GET "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/job/$rdeck_job/info" \
--header "Accept: application/json" \
--header "X-Rundeck-Auth-Token: $rdeck_token" | jq
Get job forecast information:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="your_rundeck_node"
rdeck_port="4440"
rdeck_api="36"
rdeck_token="cqgfZlrSF84oUoC2ZzRwiltiyefjZx9R"
# specific api call info
rdeck_job="5dc08e08-0e28-4a74-9ef0-4ec0c8e3f55e"
# get the job forecast
curl -s --location --request GET "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/job/$rdeck_job/forecast" \
--header "Accept: application/json" \
--header "X-Rundeck-Auth-Token: $rdeck_token" | jq
More info about the Rundeck API is available here, along with a lot of useful examples here.

Trying to create a usergroup in Jira Cloud with REST API

I have an upcoming tool migration where I can import assignees but not inactive ones - and there is no user group by default with only active users.
So I've exported all Jira users and filtered them based on active status, so I have a nice list of all their usernames/emails. Now I want to use the REST API to create a user group from the list and add each user.
From the API documentation, it's pretty straightforward:
curl --request POST \
--url '/rest/api/3/group/user' \
--header 'Authorization: Bearer <access_token>' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{
"accountId": "384093:32b4d9w0-f6a5-3535-11a3-9c8c88d10192"
}'
However, I'm not about to type in the accountIds one by one. How can I feed it a list (e.g., exported from Excel), or how else can I achieve this?
Easier than I thought: I just made a bash script that cycles through the accountIds in a variable.
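A minimal sketch of such a loop, reading accountIds from a file (one per line, as exported from the spreadsheet) and printing each curl command as a dry run instead of executing it; the group name, IDs, and filename are placeholders:

```shell
#!/bin/sh
# accountIds exported from the user list, one per line (placeholder values)
cat > account_ids.txt <<'EOF'
384093:32b4d9w0-f6a5-3535-11a3-9c8c88d10192
5b10ac8d82e05b22cc7d4ef5
EOF
# cycle through the IDs; echo each request instead of running it (dry run)
while IFS= read -r id; do
  echo "curl --request POST --url '/rest/api/3/group/user?groupname=active-users'" \
       "--header 'Content-Type: application/json' --data '{\"accountId\": \"$id\"}'"
done < account_ids.txt
```

Dropping the `echo` (and adding the auth header) turns the dry run into the real batch.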

View individual deployment status in Wildfly with curl/API

I'm very new to Wildfly but I need to set up automated monitoring of individual deployment status via the API.
In the same way that I can view the server state with curl, eg:
curl --insecure --digest 'https://admin:password@localhost:9993/management' --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Will return:
{
"outcome" => "success",
"result" => "running"
}
Likewise, from the jboss-cli, I issue:
:read-attribute(name=server-state)
And get the same result.
So, from the CLI, if I issue the following command to get the status of a specific deployment:
/deployment=bob :read-attribute(name=status)
I get the following result:
{
"outcome" => "success",
"result" => "OK"
}
But I can't work out what curl command will give me that result. I've read through a tonne of documentation and either it doesn't exist or I'm looking in the wrong spot. I've tried:
curl --insecure --digest 'https://password@localhost:9993/management' --header "Content-Type: application/json" -d '{"deployment":"bob","operation":"read-attribute","name":"status","json.pretty":1}'
but that didn't work. Any ideas?
Thanks,
Mark J.
You need to add an array for the address attribute and move the "deployment":"bob" pair into that array.
curl --insecure --digest 'https://password@localhost:9993/management' --header "Content-Type: application/json" -d '{"operation":"read-attribute", "address":[{"deployment":"bob"}],"name":"status","json.pretty":1}'
The address is a list of name/value pair objects giving the path to the attribute you want to read. For example, if you wanted to see all the handlers associated with the root logger, you could execute the following.
curl --insecure --digest 'https://password@localhost:9993/management' --header "Content-Type: application/json" -d '{"operation":"read-attribute","address":[{"subsystem":"logging"},{"root-logger":"ROOT"}],"name":"handlers","json.pretty":1}'
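The pattern generalizes: any CLI path `/x=a/y=b :read-attribute(name=attr)` maps to an address array `[{"x":"a"},{"y":"b"}]`. A small sketch of a helper that builds the JSON body from those pieces (it only prints the payload, so no server is needed to try it; the function name is my own):

```shell
#!/bin/sh
# build a read-attribute payload from an attribute name and address pairs
# usage: payload ATTR key=value [key=value ...]
payload() {
  attr="$1"; shift
  addr=""
  for pair in "$@"; do
    key=${pair%%=*}; val=${pair#*=}
    addr="$addr{\"$key\":\"$val\"},"
  done
  addr=${addr%,}   # drop the trailing comma
  echo "{\"operation\":\"read-attribute\",\"address\":[$addr],\"name\":\"$attr\",\"json.pretty\":1}"
}
# /deployment=bob :read-attribute(name=status)
payload status deployment=bob
# /subsystem=logging/root-logger=ROOT :read-attribute(name=handlers)
payload handlers subsystem=logging root-logger=ROOT
```

Each printed payload can be passed straight to curl with `-d`.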

Slow importing into Google Cloud SQL

Google Cloud SQL is my first real evaluation at MySQL as a service. I created a D32 instance, set replication to async, and disabled binary logging. Importing 5.5 GB from dump files from a GCE n1-standard-1 instance in the same zone took 97 minutes.
Following the documentation, the connection was done using the public IP address, but is in the same region and zone. I'm fully open to the fact that I did something incorrectly. Is there anything immediately obvious that I should be doing differently?
We have been importing ~30 GB via Cloud Storage from zip files containing SQL statements, and this was taking over 24 hours.
A big factor is the number of indexes that you have on the given table.
To keep it manageable, we split the file into chunks of 200K SQL statements each, which are inserted in one transaction. This enables us to retry individual chunks in case of errors.
We also tried to do it via compute engine (mysql command line) and in our experience this was even slower.
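The chunking step above can be done with the standard `split` tool, assuming the dump has one statement per line (e.g. `mysqldump --extended-insert=FALSE` output). A toy-sized sketch, using 2 statements per chunk instead of the 200K we used for the real dump:

```shell
#!/bin/sh
# demo with a tiny generated dump: 5 statements split into chunks of 2
printf 'INSERT INTO t VALUES (%d);\n' 1 2 3 4 5 > dump.sql
# for the real 30 GB dump we used 200000 statements per chunk:
#   split -l 200000 dump.sql chunk_
split -l 2 dump.sql chunk_
ls chunk_*   # three chunk files: chunk_aa, chunk_ab, chunk_ac
```

Each resulting `chunk_*` file can then be uploaded to Cloud Storage and imported (and retried) independently with the script below.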
Here is how to import one chunk and wait for it to complete. You cannot do this in parallel, as Cloud SQL only allows one import operation at a time.
#!/bin/bash
function refreshAccessToken() {
echo "getting access token..."
ACCESSTOKEN=`curl -s "http://metadata/computeMetadata/v1/instance/service-accounts/default/token" -H "X-Google-Metadata-Request: True" | jq ".access_token" | sed 's/"//g'`
echo "retrieved access token $ACCESSTOKEN"
}
START=`date +%s%N`
DB_INSTANCE=$1
GCS_FILE=$2
SLEEP_SECONDS=$3
refreshAccessToken
CURL_URL="https://www.googleapis.com/sql/v1beta1/projects/myproject/instances/$DB_INSTANCE/import"
CURL_OPTIONS="-s --header 'Content-Type: application/json' --header 'Authorization: OAuth $ACCESSTOKEN' --header 'x-goog-project-id:myprojectId' --header 'x-goog-api-version:1'"
CURL_PAYLOAD="--data '{ \"importContext\": { \"database\": \"mydbname\", \"kind\": \"sql#importContext\", \"uri\": [ \"$GCS_FILE\" ]}}'"
CURL_COMMAND="curl --request POST $CURL_URL $CURL_OPTIONS $CURL_PAYLOAD"
echo "executing $CURL_COMMAND"
CURL_RESPONSE=`eval $CURL_COMMAND`
echo "$CURL_RESPONSE"
OPERATION=`echo $CURL_RESPONSE | jq ".operation" | sed 's/"//g'`
echo "Import operation $OPERATION started..."
CURL_URL="https://www.googleapis.com/sql/v1beta1/projects/myproject/instances/$DB_INSTANCE/operations/$OPERATION"
STATE="RUNNING"
while [[ $STATE == "RUNNING" ]]
do
echo "waiting for $SLEEP_SECONDS seconds for the import to finish..."
sleep $SLEEP_SECONDS
refreshAccessToken
CURL_OPTIONS="-s --header 'Content-Type: application/json' --header 'Authorization: OAuth $ACCESSTOKEN' --header 'x-goog-project-id:myprojectId' --header 'x-goog-api-version:1'"
CURL_COMMAND="curl --request GET $CURL_URL $CURL_OPTIONS"
CURL_RESPONSE=`eval $CURL_COMMAND`
STATE=`echo $CURL_RESPONSE | jq ".state" | sed 's/"//g'`
END=`date +%s%N`
ELAPSED=`echo "scale=8; ($END - $START) / 1000000000" | bc`
echo "Import process $OPERATION for $GCS_FILE : $STATE, elapsed time $ELAPSED"
done