Getting the latest execution for a job via the Rundeck API

I'm using the latest version of Rundeck (3.3.10) and I'm having trouble getting the latest execution for a job via the REST API.
If I call api/38/job/[job-id]/executions?max=1 it doesn't seem to bring back the latest execution if it is still running. Ideally, I'd also like to get the latest execution's start time, end time, user, and result for each job in a single API call, but I've resigned myself to calling the API once per job. There doesn't seem to be any way to sort the executions you get back from the API: they appear to be sorted by status first, so the running executions show up at the end of the list.
Does anyone know a way around this? Thanks.

You can get that information using the executions?status=running&max=1 call.
Script example:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="localhost"
rdeck_port="4440"
rdeck_api="38"
rdeck_token="YRVaZikt64Am85RyLo1nyq8U1Oe4Q8J7"
# specific api call info
rdeck_job="03f28add-84f2-4013-b8f5-e48feaf5977c"
# api call
curl --location --request GET "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/job/$rdeck_job/executions?status=running&max=1" \
--header "Accept: application/json" \
--header "X-Rundeck-Auth-Token: $rdeck_token" \
--header "Content-Type: application/json"
Output:
{
  "paging": {
    "count": 1,
    "total": 1,
    "offset": 0,
    "max": 1
  },
  "executions": [
    {
      "id": 7,
      "href": "http://localhost:4440/api/38/execution/7",
      "permalink": "http://localhost:4440/project/ProjectEXAMPLE/execution/show/7",
      "status": "running",
      "project": "ProjectEXAMPLE",
      "executionType": "user",
      "user": "admin",
      "date-started": {
        "unixtime": 1617304896289,
        "date": "2021-04-01T19:21:36Z"
      },
      "job": {
        "id": "03f28add-84f2-4013-b8f5-e48feaf5977c",
        "averageDuration": 13796,
        "name": "HelloWorld",
        "group": "",
        "project": "ProjectEXAMPLE",
        "description": "",
        "href": "http://localhost:4440/api/38/job/03f28add-84f2-4013-b8f5-e48feaf5977c",
        "permalink": "http://localhost:4440/project/ProjectEXAMPLE/job/show/03f28add-84f2-4013-b8f5-e48feaf5977c"
      },
      "description": "sleep 20; echo \"hi\"",
      "argstring": null,
      "serverUUID": "630be43c-e71f-4102-be96-d017dd22233e"
    }
  ]
}
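If jq is available, you can go one step further and pull out the fields the question asks for (start time, user, status), falling back to the plain executions list when nothing is running. This is only a sketch, reusing the variables from the script above; note that date-ended is only present once the execution has finished:
#!/bin/sh
# Sketch: prefer a running execution, otherwise fall back to the most
# recent entry in the default executions list. Assumes jq and the
# variables ($protocol, $rdeck_host, $rdeck_port, $rdeck_api,
# $rdeck_token, $rdeck_job) from the script above.
base="$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/job/$rdeck_job/executions"
auth="X-Rundeck-Auth-Token: $rdeck_token"
latest=$(curl -s -H "Accept: application/json" -H "$auth" "$base?status=running&max=1")
if [ "$(echo "$latest" | jq -r '.paging.count')" = "0" ]; then
  latest=$(curl -s -H "Accept: application/json" -H "$auth" "$base?max=1")
fi
# Extract the fields the question asks for ("date-ended" is null while running).
echo "$latest" | jq '.executions[0] | {user, status, started: .["date-started"].date, ended: .["date-ended"].date}'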

Related

Can I change a Cloud SQL instance's number of CPUs and memory via API or programmatically?

I would like to edit a PostgreSQL instance's number of CPUs and memory to custom values, such as 2 vCPUs and 5 GB of memory, via the API, but I haven't found a way to do so.
The Instance settings page shows Cores and Memory as options, but when I try setting a simple JSON body with the curl example given here,
{
  "settings": {
    "cores": 2,
    "memory": 5
  }
}
nothing happens.
I found a way to get the existing settings, via:
curl -X GET -H "Authorization: Bearer "$(gcloud auth print-access-token) -H "Content-Type: application/json; charset=utf-8" "https://sqladmin.googleapis.com/v1/projects/MYPROJECT/instances/MYINSTANCE"
The returned JSON has dataDiskSizeGb, but nothing related to CPUs or memory that is obvious to me.
{
  "kind": "sql#instance",
  "state": "RUNNABLE",
  "databaseVersion": "POSTGRES_12",
  "settings": {
    "authorizedGaeApplications": [],
    "tier": "db-custom-1-3840",
    "kind": "sql#settings",
    "availabilityType": "ZONAL",
    "pricingPlan": "PER_USE",
    "replicationType": "SYNCHRONOUS",
    "activationPolicy": "ALWAYS",
    "ipConfiguration": {
      "privateNetwork": "projects/MYPROJECT/global/networks/default",
      "authorizedNetworks": [],
      "ipv4Enabled": true
    },
    "locationPreference": {
      "zone": "southamerica-east1-c",
      "kind": "sql#locationPreference"
    },
    "dataDiskType": "PD_SSD",
    "maintenanceWindow": {
      "kind": "sql#maintenanceWindow",
      "hour": 0,
      "day": 0
    },
    "backupConfiguration": {
      "startTime": "08:00",
      "kind": "sql#backupConfiguration",
      "location": "us",
      "backupRetentionSettings": {
        "retentionUnit": "COUNT",
        "retainedBackups": 7
      },
      "enabled": true,
      "replicationLogArchivingEnabled": false,
      "pointInTimeRecoveryEnabled": false,
      "transactionLogRetentionDays": 7
    },
    "settingsVersion": "4",
    "storageAutoResizeLimit": "0",
    "storageAutoResize": false,
    "dataDiskSizeGb": "10"
  },
  "etag": "079...039",
  "ipAddresses": [
    {
      "type": "PRIMARY",
      "ipAddress": "xx.xxx.x.xxx"
    },
    {
      "type": "OUTGOING",
      "ipAddress": "xx.xx.xxx.xx"
    },
    {
      "type": "PRIVATE",
      "ipAddress": "xx.xx.xxx.xx"
    }
  ],
  "serverCaCert": {
    "kind": "sql#sslCert",
    "certSerialNumber": "0",
    "cert": "-----BEGIN CERTIFICATE-----\nMII......c=\n-----END CERTIFICATE-----",
    "commonName": "C=US,O=Google\\, Inc,CN=Google Cloud SQL Server CA,dnQualifier=9f7...e0c",
    "sha1Fingerprint": "fff...8fb",
    "instance": "MYINSTANCE",
    "createTime": "2021-10-05T17:59:18.971Z",
    "expirationTime": "2031-10-03T18:00:18.971Z"
  },
  "instanceType": "CLOUD_SQL_INSTANCE",
  "project": "MYPROJECT",
  "serviceAccountEmailAddress": "abc...@gcp-sa-cloud-sql.iam.gserviceaccount.com",
  "backendType": "SECOND_GEN",
  "selfLink": "https://sqladmin.googleapis.com/v1/projects/MYPROJECT/instances/MYINSTANCE",
  "connectionName": "MYPROJECT:southamerica-east1:MYINSTANCE",
  "name": "MYINSTANCE",
  "region": "southamerica-east1",
  "gceZone": "southamerica-east1-c",
  "createTime": "2021-10-05T17:57:47.539Z"
}
To update the number of CPUs and the memory, look at the "settings" REST reference: there is a field "tier" that represents the CPU and memory of the instance. In your example it is "db-custom-1-3840"; the value follows the pattern db-custom-[CPU]-[MEMORY], which means the instance has 1 vCPU and 3840 MB of memory. To change the machine to 2 vCPUs and 5 GB of memory, "tier" should have the value "db-custom-2-5120".
For testing purposes I initially created an instance with 4 vCPUs and 26 GB of memory.
To change the CPU and memory, follow the steps below:
request.json:
{
  "settings": {
    "tier": "db-custom-2-5120"
  }
}
NOTE: The memory value must be a multiple of 256 MB, hence the value 5120 (5 GB = 5 × 1024 MB).
Curl command:
curl -X PATCH \
-H "Authorization: Bearer "$(gcloud auth print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/your-project-name/instances/your-instance-name"
This will return a long-running operation; the PATCH response includes the operation's name.
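If you want to follow that operation to completion, you can poll the Cloud SQL Admin operations endpoint. A sketch, where OPERATION_ID is a placeholder for the "name" field returned by the PATCH call:
# Poll the long-running operation until its "status" field becomes "DONE".
OPERATION_ID="name-field-from-the-patch-response"
curl -X GET \
-H "Authorization: Bearer "$(gcloud auth print-access-token) \
"https://sqladmin.googleapis.com/v1/projects/your-project-name/operations/$OPERATION_ID"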
When I run GET curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) https://sqladmin.googleapis.com/v1/projects/your-project-name/instances/your-instance-name, the change is reflected: the response snippet now shows "tier": "db-custom-2-5120". The new instance configuration is also visible at Cloud Console > SQL > Edit.
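As a side note, the same change can be made without hand-crafting the JSON body: the gcloud CLI accepts the tier string directly (an equivalent alternative, not part of the original steps):
# --tier takes the same db-custom-[CPU]-[MEMORY] string as settings.tier.
# Note that the instance restarts to apply the new machine type.
gcloud sql instances patch your-instance-name --tier=db-custom-2-5120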

Using REST API to create alerting rule in Kibana fails on 400 "Invalid action groups: default"

I have ELK cloud v7.13.2 and I'm trying to create an alert rule with a Slack action via the REST API. This is my curl invocation:
curl -u ****** -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' https://***********.westeurope.azure.elastic-cloud.com:9243/api/alerting/rule -X POST -d @src/rules/cpu_utilization.json
I am expecting a new rule to be created, but unfortunately I am getting the following error:
{
  "statusCode": 400,
  "error": "Bad Request",
  "message": "Invalid action groups: default"
}
The contents of src/rules/cpu_utilization.json are:
{
  "params": {
    "nodeType": "host",
    "criteria": [
      {
        "comparator": ">",
        "timeSize": 1,
        "metric": "cpu",
        "threshold": [
          80
        ],
        "timeUnit": "m"
      }
    ],
    "sourceId": "default"
  },
  "consumer": "alerts",
  "schedule": {
    "interval": "1m"
  },
  "tags": [],
  "name": "CPU2",
  "throttle": "1000d",
  "enabled": true,
  "rule_type_id": "metrics.alert.inventory.threshold",
  "notify_when": "onThrottleInterval",
  "actions": [
    {
      "group": "default",
      "id": "fce4c27f-d22a-4209-858c-253a06511c1b",
      "params": {
        "message": "{{alertName}} - {{context.group}} is in a state of {{context.alertState}}\n\nReason:\n{{context.reason}}"
      }
    }
  ]
}
The documentation says clearly:
Properties of the action objects:
group
(Required, string) Grouping actions is recommended for escalations for different types of alerts. If you don’t need this, set this value to default.
Is this a bug in ELK, or am I doing something wrong? I am able to use the API for other purposes, like listing and deleting rules. I am also able to create a rule without an action, but that doesn't seem too useful...
OK, I got an answer from ELK support. Apparently, you can use another endpoint to list all rule types: GET /api/alerting/rule_types. Then you need to find your rule type and look up its default_action_group_id property; it holds the correct value. E.g., in the above example it was:
"default_action_group_id": "metrics.inventory_threshold.fired"

How can I use the BigQuery REST API from the command line?

Attempting to make a plain GET request to one of the BigQuery REST APIs gives an error that looks like this:
curl https://www.googleapis.com/bigquery/v2/projects/$PROJECT_ID/jobs/$JOBID
Output:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "required",
        "message": "Login Required",
        "locationType": "header",
        "location": "Authorization",
        ...
What is the correct way to invoke one of the REST APIs from the command-line, such as the query or insert APIs? The API reference has a "Try this API", but the examples don't translate directly to something you can run from the command-line.
As a disclaimer: when working from the command line, the bq tool is usually sufficient, and for more complex use cases the BigQuery client libraries enable programming with BigQuery from multiple languages. Still, it can sometimes be useful to make plain requests to the REST APIs to see how certain APIs work at a low level.
First, make sure that you have installed the Google Cloud SDK. This should include the gcloud and bq command-line tools. If you haven't already, authorize your account by running this command from your terminal:
gcloud auth login
This should prompt you to log in and then give you an access code that you can paste into your terminal. (The exact process may change over time).
Now let's try a query using the BigQuery REST API, calling the jobs.query method. Modify this script with your own project name, which you can find from the Google Cloud Console, then paste the script into your terminal:
PROJECT="YOUR_PROJECT_NAME"
QUERY="\"SELECT 1 AS x, 'foo' AS y;\""
REQUEST="{\"kind\":\"bigquery#queryRequest\",\"useLegacySql\":false,\"query\":$QUERY}"
echo $REQUEST | \
curl -X POST -d @- -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/queries
If it worked, you should see output that looks like this:
{
  "kind": "bigquery#queryResponse",
  "schema": {
    "fields": [
      {
        "name": "x",
        "type": "INTEGER",
        "mode": "NULLABLE"
      },
      {
        "name": "y",
        "type": "STRING",
        "mode": "NULLABLE"
      }
    ]
  },
  "jobReference": {
    "projectId": "<your project ID>",
    "jobId": "<your job ID>"
  },
  "totalRows": "1",
  "rows": [
    {
      "f": [
        {
          "v": "1"
        },
        {
          "v": "foo"
        }
      ]
    }
  ],
  "totalBytesProcessed": "0",
  "jobComplete": true,
  "cacheHit": false
}
If you haven't set up the bq command-line tool, you can use bq init from your terminal to do so. Once you have, you can try running the same query using it:
bq query --use_legacy_sql=False "SELECT 1 AS x, 'foo' AS y;"
You can also see the REST API requests that the bq tool makes by passing the --apilog= option:
bq --apilog= query --use_legacy_sql=False "SELECT [1, 2, 3] AS x;"
Now let's try an example using the jobs.insert method instead of the query API. Run this script, replacing YOUR_PROJECT_NAME with your project name:
PROJECT="YOUR_PROJECT_NAME"
QUERY="\"SELECT 1 AS x, 'foo' AS y;\""
REQUEST="{\"configuration\":{\"query\":{\"useLegacySql\":false,\"query\":${QUERY}}}}"
echo $REQUEST | \
curl -X POST -d @- -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs
Unlike the query API, which waited for the query to finish and returned results immediately, jobs.insert returns while the job may still be running; you will see a result that looks similar to this:
{
  "kind": "bigquery#job",
  "etag": "\"<etag string>\"",
  "id": "<project name>:<job ID>",
  "selfLink": "https://www.googleapis.com/bigquery/v2/projects/<project name>/jobs/<job ID>",
  "jobReference": {
    "projectId": "<project name>",
    "jobId": "<job ID>"
  },
  "configuration": {
    "query": {
      "query": "SELECT 1 AS x, 'foo' AS y;",
      "destinationTable": {
        "projectId": "<project name>",
        "datasetId": "<anonymous dataset>",
        "tableId": "<anonymous table>"
      },
      "createDisposition": "CREATE_IF_NEEDED",
      "writeDisposition": "WRITE_TRUNCATE",
      "useLegacySql": false
    }
  },
  "status": {
    "state": "RUNNING"
  },
  "statistics": {
    "creationTime": "<timestamp millis>",
    "startTime": "<timestamp millis>"
  },
  "user_email": "<your email address>"
}
Notice the status:
"status": {
  "state": "RUNNING"
},
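If jq is available, you can also capture the job ID from the insert response directly instead of copying it by hand; a small sketch (not part of the original flow, and note that it re-runs the insert, creating a new job):
# Pipe the jobs.insert response through jq to keep just the job ID.
JOB_ID=$(echo $REQUEST | \
  curl -s -X POST -d @- -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs \
  | jq -r '.jobReference.jobId')
echo $JOB_ID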
If you want to check on the job now, you can use the jobs.get method. Similar to before, run this from your terminal, using the job ID from the output in the previous step:
PROJECT="YOUR_PROJECT_NAME"
JOB_ID="YOUR_JOB_ID"
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs/$JOB_ID
If the query is done, you'll get a response that indicates as much:
...
"status": {
  "state": "DONE"
},
...
Finally, we can make a request to fetch the query results, also using the REST API.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/queries/$JOB_ID
The output will look similar to when we used the jobs.query method above:
{
  "kind": "bigquery#getQueryResultsResponse",
  "etag": "\"<etag string>\"",
  "schema": {
    "fields": [
      {
        "name": "x",
        "type": "INTEGER",
        "mode": "NULLABLE"
      },
      {
        "name": "y",
        "type": "STRING",
        "mode": "NULLABLE"
      }
    ]
  },
  "jobReference": {
    "projectId": "<project ID>",
    "jobId": "<job ID>"
  },
  "totalRows": "1",
  "rows": [
    {
      "f": [
        {
          "v": "1"
        },
        {
          "v": "foo"
        }
      ]
    }
  ],
  "totalBytesProcessed": "0",
  "jobComplete": true,
  "cacheHit": true
}

Jira Rest API: Requesting issue(s) of a specific user in one or more projects (Beginner)

For testing and practice purposes I want to create a specific request in Jira using its REST API:
I want to list all issues from a specific user in one or more specific projects.
I tried it with SoapUI, but I was not able to create or get my results with simple GET HTTP requests (I don't know how to combine several values and parameters). The other way would be to use a scripting language, but I don't know which one to use.
The documentation is somewhat confusing for a beginner like me, and I would like to know how to combine different values and parameters and how to get started in an easy way.
Try using the Advanced REST Client for the Chrome browser to make your REST requests.
The examples below (from the official documentation) use curl, but it is simple to port them to Advanced REST Client. Don't forget the authentication.
Link to Advanced REST Client
Example of create issue:
Request
curl -D- -u fred:fred -X POST --data {see below} -H "Content-Type: application/json" http://localhost:8090/rest/api/2/issue/
Data
{
  "fields": {
    "project": {
      "key": "TEST"
    },
    "summary": "REST ye merry gentlemen.",
    "description": "Creating of an issue using project keys and issue type names using the REST API",
    "issuetype": {
      "name": "Bug"
    }
  }
}
Response
{
  "id": "39000",
  "key": "TEST-101",
  "self": "http://localhost:8090/rest/api/2/issue/39000"
}
Example of querying for issues:
Request:
curl -D- -u fred:fred -X GET -H "Content-Type: application/json" http://kelpie9:8081/rest/api/2/search?jql=assignee=fred
Response:
{
  "expand": "schema,names",
  "startAt": 0,
  "maxResults": 50,
  "total": 6,
  "issues": [
    {
      "expand": "html",
      "id": "10230",
      "self": "http://kelpie9:8081/rest/api/2/issue/BULK-62",
      "key": "BULK-62",
      "fields": {
        "summary": "testing",
        "timetracking": null,
        "issuetype": {
          "self": "http://kelpie9:8081/rest/api/2/issuetype/5",
          "id": "5",
          "description": "The sub-task of the issue",
          "iconUrl": "http://kelpie9:8081/images/icons/issue_subtask.gif",
          "name": "Sub-task",
          "subtask": true
        },
        "customfield_10071": null
      },
      "transitions": "http://kelpie9:8081/rest/api/2/issue/BULK-62/transitions"
    },
    {
      "expand": "html",
      "id": "10004",
      "self": "http://kelpie9:8081/rest/api/2/issue/BULK-47",
      "key": "BULK-47",
      "fields": {
        "summary": "Cheese v1 2.0 issue",
        "timetracking": null,
        "issuetype": {
          "self": "http://kelpie9:8081/rest/api/2/issuetype/3",
          "id": "3",
          "description": "A task that needs to be done.",
          "iconUrl": "http://kelpie9:8081/images/icons/task.gif",
          "name": "Task",
          "subtask": false
        }
      },
      "transitions": "http://kelpie9:8081/rest/api/2/issue/BULK-47/transitions"
    }
  ]
}
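To get at the original question (all issues from a specific user in one or more specific projects), the same search endpoint works; you just combine the terms in JQL. A sketch against the example host above, using the project keys from the earlier examples; note that the JQL must be URL-encoded when passed in the query string (%20 for spaces, %3D for '='):
# JQL: project in (TEST, BULK) AND assignee = fred
curl -D- -u fred:fred -X GET -H "Content-Type: application/json" \
  "http://kelpie9:8081/rest/api/2/search?jql=project%20in%20(TEST,BULK)%20AND%20assignee%3Dfred"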

Orion Context Broker - Subscriptions only notify the first 20 entities

Using this script:
#!/bin/bash
(curl http://orionip:1026/v1/subscribeContext -s -S --header 'Content-Type: application/json' \
--header 'Accept: application/json' --header 'fiware-service: service' --header 'fiware-servicepath: /servicepath' \
-d @- | python -mjson.tool) <<EOF
{
  "entities": [
    {
      "type": "Sensor",
      "isPattern": "true",
      "id": "Parquimetro:.*"
    }
  ],
  "attributes": [
    "recaudacion"
  ],
  "reference": "http://cometip:80/notify",
  "duration": "P4Y",
  "notifyConditions": [
    {
      "type": "ONCHANGE",
      "condValues": [
        "recaudacion", "numeroTiques"
      ]
    }
  ],
  "throttling": "PT24H"
}
EOF
This makes a subscription for 170 entities (Parquimetro:1, Parquimetro:2, Parquimetro:3, ..., Parquimetro:170) to notify Comet for storing historical data, but only the first 20 entities get notified. I need it to notify all the entities (currently 170, not 20).
Using /v1/subscribeContext?limit=200 doesn't help either.
Any ideas?
There has been an open issue about this in the Orion GitHub repository for some time.
Currently Orion behaves that way, but there is a workaround in place: do a (paginated) query to get all the entities just before creating the subscription. A race condition could occur if an update arrives between the query and the subscription but, depending on the use case, this may suffice.
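A rough sketch of that workaround, assuming the same host, service headers, and entity pattern as the script above, and that Orion's NGSIv1 pagination via the limit/offset URI parameters is available:
#!/bin/bash
# Page through all entities matching the pattern right before subscribing,
# so every entity's current value is fetched even though the subscription
# itself only notifies the first batch. Host, headers, and the entity
# pattern are taken from the subscription script above.
limit=100
offset=0
while : ; do
  page=$(curl -s "http://orionip:1026/v1/queryContext?limit=$limit&offset=$offset" \
    --header 'Content-Type: application/json' --header 'Accept: application/json' \
    --header 'fiware-service: service' --header 'fiware-servicepath: /servicepath' \
    -d '{"entities": [{"type": "Sensor", "isPattern": "true", "id": "Parquimetro:.*"}]}')
  # Stop when a page comes back without any entity (simple textual check).
  echo "$page" | grep -q '"id"' || break
  echo "$page" | python -mjson.tool   # process/forward each page as needed
  offset=$((offset + limit))
done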