Datadog keeps creating monitors when dynamic source id changes - kubernetes

I have a Datadog monitor generated by terraform.
The main query is as follows:
sum(last_1m):avg:app.application.health{application.health:healthy,cluster_name:${local.eks_cluster_name},!source:api-service-full} by {source}.as_count() < 60
The issue is that after a system restart the {source} container changes its name.
For example from app-tier-1-1abc-agent
to app-tier-1-def2-agent
Instead of updating or removing the old monitors, Datadog just creates new ones and leaves the old monitors in Alert and No Data.
Is there any way to improve this? All ideas appreciated, thanks!

Solved this by sending API calls to edit each monitor's query on shutdown and startup.
I made a very clunky bash script, because I couldn't find a way to store the curl --data payload in a variable in bash; in other scripting languages this could be done in much less code. The example is for 2 monitors.
monitor_id=$(curl -L -X GET "https://api.datadoghq.com/api/v1/monitor" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: ${DATADOG_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DATADOG_APP_KEY}" \
--data-raw '{"tags":["application_id:'$APP_ID_LOWERCASE'"]}'| jq -r ' .[] | select((.name |endswith("XXXXXXX Heartbeats") or endswith("XXXXX HeartBeat monitoring")) and (.tags[]=="application_id:'$APP_ID_LOWERCASE'")) | .id')
# curl gets cluster name used in queries
CLUSTER_NAME=$(curl -L -X GET "https://api.datadoghq.com/api/v1/monitor" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: ${DATADOG_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DATADOG_APP_KEY}" \
--data-raw '{"tags":["application_id:'$APP_ID_LOWERCASE'"]}'| jq -r ' .[] | select((.name |endswith("XXXX HeartBeat monitoring")) and (.tags[]=="application_id:'$APP_ID_LOWERCASE'")) | .query' | awk -F',cluster_name:|,' '{print $2}')
# For each monitor id edit monitor query
while IFS= read -r monitors
do
# curl gets monitor name
monitor_name=$(curl -L -X GET "https://api.datadoghq.com/api/v1/monitor/${monitors}" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "DD-API-KEY: ${DATADOG_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DATADOG_APP_KEY}" | jq -r .name)
# Check which monitor query to send based on the name; the queries are hardcoded because I couldn't find a way to set the query as a variable in bash
if [[ $monitor_name == *"XXXXXX HeartBeat monitoring"* ]]; then
shutdown_query=$(curl -L -X PUT "https://api.datadoghq.com/api/v1/monitor/${monitors}" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: ${DATADOG_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DATADOG_APP_KEY}" \
--data-raw '{"query":"sum(last_1m):avg:health{application.health:healthy,cluster_name:'${CLUSTER_NAME}',!source:XXXXXX-full-1,source:DUMMY-VALUE-TO-RESEt-QUERY} by {source}.as_count() < 60"}')
elif [[ $monitor_name == *"XXXXXX Instance Heartbeats"* ]]; then
shutdown_query=$(curl -L -X PUT "https://api.datadoghq.com/api/v1/monitor/${monitors}" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: ${DATADOG_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DATADOG_APP_KEY}" \
--data-raw '{"query":"sum(last_1m):avg:heartbeat{cluster_name:'${CLUSTER_NAME}',source:DUMMY-VALUE-TO-RESEt-QUERY} by {source}.as_count() < 3"}')
fi
done <<< "$monitor_id"
Then just remove the dummy value from the query on startup, and the monitor will pick up your new sources while forgetting the ones that no longer exist.
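That said, the --data payload can be kept in a shell variable after all, e.g. via a heredoc (the same trick the Atlas restore answer further down this page uses). A minimal sketch with placeholder values, not the exact monitors above:
# Sketch only: build the JSON payload in a variable with a heredoc, then
# hand it to curl. CLUSTER_NAME and monitor_id are placeholders here.
payload=$(cat <<EOF
{"query":"sum(last_1m):avg:heartbeat{cluster_name:${CLUSTER_NAME},source:DUMMY-VALUE-TO-RESET-QUERY} by {source}.as_count() < 3"}
EOF
)
curl -L -X PUT "https://api.datadoghq.com/api/v1/monitor/${monitor_id}" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: ${DATADOG_API_KEY}" \
-H "DD-APPLICATION-KEY: ${DATADOG_APP_KEY}" \
--data-raw "$payload"
This would collapse the two hardcoded if/elif branches into a single template.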

Related

How to upload local file to GitHub using cURL?

I am trying to use cURL to upload a local file (e.g. .docx, .jpeg) to a GitHub repository. How can I specify the local file location and upload it to GitHub?
curl -X "PUT" \
-H "Accept: application/vnd.github+json" \
-H "Authorization: token <token>" \
https://api.github.com/repos/BT23/demo-repo/contents/hello.txt \
-d '{
"message":"Upload this file to Git",
"committer":{"name":"Bon", "email":"bon#bon.com"},
"content":{"$(openssl base64 -A in $/temp/hello.txt)"}
}'
Thanks
As an alternative, you can separate the content encoding from the curl call.
content=$(cat /temp/hello.txt | base64)
curl -X "PUT" \
-H "Accept: application/vnd.github+json" \
-H "Authorization: token <token>" \
https://api.github.com/repos/BT23/demo-repo/contents/hello.txt \
-d '{
"message":"Upload this file to Git",
"committer":{"name":"Bon", "email":"bon@bon.com"},
"content":"'"${content}"'"
}'
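One caveat worth adding: GNU base64 wraps its output at 76 columns, and the resulting newlines would break the JSON string. Stripping them keeps the payload on one line (on BSD/macOS base64, which doesn't wrap by default, the tr is a harmless no-op):
# Strip line wraps so the base64 string stays valid inside the JSON body
content=$(cat /temp/hello.txt | base64 | tr -d '\n')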

How to restore the last on-demand snapshot taken via API?

I took a backup of the database using the command below:
CODE=`curl --user "${{ secrets.PUBLIC_KEY }}:${{ secrets.PRIVATE_KEY }}" \
--digest --include \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--request POST "https://cloud.mongodb.com/api/atlas/v1.0/groups/${{ secrets.PROJECT_ID }}/clusters/cluster-1/backup/snapshots?pretty=true" \
--data '{ "description" : "On Demand Snapshot", "retentionInDays" : 3 }'`
Now I am looking for a way to restore the database from the above snapshot using curl.
First you need to get the snapshot ID of the last snapshot like this:
echo "Get the snapshots list from ${ATLAS_SOURCE_CLUSTER}"
curl --user "${ATLAS_USER}:${ATLAS_KEY}" \
--output atlas-list-snaps.json \
--digest \
--silent \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--request GET "https://cloud.mongodb.com/api/atlas/v1.0/groups/${ATLAS_SOURCE_GROUPID}/clusters/${ATLAS_SOURCE_CLUSTER}/backup/snapshots/shardedClusters?pretty=true"
#cat atlas-list-snaps.json
echo "Get the last snapshot from the list"
LAST_SNAPSHOT_ID=$(cat atlas-list-snaps.json | jq -r '.results[0].id')
echo "> ${LAST_SNAPSHOT_ID}"
Then, create a json payload describing your import job and then trigger the restore job:
echo "Restore job payload generation"
RESTORE_JOB_PAYLOAD=$(cat <<RESTORE_JOB_JSON
{
"deliveryType": "automated",
"snapshotId": "${LAST_SNAPSHOT_ID}",
"targetClusterName": "${ATLAS_TARGET_CLUSTER}",
"targetGroupId": "${ATLAS_TARGET_GROUPID}"
}
RESTORE_JOB_JSON
)
echo "${RESTORE_JOB_PAYLOAD}"
echo "Submitting restore job request to cluster ${TARGET_ATLAS_CLUSTER}"
curl --user "${ATLAS_USER}:${ATLAS_KEY}" \
--digest \
--header "Accept: application/json" \
--silent \
--header "Content-Type: application/json" \
--output atlas-restore_job.json \
--request POST "https://cloud.mongodb.com/api/atlas/v1.0/groups/${ATLAS_SOURCE_GROUPID}/clusters/${ATLAS_SOURCE_CLUSTER}/backup/restoreJobs?pretty=true" \
--data "${RESTORE_JOB_PAYLOAD}"
echo "Submitted restore job"

Cloudflare DDNS repeating sh script using API v4, with multiple A records in a single sh script, but it fails

I am trying to use the Cloudflare API v4 to set up DDNS on my server, but I am new to writing .sh scripts. I hope to update multiple DNS records in a single .sh file.
I got a script from the internet (script1.sh):
NEW_IP=`curl -s http://ipv4.icanhazip.com`
CURRENT_IP=`cat /Users/foo/Desktop/cloudflare/current_ip.txt`
if [ "$NEW_IP" = "$CURRENT_IP" ]
then
echo "No Change in IP Adddress"
else
curl -X PUT "https://api.cloudflare.com/client/v4/zones/{zone_id}/dns_records/{dns_record_id}" \
-H "X-Auth-Email: {my_email}" \
-H "X-Auth-Key: {global_api_key}" \
-H "Content-Type: application/json" \
--data '{"type":"A","name":"{domain_name}","content":"'$NEW_IP'","ttl":1,"proxied":true}'
echo $NEW_IP > /Users/foo/Desktop/cloudflare/current_ip.txt
fi
The above script works fine for a single DNS record update, but not for a multiple record update like below (script2.sh):
NEW_IP=`curl -s http://ipv4.icanhazip.com`
CURRENT_IP=`cat /Users/foo/Desktop/cloudflare/current_ip.txt`
if [ "$NEW_IP" = "$CURRENT_IP" ]
then
echo "No Change in IP Adddress"
else
#domain-one
curl -X PUT "https://api.cloudflare.com/client/v4/zones/{zone_id_for_domain_one}/dns_records/{dns_record_id_for_domain_one_record_one}" \
-H "X-Auth-Email: {my_email}" \
-H "X-Auth-Key: {global_api_key}" \
-H "Content-Type: application/json" \
--data '{"type":"A","name":"domain-one.com","content":"'$NEW_IP'","ttl":1,"proxied":true}'
curl -X PUT "https://api.cloudflare.com/client/v4/zones/{zone_id_for_domain_one}/dns_records/{dns_record_id_for_domain_one_record_two}" \
-H "X-Auth-Email: {my_email}" \
-H "X-Auth-Key: {global_api_key}" \
-H "Content-Type: application/json" \
--data '{"type":"A","name":"subdomain.domain-one.com","content":"'$NEW_IP'","ttl":1,"proxied":true}'
#domain-two
curl -X PUT "https://api.cloudflare.com/client/v4/zones/{zone_id_for_domain_two}/dns_records/{dns_record_id_for_domain_two_record}" \
-H "X-Auth-Email: {my_email}" \
-H "X-Auth-Key: {global_api_key}" \
-H "Content-Type: application/json" \
--data '{"type":"A","name":"domain-two.com","content":"'$NEW_IP'","ttl":1,"proxied":true}'
echo $NEW_IP > /Users/foo/Desktop/cloudflare/current_ip.txt
fi
Can you please help explain and solve the problem? Please tell me what's wrong in the script. Thanks!
[edit] I run it once with sh /Users/foo/Desktop/script-name.sh. The first example (script1.sh) is OK; the second example (script2.sh) returns -bash: fork: Resource temporarily unavailable. When I run the script automatically, e.g. via cron, the result is the same.
After each curl request you need to add & (run it in the background) or && (run the next command only if the previous one succeeded) so the script continues with the following requests.
NEW_IP=`curl -s http://ipv4.icanhazip.com`
CURRENT_IP=`cat /Users/foo/Desktop/cloudflare/current_ip.txt`
if [ "$NEW_IP" = "$CURRENT_IP" ]
then
echo "No Change in IP Adddress"
else
#domain-one
curl -X PUT "https://api.cloudflare.com/client/v4/zones/{zone_id_for_domain_one}/dns_records/{dns_record_id_for_domain_one_record_one}" \
-H "X-Auth-Email: {my_email}" \
-H "X-Auth-Key: {global_api_key}" \
-H "Content-Type: application/json" \
--data '{"type":"A","name":"domain-one.com","content":"'$NEW_IP'","ttl":1,"proxied":true}' &
curl -X PUT "https://api.cloudflare.com/client/v4/zones/{zone_id_for_domain_one}/dns_records/{dns_record_id_for_domain_one_record_two}" \
-H "X-Auth-Email: {my_email}" \
-H "X-Auth-Key: {global_api_key}" \
-H "Content-Type: application/json" \
--data '{"type":"A","name":"subdomain.domain-one.com","content":"'$NEW_IP'","ttl":1,"proxied":true}' &
#domain-two
curl -X PUT "https://api.cloudflare.com/client/v4/zones/{zone_id_for_domain_two}/dns_records/{dns_record_id_for_domain_two_record}" \
-H "X-Auth-Email: {my_email}" \
-H "X-Auth-Key: {global_api_key}" \
-H "Content-Type: application/json" \
--data '{"type":"A","name":"domain-two.com","content":"'$NEW_IP'","ttl":1,"proxied":true}'
echo $NEW_IP > /Users/foo/Desktop/cloudflare/current_ip.txt
fi
After going through many posts on this topic I've ended up taking parts from a number of posts and settled on the below.
The script below loops through the entries in the DNS record arrays and updates each with the machine's current external address.
Having had a little more time to look into this, I wanted to run it on my NGINX server as a cron job and update multiple records, some proxied and others not.
I still want to turn some of the below into functions, but for now it works as I need it to.
It also logs to a log.log file on every run.
#!/usr/bin/bash
## Cloudflare authentication details
## Keep these private
cloudflare_auth_email=Your_Email
cloudflare_auth_key="Your_API_Key"
zoneid="Your_Zone_ID"
## Cloudflare Proxied DNS Records as Array
dnsrecords_proxied=(
"domain.com"
"www.domain.com"
"sub1.domain.com"
"sub2.domain.com"
"sub3.domain.com"
"sub3.domain.com"
)
## Cloudflare Non-Proxied DNS Records as Array
dnsrecords_not_proxied=(
"vpn01.domain.com"
"vpn02.domain.com"
)
## Files to log to (replace "path/to/" with script path)
log=path/to/log.log
log_ip=path/to/previous_ip
## Getting Date/Time
dt=$(date '+%d/%m/%Y %H:%M:%S')
## Get old IP from file
old_ip=$(cat $log_ip)
## Get the current external IP address
ip=$(curl -s -X GET https://api.ipify.org)
#echo "Current IP is $ip"
## Checking if IP changed since last update
if [ "$ip" = "$old_ip" ]; then
echo -en "$dt - Previous IP:$old_ip\n$dt - Current IP:$ip\n$dt - No Changes Required....\n" >> $log
echo "$(tail -n 1000 $log)" > $log
exit
## Exit if IP has not changed
else
## If the IP changed, not match the one on file "previous_ip"
echo -en "$dt - Previous IP:$old_ip\n$dt - Current IP:$ip\n$dt - Starting Updates....\n" >> $log
## Processing Proxied DNS Records
for dnsrecord in "${dnsrecords_proxied[@]}"
do
## For each DNS Record in Array "dnsrecords"
# Getting the DNS Record ID
dnsrecordid=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/$zoneid/dns_records?type=A&name=$dnsrecord" \
-H "X-Auth-Email: $cloudflare_auth_email" \
-H "Authorization: Bearer $cloudflare_auth_key" \
-H "Content-Type: application/json" | jq -r '{"result"}[] | .[0] | .id') &&
# Updating the DNS Record
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$zoneid/dns_records/$dnsrecordid" \
-H "X-Auth-Email: $cloudflare_auth_email" \
-H "Authorization: Bearer $cloudflare_auth_key" \
-H "Content-Type: application/json" \
--data "{\"type\":\"A\",\"name\":\"$dnsrecord\",\"content\":\"$ip\",\"ttl\":1,\"proxied\":true}" | jq
echo -en "$dt - Updated - $dnsrecord \n" >> $log
done
## Processing Non Proxied DNS Records
for dnsrecord in "${dnsrecords_not_proxied[@]}"
do
## For each DNS Record in Array "dnsrecords"
# Getting the DNS Record ID
dnsrecordid=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/$zoneid/dns_records?type=A&name=$dnsrecord" \
-H "X-Auth-Email: $cloudflare_auth_email" \
-H "Authorization: Bearer $cloudflare_auth_key" \
-H "Content-Type: application/json" | jq -r '{"result"}[] | .[0] | .id') &&
# Updating the DNS Record
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$zoneid/dns_records/$dnsrecordid" \
-H "X-Auth-Email: $cloudflare_auth_email" \
-H "Authorization: Bearer $cloudflare_auth_key" \
-H "Content-Type: application/json" \
--data "{\"type\":\"A\",\"name\":\"$dnsrecord\",\"content\":\"$ip\",\"ttl\":1,\"proxied\":false}" | jq
echo -en "$dt - Updated - $dnsrecord \n" >> $log
done
echo $ip > $log_ip
echo -en "$dt - Updates Completed.... \n" >> $log
echo "$(tail -n 1000 $log)" > $log
fi
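To run it on a schedule, a crontab entry along these lines would work (interval and script path are placeholders, not from the original post):
# Hypothetical crontab line: run the updater every 5 minutes
*/5 * * * * /usr/bin/bash /path/to/cloudflare-ddns.sh >/dev/null 2>&1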

How to pass arguments to the Rundeck Run API

I want to run a Rundeck job using the run API, and would like to pass a few parameters to the job at run time.
Do I need to configure the job to accept parameters?
How do I pass parameters to the run API?
Thanks in advance
Regards
SJ
Option 1: In the absence of tokens, first log in to get a cookie:
curl \
-D - \
-X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-H "Cache-Control: no-cache" \
-d "j_username=${RD_USER}&j_password=${RD_PASSWORD}" \
--cookie-jar rd_cookie \
"${RD_URL}/j_security_check"
Then use the cookie received from the successful login for subsequent transactions:
curl \
-D - \
-X "POST" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d "{\"argString\":\"-arg1 val1 -arg2 val2 -arg3 val-3 -arg4 val4 \"}" \
--cookie "#rd_cookie" \
"${RD_URL}/api/16/job/${RD_JOB_ID}/executions"
Option 2: With a token, it's simpler
curl \
-D - \
-X "POST" -H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "X-Rundeck-Auth-Token: ${RD_TOKEN}" \
-d "{\"argString\":\"-arg1 val1 -arg2 val2 -arg3 val-3 -arg4 val4 \"}" \
"${RD_URL}/api/16/job/${RD_JOB_ID}/executions"
The API documentation for Rundeck describes how to run a job:
http://rundeck.org/docs/api/#running-a-job
Yes, you need to create a parametrized job and pass in arguments as part of the API call. This can be considered a security measure: only expected parameters are accepted.
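As a side note, newer Rundeck API versions (v18 and later, if memory serves; check what your server supports) also accept a structured options map instead of a flat argString, which avoids quoting issues:
# Sketch assuming Rundeck API v18+: pass job options as a JSON map
curl \
-D - \
-X "POST" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "X-Rundeck-Auth-Token: ${RD_TOKEN}" \
-d '{"options":{"arg1":"val1","arg2":"val2"}}' \
"${RD_URL}/api/18/job/${RD_JOB_ID}/executions"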

How to create a connector-connection-pool in Glassfish 3.1.2 with REST interface

For remote administration of multiple Glassfish 3.1.2.2 instances I want to configure resource adapter connection pools and connector resources. This configuration can only be done after the resource adapter has been deployed.
All works if I do things with asadmin.
GET access via REST works as expected.
Example:
curl --user admin:pwd -X GET -H "Accept: application/JSON" \
http://localhost:4848/management/domain/resources/connector-connection-pool
Now I want to create a connection pool via REST, equivalent to the following asadmin command:
asadmin create-connector-connection-pool --raname MulticastDNS-connector \
--connectiondefinition multicastdns.outbound.MulticastDNSRegistry multicastdns/pool
I followed some of the Oracle examples http://docs.oracle.com/cd/E18930_01/html/821-2416/gjipx.html#gjijx or http://docs.oracle.com/cd/E19798-01/821-1751/gjijx/index.html
But everything I tried got 400 Bad Request.
Example:
curl --user admin:pwd -X POST -H "Accept: application/JSON" -H "Content-Type: application/json" \
-d '{"id":"multicastdns%2fpool","connectiondefinitionname":"multicastdns.outbound.MulticastDNSRegistry","resourceAdapterName":"MulticastDNS-connector"}' \
http://localhost:4848/management/domain/resources/connector-connection-pool -v
# other check
curl --user admin:pwd -X POST -H "Accept: application/JSON" \
-d id=multicastdns%2fpool \
-d connectiondefinitionname=multicastdns.outbound.MulticastDNSRegistry \
-d resourceAdapterName=MulticastDNS-connector \
http://localhost:4848/management/domain/resources/connector-connection-pool -v
Similar results when I try to delete an existing resource.
curl --user admin:pwd -X DELETE -H "Accept: application/JSON" \
http://localhost:4848/management/domain/resources/connector-connection-pool/multicastdns%2fpool -v
# other try
curl --user admin:pwd -X DELETE -H "Accept: application/JSON" \
-d id=multicastdns%2fpool \
http://localhost:4848/management/domain/resources/connector-connection-pool -v
Same issue if I try the second step with a connector resource:
asadmin create-connector-resource --poolname multicastdns/pool jca/multicastdns
GET via REST works, but adding and deleting an entry won't.
Thanks, Florian
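One thing worth checking, since it commonly causes exactly this symptom: GlassFish 3.1.2 enabled CSRF protection on the admin REST interface, and POST/DELETE requests without an X-Requested-By header (any value will do, if I recall correctly) are rejected with 400 Bad Request. A hedged retry of the delete:
# Sketch: same DELETE as above, plus the X-Requested-By header that
# GlassFish 3.1.2's CSRF filter expects on modifying requests
curl --user admin:pwd -X DELETE -H "Accept: application/JSON" \
-H "X-Requested-By: curl" \
http://localhost:4848/management/domain/resources/connector-connection-pool/multicastdns%2fpool -v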