initcontainer issue on deployment - kubernetes

I am facing an issue in an initContainer where I am unable to pull the value from curl and store it in a variable.
I tried multiple methods, like passing only the value, and passing the curl command directly.
NOTE: the condition is that when the cluster size is 3 it should exit the loop, and when the cluster size is not equal to 3 it should enter the loop and keep printing "waiting for pod".
Below is the script:
echo "running below scripts"
curl -g -s -H "Content-Type: application/json" "http://hazelcast-service.bjy:8711/hazelcast/health " | jq -r '.clusterSize' > i;
cat 'i';
n=`cat i`;
while [ $n -ne 3 ];
do
echo "waiting for pod";
sleep 5;
done
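For reference, a minimal sketch of how the loop could re-poll the health endpoint on every iteration (the service name, port, and path are taken from the script above; note that the quoted URL above also contains a trailing space, which curl will send as part of the URL):
#!/bin/sh
# Poll the Hazelcast health endpoint until the cluster reports 3 members.
url="http://hazelcast-service.bjy:8711/hazelcast/health"

n=$(curl -g -s -H "Content-Type: application/json" "$url" | jq -r '.clusterSize')
while [ "$n" != "3" ]; do
  echo "waiting for pod"
  sleep 5
  # re-query inside the loop so the variable is refreshed each iteration
  n=$(curl -g -s -H "Content-Type: application/json" "$url" | jq -r '.clusterSize')
done
echo "cluster size is 3, continuing"
Comparing strings with != "3" instead of -ne also avoids the test failing with an "integer expression expected" error when curl or jq return an empty value.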

Related

How to get the current Nodes where the Job is running

I'm developing a job and the user can choose which nodes it runs on, so the Node Filter is left open to the user's convenience.
When the job starts I need to do a calculation based on the number of nodes chosen by the user. Is there a way to get this number?
Regards,
Alejandro L
By design, that information is only available after the execution, so a good approach is to call the job via API (in a script step) and, with the execution ID number (available in the API call output), list and count the nodes, e.g.:
#!/bin/bash
nodes=$(curl -s -X GET "http://localhost:4440/api/41/execution/16" \
  --header "Accept: application/json" \
  --header "X-Rundeck-Auth-Token: your_user_token" \
  | jq -r '.successfulNodes | .[]')
number_of_nodes=$(echo $nodes | wc -w)
echo "Number of nodes: $number_of_nodes"
This example needs jq to extract the nodes from the API response.
Anyway, your request sounds like a good candidate for an enhancement request; please suggest it here.
A workaround would be to use the job.filter variable.
If you use @job.filter@,
it returns a string with the list of nodes, like us-east-1-0,us-east-1-1,us-east-1-2.
If you save it as a string and then split the string on ',', you get an array of nodes:
IFS=',' read -r -a array <<< "$string"
and then you can get the number of nodes with:
echo ${#array[@]}
Note
As @MegaDrive68k mentioned, this won't work if you select all nodes with the .* filter.
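Putting those pieces together, a minimal sketch of the workaround as an inline script step (assuming @job.filter@ expands to the comma-separated node list described above):
#!/bin/bash
# Rundeck expands @job.filter@ in the inline script before it runs
string="@job.filter@"

# split the comma-separated list into a bash array
IFS=',' read -r -a array <<< "$string"

# the array length is the number of targeted nodes
echo "Number of nodes: ${#array[@]}"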

Powershell: loop for equivalent

Currently, I'm running this command in a Linux shell:
for binary in $(curl -s -X GET "${FHIR_SERVER}/\$export-poll-status?_jobId=${JOB_ID}" -H "Authorization: Bearer ${ACCESS_TOKEN}" | jq -r ".output[].url"); \
do wget --header="Authorization: Bearer ${ACCESS_TOKEN}" ${binary} -O ->>patients-pre.json;
done
Is there any way to do this in PowerShell?
Without knowing the exact format of your incoming JSON, I'd use something like this in PowerShell / pwsh:
curl ... | % { wget (ConvertFrom-Json $_).output.url } >> patients-pre.json
The % is an alias for ForEach-Object which will iterate over objects (or lines of text) sent from the left side of the pipe. Powershell is interesting because when using the pipe symbol, there's an implicit for / do / done operation going on.
curl is on all modern versions of windows, so you don't need to change that. wget isn't, but you could install it, or use curl again.
If you want to go full PowerShell, look at Invoke-RestMethod ( https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/invoke-restmethod?view=powershell-7.2 ). It can do the jobs of both curl and wget in your example, and it automatically handles JSON responses to give you a structured object instead of text (which is what ConvertFrom-Json is doing above).

Getting most recent execution by job via RunDeck API

Is there a more efficient way to get the most recent execution of every job in a RunDeck project than by (1) querying for the job list and then (2) querying for the max: 1 execution list of each job in serial?
The most efficient way to get every job's last execution is to get the job IDs, put them in a list, and then call the execution endpoint for each one, based on this answer.
I made a working bash example using jq; take a look:
#!/bin/sh

# protocol
protocol="http"

# basic rundeck info
rdeck_host="localhost"
rdeck_port="4440"
rdeck_api="40"
rdeck_token="RQvvsGODsP8YhGUw6JARriXOAn6s6OQR"

# project name
project="ProjectEXAMPLE"

# first get all jobs (of ProjectEXAMPLE project)
jobs=$(curl -s --location "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/project/$project/jobs" --header "Accept: application/json" --header "X-Rundeck-Auth-Token: $rdeck_token" --header "Content-Type: application/json" | jq -r .[].id)

# just for debug, print all job ids
echo $jobs

# then iterate over the job ids and extract the last succeeded execution
for z in ${jobs}; do
  curl -s --location "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/job/$z/executions?status=succeeded&max=1" --header "Accept: application/json" --header "X-Rundeck-Auth-Token: $rdeck_token" --header "Content-Type: application/json" | jq
done
Sadly, there isn't a direct way to do that in a single call.

MongoDB Atlas. Is it safe to whitelist all IPs because someone attempting to access the database needs a password

I have a Google App Engine app running my Express server. I also have my DB in MongoDB Atlas. I currently have MongoDB Atlas whitelisting all IPs. The connection string is in the code for my Express server running on Google Cloud. Presumably any attacker trying to get into the database would still need a username and password for the connection string.
Is it safe to do this?
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
Is it safe to do this?
"Safe" is a relative term. It is safer than having an unauthed database open to the internet, but the weakest link is now your password.
A whitelist is an additional layer of security, so that if someone knows or can guess your password, they can't just connect from anywhere; they must connect from a set of known IP addresses. This makes the attack surface smaller, so the database is less likely to be broken into by a random person on the internet.
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
You would need to determine the IP ranges of your application and plug that range into the whitelist.
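For illustration, a minimal sketch of adding such a range to the project IP access list via the Atlas Admin API (the project ID, API key pair, and CIDR value are placeholders; the same accessList endpoint is used by the entrypoint script in the next answer):
#!/usr/bin/env bash
# Add one CIDR range to the MongoDB Atlas project IP access list.
# MONGO_ATLAS_API_PK / MONGO_ATLAS_API_SK: Atlas programmatic API key pair (assumed to exist)
# PROJECT_ID: the Atlas project (group) ID
# APP_CIDR: the IP range your app connects from, e.g. "203.0.113.0/24" (example value)

curl -s --user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
  --header "Content-Type: application/json" \
  --request POST \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups/$PROJECT_ID/accessList" \
  --data "[{ \"cidrBlock\": \"$APP_CIDR\", \"comment\": \"app engine egress range\" }]"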
Here is an answer I left elsewhere; hope it helps someone who comes across this:
This script will be kept up to date on my gist.
why
Mongo Atlas provides reasonably priced access to a managed Mongo DB. The CSPs where containers are hosted charge too much for their own managed Mongo DB offerings. They all suggest setting an insecure CIDR (0.0.0.0/0) to allow the container to access the cluster, which is obviously ridiculous.
This entrypoint script is surgical, to maintain least-privileged access: only the current hosted IP address of the service is whitelisted.
usage
set as the entrypoint for the Dockerfile
run in cloud-init / VM startup if not using a container (and delete the last line, exec "$@", since that is only needed for containers)
behavior
uses the mongo atlas project IP access list endpoints
will detect the hosted IP address of the container and whitelist it with the cluster using the mongo atlas API
if the service has no whitelist entry, one is created
if the service has an existing whitelist entry that matches the current IP, no change is made
if the service IP has changed, the old entry is deleted and a new one is created
when a whitelist entry is created, the service sleeps for 60s to wait for Atlas to propagate access to the cluster
env
setup
1. create API key for the org
2. add API key to the project
3. copy the public key (MONGO_ATLAS_API_PK) and secret key (MONGO_ATLAS_API_SK)
4. go to the project settings page and copy the project ID (MONGO_ATLAS_API_PROJECT_ID)
5. provide the following values in the env of the container service
SERVICE_NAME: unique name used for creating / updating (deleting old) whitelist entry
MONGO_ATLAS_API_PK: step 3
MONGO_ATLAS_API_SK: step 3
MONGO_ATLAS_API_PROJECT_ID: step 4
deps
bash
curl
jq CLI JSON parser
# alpine / apk
apk update \
&& apk add --no-cache \
bash \
curl \
jq
# ubuntu / apt
export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get -y install \
bash \
curl \
jq
script
#!/usr/bin/env bash
# -- ENV -- #
# these must be available to the container service at runtime
#
# SERVICE_NAME
#
# MONGO_ATLAS_API_PK
# MONGO_ATLAS_API_SK
# MONGO_ATLAS_API_PROJECT_ID
#
# -- ENV -- #
set -e
mongo_api_base_url='https://cloud.mongodb.com/api/atlas/v1.0'
check_for_deps() {
  deps=(
    bash
    curl
    jq
  )

  for dep in "${deps[@]}"; do
    if [ ! "$(command -v $dep)" ]
    then
      echo "dependency [$dep] not found. exiting"
      exit 1
    fi
  done
}
make_mongo_api_request() {
  local request_method="$1"
  local request_url="$2"
  local data="$3"

  curl -s \
    --user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
    --header "Accept: application/json" \
    --header "Content-Type: application/json" \
    --request "$request_method" "$request_url" \
    --data "$data"
}
get_access_list_endpoint() {
  echo -n "$mongo_api_base_url/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList"
}

get_service_ip() {
  echo -n "$(curl https://ipinfo.io/ip -s)"
}

get_previous_service_ip() {
  local access_list_endpoint=`get_access_list_endpoint`

  local previous_ip=`make_mongo_api_request 'GET' "$access_list_endpoint" \
    | jq --arg SERVICE_NAME "$SERVICE_NAME" -r \
    '.results[]? as $results | $results.comment | if test("\\[\($SERVICE_NAME)\\]") then $results.ipAddress else empty end'`

  echo "$previous_ip"
}
whitelist_service_ip() {
  local current_service_ip="$1"
  local comment="Hosted IP of [$SERVICE_NAME] [set#$(date +%s)]"

  if (( "${#comment}" > 80 )); then
    echo "comment field value will be above 80 char limit: \"$comment\""
    echo "comment would be too long due to length of service name [$SERVICE_NAME] [${#SERVICE_NAME}]"
    echo "change comment format or service name then retry. exiting to avoid mongo API failure"
    exit 1
  fi

  echo "whitelisting service IP [$current_service_ip] with comment value: \"$comment\""

  response=`make_mongo_api_request \
    'POST' \
    "$(get_access_list_endpoint)?pretty=true" \
    "[
      {
        \"comment\" : \"$comment\",
        \"ipAddress\": \"$current_service_ip\"
      }
    ]" \
    | jq -r 'if .error then . else empty end'`

  if [[ -n "$response" ]];
  then
    echo 'API error whitelisting service'
    echo "$response"
    exit 1
  else
    echo "whitelist request successful"
    echo "waiting 60s for whitelist to propagate to cluster"
    sleep 60s
  fi
}
delete_previous_service_ip() {
  local previous_service_ip="$1"

  echo "deleting previous service IP address of [$SERVICE_NAME]"

  make_mongo_api_request \
    'DELETE' \
    "$(get_access_list_endpoint)/$previous_service_ip"
}

set_mongo_whitelist_for_service_ip() {
  local current_service_ip=`get_service_ip`
  local previous_service_ip=`get_previous_service_ip`

  if [[ -z "$previous_service_ip" ]]; then
    echo "service [$SERVICE_NAME] has not yet been whitelisted"
    whitelist_service_ip "$current_service_ip"
  elif [[ "$current_service_ip" == "$previous_service_ip" ]]; then
    echo "service [$SERVICE_NAME] IP has not changed"
  else
    echo "service [$SERVICE_NAME] IP has changed from [$previous_service_ip] to [$current_service_ip]"
    delete_previous_service_ip "$previous_service_ip"
    whitelist_service_ip "$current_service_ip"
  fi
}
check_for_deps
set_mongo_whitelist_for_service_ip
# run CMD
exec "$@"

Slow importing into Google Cloud SQL

Google Cloud SQL is my first real evaluation of MySQL as a service. I created a D32 instance, set replication to async, and disabled binary logging. Importing 5.5 GB from dump files from a GCE n1-standard-1 instance in the same zone took 97 minutes.
Following the documentation, the connection was made using the public IP address, but it is in the same region and zone. I'm fully open to the fact that I did something incorrectly. Is there anything immediately obvious that I should be doing differently?
We have been importing ~30 GB via Cloud Storage from zip files containing SQL statements, and this is taking over 24 hours.
A big factor is the number of indexes that you have on the given table.
To keep it manageable, we split the file into chunks of 200K SQL statements each, which are inserted in one transaction. This enables us to retry individual chunks in case of errors.
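As a rough illustration of that chunking step, a minimal sketch (assuming the dump has one SQL statement per line; the file names and chunk size are placeholders, not the exact tooling used):
#!/bin/bash
# Split a large dump into ~200K-statement chunks and wrap each one in a single transaction.
# Assumes dump.sql has one statement per line; adjust the line count otherwise.
split -l 200000 -d dump.sql chunk_

for f in chunk_*; do
  {
    echo "START TRANSACTION;"
    cat "$f"
    echo "COMMIT;"
  } > "$f.sql"
  rm "$f"
done

# Each chunk_NN.sql can then be uploaded to Cloud Storage and imported (and retried) individually.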
We also tried to do it via Compute Engine (mysql command line), and in our experience this was even slower.
Here is how to import one chunk and wait for it to complete. You cannot do this in parallel, as Cloud SQL only allows one import operation at a time.
#!/bin/bash

function refreshAccessToken() {
  echo "getting access token..."
  ACCESSTOKEN=`curl -s "http://metadata/computeMetadata/v1/instance/service-accounts/default/token" -H "X-Google-Metadata-Request: True" | jq ".access_token" | sed 's/"//g'`
  echo "retrieved access token $ACCESSTOKEN"
}

START=`date +%s%N`
DB_INSTANCE=$1
GCS_FILE=$2
SLEEP_SECONDS=$3

refreshAccessToken

CURL_URL="https://www.googleapis.com/sql/v1beta1/projects/myproject/instances/$DB_INSTANCE/import"
CURL_OPTIONS="-s --header 'Content-Type: application/json' --header 'Authorization: OAuth $ACCESSTOKEN' --header 'x-goog-project-id:myprojectId' --header 'x-goog-api-version:1'"
CURL_PAYLOAD="--data '{ \"importContext\": { \"database\": \"mydbname\", \"kind\": \"sql#importContext\", \"uri\": [ \"$GCS_FILE\" ]}}'"
CURL_COMMAND="curl --request POST $CURL_URL $CURL_OPTIONS $CURL_PAYLOAD"

echo "executing $CURL_COMMAND"
CURL_RESPONSE=`eval $CURL_COMMAND`
echo "$CURL_RESPONSE"
OPERATION=`echo $CURL_RESPONSE | jq ".operation" | sed 's/"//g'`
echo "Import operation $OPERATION started..."

CURL_URL="https://www.googleapis.com/sql/v1beta1/projects/myproject/instances/$DB_INSTANCE/operations/$OPERATION"
STATE="RUNNING"

while [[ $STATE == "RUNNING" ]]
do
  echo "waiting for $SLEEP_SECONDS seconds for the import to finish..."
  sleep $SLEEP_SECONDS
  refreshAccessToken
  CURL_OPTIONS="-s --header 'Content-Type: application/json' --header 'Authorization: OAuth $ACCESSTOKEN' --header 'x-goog-project-id:myprojectId' --header 'x-goog-api-version:1'"
  CURL_COMMAND="curl --request GET $CURL_URL $CURL_OPTIONS"
  CURL_RESPONSE=`eval $CURL_COMMAND`
  STATE=`echo $CURL_RESPONSE | jq ".state" | sed 's/"//g'`
  END=`date +%s%N`
  ELAPSED=`echo "scale=8; ($END - $START) / 1000000000" | bc`
  echo "Import process $OPERATION for $GCS_FILE : $STATE, elapsed time $ELAPSED"
done