Minimized deployment of WSO2 APIM - deployment
We're considering providing our own UI for WSO2 and making it work with the APIM gateway by invoking the Publisher/Store REST APIs.
Is there a way to strip off the UI part of WSO2 APIM and have a deployment containing only
the gateway
the key manager
the publisher --> REST API only, no UI
the store --> REST API only, no UI
Is there such bundle available out of the box?
Otherwise, would it be possible to download either the GitHub source or the deployment package and remove the UI-related plugins and their dependent libraries?
If you don't need any UI components, you can remove the publisher and store Jaggery web applications from the repository/deployment/server/jaggeryapps location (see the sketch after the references below). If you check out the source code, you will need to check out both the product repo [1] and the component repo [2] to make the changes required to remove the UI, but that adds complexity and takes time. Without the UI, you can still use the REST API in 1.10.0. There is no such bundle available out of the box.
[1]-https://github.com/wso2/product-apim
[2]-https://github.com/wso2/carbon-apimgt
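For illustration, here is a minimal, untested sketch of that cleanup, assuming APIM_HOME points at a default APIM 1.10.0 installation (the path and backup directory are placeholders):
# Hypothetical removal of the UI (Jaggery) apps; the Publisher/Store REST APIs do not need them.
APIM_HOME=/opt/wso2am-1.10.0
mkdir -p /tmp/apim-ui-backup
mv "$APIM_HOME/repository/deployment/server/jaggeryapps/publisher" /tmp/apim-ui-backup/
mv "$APIM_HOME/repository/deployment/server/jaggeryapps/store" /tmp/apim-ui-backup/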
This is our script for doing this (we are still testing it, so any bug report or comment is appreciated):
#!/bin/bash
# WSO2 2.1.0
# Publish an API in several gateways, using internal REST API
# Reference
# https://docs.wso2.com/display/AM210/apidocs/publisher/
# IMPORTANT: Change these values according to your WSO2 APIM version
# Version 2.1.0
# declare APP_CLIENT_REGISTRATION="/client-registration/v0.11/register"
# declare -r URI_API_CTX="/api/am/publisher/v0.11"
# Version 2.1.0 update 14
declare -r APP_CLIENT_REGISTRATION="/client-registration/v0.12/register"
declare -r URI_API_CTX="/api/am/publisher/v0.12"
# Constants
declare -r URI_TOKEN="/token"
declare -r URI_API_APIS="${URI_API_CTX}/apis"
declare -r URI_API_ENVIRONMENTS="${URI_API_CTX}/environments"
declare -r URI_API_PUBLISH="${URI_API_CTX}/apis/change-lifecycle?action=Publish&apiId="
declare -r API_SCOPE_VIEW="apim:api_view"
declare -r API_SCOPE_PUBLISH="apim:api_publish"
declare -r API_SCOPE_CREATE="apim:api_create"
# Parameters
declare APIUSER=""
declare APIPASSWORD=""
declare APIMANAGER=""
declare APINAME=""
declare APIVERSION=""
declare -a APIGATEWAY
declare -i MANAGER_SERVICES_PORT=9443
declare -i MANAGER_NIOPT_PORT=8243
# Variables
# User login for application registration. User:Password in base64 (default admin:admin)
declare APIAUTH="YWRtaW46YWRtaW4="
# Client application token. ClientId:ClientSecret in base64
declare CLIENTTOKEN
# User access token (view)
declare ACCESSVIEWTOKEN
# User access token type (view)
declare ACCESSVIEWTOKENTYPE="Bearer"
# User access token (publish)
declare ACCESSPUBLISHTOKEN
# User access token type (publish)
declare ACCESSPUBLISHTOKENTYPE="Bearer"
# User access token (create)
declare ACCESSCREATETOKEN
# User access token type (create)
declare ACCESSCREATETOKENTYPE="Bearer"
# API internal ID
declare APIID
# echoErr
# Send message to error stream (/dev/stderr by default)
function echoErr() {
printf "%s\n" "$*" >&2;
}
# showHelp
# Usage info
showHelp() {
cat <<-EOF
Usage: ${0##*/} [-u USER] [-p PASSWORD] [-s ServicePort] [-n NioPTPort] APIMANAGER APINAME APIVERSION APIGATEWAY [APIGATEWAY] ...
Publish an API in the selected gateways
-u USER User name (if not defined, will ask for it)
-p PASSWORD User password (if not defined, will ask for it)
-s ServicePort Services Port in api manager host (by default 9443)
-n NioPTPort Nio/PT Port in key manager host (by default 8243)
APIMANAGER API MANAGER / KEY MANAGER host name (e.g. apimanager.example.com)
APINAME API to publish (has to be in CREATED, PROTOTYPED or PUBLISHED state)
APIVERSION API Version to publish
APIGATEWAYs Gateways to publish the API to (one or more)
EOF
}
# getPassword
# read a password-type field (no echo, asked twice for verification)
function getPassword()
{
local pwd=${3:-"NoSet"}
local verify="_Set_No"
local default=""
if [ -z "$1" ] || [ -z "$2" ]
then
echo 'ERROR: Use getPassword "Message" VAR_NAME [default]'
exit 1
else
if [ -n "${3}" ]
then
default=$'\e[31m['${3}$']\e[0m'
fi
while true
do
read -sp "$1 $default" pwd
echo ""
# if empty (just Enter) use the default if available
if [ "$pwd" == "" ] && [ -n "$3" ]
then
pwd="$3"
break
fi
# check password length
if [ ${#pwd} -lt 6 ]
then
echo "Password too short. Minimum length is 6"
continue
else
read -sp "Verify - $1 " verify
echo ""
if [ "$pwd" != "$verify" ]
then
echo "Passwords do not match. Retype."
else
break
fi
fi
done
printf -v "$2" '%s' "$pwd"
fi
}
# showGateways
# Print the list of available gateways in a friendly form
function showGateways() {
local -i count
local name
local gwtype
local endpoint
if [ -z "$1" ]
then
echo "Use: showGateways \$apiEnvironments"
else
count=$(echo "$1"|jq -r '.count')
if [ "$count" -gt "0" ]
then
printf "%-20s %-10s %s\n" "Name" "Type" "Endpoint HTTPS" >&2
printf "%-20s %-10s %s\n" "====================" "==========" "===============================================" >&2
for i in $(seq 0 $(( $count - 1 )) )
do
name=$(echo "$1"|jq -r '.list['$i'].name')
gwtype=$(echo "$1"|jq -r '.list['$i'].type')
endpoint=$(echo "$1"|jq -r '.list['$i'].endpoints.https')
printf "%-20s %-10s %s\n" "$name" "$gwtype" "$endpoint" >&2
done
fi
fi
}
# validateGateway
# validate that all the gateway names (from the global APIGATEWAY array) are present in the environments list
function validateGateways() {
if [ -z "$1" ]
then
echo "Use: validateGateways \$apiEnvironments"
exit 1
else
for gateway in "${APIGATEWAY[@]}"
do
jq -er \
--arg gateway_name "$gateway" '
.list[] |
select(.name == $gateway_name)
' <<<"$1" >/dev/null
if [ $? -ne 0 ]
then
echo "ERROR: Gateway '$gateway' is not found" >&2
return 1
fi
done
fi
return 0
}
# getClientToken
# Parse the answer of client registration, to get client token
# return (echo to stdout) the clientToken
function getClientToken() {
local clientId
local clientSecret
local clientToken
if [ -z "$1" ]
then
echo "Use: getClientToken \$clientRegistration" >&2
exit 1
else
# Parse answer to get ClientId and ClientSecret
clientId=$(echo "$1"|jq -r '.clientId')
clientSecret=$(echo "$1"|jq -r '.clientSecret')
if [ "$clientId" == "" ] || [ "$clientSecret" == "" ] || [ "$clientId" == "null" ] || [ "$clientSecret" == "null" ]
then
return 1
else
echo -n "$clientId:$clientSecret"|base64
return 0
fi
fi
}
# getAccessToken
# Parse the answer of client API Login, to get client token
# return (echo to stdout) the accessToken
function getAccessToken() {
local accessToken
if [ -z "$1" ]
then
echo "Use: getAccessToken \$clientAPILoginView|\$clientAPILoginPublish" >&2
exit 1
else
# Parse answer to get the access token
accessToken=$(echo "$1"|jq -r '.access_token')
if [ "$accessToken" == "" ] || [ "$accessToken" == "null" ]
then
return 1
else
echo -n "$accessToken"
return 0
fi
fi
}
# getAccessTokenType
# Parse the answer of client API Login, to get client token type
# return (echo to stdout) the accessTokenType
function getAccessTokenType() {
local tokenType
if [ -z "$1" ]
then
echo "Use: getAccessTokenType \$clientAPILoginView|\$clientAPILoginPublish" >&2
exit 1
else
# Parse answer to get the token type
tokenType=$(echo "$1"|jq -r '.token_type')
if [ "$tokenType" == "" ] || [ "$tokenType" == "null" ]
then
return 1
else
echo -n "$tokenType"
return 0
fi
fi
}
# getAPIId
# Parse the answer of query API to get the API ID (checking version name)
# Thanks to https://stackoverflow.com/users/14122/charles-duffy
# return (echo to stdout) the APIID
function getAPIId() {
if [ -z "$1" ]
then
echo "Usage: getAPIId \$apiQuery" >&2
exit 1
else
# Parse answer to get API ID
jq -er \
--arg target_name "$APINAME" \
--arg target_version "$APIVERSION" '
.list[] |
select(.name == $target_name) |
select(.version == $target_version) |
.id' <<<"$1"
fi
}
# getAPIGatewayEnvironments
# Parse the answer of detailed query API to get the API gateway environments
# return (echo to stdout) the gateway environments
function getAPIGatewayEnvironments() {
if [ -z "$1" ]
then
echo "Usage: getAPIGatewayEnvironments \$apiResource" >&2
exit 1
else
# Parse answer to get the gateway environments
jq -er '.gatewayEnvironments' <<<"$1"
fi
}
# getAPIStatus
# Parse the answer of detailed query API to get the API status
# return (echo to stdout) the status
function getAPIStatus() {
if [ -z "$1" ]
then
echo "Usage: getAPIStatus \$apiResource" >&2
exit 1
else
# Parse answer to get the API status
jq -er '.status' <<<"$1"
fi
}
# setGateways
# Update the gatewayEnvironments field in the API resource from the APIGATEWAY parameter array
# Return the updated API resource
function setGateways() {
local gateways
local oIFS
if [ -z "$1" ]
then
echo "Use: setGateways \$apiResource" >&2
exit 1
else
oIFS="$IFS";IFS=',';gateways="${APIGATEWAY[*]}";IFS="$oIFS"
jq -e --arg gw "$gateways" '.gatewayEnvironments=$gw' <<<"$1"
fi
}
# checkGateways
# check if the gateways have been updated correctly
function checkGateways() {
local gateways
local apiResourceGateways
local oIFS
if [ -z "$1" ]
then
echo "Use: checkGateways \$apiResourceUpdated" >&2
exit 1
else
oIFS="$IFS";IFS=',';gateways="${APIGATEWAY[*]}";IFS="$oIFS"
apiResourceGateways=$(echo "$1"|jq -r '.gatewayEnvironments')
# Return value
if [ -z "$apiResourceGateways" ] || [ "$apiResourceGateways" == "null" ]
then
return 1
fi
# TODO: The gateways may come back in a different order (reversed by API Manager?), so a plain string compare fails
#if [ "$gateways" != "$apiResourceGateways" ]
#then
# return 1
#fi
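# Possible order-insensitive comparison (untested sketch): split the comma-separated
# lists, sort them and compare the sorted output instead of the raw strings.
#if [ "$(tr ',' '\n' <<<"$gateways" | sort)" != "$(tr ',' '\n' <<<"$apiResourceGateways" | sort)" ]
#then
# return 1
#fi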
fi
return 0
}
# getParms
# Parse the parms and assign to variables
function getParms() {
local OPTIND=1
while getopts "hu:p:s:n:" opt
do
case $opt in
h)
showHelp
exit 0
;;
u)
APIUSER=$OPTARG
;;
p)
APIPASSWORD=$OPTARG
;;
s)
MANAGER_SERVICES_PORT=$OPTARG
;;
n)
MANAGER_NIOPT_PORT=$OPTARG
;;
*)
showHelp >&2
exit 1
;;
esac
done
shift "$((OPTIND-1))" # Discard the options and get parameter
APIMANAGER=$1
if [ "$APIMANAGER" == "" ]
then
echo "APIMANAGER host name is required"
showHelp >&2
exit 1
fi
shift 1
APINAME=$1
if [ "$APINAME" == "" ]
then
echo "API name to publish is required"
showHelp >&2
exit 1
fi
shift 1
APIVERSION=$1
if [ "$APIVERSION" == "" ]
then
echo "API version to publish is required"
showHelp >&2
exit 1
fi
shift 1
if [ "$1" == "" ]
then
echo "You must indicate 1 or more gateway to publish is required"
showHelp >&2
exit 1
else
local i=1
for arg in "$@"
do
APIGATEWAY[$i]="$arg"
let i=i+1
done
fi
}
###############################################################################
# Check required internal tools
if ! type -t jq >/dev/null
then
echo "jq not found. Install it, e.g. 'apt-get install jq'"
exit 2
fi
# Read and parse parms. Prompt for any required values that are missing
getParms "$@"
if [ "$APIUSER" == "" ]
then
APIUSER=admin
read -p $'Publisher user: \e[31m['${APIUSER}$']\e[0m ' parm
APIUSER=${parm:-$APIUSER}
fi
if [ "$APIPASSWORD" == "" ]
then
APIPASSWORD=admin
read -sp $'Publisher password: \e[31m['${APIPASSWORD}$']\e[0m ' parm
APIPASSWORD=${parm:-$APIPASSWORD}
echo ""
fi
# TEST ONLY: Delete (show parameter values)
# echo "USER=$APIUSER"
# echo "PASSWORD=$APIPASSWORD"
# echo "APIMANAGER=$APIMANAGER"
# echo "APINAME=$APINAME"
# for GWY in ${!APIGATEWAY[@]}
# do
# echo "APIGATEWAY[$GWY]=${APIGATEWAY[$GWY]}"
# done
# Client registration
echo "Registering this script as a client application (rest_api_publisher)"
APIAUTH=$(echo -n $APIUSER:$APIPASSWORD|base64)
clientRegistration=$(
curl -s -X POST "https://${APIMANAGER}:${MANAGER_SERVICES_PORT}${APP_CLIENT_REGISTRATION}" \
-H "Authorization: Basic ${APIAUTH}" \
-H "Content-Type: application/json" \
-d @- <<-EOF
{
"callbackUrl": "www.google.lk",
"clientName": "rest_api_publisher",
"owner": "$APIUSER",
"grantType": "password refresh_token",
"saasApp": true
}
EOF
)
if [ "$clientRegistration" == "" ]
then
echo "ERROR: Empty answer from https://${APIMANAGER}:${MANAGER_SERVICES_PORT}${APP_CLIENT_REGISTRATION}. Is APIMANAGER correct?" >&2
exit 3
fi
# Get Application Client Token
CLIENTTOKEN=$(getClientToken "$clientRegistration")
if [ $? -ne 0 ]
then
echo $clientRegistration >&2
echo "ERROR: Cannot get ClientId/ClientSecret: Is user/password correct?" >&2
exit 4
fi
# TEST ONLY: Delete
# echo "CLIENTTOKEN=$CLIENTTOKEN"
echo "Aplication rest_api_publisher registered"
# Client login to get Access Token (and Token Type) - View Scope
echo "Obtaining access token for API query (scope api_view)"
clientAPILoginView=$(
curl -s -X POST "https://${APIMANAGER}:${MANAGER_NIOPT_PORT}${URI_TOKEN}" \
-H "Authorization: Basic ${CLIENTTOKEN}" \
-d "grant_type=password&username=${APIUSER}&password=${APIPASSWORD}&scope=${API_SCOPE_VIEW}"
)
ACCESSVIEWTOKEN=$(getAccessToken "$clientAPILoginView") && ACCESSVIEWTOKENTYPE=$(getAccessTokenType "$clientAPILoginView")
if [ $? -ne 0 ]
then
echo $clientAPILoginView >&2
echo "ERROR: Cannot get Access Token: Has the user '$APIUSER' in necesary role for scope ${API_SCOPE_VIEW}" >&2
exit 5
fi
# TEST ONLY: Delete
# echo "Access View Token=$ACCESSVIEWTOKEN"
# echo "Token View Type=$ACCESSVIEWTOKENTYPE"
# Client login to get Access Token (and Token Type) - Publish Scope
echo "Obtaining access token for API publish (scope api_publish)"
clientAPILoginPublish=$(
curl -s -X POST "https://${APIMANAGER}:${MANAGER_NIOPT_PORT}${URI_TOKEN}" \
-H "Authorization: Basic ${CLIENTTOKEN}" \
-d "grant_type=password&username=${APIUSER}&password=${APIPASSWORD}&scope=${API_SCOPE_PUBLISH}"
)
ACCESSPUBLISHTOKEN=$(getAccessToken "$clientAPILoginPublish") && ACCESSPUBLISHTOKENTYPE=$(getAccessTokenType "$clientAPILoginPublish")
if [ $? -ne 0 ]
then
echo $clientAPILoginPublish >&2
echo "ERROR: Cannot get Access Token: Has the user $APIUSER in necesary role for scope ${API_SCOPE_PUBLISH}" >&2
exit 5
fi
# TEST ONLY: Delete
# echo "Access Publish Token=$ACCESSPUBLISHTOKEN"
# echo "Token Publish Type=$ACCESSPUBLISHTOKENTYPE"
# Client login to get Access Token (and Token Type) - Create Scope
echo "Obtaining access token for API create (scope api_create)"
clientAPILoginCreate=$(
curl -s -X POST "https://${APIMANAGER}:${MANAGER_NIOPT_PORT}${URI_TOKEN}" \
-H "Authorization: Basic ${CLIENTTOKEN}" \
-d "grant_type=password&username=${APIUSER}&password=${APIPASSWORD}&scope=${API_SCOPE_CREATE}"
)
ACCESSCREATETOKEN=$(getAccessToken "$clientAPILoginCreate") && ACCESSCREATETOKENTYPE=$(getAccessTokenType "$clientAPILoginCreate")
if [ $? -ne 0 ]
then
echo $clientAPILoginCreate|jq . >&2
echo "ERROR: Cannot get Access Token: Has the user $APIUSER in necesary role for scope ${API_SCOPE_CREATE}" >&2
exit 5
fi
# TEST ONLY: Delete
# echo "Access Create Token=$ACCESSCREATETOKEN"
# echo "Token Create Type=$ACCESSCREATETOKENTYPE"
echo "All tokens obtained"
# Get API info (exists?)
echo "Checking API with name '${APINAME}' with version '${APIVERSION}' in '${APIMANAGER}'"
apiQuery=$(
curl -s "https://${APIMANAGER}:${MANAGER_SERVICES_PORT}${URI_API_APIS}?query=name:$APINAME" \
-H "Authorization: ${ACCESSVIEWTOKENTYPE} ${ACCESSVIEWTOKEN}"
)
# TEST ONLY: Delete
# echo "apiQuery=${apiQuery}"
APIID=$(getAPIId "$apiQuery")
if [ $? -ne 0 ]
then
echo $apiQuery >&2
echo "ERROR: Cannot find an API ${APINAME} with version '${APIVERSION}' in '${APIMANAGER}'" >&2
exit 6
fi
echo "API Found. APIID='$APIID'"
# Get available gateways and validate gateway names
echo "Checking if requested gateways '${APIGATEWAY[@]}' are available in '${APIMANAGER}'"
apiEnvironments=$(
curl -s "https://${APIMANAGER}:${MANAGER_SERVICES_PORT}${URI_API_ENVIRONMENTS}" \
-H "Authorization: ${ACCESSVIEWTOKENTYPE} ${ACCESSVIEWTOKEN}"
)
# TEST ONLY: Delete
# echo "apiEnvironments=$apiEnvironments"
if ! validateGateways "$apiEnvironments"
then
echo "Valid gateways are:"
showGateways "$apiEnvironments"
exit 7
fi
echo "API required gateways checked"
# Get API detailed info
echo "Getting API detailed info of '${APINAME}' with version '${APIVERSION}' in '${APIMANAGER}'"
apiResource=$(
curl -s -S -f -X GET "https://${APIMANAGER}:${MANAGER_SERVICES_PORT}${URI_API_APIS}/${APIID}" \
-H "Authorization: ${ACCESSVIEWTOKENTYPE} ${ACCESSVIEWTOKEN}"
)
if [ $? -ne 0 ]
then
echo "ERROR: Cannot get API detailed information of '${APINAME}' with version '${APIVERSION}' in '${APIMANAGER}'" >&2
exit 8
fi
# TEST ONLY: Delete
# jq . <<<$apiResource
currentGatewayEnvironments=$(getAPIGatewayEnvironments "$apiResource") && currentStatus=$(getAPIStatus "$apiResource")
if [ $? -ne 0 ]
then
jq . <<<$apiResource >&2
echo "ERROR: Cannot get API detailed information of '${APINAME}' with version '${APIVERSION}' in '${APIMANAGER}'" >&2
exit 8
fi
echo "API is currently configured for gateways: '${currentGatewayEnvironments}'"
echo "API is currently in status: '${currentStatus}'"
# Update API gateways info
apiResourceUpdated=$(setGateways "$apiResource")
if [ $? -ne 0 ]
then
echo $apiResourceUpdated | jq . >&2
echo "ERROR: Cannot update gateways in API resource" >&2
exit 9
fi
# TEST ONLY: Delete
jq . <<<$apiResourceUpdated >&2
# PENDING: Update also required information (e.g., Endpoints)
# Update gateways
echo "Updating API gateways of '${APINAME}' with version '${APIVERSION}' in '${APIMANAGER}' to '${APIGATEWAY[#]}'"
apiResourceUpdatedResponse=$(
curl -s -S -f -X PUT "https://${APIMANAGER}:${MANAGER_SERVICES_PORT}${URI_API_APIS}/${APIID}" \
-H "Content-Type: application/json" \
-H "Authorization: ${ACCESSCREATETOKENTYPE} ${ACCESSCREATETOKEN}" \
-d "$apiResourceUpdated"
)
if [ $? -ne 0 ]
then
# Retry request to show error in console
curl -s -X PUT "https://${APIMANAGER}:${MANAGER_SERVICES_PORT}${URI_API_APIS}/${APIID}" \
-H "Content-Type: application/json" \
-H "Authorization: ${ACCESSCREATETOKENTYPE} ${ACCESSCREATETOKEN}" \
-d "$apiResourceUpdated"|jq .
echo "ERROR: Cannot update gateways in API resource. Check API for missing information (HTTP Endpoints, ...)" >&2
exit 10
fi
# TEST ONLY: Delete
# jq . <<<$apiResourceUpdatedResponse
if ! checkGateways "$apiResourceUpdatedResponse"
then
echo "$apiResourceUpdatedResponse" | jq . >&2
echo "ERROR: Error updating gateways in API resource" >&2
exit 9
fi
echo "API Updated"
# Publish
echo "Publishing '${APINAME}' with version '${APIVERSION}' in '${APIMANAGER}' "
apiResource=$(
curl -s -S -f -X POST "https://${APIMANAGER}:${MANAGER_SERVICES_PORT}${URI_API_PUBLISH}${APIID}" \
-H "Authorization: ${ACCESSPUBLISHTOKENTYPE} ${ACCESSPUBLISHTOKEN}"
)
if [ $? -ne 0 ]
then
echo "ERROR: Publishing '${APINAME}' with version '${APIVERSION}' in '${APIMANAGER}'" >&2
exit 10
fi
echo "API Published"
# Verify status and gateways
echo "Verify API detailed info of '${APINAME}' with version '${APIVERSION}' in '${APIMANAGER}'"
apiResource=$(
curl -s -S -f -X GET "https://${APIMANAGER}:${MANAGER_SERVICES_PORT}${URI_API_APIS}/${APIID}" \
-H "Authorization: ${ACCESSVIEWTOKENTYPE} ${ACCESSVIEWTOKEN}"
)
if [ $? -ne 0 ]
then
echo "ERROR: Cannot get API detailed information of '${APINAME}' with version '${APIVERSION}' in '${APIMANAGER}'" >&2
exit 11
fi
currentGatewayEnvironments=$(getAPIGatewayEnvironments "$apiResource") && currentStatus=$(getAPIStatus "$apiResource")
if [ $? -ne 0 ]
then
jq . <<<$apiResource >&2
echo "ERROR: Cannot get API detailed information of '${APINAME}' with version '${APIVERSION}' in '${APIMANAGER}'" >&2
exit 12
fi
echo "API is now configured for gateways: '${currentGatewayEnvironments}'"
echo "API is now in status: '${currentStatus}'"
Related
error calling tpl: error during tpl function execution for "configuration.yaml.default (home assistant helm upgrade on truenas scale)
I'm having trouble trying to update my home assistant with truecharts. [EFAULT] Failed to upgrade chart release: Error: UPGRADE FAILED: template: commonloader.apply" at : error calling include: template: home-assistant/charts/common/templates/spawner/_configmap.tpl:16:10: executing "tc.common.spawner.configmap" at : error calling include: template: home-assistant/charts/common/templates/class/_configmap.tpl:33:6: executing "tc.common.class.configmap" at : error calling tpl: error during tpl function execution for "configuration.yaml.default: {{- if hasKey .Values \"ixChartContext\" }} - {{ .Values.ixChartContext.kubernetes_config.cluster_cidr }} {{- else }} {{- range .Values.homeassistant.trusted_proxies }} - {{ . }} {{- end }} {{- end }} init.sh: |- #!/bin/sh if test -f \"/config/configuration.yaml\"; then echo \"configuration.yaml exists.\" if grep -q recorder: \"/config/configuration.yaml\"; then echo \"configuration.yaml already contains recorder\" else cat /config/init/recorder.default >> /config/configuration.yaml fi if grep -q http: \"/config/configuration.yaml\"; then echo \"configuration.yaml already contains http section\" else cat /config/init/http.default >> /config/configuration.yaml fi else echo \"configuration.yaml does NOT exist.\" cp /config/init/configuration.yaml.default /config/configuration.yaml cat /config/init/recorder.default >> /config/configuration.yaml cat /config/init/http.default >> /config/configuration.yaml fi echo \"Creating include files...\" for include_file in groups.yaml automations.yaml scripts.yaml scenes.yaml; do if test -f \"/config/$include_file\"; then echo \"$include_file exists.\" else echo \"$include_file does NOT exist.\" touch \"/config/$include_file\" fi done cd \"/config\" || echo \"Could not change path to /config\" echo \"Creating custom_components directory...\" mkdir \"/config/custom_components\" || echo \"custom_components directory already exists\" echo \"Changing to the custom_components directory...\" cd \"/config/custom_components\" || echo \"Could not change path to /config/custom_components\" echo \"Downloading HACS\" wget \"https://github.com/hacs/integration/releases/latest/download/hacs.zip\" || exit 0 if [ -d \"/config/custom_components/hacs\" ]; then echo \"HACS directory already exist, cleaning up...\" rm -R \"/config/custom_components/hacs\" fi echo \"Creating HACS directory...\" mkdir \"/config/custom_components/hacs\" echo \"Unpacking HACS...\" unzip \"/config/custom_components/hacs.zip\" -d \"/config/custom_components/hacs\" >/dev ull 2>&1 echo \"Removing HACS zip file...\" rm \"/config/custom_components/hacs.zip\" echo \"Installation complete.\" recorder.default: |2- recorder: purge_keep_days: 30 commit_interval: 3 db_url: {{ ( printf \"%s?client_encoding=utf8\" ( .Values.postgresql.url.complete | trimAll \"\\\"\" ) ) | quote }}": template: home-assistant/templates/common.yaml:19:18: executing "home-assistant/templates/common.yaml" at <.Values.ixChartContext.kubernetes_config.cluster_cidr>: nil pointer evaluating interface {}.cluster_cidr I tried chmod 755 on the custom_components directory and also tried to use the bare minimum for the configuration.yaml. Still got the same error. Is there a way I can run a debug on this? Anyone have any ideas?
CircleCI run failed on delete k8s resource
I have CircleCI set up and normally it runs fine; it helps with creating deployments for me. Today I suddenly had an issue in the step that creates the deployment, due to an error related to Kubernetes. My config.yml follows the doc from https://circleci.com/developer/orbs/orb/circleci/kubernetes
Here is my version of the setup in the config file:
version: 2.1
orbs:
  kube-orb: circleci/kubernetes@1.3.0
commands:
  docker-check:
    steps:
      - docker/check:
          docker-username: MY_USERNAME
          docker-password: MY_PASS
          registry: $DOCKER_REGISTRY
jobs:
  create-deployment:
    executor: aws-eks/python3
    parameters:
      cluster-name:
        description: Name of the EKS cluster
        type: string
    steps:
      - checkout
      # It failed on this step
      - kube-orb/delete-resource:
          now: true
          resource-names: my-frontend-deployment
          resource-types: deployments
          wait: true
Below is a copy of the error log:
#!/bin/bash -eo pipefail
#!/bin/bash
RESOURCE_FILE_PATH=$(eval echo "$PARAM_RESOURCE_FILE_PATH")
RESOURCE_TYPES=$(eval echo "$PARAM_RESOURCE_TYPES")
RESOURCE_NAMES=$(eval echo "$PARAM_RESOURCE_NAMES")
LABEL_SELECTOR=$(eval echo "$PARAM_LABEL_SELECTOR")
ALL=$(eval echo "$PARAM_ALL")
CASCADE=$(eval echo "$PARAM_CASCADE")
FORCE=$(eval echo "$PARAM_FORCE")
GRACE_PERIOD=$(eval echo "$PARAM_GRACE_PERIOD")
IGNORE_NOT_FOUND=$(eval echo "$PARAM_IGNORE_NOT_FOUND")
NOW=$(eval echo "$PARAM_NOW")
WAIT=$(eval echo "$PARAM_WAIT")
NAMESPACE=$(eval echo "$PARAM_NAMESPACE")
DRY_RUN=$(eval echo "$PARAM_DRY_RUN")
KUSTOMIZE=$(eval echo "$PARAM_KUSTOMIZE")
if [ -n "${RESOURCE_FILE_PATH}" ]; then
  if [ "${KUSTOMIZE}" == "1" ]; then
    set -- "$@" -k
  else
    set -- "$@" -f
  fi
  set -- "$@" "${RESOURCE_FILE_PATH}"
elif [ -n "${RESOURCE_TYPES}" ]; then
  set -- "$@" "${RESOURCE_TYPES}"
  if [ -n "${RESOURCE_NAMES}" ]; then
    set -- "$@" "${RESOURCE_NAMES}"
  elif [ -n "${LABEL_SELECTOR}" ]; then
    set -- "$@" -l
    set -- "$@" "${LABEL_SELECTOR}"
  fi
fi
if [ "${ALL}" == "true" ]; then
  set -- "$@" --all=true
fi
if [ "${FORCE}" == "true" ]; then
  set -- "$@" --force=true
fi
if [ "${GRACE_PERIOD}" != "-1" ]; then
  set -- "$@" --grace-period="${GRACE_PERIOD}"
fi
if [ "${IGNORE_NOT_FOUND}" == "true" ]; then
  set -- "$@" --ignore-not-found=true
fi
if [ "${NOW}" == "true" ]; then
  set -- "$@" --now=true
fi
if [ -n "${NAMESPACE}" ]; then
  set -- "$@" --namespace="${NAMESPACE}"
fi
if [ -n "${DRY_RUN}" ]; then
  set -- "$@" --dry-run="${DRY_RUN}"
fi
set -- "$@" --wait="${WAIT}"
set -- "$@" --cascade="${CASCADE}"
if [ "$SHOW_EKSCTL_COMMAND" == "1" ]; then
  set -x
fi
kubectl delete "$@"
if [ "$SHOW_EKSCTL_COMMAND" == "1" ]; then
  set +x
fi
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
Exited with code exit status 1
CircleCI received exit code 1
Does anyone have an idea what is wrong with it? I'm not sure whether the issue is happening on the CircleCI side or the Kubernetes side.
I was facing the exact issue since yesterday morning (16 hours ago). Then, taking @Gavy's advice, I simply added this in my config.yml:
steps:
  - checkout
  # !!! HERE !!!
  - kubernetes/install-kubectl:
      kubectl-version: v1.23.5
  - run:
And now it works. Hope it helps.
Get run id after triggering a github workflow dispatch event
I am triggering a workflow run via GitHub's REST API, but GitHub doesn't send any data in the response body (204). How do I get the run id of the trigger request I made? I know about the getRunsList API, which returns the runs for a workflow id, and I could then take the latest run, but this can cause issues when two requests are submitted at almost the same time.
It is not currently possible to get the run_id associated with the dispatch API call from the dispatch response itself, but there is a way to find it if you can edit your workflow file a little. You need to dispatch the workflow with an input like this:
curl "https://api.github.com/repos/$OWNER/$REPO/actions/workflows/$WORKFLOW/dispatches" -s \
  -H "Authorization: Token $TOKEN" \
  -d '{
    "ref":"master",
    "inputs":{
      "id":"12345678"
    }
  }'
Also edit your workflow yaml file with an optional input (named id here), and place as the first job a job which has a single step with the same name as the input id value (this is how we will get the id back using the API!):
name: ID Example
on:
  workflow_dispatch:
    inputs:
      id:
        description: 'run identifier'
        required: false
jobs:
  id:
    name: Workflow ID Provider
    runs-on: ubuntu-latest
    steps:
      - name: ${{github.event.inputs.id}}
        run: echo run identifier ${{ inputs.id }}
The trick here is to use name: ${{github.event.inputs.id}}
https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#inputs
Then the flow is the following:
run the dispatch API call along with the input named id (in this case with a random value): POST https://api.github.com/repos/$OWNER/$REPO/actions/workflows/$WORKFLOW/dispatches
in a loop, get the runs that have been created since now minus 5 minutes (the delta is to avoid any issue with timings): GET https://api.github.com/repos/$OWNER/$REPO/actions/runs?created=>$run_date_filter
for example, in the run API response you will get a jobs_url that you will call: GET https://api.github.com/repos/$OWNER/$REPO/actions/runs/[RUN_ID]/jobs
the jobs API call above returns the list of jobs; as you have declared the id job as the 1st job, it will be in first position. It also gives you the steps with the names of the steps.
Something like this:
{
  "id": 3840520726,
  "run_id": 1321007088,
  "run_url": "https://api.github.com/repos/$OWNER/$REPO/actions/runs/1321007088",
  "run_attempt": 1,
  "node_id": "CR_kwDOEi1ZxM7k6bIW",
  "head_sha": "4687a9bb5090b0aadddb69cc335b7d9e80a1601d",
  "url": "https://api.github.com/repos/$OWNER/$REPO/actions/jobs/3840520726",
  "html_url": "https://github.com/$OWNER/$REPO/runs/3840520726",
  "status": "completed",
  "conclusion": "success",
  "started_at": "2021-10-08T15:54:40Z",
  "completed_at": "2021-10-08T15:54:43Z",
  "name": "Hello world",
  "steps": [
    {
      "name": "Set up job",
      "status": "completed",
      "conclusion": "success",
      "number": 1,
      "started_at": "2021-10-08T17:54:40.000+02:00",
      "completed_at": "2021-10-08T17:54:42.000+02:00"
    },
    {
      "name": "12345678", <=============== HERE
      "status": "completed",
      "conclusion": "success",
      "number": 2,
      "started_at": "2021-10-08T17:54:42.000+02:00",
      "completed_at": "2021-10-08T17:54:43.000+02:00"
    },
    {
      "name": "Complete job",
      "status": "completed",
      "conclusion": "success",
      "number": 3,
      "started_at": "2021-10-08T17:54:43.000+02:00",
      "completed_at": "2021-10-08T17:54:43.000+02:00"
    }
  ],
  "check_run_url": "https://api.github.com/repos/$OWNER/$REPO/check-runs/3840520726",
  "labels": [
    "ubuntu-latest"
  ],
  "runner_id": 1,
  "runner_name": "Hosted Agent",
  "runner_group_id": 2,
  "runner_group_name": "GitHub Actions"
}
The name of the id step returns your input value, so you can safely confirm that this is the run that was triggered by your dispatch call.
Here is an implementation of this flow in Python; it will return the workflow run id:
import random
import string
import datetime
import requests
import time

# edit the following variables
owner = "YOUR_ORG"
repo = "YOUR_REPO"
workflow = "dispatch.yaml"
token = "YOUR_TOKEN"

authHeader = { "Authorization": f"Token {token}" }

# generate a random id
run_identifier = ''.join(random.choices(string.ascii_uppercase + string.digits, k=15))

# filter runs that were created after this date minus 5 minutes
delta_time = datetime.timedelta(minutes=5)
run_date_filter = (datetime.datetime.utcnow()-delta_time).strftime("%Y-%m-%dT%H:%M")

r = requests.post(f"https://api.github.com/repos/{owner}/{repo}/actions/workflows/{workflow}/dispatches",
    headers= authHeader,
    json= {
        "ref":"master",
        "inputs":{
            "id": run_identifier
        }
    })

print(f"dispatch workflow status: {r.status_code} | workflow identifier: {run_identifier}")

workflow_id = ""
while workflow_id == "":
    r = requests.get(f"https://api.github.com/repos/{owner}/{repo}/actions/runs?created=%3E{run_date_filter}",
        headers = authHeader)
    runs = r.json()["workflow_runs"]
    if len(runs) > 0:
        for workflow in runs:
            jobs_url = workflow["jobs_url"]
            print(f"get jobs_url {jobs_url}")
            r = requests.get(jobs_url, headers= authHeader)
            jobs = r.json()["jobs"]
            if len(jobs) > 0:
                # we only take the first job, edit this if you need multiple jobs
                job = jobs[0]
                steps = job["steps"]
                if len(steps) >= 2:
                    second_step = steps[1] # if you have positioned the run_identifier step at 1st position
                    if second_step["name"] == run_identifier:
                        workflow_id = job["run_id"]
                else:
                    print("waiting for steps to be executed...")
                    time.sleep(3)
            else:
                print("waiting for jobs to popup...")
                time.sleep(3)
    else:
        print("waiting for workflows to popup...")
        time.sleep(3)

print(f"workflow_id: {workflow_id}")
gist link
Sample output:
$ python3 github_action_dispatch_runid.py
dispatch workflow status: 204 | workflow identifier: Z7YPF6DD1YP2PTM
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321475221/jobs
waiting for steps to be executed...
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321475221/jobs
waiting for steps to be executed...
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321475221/jobs
waiting for steps to be executed...
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321475221/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
workflow_id: 1321475221
It would have been easier if there was a way to retrieve the workflow inputs via the API, but there is no way to do this at the moment.
Note that in the workflow file I use ${{github.event.inputs.id}} because ${{inputs.id}} doesn't work. It seems inputs is not evaluated when used as the step name.
Get WORKFLOWID:
gh workflow list --repo <repo-name>
Trigger a workflow of type workflow_dispatch:
gh workflow run $WORKFLOWID --repo <repo-name>
It does not return the run-id, which is required to get the status of the execution.
Get the latest run-id WORKFLOW_RUNID:
gh run list -w $WORKFLOWID --repo <repo> -L 1 --json databaseId | jq '.[]| .databaseId'
Get workflow run details:
gh run view --repo <repo> $WORKFLOW_RUNID
This is the workaround we use. It is not perfect, but it should work.
Inspired by the comment above, I made a /bin/bash script which gets your $run_id.
name: ID Example
on:
  workflow_dispatch:
    inputs:
      id:
        description: 'run identifier'
        required: false
jobs:
  id:
    name: Workflow ID Provider
    runs-on: ubuntu-latest
    steps:
      - name: ${{github.event.inputs.id}}
        run: echo run identifier ${{ inputs.id }}
Variables:
workflow_id= generates a random 8 digit number
now, later, date_filter= used for the time filter, now - 5 minutes
The script:
generates a random ID
POSTs the job and triggers the workflow
GETs action/runs descending and gets the first .workflow_run[].id
keeps looping until the script matches the random ID from step 1
echoes the run_id
TOKEN="" \
GH_USER="" \
REPO="" \
REF=""

WORKFLOW_ID=$(tr -dc '0-9' </dev/urandom | head -c 8) \
NOW=$(date +"%Y-%m-%dT%H:%M") \
LATER=$(date -d "-5 minutes" +"%Y-%m-%dT%H:%M") \
DATE_FILTER=$(echo "$NOW-$LATER") \
JSON=$(cat <<-EOF
{"ref":"$REF","inputs":{"id":"$WORKFLOW_ID"}}
EOF
) && \
curl -s \
  -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $TOKEN" \
  "https://api.github.com/repos/$GH_USER/$REPO/actions/workflows/main.yml/dispatches" \
  -d $JSON && \
INFO="null" \
COUNT=1 \
ATTEMPTS=10 && \
until [[ $CHECK -eq $WORKFLOW_ID ]] || [[ $COUNT -eq $ATTEMPTS ]];do
  echo -e "$(( COUNT++ ))..."
  INFO=$(curl -s \
    -X GET \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer $TOKEN" \
    "https://api.github.com/repos/$GH_USER/$REPO/actions/runs?created:<$DATE_FILTER" | jq -r '.workflow_runs[].id' | grep -m1 "")
  CHECK=$(curl -s \
    -X GET \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer $TOKEN" \
    "https://api.github.com/repos/$GH_USER/$REPO/actions/runs/$INFO/jobs" | jq -r '.jobs[].steps[].name' | grep -o '[[:digit:]]*')
  sleep 5s
done
echo "Your run_id is $CHECK"
output:
1...
2...
3...
Your run_id is 67530050
I recommend using the convictional/trigger-workflow-and-wait action:
- name: Example
  uses: convictional/trigger-workflow-and-wait@v1.6.5
  with:
    owner: my-org
    repo: other-repo
    workflow_file_name: other-workflow.yaml
    github_token: ${{ secrets.GH_TOKEN }}
    client_payload: '{"key1": "value1", "key2": "value2"}'
This takes care of waiting for the other job and returning a success or failure according to whether the other workflow succeeded or failed. It does so in a robust way that handles multiple runs being triggered at almost the same time.
The whole idea is to know which run was dispatched. When passing an id on dispatch was suggested, the expectation was that this id would appear in the response of the GET call to the "actions/runs" URL, so the user could identify the proper run to monitor. The injected id is not part of that response, so having to extract another URL to find your id is not helpful, since that is exactly the point where the id is needed to identify the run for monitoring.
Github V4 graphql - Can't get organization user contribution info
I'm new to the GitHub API v3 (REST) & v4 (GraphQL). I have created a PAT (personal access token) and am using it to fetch data from GitHub - organization private repositories. While I am able to get other users' data (events, commits, issues etc.) in my organization with the v3 API (REST), I can't get information when using the v4 API (GraphQL) - I get info only when querying my own user; for other users I get zeros.
The GraphQL query I use (the token is set in the header, and the user is someone in my org):
query {
  user (login: "<user>") {
    contributionsCollection(from: "2020-01-01T00:00:00.000Z", to: "2020-06-25T23:59:59.999Z", organizationID: "<my_org_id>") {
      totalCommitContributions
      totalIssueContributions
      totalPullRequestContributions
      totalPullRequestReviewContributions
      totalRepositoriesWithContributedCommits
      totalRepositoriesWithContributedIssues
      totalRepositoriesWithContributedPullRequestReviews
      totalRepositoriesWithContributedPullRequests
    }
  }
}
and the zeroed response (when querying myself I get non-zero values):
{
  "data": {
    "user": {
      "contributionsCollection": {
        "totalCommitContributions": 0,
        "totalIssueContributions": 0,
        "totalPullRequestContributions": 0,
        "totalPullRequestReviewContributions": 0,
        "totalRepositoriesWithContributedCommits": 0,
        "totalRepositoriesWithContributedIssues": 0,
        "totalRepositoriesWithContributedPullRequestReviews": 0,
        "totalRepositoriesWithContributedPullRequests": 0
      }
    }
  }
}
While with the REST API:
curl -H "Authorization: token <TOKEN>" -X GET 'https://api.github.com/users/<user>/events?page=1&per_page=10'
I get information (for the dates specified above).
What am I missing?
Start with this:
#!/bin/bash

query=" \
{ \
  \"query\": \"query \
  { \
    repository(owner: \\\"MyOrg\\\", name: \\\"some-repo-name\\\") {\
      issues(last: 50, states: OPEN, labels: \\\"some label\\\") {\
        edges {\
          node {\
            title\
            body\
            url\
            createdAt\
            lastEditedAt\
          }\
        }\
      }\
    }\
  } \" \
} \
"

curl -H "Authorization: bearer XX-your-PAT-goes-here-XX" \
  -X POST \
  -d "${query%%$'\n*'}" \
  https://api.github.com/graphql | jq --compact-output '.[] | .repository.issues.edges[] | .node' | while read -r object; do
  echo "================================"
  # determine last edit date and time
  lastEditedAt=$(prettify_date $(echo "$object" | jq '.lastEditedAt'))
  createdAt=$(prettify_date $(echo "$object" | jq '.createdAt'))
  echo "${createdAt} -- ${lastEditedAt}"
  [[ ${lastEditedAt} == nul ]] && reallyLastEditedAt="${createdAt}" || reallyLastEditedAt="${lastEditedAt}"
  title=$(echo "$object" | jq '.title')
  content="$(echo "$object" | jq '.body' | sed -e 's/^"//' -e 's/"$//')"
  echo ${content} | sed 's!\\r\\n!\n!g'
done
How can I view the config details of the current context in kubectl?
I'd like to see the 'config' details as shown by the command:
kubectl config view
However, this shows the config details of all contexts. How can I filter it (or perhaps there is another command) to view the config details of the CURRENT context only?
kubectl config view --minify displays only the current context
Use the command below to get the full config including certificates:
kubectl config view --minify --flatten
The cloud-native way to do this is to use the JSON output of the command, then filter it with jq:
kubectl config view -o json | jq '. as $o | ."current-context" as $current_context_name | $o.contexts[] | select(.name == $current_context_name) as $context | $o.clusters[] | select(.name == $context.context.cluster) as $cluster | $o.users[] | select(.name == $context.context.user) as $user | {"current-context-name": $current_context_name, context: $context, cluster: $cluster, user: $user}'
{
  "current-context-name": "docker-for-desktop",
  "context": {
    "name": "docker-for-desktop",
    "context": {
      "cluster": "docker-for-desktop-cluster",
      "user": "docker-for-desktop"
    }
  },
  "cluster": {
    "name": "docker-for-desktop-cluster",
    "cluster": {
      "server": "https://localhost:6443",
      "insecure-skip-tls-verify": true
    }
  },
  "user": {
    "name": "docker-for-desktop",
    "user": {
      "client-certificate-data": "REDACTED",
      "client-key-data": "REDACTED"
    }
  }
}
This answer helped me figure out some of the jq bits.
The bash/kubectl equivalent with a little bit of jq, for any context:
exec >/tmp/output && CONTEXT_NAME=kubernetes-admin@kubernetes \
CONTEXT_CLUSTER=$(kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${CONTEXT_NAME}\")].context.cluster}") \
CONTEXT_USER=$(kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${CONTEXT_NAME}\")].context.user}") && \
echo "[" && \
kubectl config view -o=json | jq -j --arg CONTEXT_NAME "$CONTEXT_NAME" '.contexts[] | select(.name==$CONTEXT_NAME)' && \
echo "," && \
kubectl config view -o=json | jq -j --arg CONTEXT_CLUSTER "$CONTEXT_CLUSTER" '.clusters[] | select(.name==$CONTEXT_CLUSTER)' && \
echo "," && \
kubectl config view -o=json | jq -j --arg CONTEXT_USER "$CONTEXT_USER" '.users[] | select(.name==$CONTEXT_USER)' && \
echo -e "\n]\n" && \
exec >/dev/tty && \
cat /tmp/output | jq && \
rm -rf /tmp/output
You can use the command kubectl config view --minify to get the current context only. It is also handy to use --help to see the available options for kubectl operations:
kubectl config view --help