grunt, grunt-shell - command inheritance - coffeescript

I need something like command inheritance, like this:
shell:
  virtualenvActivate:
    command: [
      '. `command -v virtualenvwrapper.sh`'
      'workon <%= pkg.name %>'
    ].join '&&'
  pelican:
    command: [
      shell:virtualenvActivate # <-- THIS LINE
      'pelican src/content/ -o dist/ -s publishconf.py'
    ].join '&&'
Is this even possible?

Unfortunately, as far as I know, CoffeeScript does not currently implement anything like YAML's anchor/reference feature.
I don't know Grunt, but generally speaking, for now you would probably have to declare the common data outside of your data structure:
_my_command = [
  '. `command -v virtualenvwrapper.sh`'
  'workon <%= pkg.name %>'
].join '&&'

shell:
  virtualenvActivate:
    command: _my_command
  pelican:
    command: [
      _my_command
      'pelican src/content/ -o dist/ -s publishconf.py'
    ].join '&&'
For more advanced use cases, you can probably go as far as (ab)using an object constructor to achieve the desired result. Something like this, maybe:
shell: new ->
  @virtualenvActivate =
    command: [
      '. `command -v virtualenvwrapper.sh`'
      'workon <%= pkg.name %>'
    ].join '&&'
  @pelican =
    command: [
      @virtualenvActivate.command
      'pelican src/content/ -o dist/ -s publishconf.py'
    ].join '&&'
  @
That being said, I don't know if this would be the recommended way.

Related

error calling tpl: error during tpl function execution for "configuration.yaml.default" (Home Assistant helm upgrade on TrueNAS SCALE)

I'm having trouble trying to update my Home Assistant with TrueCharts.
[EFAULT] Failed to upgrade chart release:
Error: UPGRADE FAILED:
template: commonloader.apply" at :
error calling include:
template: home-assistant/charts/common/templates/spawner/_configmap.tpl:16:10:
executing "tc.common.spawner.configmap" at :
error calling include: template: home-assistant/charts/common/templates/class/_configmap.tpl:33:6: executing "tc.common.class.configmap" at :
error calling tpl: error during tpl function execution for "configuration.yaml.default:
{{- if hasKey .Values \"ixChartContext\" }}
- {{ .Values.ixChartContext.kubernetes_config.cluster_cidr }}
{{- else }}
{{- range .Values.homeassistant.trusted_proxies }}
- {{ . }}
{{- end }}
{{- end }}
init.sh: |-
#!/bin/sh
if test -f \"/config/configuration.yaml\"; then
echo \"configuration.yaml exists.\"
if grep -q recorder: \"/config/configuration.yaml\"; then
echo \"configuration.yaml already contains recorder\"
else
cat /config/init/recorder.default >> /config/configuration.yaml
fi
if grep -q http: \"/config/configuration.yaml\"; then
echo \"configuration.yaml already contains http section\"
else
cat /config/init/http.default >> /config/configuration.yaml
fi
else
echo \"configuration.yaml does NOT exist.\"
cp /config/init/configuration.yaml.default /config/configuration.yaml
cat /config/init/recorder.default >> /config/configuration.yaml
cat /config/init/http.default >> /config/configuration.yaml
fi
echo \"Creating include files...\"
for include_file in groups.yaml automations.yaml scripts.yaml scenes.yaml; do
if test -f \"/config/$include_file\"; then
echo \"$include_file exists.\"
else
echo \"$include_file does NOT exist.\"
touch \"/config/$include_file\"
fi
done
cd \"/config\" || echo \"Could not change path to /config\"
echo \"Creating custom_components directory...\"
mkdir \"/config/custom_components\" || echo \"custom_components directory already exists\"
echo \"Changing to the custom_components directory...\"
cd \"/config/custom_components\" || echo \"Could not change path to /config/custom_components\"
echo \"Downloading HACS\"
wget \"https://github.com/hacs/integration/releases/latest/download/hacs.zip\" || exit 0
if [ -d \"/config/custom_components/hacs\" ]; then
echo \"HACS directory already exist, cleaning up...\"
rm -R \"/config/custom_components/hacs\"
fi
echo \"Creating HACS directory...\"
mkdir \"/config/custom_components/hacs\"
echo \"Unpacking HACS...\"
unzip \"/config/custom_components/hacs.zip\" -d \"/config/custom_components/hacs\" >/dev
ull 2>&1
echo \"Removing HACS zip file...\"
rm \"/config/custom_components/hacs.zip\"
echo \"Installation complete.\"
recorder.default: |2-
recorder:
purge_keep_days: 30
commit_interval: 3
db_url: {{ ( printf \"%s?client_encoding=utf8\" ( .Values.postgresql.url.complete | trimAll \"\\\"\" ) ) | quote }}": template: home-assistant/templates/common.yaml:19:18: executing "home-assistant/templates/common.yaml" at <.Values.ixChartContext.kubernetes_config.cluster_cidr>: nil pointer evaluating interface {}.cluster_cidr
I tried chmod 755 on the custom_components directory and also tried using a bare-minimum configuration.yaml, but I still get the same error. Is there a way I can debug this? Does anyone have any ideas?
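For what it's worth (a generic suggestion, not something from the original post): one way to chase "nil pointer" template errors like the <.Values.ixChartContext.kubernetes_config.cluster_cidr> one above is to render the chart templates locally with helm and see which value comes out empty. A rough sketch, where the chart path and values file names are assumptions:
# render the chart locally; the same nil-pointer error should reproduce and point at the missing value
helm template home-assistant ./charts/home-assistant -f my-values.yaml --debug | less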

CircleCI run failed on delete k8s resource

I have CircleCI set up and running fine normally; it helps me create deployments. Today I suddenly had an issue with the deployment-creation step, due to an error related to Kubernetes.
My config.yml follows the doc from https://circleci.com/developer/orbs/orb/circleci/kubernetes
Here is my setup in the config file:
version: 2.1
orbs:
  kube-orb: circleci/kubernetes@1.3.0
commands:
  docker-check:
    steps:
      - docker/check:
          docker-username: MY_USERNAME
          docker-password: MY_PASS
          registry: $DOCKER_REGISTRY
jobs:
  create-deployment:
    executor: aws-eks/python3
    parameters:
      cluster-name:
        description: Name of the EKS cluster
        type: string
    steps:
      - checkout
      # It failed on this step
      - kube-orb/delete-resource:
          now: true
          resource-names: my-frontend-deployment
          resource-types: deployments
          wait: true
Below is a copy of the error log
#!/bin/bash -eo pipefail
#!/bin/bash
RESOURCE_FILE_PATH=$(eval echo "$PARAM_RESOURCE_FILE_PATH")
RESOURCE_TYPES=$(eval echo "$PARAM_RESOURCE_TYPES")
RESOURCE_NAMES=$(eval echo "$PARAM_RESOURCE_NAMES")
LABEL_SELECTOR=$(eval echo "$PARAM_LABEL_SELECTOR")
ALL=$(eval echo "$PARAM_ALL")
CASCADE=$(eval echo "$PARAM_CASCADE")
FORCE=$(eval echo "$PARAM_FORCE")
GRACE_PERIOD=$(eval echo "$PARAM_GRACE_PERIOD")
IGNORE_NOT_FOUND=$(eval echo "$PARAM_IGNORE_NOT_FOUND")
NOW=$(eval echo "$PARAM_NOW")
WAIT=$(eval echo "$PARAM_WAIT")
NAMESPACE=$(eval echo "$PARAM_NAMESPACE")
DRY_RUN=$(eval echo "$PARAM_DRY_RUN")
KUSTOMIZE=$(eval echo "$PARAM_KUSTOMIZE")
if [ -n "${RESOURCE_FILE_PATH}" ]; then
    if [ "${KUSTOMIZE}" == "1" ]; then
        set -- "$@" -k
    else
        set -- "$@" -f
    fi
    set -- "$@" "${RESOURCE_FILE_PATH}"
elif [ -n "${RESOURCE_TYPES}" ]; then
    set -- "$@" "${RESOURCE_TYPES}"
    if [ -n "${RESOURCE_NAMES}" ]; then
        set -- "$@" "${RESOURCE_NAMES}"
    elif [ -n "${LABEL_SELECTOR}" ]; then
        set -- "$@" -l
        set -- "$@" "${LABEL_SELECTOR}"
    fi
fi
if [ "${ALL}" == "true" ]; then
    set -- "$@" --all=true
fi
if [ "${FORCE}" == "true" ]; then
    set -- "$@" --force=true
fi
if [ "${GRACE_PERIOD}" != "-1" ]; then
    set -- "$@" --grace-period="${GRACE_PERIOD}"
fi
if [ "${IGNORE_NOT_FOUND}" == "true" ]; then
    set -- "$@" --ignore-not-found=true
fi
if [ "${NOW}" == "true" ]; then
    set -- "$@" --now=true
fi
if [ -n "${NAMESPACE}" ]; then
    set -- "$@" --namespace="${NAMESPACE}"
fi
if [ -n "${DRY_RUN}" ]; then
    set -- "$@" --dry-run="${DRY_RUN}"
fi
set -- "$@" --wait="${WAIT}"
set -- "$@" --cascade="${CASCADE}"
if [ "$SHOW_EKSCTL_COMMAND" == "1" ]; then
    set -x
fi
kubectl delete "$@"
if [ "$SHOW_EKSCTL_COMMAND" == "1" ]; then
    set +x
fi
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
Exited with code exit status 1
CircleCI received exit code 1
Does anyone have an idea what is wrong with it? I'm not sure whether the issue is happening on the CircleCI side or the Kubernetes side.
I had been facing the exact same issue since yesterday morning (16 hours ago). Then, taking @Gavy's advice, I simply added this to my config.yml:
steps:
  - checkout
  # !!! HERE !!!
  - kubernetes/install-kubectl:
      kubectl-version: v1.23.5
  - run:
And now it works. Hope it helps.
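For context: the error "invalid apiVersion client.authentication.k8s.io/v1alpha1" is the classic symptom of a kubectl client at v1.24 or newer (which dropped the v1alpha1 exec-credential API) being pointed at a kubeconfig whose exec plugin section still requests v1alpha1, which is why pinning kubectl to v1.23.5 works. If you want to confirm the mismatch inside the job, a couple of generic checks (the kubeconfig path is an assumption) would be:
# print the kubectl client version installed in the job image
kubectl version --client
# show which exec-plugin apiVersion the generated kubeconfig requests
grep -A1 'exec:' ~/.kube/config | grep apiVersion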

How to make a string compare work in CloudBuild?

I have a simple string test in my GCP CloudBuild step, but it never works. The step looks like this:
steps:
  - id: 'branch name'
    name: 'alpine'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        export ENV=$BRANCH_NAME
        if [ $ENV = "master" ]; then
          export ENV="test-dev"
        fi
        echo "***********************"
        echo "$BRANCH_NAME"
        echo "$ENV"
        echo "***********************"
CloudBuild always reports this as sh: master: unknown operand. "master" is a literal, obviously.
I put the same code into a little sh script and it ran fine, as long as I set a value for BRANCH_NAME. CloudBuild definitely supplies a value for BRANCH_NAME, and it shows up in the echo "$BRANCH_NAME", while the echo "$ENV" is always empty.
Is there a way to make this string compare work?
When you use a Linux environment variable and not a substitution variable (or a predefined one), you have to escape the $ with another $:
steps:
  - id: 'branch name'
    name: 'alpine'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        export ENV=$BRANCH_NAME
        if [ $$ENV = "master" ]; then
          export ENV="test-dev"
        fi
        echo "***********************"
        echo "$BRANCH_NAME"
        echo "$$ENV"
        echo "***********************"

How can I print an Ansible vaulted variable that includes a Kubernetes secret from the CLI?

I have an Ansible group_vars directory with the following file in it:
$ cat inventory/group_vars/env1
...
...
ldap_config: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  31636161623166323039356163363432336566356165633232643932623133643764343134613064
  6563346430393264643432636434356334313065653537300a353431376264333463333238383833
  31633664303532356635303336383361386165613431346565373239643431303235323132633331
  3561343765383538340a373436653232326632316133623935333739323165303532353830386532
  39616232633436333238396139323631633966333635393431373565643339313031393031313836
  61306163333539616264353163353535366537356662333833653634393963663838303230386362
  31396431636630393439306663313762313531633130326633383164393938363165333866626438
...
...
This Ansible-encrypted string has a Kubernetes secret encapsulated within it: a base64 blob that looks something like this:
IyMKIyBIb3N0IERhdGFiYXNlCiMKIyBsb2NhbGhvc3QgaXMgdXNlZCB0byBjb25maWd1cmUgdGhlIGxvb3BiYWNrIGludGVyZmFjZQojIHdoZW4gdGhlIHN5c3RlbSBpcyBib290aW5nLiAgRG8gbm90IGNoYW5nZSB0aGlzIGVudHJ5LgojIwoxMjcuMC4wLjEJbG9jYWxob3N0CjI1NS4yNTUuMjU1LjI1NQlicm9hZGNhc3Rob3N0Cjo6MSAgICAgICAgICAgICBsb2NhbGhvc3QKIyBBZGRlZCBieSBEb2NrZXIgRGVza3RvcAojIFRvIGFsbG93IHRoZSBzYW1lIGt1YmUgY29udGV4dCB0byB3b3JrIG9uIHRoZSBob3N0IGFuZCB0aGUgY29udGFpbmVyOgoxMjcuMC4wLjEga3ViZXJuZXRlcy5kb2NrZXIuaW50ZXJuYWwKIyBFbmQgb2Ygc2VjdGlvbgo=
How can I decrypt this with a single CLI command?
We can use an Ansible ad-hoc command to retrieve the variable of interest, ldap_config. To start, we're going to use this ad-hoc command to retrieve the Ansible-encrypted vault string:
$ ansible -i "localhost," all \
-m debug \
-a 'msg="{{ ldap_config }}"' \
--vault-password-file=~/.vault_pass.txt \
-e#inventory/group_vars/env1
localhost | SUCCESS => {
"msg": "ABCD......."
Make note that we're:
- using the debug module and having it print the variable, msg={{ ldap_config }}
- giving ansible the path to the vault password file so it can decrypt encrypted strings
- using the notation -e@<...path to file...> to pass the file containing the encrypted vault variables
Now we can use Jinja2 filters to do the rest of the parsing:
$ ansible -i "localhost," all \
    -m debug \
    -a 'msg="{{ ldap_config | b64decode | from_yaml }}"' \
    --vault-password-file=~/.vault_pass.txt \
    -e@inventory/group_vars/env1
localhost | SUCCESS => {
    "msg": {
        "apiVersion": "v1",
        "bindDN": "uid=readonly,cn=users,cn=accounts,dc=mydom,dc=com",
        "bindPassword": "my secret password to ldap",
        "ca": "",
        "insecure": true,
        "kind": "LDAPSyncConfig",
        "rfc2307": {
            "groupMembershipAttributes": [
                "member"
            ],
            "groupNameAttributes": [
                "cn"
            ],
            "groupUIDAttribute": "dn",
            "groupsQuery": {
                "baseDN": "cn=groups,cn=accounts,dc=mydom,dc=com",
                "derefAliases": "never",
                "filter": "(objectclass=groupOfNames)",
                "scope": "sub"
            },
            "tolerateMemberNotFoundErrors": false,
            "tolerateMemberOutOfScopeErrors": false,
            "userNameAttributes": [
                "uid"
            ],
            "userUIDAttribute": "dn",
            "usersQuery": {
                "baseDN": "cn=users,cn=accounts,dc=mydom,dc=com",
                "derefAliases": "never",
                "scope": "sub"
            }
        },
        "url": "ldap://192.168.1.10:389"
    }
}
NOTE: The section -a 'msg="{{ ldap_config | b64decode | from_yaml }}"' above is what's doing the heavy lifting in terms of converting from Base64 to YAML.
References
How to run Ansible without hosts file
https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#filters-for-formatting-data
Base64 Decode String in jinja
How to decrypt string with ansible-vault 2.3.0
If you need a one-liner that works with any YAML file (not only inventory files) containing inlined vault variables, and if you are ready to install a pip package for it, there is a solution using yq, a YAML processor built on top of jq.
Prerequisite: install yq
pip install yq
Usage
You can get your result with the following command:
yq -r .ldap_config inventory/group_vars/env1 | ansible-vault decrypt
If you need to type your vault password interactively, don't forget to add the relevant option:
yq -r .ldap_config inventory/group_vars/env1 | ansible-vault decrypt --ask-vault-pass
Note: the -r option to yq is mandatory to get a raw result without the quotation marks around the value.
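Combining the two approaches, a single pipeline for this specific case might look like the sketch below; the --output - flag and the trailing base64 -d step are my additions (to also decode the embedded Kubernetes secret), not part of either answer above:
# sketch: extract the vaulted value, decrypt it, then decode the base64 payload it contains
yq -r .ldap_config inventory/group_vars/env1 \
  | ansible-vault decrypt --vault-password-file ~/.vault_pass.txt --output - \
  | base64 -d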

shell "No such file or directory" on variable execution result

A basic function whose goal is to extract an XML value by XPath:
function get_xml_value_from_config_dir {
  local src_root=$1
  local xpath_expr="//$2/text()"
  local path_to_local="$src_root/app/etc/local.xml"
  if [ ! -f $path_to_local ]; then echo "Config file not found: $path_to_local"; exit; fi;
  echo $("$xmllint --nocdata --xpath '$xpath_expr' $path_to_local")
}
## and then
src_usr=$(get_xml_value_from_config_dir $src_dir username)
gives me
line 34: /usr/bin/xmllint --nocdata --xpath '//username/text()' /tmp/bin/app/etc/local.xml: No such file or directory
Why? (/usr/bin/xmllint exists, as does /tmp/bin/app/etc/local.xml.)
It's telling you it can't find the file or directory named
/usr/bin/xmllint --nocdata --xpath '//username/text()' /tmp/bin/app/etc/local.xml
which indeed is unlikely to exist on your system.
Replace
echo $("$xmllint --nocdata --xpath '$xpath_expr' $path_to_local")
with
echo $($xmllint --nocdata --xpath "$xpath_expr" $path_to_local)
Incidentally, that will put all xmllint output on a single line; to avoid that, just use
xmllint --nocdata --xpath "$xpath_expr" $path_to_local
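For completeness, here is a minimal corrected version of the whole function along those lines (my sketch; it assumes, as the original does, that $xmllint is set elsewhere to something like /usr/bin/xmllint):
function get_xml_value_from_config_dir {
  local src_root=$1
  local xpath_expr="//$2/text()"
  local path_to_local="$src_root/app/etc/local.xml"
  if [ ! -f "$path_to_local" ]; then echo "Config file not found: $path_to_local"; exit 1; fi
  # run the command directly and quote each variable on its own, instead of quoting the whole command line
  "$xmllint" --nocdata --xpath "$xpath_expr" "$path_to_local"
}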