I'm trying to delete releases older than 10 days, but some namespaces shouldn't be touched (e.g. monitoring).
In Helm 2 I did it with awk, but in Helm 3 they changed the date format, so that approach no longer works.
Is there any way to do that?
Let me show you how I've resolved a similar issue. In our flow, we have an automatic rollout of helm releases for every feature branch, and we decided to implement an automatic cleanup process for deleting old feature releases in the development flow.
The current implementation requires jq as a dependency.
#!/usr/bin/env bash
set -e

echo "Starting delete-old-helm-release.sh ..."

helm_release_name=${1:-$HELM_RELEASE_NAME}
k8s_namespace=${2:-$KUBERNETES_NAMESPACE}

# Get the helm release, take the "updated" field and remove "UTC" from the string
helm_release_updated=$(helm list --filter "${helm_release_name}" -n "${k8s_namespace}" -o json \
  | jq --raw-output ".[0].updated" \
  | sed s/"UTC"// \
)

if [[ "$helm_release_updated" == "null" ]]; then
  echo "Helm release: ${helm_release_name} in namespace: ${k8s_namespace} not found"
  echo "Exit from delete-old-helm-release.sh ..."
  exit 1
fi

# Convert the date string to a timestamp, get the current timestamp and calculate the time delta
helm_release_date_timestamp=$(date --utc --date="${helm_release_updated}" +%s)
current_date_timestamp=$(date --utc +%s)
time_difference=$((current_date_timestamp - helm_release_date_timestamp))

# 86400 means 24 hours (60*60*24) in seconds
if (( time_difference > 86400 )); then
  echo "Detected old release: ${helm_release_name} in namespace: ${k8s_namespace}"
  echo "Difference is more than 24hr: $((time_difference/60/60))hr"
  echo "Deleting it ..."
  helm delete "${helm_release_name}" -n "${k8s_namespace}"
  echo "Done"
else
  echo "Detected fresh release"
  echo "Current time difference is less than 24hr: $((time_difference/60/60))hr"
  echo "Skipping ..."
fi

exit 0
It's tested with Helm 3.2.4 and it should work with all Helm 3.x.x versions unless they change the date format.
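If you want to apply the same idea to your original case (everything older than 10 days, skipping namespaces like monitoring), a rough sketch of a wrapper loop could look like the one below. It assumes GNU date and jq are available; the excluded namespace list and the 10-day threshold are just the values from your question:

#!/usr/bin/env bash
set -e

excluded_namespaces="monitoring kube-system"
max_age_seconds=$((10 * 24 * 60 * 60))   # 10 days
current_timestamp=$(date --utc +%s)

for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  # Skip namespaces that must not be touched
  if [[ " ${excluded_namespaces} " == *" ${ns} "* ]]; then
    continue
  fi
  # Emit "name|updated" pairs for every release in the namespace
  helm list -n "${ns}" -o json | jq -r '.[] | .name + "|" + .updated' \
  | while IFS='|' read -r release updated; do
      release_timestamp=$(date --utc --date="$(sed 's/UTC//' <<<"${updated}")" +%s)
      if (( current_timestamp - release_timestamp > max_age_seconds )); then
        echo "Deleting old release ${release} from namespace ${ns}"
        helm uninstall "${release}" -n "${ns}"
      fi
    done
done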
BTW, please update your question description so it is clearer and ranks better in search engines :)
Please let me know if it helps.
Good luck,
Oleg
Related
Is it possible to make the build stage parallel?
Today the build stage builds and deploys all the images in sequence, which takes quite a lot of time. It would save a lot of time if each image were built in parallel with the others (same as the deploy stage).
The deploy stage does run in parallel, unless you opt to deploy them in order with the stages.deployments field in your pipeline manifest.
As for the build stage, you can make changes to your own pipeline's buildspec, specifically in this block:
for env in $pl_envs; do
  tag=$(sed 's/:/-/g' <<<"${CODEBUILD_BUILD_ID##*:}-${env}" | rev | cut -c 1-128 | rev)
  for svc in $svcs; do
    ./copilot-linux svc package -n $svc -e $env --output-dir './infrastructure' --tag $tag --upload-assets;
    if [ $? -ne 0 ]; then
      echo "Cloudformation stack and config files were not generated. Please check build logs to see if there was a manifest validation error." 1>&2;
      exit 1;
    fi
  done;
  for job in $jobs; do
    ./copilot-linux job package -n $job -e $env --output-dir './infrastructure' --tag $tag --upload-assets;
    if [ $? -ne 0 ]; then
      echo "Cloudformation stack and config files were not generated. Please check build logs to see if there was a manifest validation error." 1>&2;
      exit 1;
    fi
  done;
done;
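If you want to experiment with parallelizing that block, one option (not an officially supported Copilot feature, just a sketch on top of the buildspec above, and it assumes the package commands can safely write to ./infrastructure concurrently) is to background each svc package call and fail the stage if any of them fails:

for env in $pl_envs; do
  tag=$(sed 's/:/-/g' <<<"${CODEBUILD_BUILD_ID##*:}-${env}" | rev | cut -c 1-128 | rev)
  pids=()
  for svc in $svcs; do
    # Run each package command in the background and remember its PID
    ./copilot-linux svc package -n $svc -e $env --output-dir './infrastructure' --tag $tag --upload-assets &
    pids+=($!)
  done
  # Wait for every background job; fail the build if any of them failed
  for pid in "${pids[@]}"; do
    if ! wait "$pid"; then
      echo "Cloudformation stack and config files were not generated. Please check build logs to see if there was a manifest validation error." 1>&2
      exit 1
    fi
  done
done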
Is there a way to prevent Helm from installing or upgrading when no changes or modifications are detected in your charts?
One way of doing this: run helm template on the old and the new chart and diff the outputs. Then proceed with the upgrade only if there are changes.
Essentially,
values_diff=$(diff work-values/values.yaml work-values/values-prev.yaml | wc -l)
Where values.yaml and values-prev.yaml are the outputs of the helm template command on the latest and previous charts.
Then you do
if [ $values_diff -gt 0 ]
then
....
And your update logic goes where the dots are.
See a full working sample here (note it has a few extra things that you may omit) - https://github.com/relizaio/reliza-hub-integrations/blob/master/Helm-cd-with-Reliza/helm_configmap.yaml, which is part of my bigger tutorial here - https://worklifenotes.com/2021/05/22/helm-cd-with-reliza-hub-tutorial/
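Roughly, a self-contained version of that gate could look like the sketch below; the chart path ./mychart, the release name myapp, the values file custom-values.yaml and the work-values directory are all placeholders, and the previous render is assumed to be kept between runs:

#!/bin/bash
set -e

mkdir -p work-values

# Render the current chart
helm template myapp ./mychart -f custom-values.yaml > work-values/values.yaml

# Compare with the previous render; treat a missing previous render as "changed"
if [ -f work-values/values-prev.yaml ]; then
  values_diff=$(diff work-values/values.yaml work-values/values-prev.yaml | wc -l)
else
  values_diff=1
fi

if [ "$values_diff" -gt 0 ]; then
  helm upgrade --install myapp ./mychart -f custom-values.yaml
fi

# Keep the current render for the next comparison
cp work-values/values.yaml work-values/values-prev.yaml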
I found a different way to do it. I wrote a small Python script to list the files changed in the last 2 commits and then filter out the apps which were modified.
This is a great plugin for helm.
helm plugin install https://github.com/databus23/helm-diff
helm diff upgrade -n mynamespace myapp foo/myapp -f custom-values.yaml
You could use it like so
#!/bin/bash
lines=$(helm diff upgrade -n mynamespace myapp foo/myapp -f custom-values.yaml)
if [[ -n "$lines" ]]; then
  echo "$lines" | grep "^+\|^-"
  echo "Helm changes detected."
  echo "Running upgrade in 5."
  sleep 5
  helm upgrade --install -n mynamespace myapp foo/myapp -f custom-values.yaml
fi
You can use the Terraform Helm provider as well.
https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release
I am using Kubernetes and its resources like Secrets. During deployment one secret has been created (say test-secret) with some values inside it.
Now I need to rename this secret (to dev-secret) within the same namespace.
How can I rename the secret, or how can I copy the test-secret values to dev-secret?
Please let me know the correct approach for this.
There is no specific way to do this. The Kubernetes API does not have "rename" as an operation. In this particular case you would kubectl get secret test-secret -o yaml, clean up the metadata: fields that don't apply anymore, edit the name, and kubectl apply it again.
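As a sketch of that procedure in a single pipeline (assuming jq is available; the secret names are the ones from the question):

# Export the existing secret, drop server-managed metadata, rename it, and re-create it
kubectl get secret test-secret -o json \
  | jq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.managedFields) | .metadata.name = "dev-secret"' \
  | kubectl apply -f -

# Once you are happy with the copy, remove the old secret
kubectl delete secret test-secret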
Extending @coderanger's answer:
If you still have the secret's config yaml file, you can do
kubectl delete -f </path/to/secret-config-yaml>
then change the metadata.name field and issue
kubectl apply -f </path/to/secret-config-yaml>
I needed to do something similar: rename K8s secrets.
I searched everywhere, but could not find a good way to do it.
So I wrote a bash script for copying secrets into new secrets with a new name.
In my case, I also wanted to do this in batch, as I had many secrets with the same prefix that I needed to change.
I don't work with bash all the time, so there might be better ways... but it did the trick for me.
I hope it helps!
#!/bin/bash
# Copies K8s secrets with names containing the NAME_PART into new
# secrets where the NAME_PART was replaced with NEW_NAME_PART.
# i.e. if NAME_PART is "test-abc" and NEW_NAME_PART is "test-xyz", a secret named test-abc-123
# will be copied into a new secret named test-xyz-123
#
# Pre-requisites:
# - have kubectl installed and pointing to the cluster you want to alter
#
# NOTE: tested with kubectl v1.18.0 and K8s v1.21.5-eks-bc4871b
# configure the NAME_PARTs here
NAME_PART=test-abc
NEW_NAME_PART=test-xyz
WORK_DIR=work_secret_copy
mkdir -p $WORK_DIR
echo "Getting secrets from K8s..."
allSecrets=`kubectl get secrets | tail -n +2 | cut -d " " -f1`
matchingSecrets=`echo $allSecrets | tr ' ' '\n' | grep $NAME_PART`
#printf "All secrets:\n $allSecrets \n"
#printf "Secrets:\n $secrets \n"
for secret in $matchingSecrets; do
  newSecret=${secret/$NAME_PART/$NEW_NAME_PART}
  echo "Copying secret $secret to $newSecret"

  # skip this secret if one with the new name already exists
  if [[ $(echo $allSecrets | tr ' ' '\n' | grep -e "^$newSecret\$") ]]; then
    echo "Secret $newSecret already exists, skipping..."
    continue
  fi

  kubectl get secret $secret -o yaml \
    | grep -v uid: \
    | grep -v time: \
    | grep -v creationTimestamp: \
    | sed "s/$secret/$newSecret/g" \
    > $WORK_DIR/$newSecret.yml

  kubectl apply -f $WORK_DIR/$newSecret.yml
done
I know I can list all helm releases using helm ls --tiller-namespace <tiller-namespace>
What command can I use to delete helm releases older than 1 month?
You could use the shell script below. It takes the list of releases and their last-deployed time in seconds using the helm ls and jq utility commands, then loops through the releases, computes the number of days since each was deployed, and deletes the ones older than a month. By month, I've just used 30 days.
#!/bin/bash

# Store the release names alone for a specific tiller.
helm_releases=(`helm ls --short --tiller-namespace "kube-system"`)

# Store current date
CURRENT_TIME_SECONDS=`date '+%s'`

for RELEASE in "${helm_releases[@]}"; do
  LAST_DEPLOYED_SECONDS=`helm status $RELEASE --tiller-namespace "kube-system" --output=json | jq -r '.info.last_deployed.seconds'`
  SEC_DIFF=`expr $CURRENT_TIME_SECONDS - $LAST_DEPLOYED_SECONDS`
  DAY_DIFF=`expr $SEC_DIFF / 86400`

  if [ "$DAY_DIFF" -gt 30 ]; then
    echo "$RELEASE is older than a month. Proceeding to delete it."
    helm delete --purge --no-hooks $RELEASE
  fi
done
You can still define your own logic on top of this by calculating the seconds difference for a month, as in the variation below.
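For instance (just a variation of the loop body above), the threshold can be expressed directly in seconds instead of days:

# 30 days expressed in seconds
MONTH_IN_SECONDS=$((30 * 24 * 60 * 60))   # 2592000

if [ "$SEC_DIFF" -gt "$MONTH_IN_SECONDS" ]; then
  echo "$RELEASE is older than a month. Proceeding to delete it."
  helm delete --purge --no-hooks $RELEASE
fi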
Please note that I've explicitly mentioned the --tiller-namespace. You can adjust that if your releases are deployed via a Tiller in a namespace other than kube-system.
What is the best method for checking to see if a custom resource definition exists before running a script, using only kubectl command line?
We have a yaml file that contains definitions for a NATS cluster ServiceAccount, Role, ClusterRoleBinding and Deployment. The image used in the Deployment creates the crd, and the second script uses that crd to deploy a set of pods. At the moment our CI pipeline needs to run the second script a few times, only completing successfully once the crd has been fully created. I've tried to use kubectl wait but cannot figure out what condition to use that applies to the completion of a crd.
Below is my most recent, albeit completely wrong, attempt; however, it illustrates the general sequence we'd like.
kubectl wait --for=condition=complete
kubectl apply -f 1.nats-cluster-operator.yaml
kubectl apply -f 2.nats-cluster.yaml
The condition to wait for on a CRD is established:
kubectl -n <namespace-here> wait --for condition=established --timeout=60s crd/<crd-name-here>
You may want to adjust --timeout appropriately.
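Applied to the sequence in the question, that could look something like this; the CRD name natsclusters.nats.io is an assumption about what the NATS operator registers, so adjust it to whatever kubectl get crd shows:

kubectl apply -f 1.nats-cluster-operator.yaml
# Block until the operator has registered its CRD before creating resources of that kind
kubectl wait --for condition=established --timeout=60s crd/natsclusters.nats.io
kubectl apply -f 2.nats-cluster.yaml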
In case you are wanting to wait for a resource that may not exist yet, you can try something like this:
{ grep -q -m 1 "crontabs.stable.example.com"; kill $!; } < <(kubectl get crd -w)
or
{ sed -n /crontabs.stable.example.com/q; kill $!; } < <(kubectl get crd -w)
I understand the question would prefer to only use kubectl; however, this answer helped in my case. The downside to this method is that the timeout has to be set in a different way and that the condition itself is not actually checked.
In order to check the condition more thoroughly, I made the following:
#!/bin/bash
condition-established() {
  local name="crontabs.stable.example.com"
  local condition="Established"

  jq --arg NAME $name --arg CONDITION $condition -n \
    'first(inputs | if (.metadata.name==$NAME) and (.status.conditions[]?.type==$CONDITION) then
       null | halt_error else empty end)'

  # This is similar to the first, but the full condition is sent to stdout
  #jq --arg NAME $name --arg CONDITION $condition -n \
  #  'first(inputs | if (.metadata.name==$NAME) and (.status.conditions[]?.type==$CONDITION) then
  #     .status.conditions[] | select(.type==$CONDITION) else empty end)'
}
{ condition-established; kill $!; } < <(kubectl get crd -w -o json)
echo Complete
To explain what is happening: $! refers to the process ID of the command run by bash's process substitution. I'm not sure how well this might work in other shells.
I tested with the CRD from the official kubernetes documentation.