Adding helm hooks to RBAC resources - kubernetes

I want to create a post-install,post-upgrade helm hook (a Job to be more precise).
This will need the following RBAC resources (I have already added the corresponding helm-hook annotations):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: "{{ .Release.Name }}-post-install-role"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: "{{ .Release.Name }}-post-install-rolebinding"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
subjects:
  - kind: ServiceAccount
    namespace: {{ .Release.Namespace }}
    name: "{{ .Release.Name }}-post-install-sa"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: "{{ .Release.Name }}-post-install-role"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "{{ .Release.Name }}-post-install-sa"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
In my corresponding Job spec:
annotations:
  "helm.sh/hook": post-install,post-upgrade
  "helm.sh/hook-delete-policy": before-hook-creation
...
serviceAccountName: "{{ .Release.Name }}-post-install-sa"
I thought that by marking the RBAC resources as pre- hooks, I would make sure they were created before the actual Job, which is a post- hook.
By also setting the hook-delete-policy to before-hook-creation,hook-succeeded,hook-failed, these would be deleted in all cases (whether the Job failed or succeeded), to avoid having them lying around for security reasons.
However, the Job creation errors out because it cannot find the ServiceAccount:
error looking up service account elastic/elastic-stack-post-install-sa: serviceaccount "elastic-stack-post-install-sa" not found
Why is that?

Try using hook weights to ensure a deterministic order. Helm loads the hook with the lowest weight first (negative to positive):
"helm.sh/hook-weight": "0"
Example: service account creation with the lowest weight, so it exists before the Job hook runs.
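For instance, a minimal sketch for the ServiceAccount from the question, assuming it is attached to the same post-install,post-upgrade events as the Job so that the weights order them within the same hook cycle:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "{{ .Release.Name }}-post-install-sa"
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    # Lowest weight: created before the Job, which defaults to weight "0"
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation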

As PGS suggested, the "helm.sh/hook-weight" annotation is the solution here.
Important notes:
Hook weights can be positive, zero or negative numbers, but must be represented as strings.
Example: "helm.sh/hook-weight": "-5" (note: -5 within double quotes)
When Helm starts the execution cycle of hooks of a particular Kind, it will sort those hooks in ascending order.
Hook weights ensure the following:
Hooks execute in the right weight sequence (negative to positive, in ascending order)
Hooks block each other (important for your scenario)
All hooks block the main K8s resources from starting
A sketch of the Job-side annotations follows.
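On the hook Job itself the annotations could look like this (the weights are illustrative; the assumption is that the ServiceAccount, Role and RoleBinding hooks share the same events and carry lower weights such as -5/-4/-3):
annotations:
  "helm.sh/hook": post-install,post-upgrade
  # Higher weight than the RBAC hooks, so the Job is only created after they exist
  "helm.sh/hook-weight": "0"
  "helm.sh/hook-delete-policy": before-hook-creation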

Related

How to create a kubernetes serviceAccount when I do helm install?

I added this to my values.yaml expecting the ServiceAccount to be created when I run helm install, but that did not work. Am I missing something?
helm version v3.9.0
kubernetes version v1.24.0
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: my-service-account
I even tried adding the following (based on https://helm.sh/docs/chart_best_practices/rbac/#helm), with no luck:
rbac:
  # Specifies whether RBAC resources should be created
  create: true
Thanks
Thanks for the help. I ended up putting this file in the templates directory so it gets processed, as you mentioned. I used Helm's lookup function to check whether the ServiceAccount already exists, so the first helm install creates it (https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function):
{{- if not (lookup "v1" "ServiceAccount" "my-namespace" "my-service-account") }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: {{ .Values.namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-cluster-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role
subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: {{ .Values.namespace }}
{{- end }}
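One caveat with the approach above: lookup queries the live cluster, so it returns an empty result during helm template or a --dry-run, and the whole block renders in that case. The check only takes effect on a real install or upgrade, e.g. (release and chart names are placeholders):
helm install my-release ./mychart --namespace my-namespace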
You have to put the YAML (or a Helm template) in your chart's templates directory, and Helm will create/apply that config on the K8s cluster.
service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.fullname" . }}
Ref: https://github.com/CenterForOpenScience/helm-charts/blob/master/elasticsearch/templates/service-account.yaml
You can add your own conditions accordingly, e.g. to check whether create is true or false (see the sketch below).
Condition / flow control docs: https://helm.sh/docs/chart_template_guide/control_structures/
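For instance, a minimal sketch keyed off the serviceAccount values shown in the question (toYaml and nindent are standard Helm template functions; the exact file layout is illustrative):
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.serviceAccount.name }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}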

How to create secret in pre-install hook & delete after Helm uninstall

I create a K8S Secret with Helm in a pre-install hook.
This secret is a random password for a database user. When I uninstall the Helm chart I delete the database and the database user, so I'd like to delete the K8s Secret as well.
Everything works fine, except the secret is not deleted after uninstallation.
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: sample
  labels:
    app.kubernetes.io/managed-by: "sample"
    app.kubernetes.io/instance: "sample"
    app.kubernetes.io/version: "1.1"
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-1"
type: Opaque
stringData:
  user: "test-user"
  password: {{ randAlphaNum 10 | quote }}
In the documentation, there is the hook-delete-policy annotation, but the possible values are:
before-hook-creation
hook-succeeded
hook-failed
In my case, none of the options seem to be right.
How can I automatically delete the K8S Secret at uninstall time?
Why not just use post-delete along with pre-install?
post-delete -- Executes on a deletion request after all of the
release's resources have been deleted
annotations:
  "helm.sh/hook": pre-install,post-delete

Helm not deleting all the related resources of a chart

I had a helm release whose deployment was not successful. I tried uninstalling it so that I could create a fresh one.
The weird thing I found is that some resources (a couple of Jobs) had been partially created because of the failed deployment. Uninstalling the failed release using helm does not remove those partially created resources, which could cause issues when I try to install the release again with some changes.
My question is: is there a way to ask helm to delete all the related resources of a release completely?
Since there are no details on the partially created resources, one scenario could be that helm uninstall/delete does not delete the PVCs in the namespace. We resolved this by deploying the application into a dedicated namespace; when the helm release is uninstalled/deleted, we delete the namespace as well. For a fresh deployment, we create the namespace again and run the helm installation there, giving a clean install (a sketch of that flow follows the StorageClass example below). Alternatively, you can set reclaimPolicy to "Delete" when creating the storageClass if it is currently set to "Retain", as mentioned in the post below.
PVC issue on helm: https://github.com/goharbor/harbor-helm/issues/268#issuecomment-505822451
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exist
  clusterNamespace: rook-ceph-system
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Delete
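For the dedicated-namespace approach described above, the flow might look like this (release, chart and namespace names are placeholders):
helm uninstall my-release -n my-app-namespace
kubectl delete namespace my-app-namespace   # also removes leftover namespaced resources such as PVCs
kubectl create namespace my-app-namespace
helm install my-release ./my-chart -n my-app-namespace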
As you said in the comment, the partially created objects are Jobs. In Helm there is a concept called hooks, which also run a Job in different situations such as pre-install, post-install, etc. I think you used one of these.
An example YAML is given below. You can set "helm.sh/hook-delete-policy": hook-failed instead of hook-succeeded, so that the Job is deleted if the hook fails (see the annotation sketch after the example). For more, please see the official Helm documentation on hooks.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name | quote }}
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: {{ .Release.Name | quote }}
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
    spec:
      restartPolicy: Never
      containers:
        - name: pre-install-job
          image: "ubuntu"
          #command: ["/bin/sleep","{{ default "10" .Values.hook.job.sleepyTime }}"]
          args:
            - /bin/bash
            - -c
            - echo "pre-install hook"

Why must I explicitly set a namespace for the ServiceAccount of ClusterRoleBinding.rbac.authorization.k8s.io resource?

I'm deploying my chart with helm like this:
helm upgrade --install --namespace newnamespace --create-namespace testing mychart
My understanding is that everything should be deployed into newnamespace.
I have this in my chart:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "mychart.serviceAccountName" . }}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: {{ include "mychart.serviceAccountName" . }}
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: {{ include "mychart.serviceAccountName" . }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ include "mychart.serviceAccountName" . }}
subjects:
  - kind: ServiceAccount
    name: {{ include "mychart.serviceAccountName" . }}
When deployed I get this error:
Error: ClusterRoleBinding.rbac.authorization.k8s.io "my-service-account" is invalid: subjects[0].namespace: Required value
Then I added this and the deploy works:
...
subjects:
  - kind: ServiceAccount
    name: {{ include "mychart.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
Why is this? What is this requirement of ClusterRoleBinding? Can't it see the namespace where it's being deployed?
Is it because ClusterRoleBinding is cluster-wide that it must have the namespace spelled out in its definition? Are ClusterRoleBinding resources not created in any namespace? If so, where do they live, kube-system?
Does this mean that if I deleted the namespace containing my helm release before doing a helm uninstall, the ClusterRoleBinding would be left behind?
A ClusterRoleBinding binds the ClusterRole to your service account, and it grants that access cluster-wide. In the ClusterRole you define which actions your service account may perform; a ClusterRole is a set of permissions that can be assigned to resources within a given cluster.
With the ClusterRoleBinding you are just binding that ClusterRole to your service account. Since a ServiceAccount is a namespace-scoped object, you must provide its namespace in the subject, as you did in the second snippet.
By the way, ClusterRole is a non-namespaced resource (you can verify this with kubectl; see below). As per the k8s docs, you can use a ClusterRole to:
define permissions on namespaced resources and be granted within individual namespace(s)
define permissions on namespaced resources and be granted across all namespaces
define permissions on cluster-scoped resources
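A quick way to verify which RBAC kinds are namespaced (the output columns may vary slightly between kubectl versions):
kubectl api-resources --api-group=rbac.authorization.k8s.io
# Role and RoleBinding are listed with NAMESPACED=true,
# ClusterRole and ClusterRoleBinding with NAMESPACED=false.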
Another thing that may also help is setting the apiGroup explicitly, e.g. apiGroup: rbac.authorization.k8s.io.
When you created the service account, you did not set a namespace in its metadata, so it is created in whichever namespace the release is applied to:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "mychart.serviceAccountName" . }}
As for your last question: ClusterRole and ClusterRoleBinding are cluster-scoped, while the ServiceAccount is namespace-scoped. If you delete a namespace, all the namespaced objects in it (such as the ServiceAccount) are gone along with it, but cluster-scoped objects like the ClusterRoleBinding are not removed that way.
You can see the k8s docs to get a clearer idea.
I found another good tutorial.

Time-based scaling with Kubernetes CronJob: How to avoid deployments overriding minReplicas

I have a HorizontalPodAutoscaler that scales my pods based on CPU. The minReplicas here is set to 5:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  minReplicas: 5
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
I've then added Cron jobs to scale up/down my horizontal pod autoscaler based on time of day:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: cron-runner
rules:
  - apiGroups: ["autoscaling"]
    resources: ["horizontalpodautoscalers"]
    verbs: ["patch", "get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cron-runner
  namespace: production
subjects:
  - kind: ServiceAccount
    name: sa-cron-runner
    namespace: production
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: production
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-up-job
  namespace: production
spec:
  schedule: "56 11 * * 1-6"
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed so that we see it
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
            - name: django-scale-up-job
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":8}}'
          restartPolicy: OnFailure
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-down-job
  namespace: production
spec:
  schedule: "30 20 * * 1-6"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed so that we see it
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
            - name: django-scale-down-job
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":5}}'
          restartPolicy: OnFailure
This works really well, except that when I now deploy, it overwrites this minReplicas value with the minReplicas from the HorizontalPodAutoscaler spec (in my case, that is set to 5).
I'm deploying my HPA using kubectl apply -f ~/autoscale.yaml
Is there a way of handling this situation? Do I need to create some kind of shared logic so that my deployment scripts can work out what the minReplicas value should be? Or is there a simpler way of handling this?
I think you could also consider the following two options:
Use Helm to manage the life-cycle of your application with the lookup function:
The main idea behind this solution is to query the state of a specific cluster resource (here the HPA) before trying to create/recreate it with the helm install/upgrade commands.
Helm.sh: Docs: Chart template guide: Functions and pipelines: Using the lookup function
That is, check the current minReplicas value each time before you upgrade your application stack; a sketch follows.
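A minimal sketch of such a template, assuming the HPA is moved into a Helm chart (the field names follow the manifest in the question; 5 is the chart's default used on the first install):
{{- $existing := lookup "autoscaling/v2beta2" "HorizontalPodAutoscaler" .Release.Namespace "myapp-web" }}
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  # Keep whatever minReplicas the CronJobs last patched in; fall back to 5 on the first install
  minReplicas: {{ if $existing }}{{ $existing.spec.minReplicas }}{{ else }}5{{ end }}
  maxReplicas: 10
  # metrics: ... (unchanged from the question)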
Manage the HPA resource separately from the application manifest files.
Here you can hand this task over to a dedicated HPA operator, which can coexist with your CronJobs that adjust minReplicas on a schedule:
Banzaicloud.com: Blog: K8S HPA Operator