How to create a kubernetes serviceAccount when I do helm install? - kubernetes

I added this in my values.yaml expecting the serviceAccount to be created when I run helm install, but that did not work. Am I missing something?
helm version v3.9.0
kubernetes version v1.24.0
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: my-service-account
I even tried adding the following (based on https://helm.sh/docs/chart_best_practices/rbac/#helm), with no luck:
rbac:
  # Specifies whether RBAC resources should be created
  create: true
Thanks

Thanks for the help. I ended up putting this file in the templates directory so it gets processed as you mentioned. I used Helm's lookup function to check whether the ServiceAccount already exists, so only the first helm install creates it (https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function):
{{- if not (lookup "v1" "ServiceAccount" "my-namespace" "my-service-account") }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: {{ .Values.namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-cluster-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role
subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: {{ .Values.namespace }}
{{- end }}

You have to create the YAML or Helm template in your chart's templates directory, and Helm will render and apply that config file to the K8s cluster.
service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.fullname" . }}
Ref: https://github.com/CenterForOpenScience/helm-charts/blob/master/elasticsearch/templates/service-account.yaml
You can add conditions to render the ServiceAccount only when create is true, etc.
Condition/flow control doc: https://helm.sh/docs/chart_template_guide/control_structures/
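For example, a minimal sketch of such a condition, assuming the serviceAccount values from the question (the annotations handling is just one common pattern, not taken from the referenced chart):
# templates/service-account.yaml
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.serviceAccount.name }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}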

Related

helm chart conditional install based on a list

I'm trying to find a way to optionally install a manifest based on a list or a map (really don't mind which) in the values file.
in the values file I have
provisioners: ["gp","test"]
and in the manifest I have
{{- if has "test" .Values.provisioners }}
I've also tried
provisioners:
- "gp"
- "test"
and put this in the yaml
{{- if hasKey .Values.provisioners "test" }}
but I can't get either way to work; the chart never installs anything.
I feel like I'm missing something pretty basic, but I can't figure out what. Can someone point me in the right direction.
I don't think you shared everything in your template and there might be something else. What you already did is correct, as you can see in my example below:
# templates/configmap.yaml
{{- if has "test" .Values.provisioners }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
  namespace: default
data:
  config.yaml: |
    attr=content
{{- end }}
{{- if has "gp" .Values.provisioners }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gp-config
  namespace: default
data:
  config.yaml: |
    attr=content
{{- end }}
{{- if has "unknown" .Values.provisioners }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: not-templated-config
  namespace: default
data:
  config.yaml: |
    attr=content
{{- end }}
Output of helm template . against the local chart:
---
# Source: chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
  namespace: default
data:
  config.yaml: |
    attr=content
---
# Source: chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gp-config
  namespace: default
data:
  config.yaml: |
    attr=content
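Note that has works on lists, while hasKey expects a map, which is why the second attempt in the question does not match anything. If you prefer the map form in values, a sketch could look like this (the map layout below is an assumption, not from the question):
# values.yaml
provisioners:
  gp: true
  test: true

# templates/configmap.yaml
{{- if hasKey .Values.provisioners "test" }}
# ...render the test ConfigMap here...
{{- end }}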

Config Map in Helm

I am creating a ConfigMap from a CSV file using a Helm template, but it comes out different from the one created by the oc command.
Helm template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-config
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ .Values.appname }}
data:
{{- $path := printf "%s/application-config/*" .Values.env }}
{{ (.Files.Glob $path).AsConfig | indent 2 }}
Generated Configmap
kind: ConfigMap
apiVersion: v1
metadata:
  name: service-config
  namespace: ''
  labels:
    app.kubernetes.io/managed-by: Helm
data:
  esm-instrument-type-map.csv: |-
    M-MKT,COB
    CMO,COB
    MUNI,COB
    WARRNT,EQU
    PFD,EQU
oc command:
oc create configmap test-config --from-file=./esm-instrument-type-map.csv
Generated ConfigMap
kind: ConfigMap
apiVersion: v1
metadata:
  name: test-config
  namespace: ''
data:
  esm-instrument-type-map.csv: "CORP,COB\r\nEQUITY,EQU\r\nGOVT,TRY\r\nMBS,FNM\r\nST-PRD,COB\r\nM-MKT,COB\r\nCMO,COB\r\nMUNI,COB\r\nWARRNT,EQU\r\nPFD,EQU"
As we see, the data from the CSV file is in double quotes when generated by the oc command. I want the same in Helm. How can I achieve this?
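One possible sketch for getting a double-quoted scalar instead of a block scalar is to read the file with .Files.Get and encode it with toJson; the exact file path below is an assumption based on the Glob pattern in the question:
data:
  esm-instrument-type-map.csv: {{ .Files.Get (printf "%s/application-config/esm-instrument-type-map.csv" .Values.env) | toJson }}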

Usage of Variable Chart.Name in inherited Helm Chart

I've created a helm chart which contains some resources, which are reused in several other Helm charts:
base/templates/base.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: {{ .Chart.Name }}
Then I've created a helm chart which inherits the base chart and contains some special resources:
sub1/templates/sub1.yaml
...
name: {{ .Chart.Name }}
Actual Output
In the actual output, the resources of the base chart always use the chart name of the base chart.
---
# Source: sub1/templates/sub1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sub1
---
# Source: sub1/charts/base/templates/base.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: base
Wanted output
But I want the chart name of the sub chart to be used in the base chart resources.
# Source: sub1/charts/base/templates/base.yaml
...
kind: SecretProviderClass
metadata:
  name: sub1
How can I achieve this?
A solution is to reuse the resources via named templates. Because the sub chart includes the template with its own context (the . passed to include), .Chart.Name resolves to the sub chart's name:
base/templates/base.yaml
{{- define "base-lib.secret-provider-class" -}}
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: {{ .Chart.Name }}
{{- end -}}
sub1/templates/sub1.yaml
{{ include "base-lib.secret-provider-class" . }}
---
...

Adding helm hooks to RBAC resources

I want to create a post-install,post-upgrade helm hook (a Job to be more precise).
This will need the following RBAC resources (I have already added the corresponding helm-hook annotations)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: "{{ .Release.Name }}-post-install-role"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: "{{ .Release.Name }}-post-install-rolebinding"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
subjects:
  - kind: ServiceAccount
    namespace: {{ .Release.Namespace }}
    name: "{{ .Release.Name }}-post-install-sa"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: "{{ .Release.Name }}-post-install-role"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "{{ .Release.Name }}-post-install-sa"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
In my corresponding Job spec:
annotations:
  "helm.sh/hook": post-install,post-upgrade
  "helm.sh/hook-delete-policy": before-hook-creation
...
serviceAccountName: "{{ .Release.Name }}-post-install-sa"
I thought that by marking the RBAC resources as pre- hooks, I would make sure these were created before the actual Job, which is a post- hook.
By also setting the hook-delete-policy to before-hook-creation,hook-succeeded,hook-failed, these would also be deleted in all cases (whether the Job failed or succeeded) to avoid having them lying around for security considerations.
However, the Job creation errors out because it is unable to find the ServiceAccount:
error looking up service account elastic/elastic-stack-post-install-sa: serviceaccount "elastic-stack-post-install-sa" not found
Why is that?
Try using hook weights to ensure a deterministic order. Helm loads the hook with the lowest weight first (negative to positive).
"helm.sh/hook-weight": "0"
Example:
Service account creation with lowest weight.
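A minimal sketch of that idea, assuming the hook resources are moved to the same post-install,post-upgrade events as the Job (the weights and the Job skeleton below are only illustrative, not from the question):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "{{ .Release.Name }}-post-install-sa"
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "-5"   # created before the Job below
    "helm.sh/hook-delete-policy": before-hook-creation
---
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-post-install-job"   # hypothetical name
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "0"    # runs after the ServiceAccount exists
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      serviceAccountName: "{{ .Release.Name }}-post-install-sa"
      restartPolicy: Never
      containers:
        - name: post-install
          image: busybox          # placeholder image
          command: ["sh", "-c", "echo done"]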
As PGS suggested, the "helm.sh/hook-weight" annotation is the solution here.
Important Notes:
Hook weights can be positive, zero or negative numbers but must be represented as strings.
Example: "helm.sh/hook-weight": "-5" (Note: -5 within double quotes)
When Helm starts the execution cycle of hooks of a particular Kind it will sort those hooks in ascending order.
Hook weights ensure the following:
Hooks execute in the right weight sequence (negative to positive, in ascending order).
Hooks block each other (important for your scenario).
All hooks block the main K8s resources from starting.

kubernetes get endpoint in the containers

On the Kubernetes VM I am running, for example: kubectl get endpoints
How can I get the same output inside the pod? What should I run within a pod?
I understand there is a Kubernetes API, but I'm new to Kubernetes; can someone explain how I can use it?
This is my clusterrolebinding:
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: {{ template "elasticsearch.fullname" . }}
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
subjects:
  - kind: ServiceAccount
    name: {{ template "elasticsearch.serviceAccountName.client" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ template "elasticsearch.fullname" . }}
  apiGroup: rbac.authorization.k8s.io
clusterrole.yaml:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: {{ template "elasticsearch.fullname" . }}
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
rules:
  #
  # Give here only the privileges you need
  #
  - apiGroups: [""]
    resources:
      - pods
      - endpoints
    verbs:
      - get
      - watch
      - list
serviceaccount:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: {{ template "elasticsearch.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    component: "{{ .Values.client.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "elasticsearch.client.fullname" . }}
You don't have to have kubectl installed in the pod to access the Kubernetes API. You will be able to do it with any tool that can make HTTP requests.
The Kubernetes API is a simple HTTP REST API, and all the authentication information that you need is present in the container if it runs as a Pod in the cluster.
To get the Endpoints object named your-service from within a container in the cluster, you can do:
$ curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc:443/api/v1/namespaces/{namespace}/endpoints/your-service
Replace {namespace} with the namespace of the your-service Endpoints resource.
To extract the IP addresses from the returned JSON, pipe the output to a tool like jq:
... | jq -r '.subsets[].addresses[].ip'
IMPORTANT:
The Pod from which you are executing this needs read permissions for the Endpoints resource, otherwise the API request will be denied.
You can do this by creating a ClusterRole, ClusterRoleBinding, and Service Account - set this up once:
$ kubectl create sa endpoint-reader-sa
$ kubectl create clusterrole endpoint-reader-cr --verb=get,list --resource=endpoints
$ kubectl create clusterrolebinding endpoint-reader-crb --serviceaccount=default:endpoint-reader-sa --clusterrole=endpoint-reader-cr
Next, use the created ServiceAccount endpoint-reader-sa for the Pod from which you want to execute the above curl command by specifying it in the pod.spec.serviceAccountName field.
Granting permissions for any different API operations works in the same way.
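For example, a minimal Pod sketch using that ServiceAccount (the pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: endpoint-reader
spec:
  serviceAccountName: endpoint-reader-sa
  containers:
    - name: main
      image: curlimages/curl   # any image with curl available
      command: ["sleep", "3600"]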
Source: get-pod-ip.
And as #ITChap mentioned in a similar answer: kubectl-from-inside-the-pod.