I have a helm chart that deploys a number of Kubernetes resources. One of them is a resource that is of a Custom Resource Definition (CRD) type (ServiceMonitor used by prometheus-operator).
I am looking for a way to "tell" Helm that I want to create this resource only if such a CRD is defined in the cluster, or to ignore errors caused only by the fact that such a CRD is missing.
Is that possible and how can I achieve that?
Helm's Capabilities object can tell you if an entire API group/version is available in the cluster. I don't think it can test for a specific custom resource type.
In your .tpl files, you can wrap the entire file in a {{ if }}...{{ end }} block. Helm doesn't especially care if the rendered version of a file is empty.
That would lead you to a file like:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
...
{{ end -}}
That would get installed if the operator is installed in the cluster, and skipped if not.
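You can see what the cluster (and hence .Capabilities.APIVersions at install time) reports with:
kubectl api-versions | grep monitoring.coreos.com
Note that rendering offline with plain helm template only uses a default set of API versions; as far as I know you need to render against a live cluster (an actual install/upgrade, or helm template --validate) for this check to reflect what is actually installed.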
If you are on Helm 3 you can put your CRD in the crds/ directory. Helm treats that directory specially; see the Helm documentation on CRDs.
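A minimal sketch of that layout (file names here are just examples); Helm 3 installs whatever is under crds/ before the rest of the chart, but it does not template, upgrade, or delete those files:
mychart/
  Chart.yaml
  crds/
    servicemonitor-crd.yaml    # plain CRD manifest, installed as-is
  templates/
    servicemonitor.yaml        # the templated ServiceMonitor resource itself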
In Helm 2 there is another mechanism using the crd-install hook. You can add the following to your CRD:
annotations:
  "helm.sh/hook": crd-install
There are some limitations with this approach so if you are using Helm 3 that would be preferred.
In Helm v3, you can test for specific resources:
{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1/ServiceMonitor" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
...
spec:
...
{{- end }}
https://helm.sh/docs/chart_template_guide/builtin_objects/
I'm trying to implement a canary deployment with Istio. First I have to deploy chart pods from the old version (I already managed to do it) and chart pods from the new version.
I created a new version of my chart. The chart has been created successfully.
Now I try to use the helm install command to deploy the new version side by side with the old one.
I pass a new release name (my-release-v2) to the command in order to avoid overriding the old version, but I get an error that the release name in the chart must match the release name.
At this stage I'm a bit puzzled. Should I override it in the values.yaml? If so, how exactly? Is this a best practice?
OK, I got this one, in case it helps someone.
The release name should be unique. A good practice is to use the application name (the chart's fullname) along with the intended version in the helm install command.
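For example (release and chart names here are only illustrative), each version gets its own release name:
helm install my-app-v1 ./my-app
helm install my-app-v2 ./my-app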
Then we can use the same practice for the Deployment object that deploys our pods, so it will be unique to the version (relevant part of the deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include ".Chart.Name.fullname" . }}-{{ .Chart.AppVersion }}
And of course, for a future Istio selector, create a version label on the pods (relevant part of the deployment.yaml):
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app.kubernetes.io/version: {{ .Chart.AppVersion }}
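Later, an Istio DestinationRule can define canary subsets keyed on that label; a rough sketch (host, subset names, and versions here are hypothetical):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-app
spec:
  host: my-app-service
  subsets:
  - name: v1
    labels:
      app.kubernetes.io/version: "1.0.0"
  - name: v2
    labels:
      app.kubernetes.io/version: "2.0.0"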
I am trying to set up Kubernetes for my company. In that process I am trying to learn Helm.
One of the tasks I have is to set up automation to take a supplied namespace name parameter, create a namespace, and set up the correct permissions in that namespace for the deployment user account.
I can do this simply with a script that uses kubectl, like this:
kubectl create namespace $namespaceName
kubectl create rolebinding deployer-edit --clusterrole edit --user deployer --namespace $namespaceName
But I am wondering if I should set up things like this using Helm charts. As I look at Helm charts, it seems that everything is a deployment. I am not sure that this fits the model of "deploying" things. It is more just a general setup of a namespace that will then allow deployments into it. But I want to try it out as a Helm chart if it is possible.
How can I create a Kubernetes namespace and rolebinding using Helm?
A Namespace is a Kubernetes object and it can be described in YAML, so Helm can create one. mdaniel's answer describes the syntax for doing it for a single Namespace and the corresponding RoleBinding.
There is a chicken-and-egg problem if you are trying to use this syntax to create the Helm installation namespace, though. In Helm 3, metadata about the installation is stored in Kubernetes objects, usually in the same namespace you're installing into:
helm install release-name ./a-chart-that-creates-a-namespace --namespace ns
If the namespace doesn't already exist, then Helm can't retrieve the installation metadata; or, if it does, then the declaration of the Namespace object in the chart will conflict with an existing object in the cluster. You can create other objects this way (like RoleBindings) but Namespaces themselves are a problem.
But! You can create other namespaces safely. You can also use Helm's templating constructs to create multiple objects based on what's present in the .Values configuration. So if your values.yaml file (possibly environment-specific) has
namespaces: [service-a, service-b]
clusterRole: edit
user: deploy
Then you can write a template file like
{{- $top := . }}
{{- range $namespace := .Values.namespaces -}}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: {{ $namespace }}
  name: deployer-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ $top.Values.clusterRole }}
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: {{ $top.Values.user }}
{{ end -}}
This will create two YAML documents for each item in .Values.namespaces. Since the range looping construct overwrites the . special variable, we save its value in a $top local variable before we start, and then use $top.Values where we'd otherwise need to reference .Values. We also need to make sure to explicitly name the metadata: { namespace: } of each object we create, since we're not using the default installation namespace.
You need to make sure the helm install --namespace name isn't any of the namespaces you're managing with this chart.
This would let you have a single chart that manages all of the per-service namespaces. If you needed to change the set of services, you can just update the chart values and helm upgrade. The one other caution is that this will happily delete namespaces with no warning if you remove a value from the .Values.namespaces list, and also take everything in that namespace with it (notably, any PersistentVolumeClaims that have data you might need).
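A sketch of the corresponding commands (release, chart, and namespace names here are hypothetical), keeping the release's own metadata in a namespace this chart does not manage:
helm install namespaces ./namespaces-chart --namespace helm-infra --create-namespace
helm upgrade namespaces ./namespaces-chart --namespace helm-infra -f values.yaml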
Almost any chart for an install that needs to interact with Kubernetes itself will include RBAC resources, so it is for sure not just Deployments:
# templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.bindingName }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Values.clusterRole }}
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: {{ .Values.user }}
Then a values.yaml isn't strictly required, but it helps folks know what values could be provided:
# values.yaml
bindingName: deployment-edit
clusterRole: edit
user: deployer
Helm v3 has --create-namespace, which will create the provided --namespace if it doesn't already exist. That isn't very declarative, but it does achieve the end result just like the kubectl version:
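For example (release, chart, and namespace names here are hypothetical):
helm install my-release ./my-chart --namespace team-a --create-namespace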
It's also theoretically possible to have the chart create the Namespace, but I would not guess that helm uninstall the-namespaced-rolebinding will do the right thing, since the order of item removal matters a lot:
# templates/00namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.theNamespace }}
and then run helm --namespace kube-system ... (or any NS other than the real one, since it doesn't yet exist).
I'm trying to assign pods to a specific node as part of a Helm command, so by the end the deployment YAML should look like this:
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-name: dev-cpu-pool
I'm using this command as part of a Jenkinsfile deployment:
sh "helm upgrade -f charts/${job_name}/default.yaml --set nodeSelector.name=${deployNamespace}-cpu-pool --install ${deployNamespace}-${name} helm/${name} --namespace=${deployNamespace} --recreate-pods --version=${version}"
The deployment works fine and the pod is up and running, but for some reason I cannot see the nodeSelector key and value in the rendered deployment YAML, and as a result the pods are not assigned to the specific node I want. Any idea what is wrong? Should I put a placeholder in my chart template, or is that not required?
The artifacts that Helm submits to the Kubernetes API are exactly the result of rendering the chart templates; nothing more, nothing less. If your templates don't include a nodeSelector: block then the resulting Deployment never will either. Even if you helm install --set ... things that could match Kubernetes API fields, nothing will implicitly fill them in.
If you want an option to specify rarely-used fields like nodeSelector: then your chart code needs to include them. You can make the presence of the field conditional on the value being set, but you do need to explicitly list it out:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
{{- if .Values.nodeSelector }}
      nodeSelector: {{- .Values.nodeSelector | toYaml | nindent 8 }}
{{- end }}
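With the chart written that way, the value can be supplied from the values file or with --set; a minimal example (release and chart names here are hypothetical):
# values.yaml
nodeSelector:
  node-name: dev-cpu-pool
or, equivalently, on the command line:
helm upgrade --install my-release ./my-chart --set nodeSelector.node-name=dev-cpu-pool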
I have 10 microservices on Kubernetes with Helm 3 charts, and I saw that all of them have a similar standard structure: deployment, service, HPA, network policies, etc. Basically the <helm_chart_name>/templates directory is 99% the same in all of them, with some if statements at the top of each file controlling whether we want to deploy that resource,
{{ if .Values.hpa.create }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.deployment.name }}
  ...
spec:
  scaleTargetRef:
  ...
{{ end }}
and in the values we pass yes/no for whether we want it. Is there some tool to easily create a template for these Helm charts? That is, to create a Helm chart with these 5 manifests pre-populated with references to values as above?
What you need is a Library Chart:
A library chart is a type of Helm chart that defines chart primitives
or definitions which can be shared by Helm templates in other charts.
This allows users to share snippets of code that can be re-used across
charts, avoiding repetition and keeping charts DRY.
You can find more details and examples in the linked documentation.
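A rough sketch of how that could look for the HPA example above (chart, file, and template names here are made up):
# mylib/Chart.yaml
apiVersion: v2
name: mylib
version: 0.1.0
type: library

# mylib/templates/_hpa.tpl
{{- define "mylib.hpa" -}}
{{- if .Values.hpa.create }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.deployment.name }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Values.deployment.name }}
  maxReplicas: {{ .Values.hpa.maxReplicas }}
{{- end }}
{{- end -}}
Each microservice chart then declares mylib as a dependency and renders the shared template from a one-line file:
# <helm_chart_name>/Chart.yaml (excerpt; here using a local path repository)
dependencies:
- name: mylib
  version: 0.1.0
  repository: file://../mylib

# <helm_chart_name>/templates/hpa.yaml
{{ include "mylib.hpa" . }}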
I think the closest thing to what I want is https://helm.sh/docs/topics/library_charts/
I want my application.yaml file to be passed as a ConfigMap, so I have written this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
{{ (.Files.Glob "foo/*").AsConfig | indent 2 }}
My application.yaml is present in the foo folder and contains a service name which I need to be dynamically populated via Helm interpolation:
foo:
  service:
    name: {{.Release.Name}}-service
When I do a dry run, I am getting this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
  application.yaml: "ei:\r\n service:\r\n name: {{.Release.Name}}-service"
but I want name: {{.Release.Name}}-service to contain the actual Helm release name.
Is it possible to do templating for external files using Helm, and if yes, how do I do it?
I have gone through https://v2-14-0.helm.sh/docs/chart_template_guide/#accessing-files-inside-templates
but didn't find something which solves my use case.
I could also copy the content into the ConfigMap YAML and do the interpolation there, but I don't want to do that. I want application.yaml to be in a separate file, so that it will be simpler to deal with config changes.
Helm includes a tpl function that can be used to expand an arbitrary string as a Go template. In your case the output of ...AsConfig is a string that you can feed into the template engine.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-conf
data:
{{ tpl (.Files.Glob "foo/*").AsConfig . | indent 2 }}
Once you do that you can invoke arbitrary template code from within the config file. For example, it's common enough to have a defined template that produces the name prefix of the current chart as configured, and so your config file could instead specify
foo:
  service:
    name: {{ template "mychart.name" . }}-service
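That assumes a named template is defined somewhere in the chart; a typical _helpers.tpl definition looks roughly like this (the name "mychart.name" just matches the example above, and would be whatever your chart defines):
{{/* templates/_helpers.tpl */}}
{{- define "mychart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}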
As best I can tell, there is no recursive template evaluation available in Helm (nor in Sprig), likely by design.
However, in your specific case, if you aren't expecting the full power of Go templates, you can cheat and use Sprig's regexReplaceAllLiteral:
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
data:
{{/* here I have used character classes rather than a sea of backslashes;
     you can use the style you find most legible */}}
{{ $myRx := "[{][{] *[.]Release[.]Name *[}][}]" }}
{{ regexReplaceAllLiteral $myRx (.Files.Glob "foo/*").AsConfig .Release.Name | indent 2 }}
If you genuinely need the full power of Go templates for your config files, then Helm itself is not the mechanism for doing that -- but helmfile has a lot of fancy tricks for generating the final chart that Helm will install.