How to create a namespace if it doesn't exist from Helm templates? - kubernetes

I have a kind: Namespace template YAML, as per below:
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
  namespace: ""
How do I make helm install create the above namespace ({{ .Values.namespace }}) if and only if it doesn't exist in the targeted Kubernetes cluster?
Thanks.

This feature is implemented in helm >= 3.2 (Pull Request)
Use --create-namespace in addition to --namespace <namespace>

For Helm 2 it's best to avoid creating the namespace as part of your chart content if at all possible, and to let Helm manage it. helm install with the --namespace=<namespace_name> option should create a namespace for you automatically. You can reference that namespace in your chart with {{ .Release.Namespace }}. There's currently only one example of creating a namespace in the public helm/charts repo, and it uses a manual flag for checking whether to create it; a sketch of that pattern follows.
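A minimal sketch of that manual-flag pattern (the createNamespace value name is hypothetical, not the exact template from helm/charts):
# templates/namespace.yaml -- rendered only when the flag is set
{{- if .Values.createNamespace }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
{{- end }}
Installing with --set createNamespace=true would then create the namespace; note that Helm 2 still errors if the namespace already exists, so the flag is how you opt out on clusters that already have it.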
For Helm 3 the functionality has changed and there's a GitHub issue on this.

There are some differences in the Helm commands between versions.
For Helm 2, just use --namespace; for Helm 3, you need to use both --namespace and --create-namespace.
Helm 2 Example:
helm install stable/nginx-ingress --name ingress-nginx --namespace ingress-nginx --wait
Helm 3 Example:
helm install ingress-nginx stable/nginx-ingress --namespace ingress-nginx --create-namespace --wait

For Terraform users, set the create_namespace attribute to true:
resource "helm_release" "kube_prometheus_stack" {
  name             = ...
  repository       = ...
  chart            = ...
  namespace        = ...
  create_namespace = true
}
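For illustration, a filled-in version (the repository, chart, and namespace values below are plausible placeholders for the kube-prometheus-stack chart, not values from the original):
resource "helm_release" "kube_prometheus_stack" {
  # illustrative values only -- substitute your own chart coordinates
  name             = "kube-prometheus-stack"
  repository       = "https://prometheus-community.github.io/helm-charts"
  chart            = "kube-prometheus-stack"
  namespace        = "monitoring"
  create_namespace = true
}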

helm: 'lookup' function always returns empty map

The relevant docs: https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
My helm version:
$ helm version
version.BuildInfo{Version:"v3.4.1",
GitCommit:"c4e74854886b2efe3321e185578e6db9be0a6e29",
GitTreeState:"dirty", GoVersion:"go1.15.4"}
Minimal example to reproduce:
Create a new helm chart and install it.
$ helm create my-chart
$ helm install my-chart ./my-chart
Create a simple ConfigMap.
# my-chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  someKey: someValue
Upgrade the existing chart so that the ConfigMap is applied.
$ helm upgrade my-chart ./my-chart
Confirm that the ConfigMap exists.
$ kubectl -n default get configmap my-configmap
Which returns as expected:
NAME           DATA   AGE
my-configmap   1      12m
Try to use the lookup function to reference the existing ConfigMap.
# my-chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  someKey: someValue
  someOtherKey: {{ (lookup "v1" "ConfigMap" "default" "my-configmap").data.someValue }}
Then do a dry-run of the upgrade.
$ helm upgrade my-chart ./my-chart --dry-run
You will be met with a nil pointer error:
Error: UPGRADE FAILED: template: my-chart/templates/configmap.yaml:9:54: executing "my-chart/templates/configmap.yaml" at <"my-configmap">: nil pointer evaluating interface {}.someValue
What am I doing wrong?
This is expected behavior if you are using the --dry-run flag.
From the documentation:
Keep in mind that Helm is not supposed to contact the Kubernetes API Server during a helm template or a helm install|upgrade|delete|rollback --dry-run, so the lookup function will return an empty list (i.e. dict) in such a case.
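A common workaround (a sketch, not part of the original answer) is to guard the lookup result so the template still renders when the call returns an empty dict, as it does under --dry-run. Note also that the existing ConfigMap's data key is someKey, so even against a live cluster the expression would need .data.someKey rather than .data.someValue:
# my-chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  someKey: someValue
  {{- /* lookup returns an empty dict during --dry-run, which is falsy */}}
  {{- $existing := lookup "v1" "ConfigMap" "default" "my-configmap" }}
  {{- if $existing }}
  someOtherKey: {{ $existing.data.someKey }}
  {{- end }}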

Deploying a Helm release forcefully when same-name deployments, svcs, etc. are running in the same namespace

How do I deploy a Helm release for the first time when there are already a Deployment, Service, etc. running with the same names?
Is there any way to import the running config that is not being handled by Helm?
Or is deleting the same-name objects the only way to deploy the Helm release for the first time? (I don't want to change the release names, because that would break the communication between the microservices.)
Deleting the objects will cause downtime, and I want to avoid that.
The error I get when deploying with the same name:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Service "abc" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "abc"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
Is there any other approach?
Thanks
Addressing the error message and part of the question:
How to deploy the helm release for the first time when there's already the deployment, svc, etc. running with the same name.
You can't deploy resources with Helm that weren't created by Helm; it will give you the same message you've encountered. You can, however, annotate and label the existing resources to "import" them so Helm can act on them. Please try this on a test environment first, as it could redeploy some resources.
There is already a similar answer on how to annotate resources:
Stackoverflow.com: Answers: Use Helm 3 for existing resources deployed with kubectl
See this Helm 3 feature: Adopt resources into release with correct instance and managed-by labels
Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations, and matches the label selector app.kubernetes.io/managed-by=Helm. This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.
In order to allow an existing resource to be adopted by Helm, add release metadata and the managed-by label:
KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default
kubectl annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE
kubectl annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE
kubectl label $KIND $NAME app.kubernetes.io/managed-by=Helm
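Applied to the Service "abc" from the error message above (assuming the release is also named abc and lives in default, as the error output suggests):
kubectl annotate service abc meta.helm.sh/release-name=abc
kubectl annotate service abc meta.helm.sh/release-namespace=default
kubectl label service abc app.kubernetes.io/managed-by=Helm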
Assuming the following situation:
A Deployment created outside of Helm (example below).
A Helm chart with an equivalent templated Deployment in templates/ (example below).
Create the Deployment below without Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Assuming that the above file is used with kubectl apply and is also residing (templated) in the templates/ directory of your chart, you will get the following error when you try to run $ helm install release_name .:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Deployment "nginx" in namespace "default" exists and cannot be imported into the current release: ...
By running the script mentioned in the answer I linked, you can annotate and label your resources so Helm no longer produces the error message above.
After that you can run $ helm install release_name . and provision your resources with the desired changes.
Additional resources:
Jacky-jiang.medium.com: Import existing resources in Helm3
A nice one-liner to annotate all resources in a Helm release so they can be adopted by the new release:
x=`mktemp` && helm -n $NAMESPACE get manifest $RELEASE >$x && kubectl annotate -f $x --overwrite "meta.helm.sh/release-name"=$NEW_RELEASE && rm -rf "$x"
Or, if you also moved the release to a new namespace:
x=`mktemp` && helm -n $NAMESPACE get manifest $RELEASE >$x && kubectl annotate -f $x --overwrite "meta.helm.sh/release-name"=$NEW_RELEASE "meta.helm.sh/release-namespace"=$NEW_NAMESPACE && rm -rf "$x"
A more common approach is to use the combination of the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
As can be seen in different Helm chart providers (for example the Bitnami charts, ExternalDNS, the NGINX ingress controller, and more).
(*) Read more in the Kubernetes Recommended Labels and Helm standard labels documentation.
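For instance, a chart template might set both labels in its metadata section (a minimal sketch; the mychart name is hypothetical):
# templates/deployment.yaml (snippet; "mychart" is a placeholder chart name)
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}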

How to move the Prometheus adapter to another namespace?

For now I have Prometheus and the Prometheus adapter in different namespaces.
I tried to reconfigure the adapter's YAML, but I was not successful:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2020-01-30T08:49:05Z"
  generation: 2
  labels:
    app: prometheus-adapter
    chart: prometheus-adapter-2.0.1
    heritage: Tiller
    release: prometheus-adapter
  name: prometheus-adapter
  namespace: my-custom-namespace
  resourceVersion: "18513075"
  selfLink: /apis/apps/v1/namespaces/my-custom-namespace/deployments/prometheus-adapter
...
But I see this error:
the namespace of the object (my-custom-namespace) does not match the namespace on the request (default)
How do I fix it?
You cannot edit an existing resource to change its namespace. You need to delete the existing Deployment first and then recreate it in the other namespace.
Edit:
With Helm 2 you need to delete the release first, helm delete --purge release-name, and then deploy it to the different namespace, as in helm install stable/prometheus-adapter --namespace namespace-name.
With Helm 3 you likewise need to delete the existing deployment and then redeploy it to a different namespace. In the example below (an early Helm 3 build where the --namespace flag did not yet behave as expected), switching the kubectl context was used to deploy metrics-server into kube-system:
$ helm install metricserver stable/metrics-server
Error: the namespace from the provided object "kube-system" does not match the namespace "default". You must pass '--namespace=kube-system' to perform this operation.
$ helm install metricserver stable/metrics-server --namespace=kube-system
Error: the namespace from the provided object "kube-system" does not match the namespace "default". You must pass '--namespace=kube-system' to perform this operation.
$ kubectl config set-context kube-system --cluster=kubernetes --user=kubernetes-admin --namespace=kube-system
Context "kube-system" created.
$ kubectl config use-context kube-system
Switched to context "kube-system".
$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kube-system                   kubernetes   kubernetes-admin   kube-system
          kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
          metallb                       kubernetes   kubernetes-admin   metallb
          nfstorage                     kubernetes   kubernetes-admin   nfstorage
$ helm install metricserver stable/metrics-server
NAME: metricserver
LAST DEPLOYED: 2019-05-26 14:37:45.582245559 -0700 PDT m=+2.942929639
NAMESPACE: kube-system
STATUS: deployed
For Helm 2 you can install the chart in any namespace you want by using:
helm install stable/prometheus-adapter --name my-release --namespace foo
Keep in mind that you need to remove the previous release first.
This can be done with helm delete --purge my-release.
Also, there is a really nice article regarding changes in Helm 3: Breaking Changes in Helm 3 (and How to Fix Them).
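On a current Helm 3, the same move could look like this (a sketch; the prometheus-community repository is assumed here since the stable repo is deprecated, and <old-namespace> stands for wherever the release currently lives):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm uninstall prometheus-adapter -n <old-namespace>
helm install prometheus-adapter prometheus-community/prometheus-adapter \
  --namespace my-custom-namespace --create-namespace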

Namespace deployment issue in Kubernetes Helm Chart

I am now testing deployment into a different namespace using Kubernetes. Here I am using a Kubernetes Helm chart for that. In my chart, I have deployment.yaml and service.yaml.
When I define the namespace parameter with the Helm command helm upgrade --install, it is not working. When I read about that, I found the statement that the namespace in Helm 2 "is not overwritten by the --namespace parameter".
I tried the following command:
helm upgrade --install kubedeploy --namespace=test pipeline/spacestudychart
NB: my service is deploying into the default namespace.
(Screenshot of kubectl describe pod omitted.)
Here my "helm version" command output is like follows:
docker#mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Because of this, I tried to add the namespace in deployment.yaml, under metadata.namespace, like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: test
I created two namespaces, test and prod. But this is not working either: with this change my service does not come up and is not accessible, yet there is no error in the Jenkins console. When I defined the namespace only on the helm command line it at least deployed, into the default namespace; now it does not deploy at all.
After this, I removed the namespace from deployment.yaml and then added metadata.namespace the same way again. There I am also unable to access the deployed service, but the Jenkins console output still shows success.
Why is the namespace not working with my Helm deployment? What changes do I need to make to deploy to test/prod instead of the default namespace?
Remove namespace: test from all of your chart files and helm install --namespace=namespace2 ... should work.
On Helm 3.2+, I would suggest (based on this thread) moving the namespace creation to the CLI:
1) Add --create-namespace after the -n flag:
helm upgrade --install <name> <repo> -n <namespace> --create-namespace
2) Inside the different resources, pass the release namespace:
namespace: {{ .Release.Namespace }}
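Applied to the chart from the question, that could look like the following sketch (using the names that appear above):
# templates/deployment.yaml (snippet)
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: {{ .Release.Namespace }}
And on the CLI:
helm upgrade --install kubedeploy pipeline/spacestudychart -n test --create-namespace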

Helm how to define .Release.Name value

I have created a basic Helm template using the helm create command. While checking the template for the Ingress, it adds the string RELEASE-NAME before the app name, like this: RELEASE-NAME-microapp.
How can I change the .Release.Name value?
helm template --kube-version 1.11.1 microapp/
# Source: microapp/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-microapp
  labels:
    app: microapp
    chart: microapp-0.1.0
    release: RELEASE-NAME
    heritage: Tiller
  annotations:
    kubernetes.io/ingress.class: nginx
This depends on what version of Helm you have; helm version can tell you this.
In Helm version 2, it's the value of the helm install --name parameter or, absent that, a name Helm chooses itself. If you're checking what might be generated via helm template, that also takes a --name parameter.
In Helm version 3, it's the first parameter to the helm install command. Helm won't generate a name automatically unless you explicitly ask it to with helm install --generate-name. helm template also takes the same options.
Also, in Helm 3, if you want to specify a name explicitly, you should use the --name-template flag, e.g. helm template --name-template=dummy in order to use the name dummy instead of RELEASE-NAME.
As of Helm 3.9 the flag is --release-name, making the command: helm template --release-name <release name>
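For example, with the chart from the question, passing the release name positionally (which works on any Helm 3) renders dummy-microapp instead of RELEASE-NAME-microapp:
helm template dummy microapp/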