Namespace deployment issue in Kubernetes Helm Chart - kubernetes

I am testing deployment into a different namespace using Kubernetes, and I am using a Helm chart for it. The chart contains a deployment.yaml and a service.yaml.
When I pass the namespace to the helm upgrade --install command, it is not applied. Reading up on this, I found the statement that in Helm 2 the namespace "is not overwritten by the --namespace parameter".
I tried the following command:
helm upgrade --install kubedeploy --namespace=test pipeline/spacestudychart
NB: here the service gets deployed into the default namespace.
(Screenshot of the kubectl describe pod output omitted.)
Here my "helm version" command output is like follows:
docker@mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Because of this, I tried adding the namespace in deployment.yaml under metadata.namespace, like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: test
I created two namespaces, test and prod, but this does not work either. With the namespace added like this the service does not come up and I cannot access it, yet there is no error in the Jenkins console. When I only defined the namespace on the helm upgrade --install command it at least deployed, although into the default namespace; with it hard-coded here it does not deploy at all.
After this, I removed the namespace from deployment.yaml and added metadata.namespace in the same way. There too I am not able to access the deployed service, although the Jenkins console output still shows success.
Why is the namespace not being applied to my Helm deployment? What do I need to change to deploy into test/prod instead of the default namespace?

Remove namespace: test from all of your chart files and helm install --namespace=namespace2 ... should work.
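For example, with the release and chart names from the question (a sketch; the chart templates themselves must not set metadata.namespace):
helm upgrade --install kubedeploy pipeline/spacestudychart --namespace test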

On Helm 3.2+, I would suggest (based on this thread) moving the namespace creation to the CLI:
1) Add --create-namespace after the -n flag:
helm upgrade --install <name> <repo> -n <namespace> --create-namespace
2) Inside the different resources, reference the release namespace:
namespace: {{ .Release.Namespace }}
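For example, the metadata block of the deployment.yaml from the original question could reference the release namespace instead of hard-coding test (a sketch reusing the chart's own fullname helper):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: {{ .Release.Namespace }}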

Related

Helm upgrade that does a rolling Pod restart if chart values change

I have a simple Helm chart that consists of a Deployment and a ConfigMap. The ConfigMap looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.APP_NAMESPACE }}-config
data:
  LOGGED_OUT_MSG: "{{ .Values.LOGGED_OUT_MSG }}"
The ConfigMap is mounted via envFrom in the Pod template:
...
envFrom:
  - configMapRef:
      name: {{ .Values.APP_NAMESPACE }}-config
For one of my non-production environments I have the file override.yaml:
# override.yaml
LOGGED_OUT_MSG: "You are logged out (DEV)"
I then do a Helm upgrade like this:
$ helm upgrade -f override.yaml mychart .
What I assumed would happen is that if I change override.yaml and run the above helm upgrade command, Helm would notice that the value of LOGGED_OUT_MSG has changed and do a rolling restart of my Pods. However, that does not happen; instead, I have to delete the Pods manually for the change to come through.
Is there a way to run helm upgrade so that changes in override.yaml trigger Helm to do a rolling restart of the Pods?
There is no way to do it by default AFAIK.
You are looking for reloader by stakater.
"Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets."
This will require installing the tool in your cluster and adding an annotation to your deployment.
https://github.com/stakater/Reloader
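For example, an annotated Deployment could look roughly like this (a sketch; Reloader must already be installed, myapp is a placeholder name, and reloader.stakater.com/auto is the auto-reload annotation described in the project's README):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                              # placeholder name
  annotations:
    reloader.stakater.com/auto: "true"     # restart Pods when referenced ConfigMaps/Secrets change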

Deploying helm release forcefully when same name deployments, svcs, etc. are running in the same namespace

How do I deploy a Helm release for the first time when a Deployment, Service, etc. with the same names are already running?
Is there any way to import the running config that is not currently handled by Helm?
Or is deleting the objects with the same names the only way to deploy the Helm release for the first time? (I don't want to change the release names because that would break the communication between the microservices.)
Deleting the objects will cause downtime and I want to avoid that.
The error I get while deploying with the same name:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Service "abc" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "abc"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
Is there any other approach?
Thanks
Addressing the error message and part of the question:
How to deploy the helm release for the first time when there's already the deployment, svc, etc. running with the same name.
You can't deploy resources with Helm that weren't created by Helm; it will give you exactly the message you've encountered. You can, however, annotate and label the existing resources that were not added by Helm to "import" them and let Helm act on them. Please try this on a test environment first, as it could redeploy some resources.
There is already a similar answer on how to annotate resources:
Stackoverflow.com: Answers: Use Helm 3 for existing resources deployed with kubectl
See this feature of Helm 3: "Adopt resources into release with correct instance and managed-by labels".
Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations, and matches the label selector app.kubernetes.io/managed-by=Helm. This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.
In order to allow an existing resource to be adopted by Helm, add release metadata and the managed-by label:
KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default
kubectl annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE
kubectl annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE
kubectl label $KIND $NAME app.kubernetes.io/managed-by=Helm
Assuming the following situation:
A Deployment created outside of Helm (example below).
A Helm chart with an equivalent templated Deployment in templates/ (example below).
Creating the Deployment below without Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Assuming that the above file is applied with kubectl apply and that it also resides (templated) in the templates/ directory of your chart, you will get the following error when you try to run $ helm install release_name .:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Deployment "nginx" in namespace "default" exists and cannot be imported into the current release: ...
By running the script mentioned in the answer I linked, you can annotate and label your resources so that Helm no longer produces the error message above.
After that you can run $ helm install release_name . and provision your resources with the desired changes.
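Applied to the nginx Deployment above, the adoption steps would look roughly like this (a sketch assuming the release name release_name and the default namespace):
# mark the existing Deployment as belonging to the upcoming Helm release
kubectl annotate deployment nginx meta.helm.sh/release-name=release_name
kubectl annotate deployment nginx meta.helm.sh/release-namespace=default
kubectl label deployment nginx app.kubernetes.io/managed-by=Helm
# the install no longer complains about an existing resource
helm install release_name .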
Additional resources:
Jacky-jiang.medium.com: Import existing resources in Helm3
A nice one-liner to annotate all resources in a Helm release so that they are adopted by the new release:
x=`mktemp` && helm -n $NAMESPACE get manifest $RELEASE >$x && kubectl annotate -f $x --overwrite "meta.helm.sh/release-name"=$NEW_RELEASE && rm -rf "$x"
Or, if you also moved the release to a new namespace:
x=`mktemp` && helm -n $NAMESPACE get manifest $RELEASE >$x && kubectl annotate -f $x --overwrite "meta.helm.sh/release-name"=$NEW_RELEASE "meta.helm.sh/release-namespace"=$NEW_NAMESPACE && rm -rf "$x"
A more common approach is to use the combination of the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
As can be seen in various Helm chart providers (for example the Bitnami charts, External-DNS, the NGINX ingress controller and more).
(*) Read more in the Kubernetes Recommended Labels and Helm standard labels sections.
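In a chart template these labels typically sit under the resource's metadata, for example (a sketch using a hypothetical mychart name helper):
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}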

helm deploy with no objects

I'm building a very simple chart with Helm.
It consists of a chart with just one object ("/templates/pod.yaml") that has to be deployed only if a parameter in the Values.yaml file is true.
To provide an example of my case, this is what I have:
/templates/pod.yaml
{{- if eq .Values.shoudBeDeployed true }}
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
{{- end }}
Values.yaml
shoudBeDeployed: true
So when shoudBeDeployed has the value true, helm installs it correctly.
My problem is that when shoudBeDeployed is false, helm doesn't deploy anything (as I expected), but helm shows the following message:
Error: release CHART_NAME failed: no objects visited
And if I execute helm ls I get that CHART_NAME is deployed with STATUS FAILED.
My question is whether there is a way to avoid it being recorded as a failed Helm deployment, so that it does not show up when using the command helm ls.
I know that I could move the shoudBeDeployed logic outside the chart and then deploy the chart or not depending on its value, but I would like to know if there is a solution using just Helm.
@pcampana I think there is no way to stop a Helm deployment if there is nothing to deploy. But here is a trick you can use to have the release deleted automatically if it ends up FAILED:
helm install --name temp demo --atomic
where demo is the Helm chart directory and temp is the release name.
The release name is mandatory for this to work.
One scenario is when you see the error
Error: release temp failed: no objects visited
In that case you can use the above command to deploy the Helm chart.
I think this might be useful for you.

Should jx step helm apply create/produce a helm release

I'm struggling with jx, Kubernetes and Helm. I run a Jenkinsfile on jx, executing these commands in the env directory:
sh 'jx step helm build'
sh 'jx step helm apply'
It finishes successfully and deploys pods/creates deployments etc.; however, helm list is empty.
When I execute something like helm install ... or helm upgrade --install ... it creates a release and helm list shows that.
Is it correct behavior?
More details:
EKS installed with:
eksctl create cluster --region eu-west-2 --name integration --version 1.12 \
--nodegroup-name integration-nodes \
--node-type t3.large \
--nodes 3 \
--nodes-min 1 \
--nodes-max 10 \
--node-ami auto \
--full-ecr-access \
--vpc-cidr "172.20.0.0/16"
Then I set up ingresses (external & internal) with some kubectl apply commands (won't share the files). Then I set up routes and VPC-related stuff.
JX installed with:
jx install --provider=eks --ingress-namespace='internal-ingress-nginx' \
--ingress-class='internal-nginx' \
--ingress-deployment='nginx-internal-ingress-controller' \
--ingress-service='internal-ingress-nginx' --on-premise \
--external-ip='#########' \
--git-api-token=######### \
--git-username=######### --no-default-environments=true
Details from the installation:
? Select Jenkins installation type: Static Jenkins Server and Jenkinsfiles
? Would you like wait and resolve this address to an IP address and use it for the domain? No
? Domain ###########
? Cloud Provider eks
? Would you like to register a wildcard DNS ALIAS to point at this ELB address? Yes
? Your custom DNS name: ###########
? Would you like to enable Long Term Storage? A bucket for provider eks will be created No
? local Git user for GitHub server: ###########
? Do you wish to use GitHub as the pipelines Git server: Yes
? A local Jenkins X versions repository already exists, pull the latest? Yes
? A local Jenkins X cloud environments repository already exists, recreate with latest? Yes
? Pick default workload build pack: Kubernetes Workloads: Automated CI+CD with GitOps Promotion
Then I set up helm:
kubectl apply -f tiller-rbac-config.yaml
helm init --service-account tiller
where tiller-rbac-config.yaml is:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
helm version says:
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
jx version says:
NAME VERSION
jx 2.0.258
jenkins x platform 2.0.330
Kubernetes cluster v1.12.6-eks-d69f1b
helm client Client: v2.13.1+g618447c
git git version 2.17.1
Operating System Ubuntu 18.04.2 LTS
Applications were imported this way:
jx import --branches="devel" --org ##### --disable-updatebot=true --git-api-token=##### --url git@github.com:#####.git
And environment was created this way:
jx create env --git-url=##### --name=integration --label=Integration --domain=##### --namespace=jx-integration --promotion=Auto --git-username=##### --git-private --branches="master|devel|test"
Going through the changelog, it seems that tillerless mode has been the default since version 2.0.246.
In Helm v2, Helm relies on its server-side component called Tiller. The Jenkins X tillerless mode means that instead of using Helm to install charts, the Helm client is used only for templating and generating the Kubernetes manifests; those manifests are then applied with kubectl, not helm/tiller.
The consequence is that Helm won't know about these installations/releases, because they were made by kubectl, which is why you won't get the list of releases using Helm. That's the expected behavior, as you can read in the Jenkins X docs.
What --no-tiller means is to switch helm to use template mode which
means we no longer internally use helm install mychart to install a
chart, we actually use helm template mychart instead which generates
the YAML using the same helm charts and the standard helm
configuration management via --set and values.yaml files.
Then we use kubectl apply to apply the YAML.
As mentioned by James Strachan in the comments, when using the tillerless mode, you can view your deployments using jx step helm list
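Conceptually, the tillerless flow boils down to something like this (a simplified sketch with a hypothetical chart and release name; the actual jx step uses its own chart layout and flags):
# render the chart to plain Kubernetes manifests using only the Helm 2 client
helm template mychart --name myrelease --values values.yaml > manifests.yaml
# apply them directly; Tiller never records a release, so helm list stays empty
kubectl apply -f manifests.yaml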

How to create a namespace from Helm templates if it doesn't exist?

I have a kind: Namespace template yaml, as per below:
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
  namespace: ""
How do I make helm install create the above namespace ({{ .Values.namespace }}) if and only if that namespace doesn't already exist in the target Kubernetes cluster?
Thanks.
This feature is implemented in helm >= 3.2 (Pull Request)
Use --create-namespace in addition to --namespace <namespace>
For Helm 2 it's best to avoid creating the namespace as part of your chart content if at all possible, and to let Helm manage it. helm install with the --namespace=<namespace_name> option should create the namespace for you automatically. You can reference that namespace in your chart with {{ .Release.Namespace }}. There's currently only one example of creating a namespace in the public helm/charts repo, and it uses a manual flag for checking whether to create it.
For Helm 3 the functionality has changed and there's a GitHub issue on this.
There are some differences in Helm commands due to different versions.
For Helm 2, just use --namespace; for Helm 3, you need to use both --namespace and --create-namespace.
Helm 2 Example:
helm install stable/nginx-ingress --name ingress-nginx --namespace ingress-nginx --wait
Helm 3 Example:
helm install ingress-nginx stable/nginx-ingress --namespace ingress-nginx --create-namespace --wait
For Terraform users, set the create_namespace attribute to true:
resource "helm_release" "kube_prometheus_stack" {
name = ...
repository = ...
chart = ...
namespace = ...
create_namespace = true
}