Helm Hook not triggered - kubernetes

Context: Kubernetes 1.0.3, Helm 2.8.2
Helm Hook: pre-install
weight: 0
delete-policy: before-hook-creation
Helm command: helm upgrade --install -n namespace
Problem description:
The hook block renders correctly in --dry-run mode, but after a real install (without --dry-run) no hook job is triggered.
I check for the job with kubectl get jobs -n namespace.

Hooks are a mechanism introduced in Helm to intervene at certain points in a release's life cycle.
Hooks can be defined in a few ways via special annotations in the metadata section, e.g. "pre-install", "post-install", "pre-upgrade", etc. Example of a hook:
apiVersion: ...
kind: ...
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
The full list of hooks can be found here. In addition, more than one hook can be used.
In this case the "pre-upgrade" hook resolved the issue; per the documentation, it "Executes on an upgrade request after templates are rendered, but before any resources are loaded into Kubernetes (e.g. before a Kubernetes apply operation)."

Related

Kubernetes - Reconfiguring a Service to point to a new Deployment (blue/green)

I'm following along with a video explaining blue/green Deployments in Kubernetes. They have a simple example with a Deployment named blue-nginx and another named green-nginx.
The blue Deployment is exposed via a Service named bgnginx. To transfer traffic from the blue deployment to the green deployment, the Service is deleted and the green deployment is exposed via a Service with the same name. This is done with the following one-liner:
kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx
Obviously, this works successfully. However, I'm wondering why they don't just use kubectl edit to change the labels in the Service instead of deleting and recreating it. If I edit bgnginx and set .metadata.labels.app & .spec.selector.app to green-nginx it achieves the same thing.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
Yes, you can run kubectl edit svc and change the labels and selector there.
It works; however, a declarative YAML file is the suggested option because kubectl edit is an error-prone approach; you might face indentation issues.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
It's more about following best practices: with a declarative YAML file you have the configuration handy, and under version control if you manage one.
The problem with kubectl edit is that it requires a human to operate a text editor. This is a little inefficient and things do occasionally go wrong.
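Not part of the original answers, but as an illustration of a non-interactive edit, kubectl patch can apply the same selector change from a script (the Service and label names are taken from the question):
# Point the Service's selector at the green Deployment without opening an editor.
kubectl patch svc bgnginx -p '{"spec":{"selector":{"app":"green-nginx"}}}'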
I suspect the reason your writeup wants you to kubectl delete the Service first is that the kubectl expose command will fail if it already exists. But as @HarshManvar suggests in their answer, a better approach is to have an actual YAML file checked into source control:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app.kubernetes.io/name: myapp
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: blue
  ports:  # a Service needs at least one port; port 80 is assumed here
    - port: 80
      targetPort: 80
You should be able to kubectl apply -f service.yaml to deploy it into the cluster, or a tool can do that automatically.
The problem here is that you still have to edit the YAML file (or, in principle, do it with sed), and swapping the deployment would result in an extra commit. You can instead use a tool like Helm that supports an extra templating layer:
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: {{ .Values.color }}
In Helm I might set this up with three separate Helm releases: the "blue" and "green" copies of your application, plus a separate top-level release that just contained the Service.
helm install myapp-blue ./myapp
# do some isolated validation
helm upgrade myapp-router ./router --set color=blue
# do some more validation
helm uninstall myapp-green
You can do similar things with other templating tools like ytt, or overlay layers like Kustomize. The Service's selector: does not have to match its own metadata, and you could create a Service that matches both copies of the application, perhaps for a canary pattern rather than a blue/green deployment.
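For example, a minimal sketch of a Service matching both copies (the name and port are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: myapp-all   # hypothetical name
spec:
  selector:
    # Selecting only on the shared name label matches Pods from both the
    # blue and green Deployments, since selector terms are ANDed.
    app.kubernetes.io/name: myapp
  ports:
    - port: 80
      targetPort: 80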

Deploying a Helm release forcefully when same-name deployments, svcs, etc. are running in the same namespace

How do you deploy a Helm release for the first time when a deployment, svc, etc. with the same name is already running?
Is there any way to import the running config, which is not being handled by Helm?
Or is deleting the same-name objects the only way to deploy the Helm release for the first time? (I don't want to change the release names, because that would break the communication between the microservices.)
Deleting the objects will cause downtime, and I want to avoid that.
Error getting while deploying with the same name:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Service "abc" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "abc"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
Is there any other approach?
Thanks
Addressing the error message and part of the question:
How do you deploy a Helm release for the first time when a deployment, svc, etc. with the same name is already running?
You can't manage resources with Helm that weren't created by Helm; it will give you the same message as you've encountered. You can annotate the existing resources that were not added by Helm in order to "import" them and let Helm act on them. Please try this on a test environment first, as it could redeploy some resources.
There is already similar answer on how to annotate resources:
Stackoverflow.com: Answers: Use Helm 3 for existing resources deployed with kubectl
See this Helm 3 feature: Adopt resources into release with correct instance and managed-by labels.
Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations and matches the label selector app.kubernetes.io/managed-by=Helm. This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.
In order to allow an existing resource to be adopted by Helm, add release metadata and the managed-by label:
KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default
kubectl annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE
kubectl annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE
kubectl label $KIND $NAME app.kubernetes.io/managed-by=Helm
Assume the following situation:
A Deployment created outside of Helm (example below).
A Helm chart with an equivalent templated Deployment in templates/ (example below).
Creating the Deployment below without Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Assuming that the above file was applied with kubectl apply, and that its templated equivalent resides in the templates/ directory of your chart, you will get the following error when you run $ helm install release_name .:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Deployment "nginx" in namespace "default" exists and cannot be imported into the current release: ...
By running the script mentioned in the answer linked above, you can annotate and label your resources so that Helm does not produce that error message.
After that you can run $ helm install release_name . and provision your resources with desired changes.
Additional resources:
Jacky-jiang.medium.com: Import existing resources in Helm3
A nice one-liner to annotate all resources in a Helm release so they can be adopted by the new release:
x=`mktemp` && helm -n $NAMESPACE get manifest $RELEASE >$x && kubectl annotate -f $x --overwrite "meta.helm.sh/release-name"=$NEW_RELEASE && rm -rf "$x"
Or, if you also moved the release to a new namespace:
x=`mktemp` && helm -n $NAMESPACE get manifest $RELEASE >$x && kubectl annotate -f $x --overwrite "meta.helm.sh/release-name"=$NEW_RELEASE "meta.helm.sh/release-namespace"=$NEW_NAMESPACE && rm -rf "$x"
A more common approach is to use the combination of the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
As can be seen in charts from different providers (for example the Bitnami charts, External-DNS, the NGINX ingress controller, and more).
(*) Read more in the K8s Recommended Labels and Helm standard labels sections.
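As a sketch, the labels block in such a chart's templates typically looks like this (the mychart helper names are illustrative, not from a specific chart):
metadata:
  labels:
    helm.sh/chart: {{ include "mychart.chart" . }}
    app.kubernetes.io/name: {{ include "mychart.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}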

GitOps (Flux) install of standard Jenkins Helm chart in Kubernetes via HelmRelease operator

I've just started working with Weaveworks' Flux GitOps system in Kubernetes. I have regular deployments (deployments, services, volumes, etc.) working fine. I'm trying for the first time to deploy a Helm chart.
I've followed the instructions in this tutorial: https://github.com/fluxcd/helm-operator-get-started and have its sample service working after making a few small changes. So I believe that I have all the right tooling in place, including the custom HelmRelease K8s operator.
I want to deploy Jenkins via Helm, which if I do manually is as simple as this Helm command:
helm install --set persistence.existingClaim=jenkins --set master.serviceType=LoadBalancer jenkins stable/jenkins
I want to convert this to a HelmRelease object in my Flux-managed GitHub repo. Here's what I've got, per what documentation I can find:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  updating-applications/
  fluxcd.io/ignore: "false"
spec:
  releaseName: jenkins
  chart:
    git: https://github.com/helm/charts/tree/master
    path: stable/jenkins
    ref: master
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
I have this in the file jenkins/jenkins.yaml, relative to the root of the location in my Git repo that Flux is monitoring. Adding this file does nothing: I get no new K8s objects, no HelmRelease object, and no new Helm release when I run "helm list -n jenkins".
I see some mention of having to have 'image' tags in my 'values' section, but since I don't need to specify any images in my manual call to Helm, I'm not sure what I would add in terms of 'image' tags. I've also seen examples of HelmRelease definitions that don't have 'image' tags, so it seems that they aren't absolutely necessary.
I've played around with adding a few annotations to my 'metadata' section:
annotations:
  # fluxcd.io/automated: "true"
  # per: https://blog.baeke.info/2019/10/10/gitops-with-weaveworks-flux-installing-and-updating-applications/
  fluxcd.io/ignore: "false"
But none of that has helped to get things rolling. Can anyone tell me what I have to do to get the equivalent of the simple Helm command at the top of this post working with Flux/GitOps?
Have you tried checking the logs on the fluxd and flux-helm-operator pods? I would start there to see what error message you're getting. One thing I'm seeing is that you're using HTTPS for Git. You may want to double-check, but I don't recall ever seeing documentation that configures chart pulls via Git to use anything other than SSH. Moreover, I'd recommend just pulling that chart from the stable Helm repository anyway:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations: # not sure what updating-applications/ was?
    fluxcd.io/ignore: "false" # pretty sure this is false by default and can be omitted
spec:
  releaseName: jenkins
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: jenkins
    version: 1.9.16
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
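To check the logs mentioned at the start of this answer, something like the following should work (the flux namespace and workload names are assumptions that depend on how Flux was installed):
kubectl -n flux logs deployment/flux
kubectl -n flux logs deployment/flux-helm-operator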

no matches for kind "Deployment" in version "extensions/v1beta1"

While deploying mojaloop, Kubernetes responds with the following errors:
Error: validation failed: [unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2", unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1"]
My Kubernetes version is 1.16.
How can I fix the problem with the API version?
From investigating, I have found that Kubernetes 1.16 no longer supports apps/v1beta2 or apps/v1beta1.
How can I make the deployment use a non-deprecated, supported API version?
I am new to Kubernetes and would be happy for any support.
In Kubernetes 1.16 some APIs have been changed.
You can check which API group serves a given Kubernetes object using:
$ kubectl api-resources | grep deployment
deployments   deploy   apps   true   Deployment
This means that only the apps apiVersion group is valid for Deployments (extensions no longer serves Deployment). The situation is the same for StatefulSet.
You need to change Deployment and StatefulSet apiVersion to apiVersion: apps/v1.
If this does not help, please add your YAML to the question.
EDIT
As the issue is caused by Helm templates that use old apiVersions for Deployments, which are not supported in version 1.16, there are 2 possible solutions:
1. git clone the whole repo and replace the apiVersion with apps/v1 in all templates/deployment.yaml files using a script (a sketch follows this list).
2. Use an older version of Kubernetes (1.15), whose validator accepts extensions as an apiVersion for Deployment and StatefulSet.
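A sketch of the script for option 1, run from the cloned repo directory (the file glob and patterns are assumptions based on the apiVersions in the error message):
# Rewrite deprecated Deployment/StatefulSet apiVersions to apps/v1 in all templates.
grep -rl 'extensions/v1beta1\|apps/v1beta[12]' . --include='*.yaml' \
  | xargs sed -i 's|extensions/v1beta1|apps/v1|g; s|apps/v1beta[12]|apps/v1|g'
Note that apps/v1 also requires spec.selector.matchLabels, as the metabase example further below shows.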
To convert an older Deployment to apps/v1, you can run:
kubectl convert -f ./my-deployment.yaml --output-version apps/v1
You can change manually as an alternative. Fetch the helm chart:
helm fetch --untar stable/metabase
Access the chart folder:
cd ./metabase
Change API version:
sed -i 's|extensions/v1beta1|apps/v1|g' ./templates/deployment.yaml
Add spec.selector.matchLabels:
spec:
  [...]
  selector:
    matchLabels:
      app: {{ template "metabase.name" . }}
  [...]
Finally install your altered chart:
helm install ./ \
  -n metabase \
  --namespace metabase \
  --set ingress.enabled=true \
  --set ingress.hosts={metabase.$(minikube ip).nip.io}
Enjoy!
I prefer kubectl explain.
# kubectl explain deploy
KIND:     Deployment
VERSION:  apps/v1

DESCRIPTION:
     Deployment enables declarative updates for Pods and ReplicaSets.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object metadata.

   spec <Object>
     Specification of the desired behavior of the Deployment.

   status       <Object>
     Most recently observed status of the Deployment.
With kubectl explain you can also see specific parameters of an object:
# kubectl explain Service.spec.externalTrafficPolicy
KIND:     Service
VERSION:  v1

FIELD:    externalTrafficPolicy <string>

DESCRIPTION:
     externalTrafficPolicy denotes if this Service desires to route external
     traffic to node-local or cluster-wide endpoints. "Local" preserves the
     client source IP and avoids a second hop for LoadBalancer and Nodeport type
     services, but risks potentially imbalanced traffic spreading. "Cluster"
     obscures the client source IP and may cause a second hop to another node,
     but should have good overall load-spreading.
To put it simply, you don't force the current installation to use an outdated version of the API; you fix the version in your config files.
If you want to check which versions your current cluster supports, run:
root@ubn64:~# kubectl api-versions | grep -i apps
apps/v1
I was getting the error below:
error: unable to recognize "deployment.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
The solution that worked for me:
I changed apiVersion: extensions/v1beta1 to apiVersion: apps/v1 in deployment.yaml.
Reason: we had upgraded the K8s cluster, hence this error occurred.
This was annoying me because I am testing lots of Helm packages, so I wrote a quick script, which could perhaps be adapted to your workflow; see below.
New workflow
First fetch the chart as a tgz to your working directory
helm fetch repo/chart
then in your working directory run the bash script below, which I named helmk:
helmk myreleasename mynamespace chart.tgz [any parameters for kubectl create]
Contents of helmk (you need to edit your kubeconfig cluster name for it to work):
#!/bin/bash
echo "usage: $0 releasename namespace chart.tgz [createparameter1] [createparameter2] ... [createparameter n]"
echo "This will switch to your namespace and then shift back to default, so be careful!!"
kubectl create namespace $2  # emits a harmless error if the namespace already exists; ignore it
kubectl config set-context MYCLUSTERNAME --namespace $2
helm template -n $1 --namespace $2 $3 | kubectl convert -f /dev/stdin | kubectl create --save-config=true "${@:4}" -f /dev/stdin
# note: the --namespace parameter in helm template above seems to be ignored, so we manually switch context
kubectl config set-context MYCLUSTERNAME --namespace default
It's a slightly dangerous hack, since I manually switch to your new desired namespace context and then back again, so it should really only be used by single-user devs, or comment that part out.
You will get a warning about using the kubectl convert facility.
If you need to edit the YAML to customise things, just replace one of the /dev/stdin arguments with an intermediate file. But it's probably better to get it up using create with --save-config, as I have, and then simply apply your changes, which means they will be recorded in Kubernetes too.
Good luck
I was facing the same issue on a cluster that had been upgraded to a version (v1.17) that no longer serves certain API versions, such as apps/v1beta2.
$ helm get manifest some-deployment
...
# Source: some-deployment/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: some-deployment
  labels:
    ...
Looking at the Helm docs, it seems that the manifest is stored in the cluster for Helm to reference, and it may include invalid API versions, leading to errors.
The two proposed methods are to either manually edit the manifest (a rather tedious multi-stage process) or use a Helm plugin called mapkubeapis that does it automatically:
$ helm plugin install https://github.com/helm/helm-mapkubeapis
It can be run with the --dry-run flag to simulate the effects:
$ helm mapkubeapis --dry-run some-deployment
2021/02/15 09:33:29 NOTE: This is in dry-run mode, the following actions will not be executed.
2021/02/15 09:33:29 Run without --dry-run to take the actions described below:
2021/02/15 09:33:29
2021/02/15 09:33:29 Release 'some-deployment' will be checked for deprecated or removed Kubernetes APIs and will be updated if necessary to supported API versions.
2021/02/15 09:33:29 Get release 'some-deployment' latest version.
2021/02/15 09:33:30 Check release 'some-deployment' for deprecated or removed APIs...
2021/02/15 09:33:30 Found deprecated or removed Kubernetes API:
"apiVersion: apps/v1beta2
kind: Deployment"
Supported API equivalent:
"apiVersion: apps/v1
kind: Deployment"
2021/02/15 09:33:30 Finished checking release 'some-deployment' for deprecated or removed APIs.
2021/02/15 09:33:30 Deprecated or removed APIs exist, updating release: some-deployment.
2021/02/15 09:33:30 Map of release 'some-deployment' deprecated or removed APIs to supported versions, completed successfully.
and then run without the flag to apply the changes.
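That is, for the release above:
$ helm mapkubeapis some-deployment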

Namespace deployment issue in Kubernetes Helm Chart

I am now testing deployment into different namespaces using Kubernetes. Here I am using a Kubernetes Helm chart for that. In my chart, I have deployment.yaml and service.yaml.
When I define the namespace parameter with the helm upgrade --install command, it is not working. When I read about this, I found a statement that in Helm 2 the namespace is not overwritten by the --namespace parameter.
I tried the following command:
helm upgrade --install kubedeploy --namespace=test pipeline/spacestudychart
NB: here my service is deployed to the default namespace.
[screenshot of kubectl describe pod output omitted]
Here my "helm version" command output is like follows:
docker#mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
For this reason, I tried adding the namespace in deployment.yaml, under metadata.namespace, like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: test
I created two namespaces, test and prod. But this is not working either: when I add the namespace like this, my service does not come up and is not accessible, yet there is no error in the Jenkins console. When I defined the namespace in the helm upgrade --install command it deployed to the default namespace, but this way it does not deploy at all.
After this, I removed the namespace from deployment.yaml and added metadata.namespace in the same way to the other templates. There also I am not able to access the deployed service, but the Jenkins console output still shows success.
Why is the namespace not working with my Helm deployment? What changes do I need to make to deploy to test/prod instead of the default namespace?
Remove namespace: test from all of your chart files, and helm install --namespace=namespace2 ... should work.
On Helm 3.2+, I would suggest (based on this thread) moving the namespace creation to the CLI:
1) Add the --create-namespace flag after -n:
helm upgrade --install <name> <repo> -n <namespace> --create-namespace
2) Inside the different resources, pass the release namespace:
namespace: {{ .Release.Namespace }}
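For example, a minimal sketch of this chart's Deployment metadata (the helper name comes from the question; the rest of the manifest is omitted):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: {{ .Release.Namespace }}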