I'm trying to edit Services created via a Helm chart, and when changing the type from NodePort to ClusterIP I get this error:
The Service "<name>" is invalid: spec.ports[0].nodePort: Forbidden: may not be used when 'type' is 'ClusterIP'
I've seen solutions from other people where they just run kubectl apply -f service.yaml --force, but I'm not using kubectl to do this, I'm using Helm. Any thoughts? If it were just one service I would update/re-deploy it manually, but there are xx of them.
Found the answer to my exact question here: https://www.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/helms/all_helms/wip/reference/hlm_upgrading_service_type_change.html
In short, they suggest one of the following:
There are three methods you can use to avoid the service conversion issue above. You will only need to perform one of these methods:
Method 1: Install the new version of the Helm chart with a different release name and update all clients to point to the new probe service endpoint if required. Then delete the old release. This is the recommended method, but it requires re-configuration on the client side.
Method 2: Manually change the service type using kubectl edit svc. This method requires more manual steps, but preserves the current service name and the previous revisions of the Helm chart. After performing this workaround, users should be able to perform a helm upgrade.
Method 3: Delete and purge the existing Helm release, and then install the new version of the Helm chart with the same release name.
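For reference, a rough sketch of what Methods 2 and 3 look like on the command line (the release, chart and service names below are placeholders; helm delete --purge is Helm 2 syntax, while Helm 3 uses helm uninstall):
# Method 2: edit the Service in place, change type to ClusterIP and remove the nodePort field
kubectl edit svc <service-name>
# Method 3 with Helm 2: delete and purge the release, then reinstall under the same name
helm delete --purge <release-name>
helm install --name <release-name> <repo/chart>
# Method 3 with Helm 3: uninstall, then reinstall under the same name
helm uninstall <release-name>
helm install <release-name> <repo/chart>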
Previously we only used helm template to generate the manifests and applied them to the cluster; recently we started planning to use helm install to manage our deployments, but we are running into the following problem:
Our deployment is a simple backend API which contains an Ingress, a Service, and a Deployment. When there is a new commit, the pipeline is triggered to deploy.
We plan to use the short commit SHA as the image tag and the Helm release name. Here is the command:
helm upgrade --install releaseName repo/chartName -f value.yaml --set image.tag=SHA
This runs perfectly fine the first time, but when I create another release it fails with the following error message:
rendered manifests contain a resource that already exists. Unable to continue with install: Service "app-svc" in namespace "ns" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "rel-124": current value is "rel-123"
The error message is pretty clear about what the issue is, but I am just wondering what the "correct" way of using Helm is in this case?
It is not practical for me to uninstall everything for a new release, and I also don't want to keep using the same release.
You are already doing it the "right" way, just don't change the release name. That is the key Helm uses to identify its resources. It seems that you previously used a different name for the release (rel-123) than you are using now (rel-124).
To fix your immediate problem, you should be able to proceed by updating the value of the annotation meta.helm.sh/release-name on the problematic resource. Something like this should do it:
kubectl annotate --overwrite service app-svc meta.helm.sh/release-name=rel-124
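If Helm then complains about the release-namespace annotation or the managed-by label (which it also validates when taking ownership of existing resources), they can be set the same way. A sketch using the names from the error message above:
kubectl annotate --overwrite service app-svc -n ns meta.helm.sh/release-namespace=ns
kubectl label --overwrite service app-svc -n ns app.kubernetes.io/managed-by=Helm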
I'm using Helm 3.4.2 to upgrade my charts on my AKS cluster, and I saw that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using Helm.
I was reading the docs and found that in version 3.x Helm uses Secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure whether keeping them all in my cluster is the best approach.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret and it just lives there,
or
Can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3, keeping v4 and v5 for some reason. If it's OK to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In Helm 2.7.0, a new storage backend that uses Secrets for storing release information was implemented, and it is now the default starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the same namespace as the release itself. With this greater alignment to native cluster namespaces, the helm list command no longer lists all releases by default. Instead, it will list only the releases in the namespace of your current kubernetes context (i.e. the namespace shown when you run kubectl config view --minify). It also means you must supply the --all-namespaces flag to helm list to get behaviour similar to Helm 2.
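For example (assuming Helm 3):
# Only releases in the namespace of the current context
helm list
# All releases in the cluster, similar to Helm 2 behaviour
helm list --all-namespaces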
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret and it just lives there, or can I remove the older ones?
I don't think it's good practice to remove anything manually. If it is not strictly necessary, better not to touch them. However, you can delete unused ones if you are sure you will not need the old revisions in the future.
#To check all secrets that were created by helm:
kubectl get secret -l "owner=helm" --all-namespaces
#To delete a revision, simply remove the corresponding secret:
kubectl delete secret -n <namespace> <secret-name>
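If you only want to look at the revisions of a single release before deleting anything, the release secrets also carry name and version labels (assuming the default Helm 3 secret storage), so something like this should list them:
#To list the revision secrets of a single release:
kubectl get secret -n <namespace> -l "owner=helm,name=<release-name>"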
Btw (just FYI), taking into account the fact that Helm 3 is scoped to namespaces, you can simply remove a whole deployment by deleting its corresponding namespace.
And one last remark that may be useful for you: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (the default is 10).
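For example, to keep at most five revisions per release (release and chart names are placeholders):
helm upgrade --install <release-name> <repo/chart> --history-max 5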
I have Helm deployment scripts for a vendor application which we are operating. For the logging solution, I need to add a sidecar container for Fluent Bit to push the logs to an aggregated log server (Splunk in this case).
Now, to define this sidecar container, I want to avoid changing the vendor-defined deployment scripts. Instead, I want some alternative way to attach the sidecar container to the running pod(s).
So far I have understood that a sidecar container can be defined inside the same deployment script (deployment configuration).
Answering the question in the comments:
thanks #david. This has to be done before the deployment. I was wondering if I could attach a sidecar container to an already deployed (running) pod.
You can't attach an additional container to a running Pod. You can update (patch) the resource definition; this will force the resource to be recreated with the new specification.
There is a github issue about this feature which was closed with the following comment:
After discussing the goals of SIG Node, the clear consensus is that the containers list in the pod spec should remain immutable. #27140 will be better addressed by kubernetes/community#649, which allows running an ephemeral debugging container in an existing pod. This will not be implemented.
-- Github.com: Kubernetes: Issues: Allow containers to be added to a running pod
Answering the part of the post:
Now, to define this sidecar container, I want to avoid changing the vendor-defined deployment scripts. Instead, I want some alternative way to attach the sidecar container to the running pod(s).
Below I've included two methods of adding a sidecar to a Deployment. Both of them will reload the Pods to match the new specification:
Use $ kubectl patch
Edit the Helm Chart and use $ helm upgrade
In both cases, I encourage you to check how Kubernetes handles updates of its resources. You can read more by following the links below:
Kubernetes.io: Docs: Tutorials: Kubernetes Basics: Update: Update
Medium.com: Platformer blog: Enable rolling updates in Kubernetes with zero downtime
Use $ kubectl patch
The way to completely avoid editing the Helm charts would be to use:
$ kubectl patch
This method will "patch" the existing Deployment/StatefulSet/DaemonSet and add the sidecar. The downside of this method is that it's not automated like Helm, and you would need to create a "patch" for every resource (each Deployment/StatefulSet/DaemonSet, etc.). In case of any update from other sources like Helm, this "patch" would be overridden.
Documentation about updating API objects in place:
Kubernetes.io: Docs: Tasks: Manage Kubernetes objects: Update api object kubectl patch
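As a rough sketch (the Deployment name, sidecar name and image below are placeholders, not taken from any vendor chart), a strategic merge patch that appends a sidecar container could look like this:
#Containers are merged by name, so this adds fluent-bit next to the existing containers
kubectl patch deployment <deployment-name> -n <namespace> --patch '
spec:
  template:
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.9
'
Because strategic merge patching uses the container name as the merge key, the existing containers are left untouched; as noted above, a later helm upgrade would still override this patch.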
Edit the Helm Chart and use $ helm upgrade
This method requires editing the Helm charts. Changes made this way, like adding a sidecar, will persist through updates. After making the changes you will need to run $ helm upgrade RELEASE_NAME CHART.
You can read more about it here:
Helm.sh: Docs: Helm: Helm upgrade
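A minimal sketch of what the added block could look like inside the chart's deployment template (assuming a typical templates/deployment.yaml; the existing container shown is schematic and the sidecar name/image are placeholders), after which you run the helm upgrade command above:
#In templates/deployment.yaml, under spec.template.spec:
      containers:
        - name: {{ .Chart.Name }}   # the chart's existing application container
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        - name: fluent-bit           # added sidecar
          image: fluent/fluent-bit:1.9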
As mentioned by dawid-kruk, the containers list of a Kubernetes Pod spec is immutable. Therefore, modifying the pod description will cause the containers to be restarted.
You can modify the pod using the kubectl patch command; don't forget to reapply the patch as necessary.
Alternatively, the two following options will inject the sidecar without having to modify/fork the upstream chart or mangle the deployed resources.
#1 mutating admission controller
A mutating admission controller (webhook) can modify resources; see https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
You can use a generic framework like OPA.
Or a specific webhook like fluentd-sidecar-injector (not tested).
#2 support arbitrary sidecar in helm
You could submit a feature request to the chart maintainer to support arbitrary sidecar injection, as is done in Prometheus; see https://stackoverflow.com/a/62910122/1260896
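Charts that already support this typically expose a value such as extraContainers or sidecarContainers (the exact key depends on the chart, so treat the name below as hypothetical). The corresponding values.yaml fragment would look roughly like:
#values.yaml (hypothetical key; check the chart's documented values)
extraContainers:
  - name: fluent-bit
    image: fluent/fluent-bit:1.9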
Is there some tool available that could tell me whether a K8s YAML configuration (to-be-supplied to kubectl apply) is valid for the target Kubernetes version without requiring a connection to a Kubernetes cluster?
One concrete use-case here would be to detect incompatibilities before the actual deployment to a cluster, e.g. because some already-deprecated API version has finally been dropped in a newer Kubernetes version, as happened for Helm with the switch to Kubernetes 1.16 (see Helm init fails on Kubernetes 1.16.0):
Dropped:
apiVersion: extensions/v1beta1
New:
apiVersion: apps/v1
I want to check for this kind of incompatibility within a CI system, so that I can reject the configuration before even attempting to deploy it.
Just run the command below to validate the syntax:
kubectl create -f <yaml-file> --dry-run
In fact, the dry-run option validates the YAML syntax and the object schema. You can grab the output into a variable, and if there is no error, rerun the command without dry-run.
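A sketch of how that could look in a CI step (the file name is a placeholder; newer kubectl versions spell the flag --dry-run=client):
#Capture the validation output; a non-zero exit code means the manifest is invalid
output=$(kubectl create -f manifest.yaml --dry-run=client 2>&1) || { echo "$output"; exit 1; }
kubectl create -f manifest.yaml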
You could use kubeval
https://kubeval.instrumenta.dev/
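For example, kubeval can check manifests against a specific Kubernetes schema version without talking to a cluster, and it also reads from stdin, so rendered Helm output can be piped into it (the paths and version below are placeholders):
kubeval --kubernetes-version 1.16.0 deployment.yaml
helm template ./chart | kubeval --kubernetes-version 1.16.0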
I don't think kubectl supports client-side-only validation yet (02/2022).
If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous Helm release? (e.g. by specifying some tags in the YAML file)
As far as I know, you cannot do it.
Yes, you can always create resources via templates before installing the Helm chart.
However, I have never seen a solution to your question.