Apologies if this is a basic question. I am using the stable/rabbitmq community Helm chart to install. But whenever I set a release name with the --name option, e.g. helm install --name rabbitmq stable/rabbitmq, I get this error: 0/7 nodes are available: 1 PodToleratesNodeTaints, 7 MatchNodeSelector and the install hangs indefinitely. If the default/auto-generated name is used, the release is created fine. I don't want to use the default release name because it produces unpredictable service names as well. Can someone help me understand this, or is there a way to solve it?
Thanks in advance.
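In case it helps anyone narrowing this down, a quick diagnostic sketch (the app=rabbitmq label and the pod/node names are assumptions about the chart and cluster):
# Find the Pending pod and see which scheduling constraint is rejecting it:
kubectl get pods -l app=rabbitmq
kubectl describe pod <pending-pod-name>
# Compare the pod's nodeSelector and tolerations against the node labels and taints:
kubectl get nodes --show-labels
kubectl describe node <node-name> | grep -i -A3 taint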
I know the title sounds like "did you even try to google", but Helm gives me the typical: another operation (install/upgrade/rollback) is in progress
What I can't figure out is that there are no actual releases anywhere that are in progress.
I've run helm list --all --all-namespaces and the list is just blank. Same with running helm history against any namespace I can think of. Nothing, all just blank. I've even deleted the namespace and everything in it that the app was initially installed in, and it still is broken.
I've also found answers to delete secrets, which I have, and it doesn't help.
Is there some way to hard reset helm's state? Because all the answers I find on this topic involve rolling back, uninstalling, or deleting stuck releases, and none exist on this entire cluster.
Helm is v3.8.1 if that helps. Thanks for any help on this, it's driving me crazy.
I ended up figuring this out. The pipeline that runs Helm executes from a GitLab runner living on one cluster, but uses a kubernetes context to target my desired cluster. At some point, the kubernetes context wasn't loaded correctly, and a bad install went to the host cluster the runner lived on.
While I was still targeting a different cluster, the helm command saw the bad install on the cluster local to the runner, so helm list --all didn't show anything on the target cluster.
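For anyone else who ends up here, a rough sketch of how to confirm which cluster Helm is really talking to and where the stuck release record lives (context names are placeholders):
# Which cluster/namespace is the helm command actually targeting?
kubectl config current-context
kubectl config get-contexts
# Release records are stored as Secrets on whichever cluster took the bad install:
kubectl get secret --all-namespaces -l owner=helm
# Deleting the stale sh.helm.release.v1.<release>.v<N> Secret there (or fixing the context)
# clears the "another operation is in progress" error.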
I built a simple NodeJS API, pushed the Docker image to a repo and deployed it to my k8s cluster with helm install (works perfectly fine).
The pullPolicy is Always.
Now I want to update the source code and deploy the updated version of my app. I bumped the version in all files, built and pushed the new Docker image, and tried helm upgrade, but it seems like nothing happened.
With helm list I can see that a new revision was deployed, but the changes to the source code were not.
watch kubectl get pods also shows that no new pods were created, the way you would expect with kubectl apply ...
What did I do wrong?
Helm will roll out changes to Kubernetes objects only if there are changes to roll out. If you use :latest, there is no change to apply to the Deployment manifest, so no pods will be rolling-updated. To keep using latest, you need to add something (e.g. a label with the SHA / version) that changes and causes the Deployment to get updated by Helm. Also keep in mind that you will usually need imagePullPolicy: Always as well.
Possible workaround:
spec:
  template:
    metadata:
      labels:
        date: "{{ now | unixEpoch }}"
Add it to your Deployment or StatefulSet YAML.
It's worth noting that there's nothing special about the 'latest' tag. In other words, it doesn't mean what we would normally think, i.e. "the most recent version".
It's just a string of characters from the container runtime's standpoint. It could be anything, like "blahblah".
The runtime (Docker or Kubernetes) will just check whether it already has an image with that tag and only pull a new image if that tag doesn't exist locally (unless the pull policy forces a pull).
Given that "latest" doesn't actually mean anything, the best practice, if you want to be updating images constantly, is to use the actual version of the code as the image tag. Then, when deploying, have your infrastructure deploy the newest version using that specific tag.
The way I solved this was in the deployment script in .gitlab.yaml; you can do something similar in any of your deployment scripts.
# Count how many lines of the currently deployed values mention SHA; if exactly one, uninstall before re-installing
export SAME_SHA=$(helm get values service-name | grep SHA | wc -l)
if [ "$SAME_SHA" -eq 1 ]; then helm uninstall service-name; fi
helm upgrade --install service-name -f service-values.yml .
This may not be the best approach for production, as you may end up uninstalling a live service, but for me production SHAs are never the same, so this works.
I'm using Helm 3.4.2 to upgrade my charts on my AKS cluster, and I noticed that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using Helm.
I was reading the docs and found that since version 3.x Helm uses Secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure it's best to keep them all in my cluster.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that lives there,
or
can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3, keeping v4 and v5 for some reason. If it's OK to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In Helm 2.7.0, a new storage backend that uses Secrets for storing release information was implemented, and it is now the default starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the same namespace as the release itself. With this greater alignment to native cluster namespaces, the helm list command no longer lists all releases by default. Instead, it will list only the releases in the namespace of your current kubernetes context (i.e. the namespace shown when you run kubectl config view --minify). It also means you must supply the --all-namespaces flag to helm list to get behaviour similar to Helm 2.
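To illustrate the scoping (the release and namespace names here are made up):
helm list                      # only releases in the current context's namespace
helm list -n apps              # releases in the "apps" namespace
helm list --all-namespaces     # Helm 2-like behaviour: every release on the cluster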
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that lives there, or can I remove the older ones?
I don't think it's good practice to remove anything manually. If it's not strictly necessary, better not to touch them. However, you can delete unused ones if you're sure you won't need the old revisions in the future.
# To check all the secrets created by Helm:
kubectl get secret -l "owner=helm" --all-namespaces
# To delete a revision, you can simply remove the corresponding secret:
kubectl delete secret -n <namespace> <secret-name>
Btw (just FYI), taking into account that Helm 3 is scoped to namespaces, you can simply delete a whole deployment by deleting its corresponding namespace.
And one last remark that may be useful: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (the default is 10).
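For example (the release and chart names are placeholders), this keeps only the five most recent revision Secrets per release:
helm upgrade --install myapp ./chart --history-max 5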
I'm trying to edit services created via a Helm chart, and when changing the type from NodePort to ClusterIP I get this error:
The Service "<name>" is invalid: spec.ports[0].nodePort: Forbidden: may not be used when 'type' is 'ClusterIP'
I've seen solutions from other people where they just run kubectl apply -f service.yaml --force, but I'm not using kubectl, I'm using Helm to do it. Any thoughts? If it were just one service I would update/re-deploy it manually, but there are xx of them.
Found the answer to my exact question here: https://www.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/helms/all_helms/wip/reference/hlm_upgrading_service_type_change.html
In short, they suggest one of the following:
There are three methods you can use to avoid the service conversion issue above. You will only need to perform one of these methods:
Method 1: Installing the new version of the helm chart with a different release name and update all clients to point to the new probe service endpoint if required. Then delete the old release. This is the recommended method but requires a re-configuration on the client side.
Method 2: Manually changing the service type using kubectl edit svc. This method requires more manual steps but preserves the current service name and previous revisions of the helm chart. After performing this workaround, users should be able to perform a helm upgrade.
Method 3: Deleting and purging the existing helm release, and then install the new version of helm chart with the same release name.
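A rough sketch of Method 2, assuming a service named my-service in namespace apps:
kubectl edit svc my-service -n apps
# In the editor, change spec.type from NodePort to ClusterIP and remove the nodePort field
# from each entry under spec.ports, then save. A subsequent helm upgrade should then succeed.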