Upgrade Failed in Helm upgrade stage - Kubernetes

I get the below error in my Helm upgrade stage. I made the following change: apiVersion: networking.k8s.io/v1beta1 to apiVersion: networking.k8s.io/v1. Could someone kindly let me know why I encounter this issue and how to fix it? Any help is much appreciated.
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for
this kubernetes version and it is therefore unable to build the kubernetes objects for
performing the diff. error from kubernetes: unable to recognize "": no matches for
kind "Ingress" in version "networking.k8s.io/v1beta1"

The reason you encounter this issue is that Helm attempts to create a diff patch between the currently deployed release (which contains the Kubernetes APIs that were removed in your current Kubernetes version) and the chart you are passing with the updated/supported API versions. When Kubernetes removes an API version, the Kubernetes Go client library can no longer parse the deprecated objects, so Helm fails when calling the library.
Helm has the official documentation on how to recover from that scenario:
https://helm.sh/docs/topics/kubernetes_apis/#updating-api-versions-of-a-release-manifest

Helm doesn't like that the old version of the release manifest contains removed apiVersions, and it fails with the above error. To fix it, follow the steps in the official documentation from Helm.
Because we didn't upgrade the apiVersion before it was removed, we had to follow the manual approach. We have quite a few services that need updating, in two different Kubernetes clusters (production and test), so there is a script that updates the apiVersion for the Ingress objects. You can find the script here.
The script assumes that you want to change networking.k8s.io/v1beta1 to networking.k8s.io/v1. If you have a problem with another apiVersion, change those values in line 30 of the script. Update your Helm chart template if further changes are needed, then deploy/apply the new Helm chart.
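For reference, the core of such a script can be sketched in Python. Helm 3 stores each release revision in a Secret whose release field holds a base64-encoded, gzip-compressed payload, so the patch is a string replacement inside that blob. This is a minimal sketch under those assumptions; the function name is illustrative and not taken from the linked script:

```python
import base64
import gzip


def patch_release_data(release_data: str,
                       old_api: str = "networking.k8s.io/v1beta1",
                       new_api: str = "networking.k8s.io/v1") -> str:
    """Rewrite an apiVersion inside a Helm 3 release payload.

    `release_data` is the value of the `release` key in the Helm
    Secret, after kubectl's own base64 layer has been stripped.
    """
    # Decode base64, then decompress the gzip layer to get the raw payload.
    decoded = gzip.decompress(base64.b64decode(release_data))
    # Plain byte-level replacement of the removed apiVersion.
    patched = decoded.replace(old_api.encode(), new_api.encode())
    # Re-compress and re-encode so it can be written back into the Secret.
    return base64.b64encode(gzip.compress(patched)).decode()
```

After writing the patched value back with kubectl, the next helm upgrade can build its diff against API versions the cluster still recognizes.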

Related

Nginx Ingress Controller for long-term support

I see many different Nginx implementations.
I see some posts saying the stable/nginx-ingress chart is deprecated, move to the ingress-nginx/nginx-ingress chart.
The project https://github.com/kubernetes/ingress-nginx/releases has two Nginx images, NGINX: 0.34.1 and ingress-nginx-2.16.0. What is the difference between these two images?
Which Nginx Helm chart should I use for long-term support?
Thanks
SR
The ingress-nginx-2.x.x Helm chart uses the nginx-x.x.x container. You don't normally need to reference the container image directly when using the Helm chart, as that is set in the default values.
Helm itself recently moved a major version, from 2 to 3, which changed how Helm repos are structured; that is why you see the "deprecated" message in the old Helm 2 stable repo.
I don't believe the ingress-nginx project has an LTS release strategy. Just use the latest 2.x release, or n-1 if you want to protect yourself from the unexpected changes that get thrown in occasionally.
NGINX (the company) provides its own alternative kubernetes-ingress project if you are looking for commercial support.

What is the difference between 'istioctl manifest apply' and 'istioctl install'?

I have noticed that setting values through istioctl manifest apply will affect other Istio resources. For example, when I set --set values.tracing.enabled=true, Kiali, which was previously installed in the cluster, vanished.
And what is the right way to set values (options) like values.pilot.traceSampling?
Thanks
istioctl install was introduced in Istio 1.6; the --set options work the same as in istioctl manifest apply, which it replaces. I suspect it was made for better clarity and accessibility, as istioctl manifest has lots of other uses, like istioctl manifest generate, which creates a manifest YAML and saves it to a file.
According to istio documentation:
While istioctl install will automatically detect environment specific settings from your Kubernetes context, manifest generate cannot as it runs offline, which may lead to unexpected results. In particular, you must ensure that you follow these steps if your Kubernetes environment does not support third party service account tokens.
As for Kiali, you need to install it separately, as in this guide.
To set values like values.pilot.traceSampling, I suggest using the Istio Operator.
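For example, assuming you deploy the control plane through an IstioOperator resource (applied with istioctl install -f or managed by the operator), a minimal sketch setting that value could look like this; the metadata names here are illustrative:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  values:
    pilot:
      traceSampling: 100.0
```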
Hope it helps.

How to access helm programmatically

I'd like to access cluster deployed Helm charts programmatically to make web interface which will allow manual chart manipulation.
I found pyhelm, but it supports only Helm 2. I looked on npm, but found nothing there. I wrote a bash script, but its output is really just a string, so it's not very useful.
I'd like to access cluster deployed Helm charts programmatically to make web interface which will allow manual chart manipulation.
Helm 3 differs from previous versions in that it is a client-only tool, similar to e.g. Kustomize. This means that Helm charts only exist on the client (and in chart repositories) and are transformed into Kubernetes manifests during deployment, so only Kubernetes objects exist in the cluster.
The Kubernetes API is a REST API, so you can access and get Kubernetes objects using an HTTP client. Kubernetes object manifests are available in JSON and YAML formats.
If you are OK to use Go then you can use the Helm 3 Go API.
If you want to use Python, I guess you'll have to wait for Helm v3 support in pyhelm; there is already an issue addressing this.
I reached this question as we also need an npm package to deploy Helm 3 charts programmatically (sort of a white-label app with a GUI to manage the instances).
The only thing I could find was an old, discontinued package from Microsoft for Helm v2: https://github.com/microsoft/helm-web-api/tree/master/on-demand-micro-services-deployment-k8s
I don't think using the Kubernetes API directly would work, as some charts can get fairly complex in terms of Kubernetes resources, so I took some inspiration, and I think I will develop my own package as a wrapper around the helm CLI commands, using the -o json parameter for easier handling of the CLI output.

How to expose Helm to a Kubernetes deployment from a Golang application

I am writing a Golang application that essentially automates helm install, so I would like to know how to expose Helm to your Kubernetes deployment, or any API that creates a Helm object which can communicate with Tiller directly. Please describe the answer with a piece of code. Thanks.
I have been trying the package https://godoc.org/k8s.io/helm/pkg/helm but don't really know what parameters we need to pass when creating the Helm client.
Not to discourage you, but I thought I should point out that Helm is nearing its v3 release, which will entirely remove Tiller, so the client will likely change as well.
Here are some relevant links:
Helm v3.0.0-beta.3 release notes
Helm v3 Beta 1 Released blog post
Hope this helps.

How can I remove a deprecated version of a specific API resource from a Kubernetes cluster?

When the storage version of a Kubernetes API resource changes, is it still necessary to manually read and write back resources as described here, or does the apiserver now deal with this automatically?
For example, if I wanted to remove the deprecated extensions/v1beta1 version of deployments from my cluster and migrate to apps/v1, would it be enough to specify --storage-versions=extensions=apps/v1 on the apiserver and then 'wait for a bit' before setting something like --runtime-config=api/all=true,extensions/v1beta1/deployments=false? Or would I have to use the update-storage-objects.sh script after setting --storage-versions=extensions=apps/v1?
Additionally, would specifying --storage-versions=extensions=apps/v1 cause any issues for ingress resources that still use API version extensions/v1beta1 but have no conversion to apps/v1?
does the apiserver now deal with this automatically?
No, the api-server does not do it automatically, you need to do it manually.
Regarding the upgrade between API versions, all necessary steps are described in the official documentation:
This is an infrequent event, but it requires careful management. There is a sequence of steps to upgrade to a new API version.
1. Turn on the new API version.
2. Upgrade the cluster's storage to use the new version.
3. Upgrade all config files. Identify users of the old API version endpoints.
4. Update existing objects in the storage to the new version by running cluster/update-storage-objects.sh.
5. Turn off the old API version.
Step 4 is not only about storage but also about all resources related to the old version which you have in the cluster.
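The read-and-write-back in step 4 can be sketched as follows. This is only an illustration of the transformation applied to each object, assuming the manifest has been parsed into a Python dict; in practice you would pipe kubectl get ... -o json through kubectl replace -f -, or run the update-storage-objects.sh script, rather than use this hypothetical helper:

```python
import copy


def migrate_deployment_api(manifest: dict) -> dict:
    """Rewrite a Deployment manifest from extensions/v1beta1 to apps/v1.

    Reading an object and writing it back like this is what causes the
    apiserver to re-store it under the new storage version.
    """
    if (manifest.get("kind") == "Deployment"
            and manifest.get("apiVersion") == "extensions/v1beta1"):
        migrated = copy.deepcopy(manifest)
        migrated["apiVersion"] = "apps/v1"
        # apps/v1 requires spec.selector; older manifests often omit it,
        # so derive it from the pod template labels when missing.
        spec = migrated.setdefault("spec", {})
        if "selector" not in spec:
            labels = spec.get("template", {}).get("metadata", {}).get("labels", {})
            spec["selector"] = {"matchLabels": labels}
        return migrated
    # Leave objects of other kinds or versions untouched.
    return manifest
```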
Additionally, would specifying --storage-versions=extensions=apps/v1 cause any issues for ingress resources that still use API version extensions/v1beta1 but have no conversion to apps/v1?
Versioning of each resource type is independent. The storage version setting for Deployments and the API version used by your Ingress resources are separate things, so there is no relation between their versions, and changing one should not affect the other in any way.
The recommended method for doing this is still in flux. Removing API versions is currently prohibited: https://github.com/kubernetes/kubernetes/issues/52185
I usually upgrade the cluster with a new API version and then upgrade the config files, but I don't remove the old API. Only once did I have to remove an old API version, due to a bug. You can do this by running kubectl get apiservice to list all available versions, then kubectl delete apiservice some_api; you don't have to set any other flags.