I see many different Nginx implementations.
I see some posts saying the stable/nginx-ingress chart is deprecated and to move to the ingress-nginx/nginx-ingress chart.
This project https://github.com/kubernetes/ingress-nginx/releases has two Nginx images, NGINX: 0.34.1 & ingress-nginx-2.16.0. What is the difference between these two images?
Which Nginx Helm chart should I use for long-term support?
Thanks
SR
The ingress-nginx-2.x.x helm chart uses the nginx-x.x.x container. You don't normally need to reference the container image directly when using the helm chart, as that is set in the default values.
Helm itself recently moved a major version, from 2 to 3, which caused a lot of changes to how Helm repos are structured; that is why you see the "deprecated" message in the old Helm 2 stable repo.
I don't believe the ingress-nginx project has an LTS release strategy. Just use the latest 2.x release, or n-1 if you want to protect yourself from the unexpected changes that get thrown in occasionally.
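For example, with Helm 3 you can add the project's chart repo and pin the chart version explicitly (the release name and namespace below are placeholders):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# Pin to a specific 2.x chart version rather than always tracking the latest
helm install my-ingress ingress-nginx/ingress-nginx --version 2.16.0 --namespace ingress-nginx --create-namespace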
NGINX (the company) does provide its own alternative NGINX kubernetes-ingress project if you are looking for commercial support.
I get the below error in my helm upgrade stage. I made the following change: apiVersion: networking.k8s.io/v1beta1 to apiVersion: networking.k8s.io/v1. Could someone kindly let me know why I encounter this issue and what the fix is? Any help is much appreciated.
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for
this kubernetes version and it is therefore unable to build the kubernetes objects for
performing the diff. error from kubernetes: unable to recognize "": no matches for
kind "Ingress" in version "networking.k8s.io/v1beta1"
The reason you encounter the issue is that Helm attempts to create a diff patch between the currently deployed release (which contains the Kubernetes APIs that have been removed in your current Kubernetes version) and the chart you are passing with the updated/supported API versions. When Kubernetes removes an API version, the Kubernetes Go client library can no longer parse the deprecated objects, and Helm therefore fails when calling the library.
Helm has the official documentation on how to recover from that scenario:
https://helm.sh/docs/topics/kubernetes_apis/#updating-api-versions-of-a-release-manifest
Helm doesn't like that an old version of the template contains removed apiVersions, which results in the above error. To fix it, follow the steps in the official documentation from Helm.
Because we didn't upgrade the apiVersion before it was removed, we had to follow the manual approach. We have quite a few services that need updating, in two different Kubernetes clusters (production and test), so there is a script that updates the apiVersion for the Ingress object. You can find the script here.
The script assumes that you want to change networking.k8s.io/v1beta1 to networking.k8s.io/v1. If you have a problem with another apiVersion, change those values on line 30 of the script. Update your Helm chart template if further changes are needed and deploy/apply the new Helm chart.
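For reference, the manual approach in the Helm documentation boils down to editing the release data that Helm 3 stores in a Secret. A rough sketch (the release name, revision and namespace are placeholders, so check your own cluster first):

# Find the Secret holding the latest revision of the release
kubectl get secret -n my-namespace -l owner=helm,name=my-release

# Decode the release payload: it is base64-encoded twice and then gzipped
kubectl get secret sh.helm.release.v1.my-release.v3 -n my-namespace -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d > release.json

# Edit release.json, replacing networking.k8s.io/v1beta1 with networking.k8s.io/v1,
# then re-encode in reverse order (gzip, base64, base64) and patch the Secret,
# or let the linked script handle these steps for you.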
I'd like to access cluster deployed Helm charts programmatically to make web interface which will allow manual chart manipulation.
I found pyhelm but it supports only Helm 2. I looked on npm, but nothing there. I wrote a bash script, but if I try to use its output I just get a string, so it's not really useful.
I'd like to access cluster deployed Helm charts programmatically to make web interface which will allow manual chart manipulation.
Helm 3 is different from previous versions in that it is a client-only tool, similar to e.g. Kustomize. This means that helm charts only exist on the client (and in chart repositories) but are transformed into Kubernetes manifests during deployment. So only Kubernetes objects exist in the cluster.
The Kubernetes API is a REST API, so you can access and get Kubernetes objects using an HTTP client. Kubernetes object manifests are available in JSON and YAML formats.
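For example, a quick way to try this without handling authentication yourself is to go through kubectl proxy (the port and namespace below are placeholders):

# Start a local proxy that handles authentication against the API server
kubectl proxy --port=8001 &

# Fetch the objects a chart created, as plain JSON over HTTP
curl http://localhost:8001/apis/apps/v1/namespaces/default/deployments
curl http://localhost:8001/api/v1/namespaces/default/services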
If you are OK with using Go, you can use the Helm 3 Go API.
If you want to use Python, I guess you'll have to wait for Helm v3 support in pyhelm; there is already an issue addressing this.
Reached this as we also need an npm package to deploy Helm 3 charts programmatically (sort of a white-label app with a GUI to manage the instances).
The only thing I could find was an old, discontinued package from Microsoft for Helm v2: https://github.com/microsoft/helm-web-api/tree/master/on-demand-micro-services-deployment-k8s
I don't think using the k8s API would work, as some charts can get fairly complex in terms of k8s resources, so I got some inspiration and I think I will develop my own package as a wrapper around the helm CLI commands, using the -o json param for easier handling of the CLI output.
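In case it's useful to others, such a wrapper can stay fairly thin because most Helm 3 commands already support JSON output (the release and namespace names are placeholders):

# Machine-readable output that is easy to parse from Node, Python, etc.
helm list -n my-namespace -o json
helm status my-release -n my-namespace -o json
helm get values my-release -n my-namespace -o json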
I understand that helm consists of a client-side component (the helm CLI) and a cluster-side component (tiller). The docs say that tiller is responsible for building and managing releases. But why does this need to be done from the cluster? Why can't helm build and manage releases from the client, and then simply push resources to kubernetes?
Tiller can also be run on the client side as mentioned in the Helm documentation here. The documentation refers to it as Running Tiller Locally.
But, as mentioned in the same documentation, it's mainly for the sake of development. I had been thinking about it and am not exactly sure why it is only for development and not for production.
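For what it's worth, the "Running Tiller Locally" setup from the Helm 2 docs is roughly the following (44134 is Tiller's default port):

# Run Tiller as a local process instead of a cluster deployment (development use)
tiller &

# Point the helm client at the local Tiller instance
export HELM_HOST=localhost:44134
helm version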
There were a lot of limitations with running client-side only, as mentioned in this thread: https://github.com/helm/helm/issues/2722.
But Helm v3 will be a complete rewrite with no server-side component.
If the installation of OpenEBS can be completed with a single command, why would a developer use helm install? (It is probably more of a helm-benefits question.) I'd like to understand the additional benefits OpenEBS charts can present to a helm user, if any.
I guess you're looking at the two current supported options for OpenEBS installation and noting that the helm install section is much larger with more steps than the operator-based install option. If so, note that the helm section has two sub-sections - you only need one or the other and the one that uses the stable helm charts repo is just a single command. But one might still wonder why install helm in the first place.
One of the main advantages of helm is the availability of standard, reusable charts for a wide range of applications, including but not limited to the official charts repo. Relative to pure kubernetes descriptors, helm charts are easier to pass parameters into since they work as templates from which kubernetes descriptor files are generated.
Often the level of parameterisation that you get from templating is needed to ensure that an app can be installed to lots of different clusters and provide the full range of installation options that the app needs. Things like turning on or off certain permissions or pointing at storage. Different apps need different levels of configurability.
If you look at the OpenEBS non-helm deployment descriptor at https://openebs.github.io/charts/openebs-operator-0.7.0.yaml, you'll see it defines a list of resources. The same resources are defined in https://github.com/helm/charts/tree/master/stable/openebs/templates. In the non-helm version, the number of replicas for maya-apiserver is set at 1; to change this, you'd need to download the file and edit it or change it in your running kubernetes. With the helm version it's one of a range of parameters (https://github.com/helm/charts/blob/master/stable/openebs/values.yaml#L19) that you can set at install time as options on the helm install command.
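As an illustration of that last point, the replica count becomes an install-time flag; the exact values key should be checked against the chart's values.yaml (apiserver.replicas below is only illustrative):

# Install the chart, overriding the maya-apiserver replica count at install time
# (key name is illustrative; check the chart's values.yaml for the real one)
helm install stable/openebs --name openebs --namespace openebs --set apiserver.replicas=2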
When the storage version of a Kubernetes API resource changes, is it still necessary to manually read and write back resources as described here, or does the apiserver now deal with this automatically?
For example, if I wanted to remove the deprecated extensions/v1beta1 version of deployments from my cluster and migrate to apps/v1, would it be enough to specify --storage-versions=extensions=apps/v1 on the apiserver and then 'wait for a bit' before setting something like --runtime-config=api/all=true,extensions/v1beta1/deployments=false? Or would I have to use the update-storage-objects.sh script after setting --storage-versions=extensions=apps/v1?
Additionally, would specifying --storage-versions=extensions=apps/v1 cause any issues for ingress resources that still use API version extensions/v1beta1 but have no conversion to apps/v1?
does the apiserver now deal with this automatically?
No, the API server does not do it automatically; you need to do it manually.
Regarding the upgrade between API versions, all necessary steps are described in the official documentation:
This is an infrequent event, but it requires careful management. There
is a sequence of steps to upgrade to a new API version.
1. Turn on the new API version.
2. Upgrade the cluster's storage to use the new version.
3. Upgrade all config files. Identify users of the old API version endpoints.
4. Update existing objects in the storage to new version by running cluster/update-storage-objects.sh.
5. Turn off the old API version.
Step 4 is not only about storage but also about all resources related to the old version which you have in the cluster.
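Relatedly, for upgrading the config files themselves (step 3), kubectl used to ship a convert command (available separately as the kubectl-convert plugin in newer releases) that rewrites a manifest to a target API version; the filename below is a placeholder:

# Rewrite an existing manifest to the new API version
kubectl convert -f deployment.yaml --output-version apps/v1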
Additionally, would specifying --storage-versions=extensions=apps/v1 cause any issues for ingress resources that still use API version extensions/v1beta1 but have no conversion to apps/v1?
Versioning of each type of resource is independent. The Deployment storage version and Ingress are different resources, so there is no relation between their versions, and the different versions should not affect each other in any way.
The recommended method for doing this is still in flux. Removing API versions is currently prohibited: https://github.com/kubernetes/kubernetes/issues/52185
I usually upgrade the cluster with a new API version and then upgrade the config files, but I don't remove the old API. Only once did I have to remove an old API version, due to a bug. You can do this by running kubectl get apiservice to list all available versions, then kubectl delete apiservice some_api; you don't have to set any other flag.
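For example (the APIService name below is only an illustration; pick the one you actually need to drop):

# List every API group/version the cluster is serving
kubectl get apiservice

# Delete the APIService backing the old version
kubectl delete apiservice v1beta1.extensions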