What is the difference between 'istioctl manifest apply' and 'istioctl install'?

I have noticed that setting values through istioctl manifest apply will affect other Istio resources. For example, when I set --set values.tracing.enabled=true, Kiali, which was previously installed in the cluster, vanished.
And what is the right way to set values (options) like values.pilot.traceSampling?
Thanks

istioctl install was introduced in Istio 1.6 and replaces istioctl manifest apply; the --set options work the same in both. I suspect the rename was made for better clarity and accessibility, since istioctl manifest has lots of other uses, such as istioctl manifest generate, which creates the manifest YAML and lets you save it to a file.
According to the Istio documentation:
While istioctl install will automatically detect environment specific settings from your Kubernetes context, manifest generate cannot as it runs offline, which may lead to unexpected results. In particular, you must ensure that you follow these steps if your Kubernetes environment does not support third party service account tokens.
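For a side-by-side view, here is a minimal sketch of the two workflows; the sampling value is illustrative:

```sh
# Online install (Istio >= 1.6): reads settings from your Kubernetes context.
istioctl install --set values.pilot.traceSampling=50

# Offline render: writes the manifest to a file without touching the cluster.
istioctl manifest generate --set values.pilot.traceSampling=50 > istio-manifest.yaml
```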
As for Kiali, you need to install it separately, as described in this guide.
To set values like values.pilot.traceSampling, I suggest using the Istio Operator, for example as sketched below.
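A minimal sketch of an IstioOperator resource that sets the sampling rate, assuming the operator controller is already running in the cluster (the 50.0 value is illustrative):

```sh
# Sketch: set pilot trace sampling declaratively via the IstioOperator CR.
kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  values:
    pilot:
      traceSampling: 50.0
EOF
```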
Hope it helps.

Related

Super-Operator In Kubernetes

I need to automate the provisioning of a complex application in Kubernetes. It's a complex, multi-step process that involves provisioning of some cluster-wide resources and some app-specific resources. The cluster-wide resources are:
Istio
A few Operators (Cert Manager, Prometheus Operator, Postgres Operator, among others)
Then I want to create an application (let's call it Foo) which leverages Istio and the aforementioned operators. It will create statefulsets, services, Certificates, a Postgres database, Istio gateways, Prometheus PodMonitors, etc.
There will be multiple Foos created, each configured differently (since the Kubernetes cluster will be used to provide Foo applications as a multi-tenant service).
What's the idiomatic way to do this? I think I should write a Foo controller which assumes that Istio and the other operators (prometheus, cert-manager, postgres, etc) already exist.
Is it possible to write a meta ClusterOfFoos operator that installs Istio, installs the required operators, and then installs the Foo controller?
If so, how does one go about provisioning operators (normally installed through Helm) from within a controller?
So far I have looked into using Helm to do this, but there are too many dependencies, and Helm just tends to create all resources at once, which makes some things fail (e.g. when a Deployment refers to a Secret that hasn't yet been created by cert-manager).
The Operator Lifecycle Manager is really well suited for the task.
When you create operator Foo, you can package it the OLM way by creating a bundle which contains the ClusterServiceVersion needed to inform OLM of dependencies that must be resolved before install and during upgrades. These can just be a list of APIs you need, and OLM will find and install the set of latest versions of the operators that own each API (see the dependencies.yaml sketch after the list below).
All your dependencies are operators available in the Operatorhub.io Catalog so they are available for install and dependency resolution as soon as you install OLM.
You can also configure certain dependencies by including these objects in the bundle itself. According to the docs, the following objects are supported as of the time of this post:
Secret
ClusterRole
ClusterRoleBinding
ConfigMap
ServiceAccount
Service
Role
RoleBinding
PrometheusRule
ServiceMonitor
PodDisruptionBudget
PriorityClass
VerticalPodAutoscaler
ConsoleYAMLSample
ConsoleQuickStart
ConsoleCLIDownload
ConsoleLink
The Operator SDK can help you with bootstrapping the bundle.
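As a rough sketch, dependencies are declared in the bundle's metadata/dependencies.yaml; the package names and version ranges below are illustrative assumptions, not a prescription:

```sh
# Sketch: declare what OLM must resolve and install before the Foo operator.
mkdir -p bundle/metadata
cat > bundle/metadata/dependencies.yaml <<EOF
dependencies:
  - type: olm.package        # depend on a named package from the catalog
    value:
      packageName: cert-manager
      version: ">=1.5.0"
  - type: olm.gvk            # or on whichever operator owns a given API
    value:
      group: monitoring.coreos.com
      kind: Prometheus
      version: v1
EOF
```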
By using a GitOps workflow you can automate complex applications in Kubernetes.
You define the cluster-wide resources and the application-specific resources in YAML files.
A GitOps tool then deploys the Kubernetes resources continuously and automatically applies changes to the cluster.
Use a Helm chart to install Istio, and make sure the dependencies in the Helm chart are created in order.
You can create a custom controller for Foo that reads its configuration from YAML files.
Use Kubernetes CRDs to define the configuration of each Foo; they allow you to create custom resources that are specific to each application (a sketch follows below).
Helm can then read the configuration from the CRD and generate the correct YAML values.
The approach described above allows you to create multiple Foo applications with different configurations and ensures that the required resources are installed in the correct order.
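A minimal sketch of such a CRD; the group, kind, and spec fields are illustrative assumptions, not an existing API:

```sh
# Sketch: a CRD that holds per-tenant Foo configuration.
kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:       # hypothetical per-tenant knobs
                  type: integer
                databaseSize:
                  type: string
EOF
```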
You can check this article from Codefresh regarding the GitOps workflow, and the official Kubernetes page.
You can also check Working with Multiple Applications and Environments for how Argo CD is useful in this scenario.

Upgrade Failed in Helm Upgrade stage

I get the below error in my helm upgrade stage. I made the following change: apiVersion: networking.k8s.io/v1beta1 to apiVersion: networking.k8s.io/v1. Could someone kindly let me know the reason why I encounter this issue and the fix for it? Any help is much appreciated.
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for
this kubernetes version and it is therefore unable to build the kubernetes objects for
performing the diff. error from kubernetes: unable to recognize "": no matches for
kind "Ingress" in version "networking.k8s.io/v1beta1"
The reason you encounter the issue is that Helm attempts to create a diff patch between the currently deployed release (whose stored manifest contains Kubernetes APIs that are removed in your current Kubernetes version) and the chart you are passing with the updated/supported API versions. When Kubernetes removes an API version, the Kubernetes Go client library can no longer parse the deprecated objects, and Helm therefore fails when calling the library.
Helm has the official documentation on how to recover from that scenario:
https://helm.sh/docs/topics/kubernetes_apis/#updating-api-versions-of-a-release-manifest
Helm doesn't like that an old version of the template contains removed apiVersions, which results in the above error. To fix it, follow the steps in the official documentation from Helm.
Because we didn't upgrade the apiVersion before it was removed, we had to follow the manual approach. We have quite a few services that need updating, in two different Kubernetes clusters (production and test), so there is a script that updates the apiVersion for the Ingress object. You can find the script here.
The script assumes that you want to change networking.k8s.io/v1beta1 to networking.k8s.io/v1. If you have a problem with another apiVersion, change those values in the script on line 30. Update your Helm chart template if further changes are needed, then deploy/apply the new Helm chart.
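If you'd rather not hand-edit release secrets or maintain a script, one alternative worth knowing about is the helm mapkubeapis plugin, which rewrites removed API versions in the stored release manifest. A minimal sketch, with release and chart names as placeholders:

```sh
# Sketch: patch the stored release manifest, then upgrade as usual.
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis my-release --namespace my-namespace --dry-run   # preview changes
helm mapkubeapis my-release --namespace my-namespace             # apply changes
helm upgrade my-release ./my-chart --namespace my-namespace
```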

How to access helm programmatically

I'd like to access cluster deployed Helm charts programmatically to make web interface which will allow manual chart manipulation.
I found pyhelm, but it supports only Helm 2. I looked on npm, but found nothing there. I wrote a bash script, but its output is really just a string, so it's not very useful.
Helm 3 is different from previous versions in that it is a client-only tool, similar to e.g. Kustomize. This means that Helm charts only exist on the client (and in chart repositories) and are transformed into Kubernetes manifests during deployment, so only Kubernetes objects exist in the cluster.
The Kubernetes API is a REST API, so you can access and get Kubernetes objects using an HTTP client; object manifests are available in JSON and YAML formats, for example as sketched below.
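A minimal sketch of reading a deployed object over the REST API; the namespace and deployment names are placeholders:

```sh
# Sketch: browse deployed objects via the Kubernetes REST API.
kubectl proxy --port=8001 &   # authenticated local proxy to the API server
curl -s http://localhost:8001/apis/apps/v1/namespaces/my-namespace/deployments/my-deployment
```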
If you are OK with using Go, then you can use the Helm 3 Go API.
If you want to use Python, I guess you'll have to wait for Helm v3 support in pyhelm; there is already an issue addressing this.
I reached this question as we also need an npm package to deploy Helm 3 charts programmatically (sort of a white-label app with a GUI to manage the instances).
The only thing I could find was an old, discontinued package from Microsoft for Helm v2: https://github.com/microsoft/helm-web-api/tree/master/on-demand-micro-services-deployment-k8s
I don't think using the Kubernetes API directly would work, as some charts can get fairly complex in terms of Kubernetes resources, so I took some inspiration from it and I think I will develop my own package as a wrapper around the helm CLI commands, using the -o json parameter for easier handling of the CLI output.
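The wrapper idea amounts to shelling out to helm and parsing its JSON output; a minimal sketch, with release and chart names as placeholders:

```sh
# Sketch: drive helm from any language by invoking the CLI with JSON output.
helm list --all-namespaces -o json                 # enumerate deployed releases
helm status my-release -o json                     # inspect one release
helm install my-release my-repo/my-chart -o json   # deploy and capture the result
```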

How to expose Helm to a Kubernetes deployment with a Golang application

I am writing a Golang application that more or less automates helm install, so I would like to know how to expose Helm to your Kubernetes deployment, or any API that creates a Helm object which can communicate with Tiller directly. Please describe the answer with a piece of code. Thanks.
I have been trying the package https://godoc.org/k8s.io/helm/pkg/helm but don't really know what parameters we need to pass when creating a Helm client.
Not to discourage you, but I thought I should point out that Helm is nearing a v3 release, which will entirely remove Tiller; hence the client will likely change as well.
Here are some relevant links:
Helm v3.0.0-beta.3 release notes
Helm v3 Beta 1 Released blog post
Hope this helps.

Is there any benefit of using Helm installation method while installing OpenEBS?

If the installation of OpenEBS can be completed with a single command, why would a developer use helm install? (It is probably more of a Helm-benefits question.) I'd like to understand the additional benefits the OpenEBS charts can offer a Helm user, if any.
I guess you're looking at the two currently supported options for OpenEBS installation and noting that the helm install section is much larger, with more steps, than the operator-based install option. If so, note that the helm section has two sub-sections; you only need one or the other, and the one that uses the stable Helm charts repo is just a single command. But one might still wonder why install Helm in the first place.
One of the main advantages of Helm is the availability of standard, reusable charts for a wide range of applications, including but not limited to the official charts repo. Relative to pure Kubernetes descriptors, Helm charts are easier to pass parameters into, since they work as templates from which the Kubernetes descriptor files are generated.
Often the level of parameterisation you get from templating is needed to ensure that an app can be installed on lots of different clusters and to provide the full range of installation options the app needs, such as turning certain permissions on or off or pointing at storage. Different apps need different levels of configurability.
If you look at the OpenEBS non-helm deployment descriptor at https://openebs.github.io/charts/openebs-operator-0.7.0.yaml, you'll see it defines a list of resources; the same resources are defined in https://github.com/helm/charts/tree/master/stable/openebs/templates. In the non-helm version the number of replicas for maya-apiserver is set to 1, and to change it you'd need to download the file and edit it, or change it in your running Kubernetes cluster. With the helm version it's one of a range of parameters that you can set at install time (https://github.com/helm/charts/blob/master/stable/openebs/values.yaml#L19) as options on the helm install command, as sketched below.
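A minimal sketch (Helm 2-era syntax, to match the stable repo; the replicas key follows the values.yaml linked above and should be verified against your chart version):

```sh
# Sketch: override a chart parameter at install time instead of editing YAML.
helm install --name openebs stable/openebs \
  --namespace openebs \
  --set apiserver.replicas=2    # with Helm 3, drop the --name flag
```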