Super-Operator In Kubernetes

I need to automate the provisioning of a complex application in Kubernetes. It is a multi-step process that involves provisioning some cluster-wide resources and some app-specific resources. The cluster-wide resources are:
Istio
A few Operators (Cert Manager, Prometheus Operator, Postgres Operator, among others)
Then I want to create an application (let's call it Foo) which leverages Istio and the aforementioned operators. It will create StatefulSets, Services, Certificates, a Postgres database, Istio Gateways, Prometheus PodMonitors, etc.
There will be multiple Foos, each configured differently (since the Kubernetes cluster will be used to provide Foo applications as a multi-tenant service).
What's the idiomatic way to do this? I think I should write a Foo controller which assumes that Istio and the other operators (Prometheus, cert-manager, Postgres, etc.) already exist.
Is it possible to write a meta ClusterOfFoos operator that installs Istio, installs the required operators, and then installs the Foo controller?
If so, how does one go about provisioning operators (normally installed through Helm) from within a controller?
So far I have looked into using Helm to do this, but there are too many dependencies, and Helm tends to create all resources at once, which makes some things fail (e.g. when a Deployment refers to a Secret that hasn't yet been created by cert-manager).

The Operator Lifecycle Manager is really well suited for the task.
When you create the Foo operator, you can package it the OLM way by creating a bundle that contains the ClusterServiceVersion needed to inform OLM of the dependencies that must be resolved before install and during upgrades. These can simply be a list of the APIs you need, and OLM will find and install the latest version of each operator that owns one of those APIs.
All of your dependencies are operators available in the OperatorHub.io catalog, so they are available for install and dependency resolution as soon as you install OLM.
You can also configure certain dependencies by including these objects in the bundle itself. According to the docs, the following objects are supported as of the time of this post:
Secret
ClusterRole
ClusterRoleBinding
ConfigMap
ServiceAccount
Service
Role
RoleBinding
PrometheusRule
ServiceMonitor
PodDisruptionBudget
PriorityClass
VerticalPodAutoscaler
ConsoleYAMLSample
ConsoleQuickStart
ConsoleCLIDownload
ConsoleLink
The Operator SDK can help you with bootstrapping the bundle.
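For illustration, here is a hedged sketch of what the Foo bundle's dependency declaration could look like (this is the metadata/dependencies.yaml file of an OLM bundle; the exact groups, kinds, and version ranges below are assumptions for this example):

```yaml
# metadata/dependencies.yaml in the Foo bundle (illustrative sketch).
# Each olm.gvk entry asks OLM to resolve and install whichever catalog
# operator owns that API before Foo itself is installed.
dependencies:
  - type: olm.gvk          # "I need the Certificate API" -> cert-manager
    value:
      group: cert-manager.io
      kind: Certificate
      version: v1
  - type: olm.gvk          # Prometheus Operator's PodMonitor API
    value:
      group: monitoring.coreos.com
      kind: PodMonitor
      version: v1
  - type: olm.package      # or depend on a package by name and version range
    value:
      packageName: postgresql
      version: ">=5.0.0"
```

OLM resolves these against whatever catalogs are installed in the cluster, such as the OperatorHub.io catalog mentioned above.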

By using a GitOps workflow, you can automate complex applications in Kubernetes.
You define the cluster-wide resources and the application-specific resources as YAML manifests in a Git repository.
A GitOps tool then continuously deploys those Kubernetes resources, automatically applying any changes to the cluster.
Use a Helm chart to install Istio, and make sure the dependencies in the Helm chart are created in order.
You can create a custom Foo controller that reads its configuration from YAML files.
Use Kubernetes CRDs to define the configuration of each Foo; they allow you to create custom resources that are specific to each application instance (see the sketch below).
Helm can then read the configuration from the custom resource and generate the correct YAML values.
The approach described above allows you to create multiple Foo applications with different configurations and ensures that the required resources are installed in the correct order.
You can check this article from Codefresh regarding the GitOps workflow, as well as the official Kubernetes page.
You can also check Working with Multiple Applications and Environments to see how Argo CD is useful for this scenario.
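To make the CRD-driven configuration concrete, here is a minimal sketch of what one tenant's Foo custom resource might look like (the foo.example.com group and all of the spec fields are invented for illustration):

```yaml
# One tenant's Foo instance; the Foo controller watches these objects
# and creates the StatefulSets, Services, Certificates, Istio Gateways,
# PodMonitors, etc. that this tenant needs.
apiVersion: foo.example.com/v1alpha1
kind: Foo
metadata:
  name: tenant-a
  namespace: tenant-a
spec:
  replicas: 3
  database:
    storage: 20Gi
  istio:
    host: tenant-a.example.com
  tls:
    issuer: letsencrypt-prod
```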

Related

K6: How to apply a custom resource in Kubernetes?

Currently we can create ConfigMaps, Deployments, Pods, Jobs, Namespaces and Ingresses.
https://github.com/grafana/xk6-kubernetes
But I would like to deploy custom resources. Is it possible?
Creating a CR is possible. If the definitions for custom resources are maintained in YAML files, then the CR can be applied or created using the kubectl tool.
Custom resources are usually associated with CRDs or API aggregators. Please refer to the Kubernetes documentation for more details.
Adding custom resources
Kubernetes provides two ways to add custom resources to your cluster:
CRDs are simple and can be created without any programming.
API Aggregation requires programming, but allows more control over API behaviors like how data is stored and conversion between API versions.
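For reference, a CRD can be as small as the following sketch (this mirrors the CronTab example from the Kubernetes documentation; it is not specific to xk6-kubernetes):

```yaml
# Minimal CustomResourceDefinition; once applied, objects of kind
# CronTab can be created and applied with kubectl like any built-in.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    kind: CronTab
    plural: crontabs
    singular: crontab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
```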
This is not presently possible. Even with the recently introduced generic interface, there are some issues with how the Kubernetes API handles CRDs. Please refer to this open issue to track the progress of this requirement.

How to access helm programmatically

I'd like to access cluster deployed Helm charts programmatically to make web interface which will allow manual chart manipulation.
I found pyhelm, but it supports only Helm 2. I looked on npm, but there's nothing there. I wrote a bash script, but if I try to use its output I really just get a string, so it's not very useful.
Helm 3 is different from previous versions in that it is a client-only tool, similar to e.g. Kustomize. This means that Helm charts only exist on the client (and in chart repositories) and are transformed into Kubernetes manifests during deployment, so only Kubernetes objects exist in the cluster.
The Kubernetes API is a REST API, so you can access and get Kubernetes objects using an HTTP client. Kubernetes object manifests are available in JSON and YAML formats.
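One detail worth adding, hedged because it is an implementation detail of Helm's default storage driver: Helm 3 does keep release records in the cluster, as Secrets in the release's namespace, so a web interface can at least discover releases through the Kubernetes API. Such a record looks roughly like this:

```yaml
# Sketch of a Helm 3 release record (default "secrets" storage driver).
# One Secret per release revision; the payload is an opaque, compressed
# dump of the release, not something you would edit by hand.
apiVersion: v1
kind: Secret
type: helm.sh/release.v1
metadata:
  name: sh.helm.release.v1.my-release.v1   # sh.helm.release.v1.<name>.v<revision>
  namespace: default
  labels:
    owner: helm
    name: my-release
    status: deployed
data:
  release: <base64-encoded, gzipped release payload>
```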
If you are OK with using Go, you can use the Helm 3 Go API.
If you want to use Python, I guess you'll have to wait for Helm v3 support in pyhelm; there is already an issue addressing this.
I reached this question as we also need an npm package to deploy Helm 3 charts programmatically (a sort of white-label app with a GUI to manage the instances).
The only thing I could find was an old, discontinued package from Microsoft for Helm v2: https://github.com/microsoft/helm-web-api/tree/master/on-demand-micro-services-deployment-k8s
I don't think using the Kubernetes API directly would work, as some charts can get fairly complex in terms of Kubernetes resources, so I took some inspiration and I think I will develop my own package as a wrapper around the helm CLI commands, using the -o json parameter for easier handling of the CLI output.

Can someone explain some use cases of Helm?

I'm currently using Kubernetes and I came across Helm.
Let's say I don't like the idea of "infecting" my Kubernetes cluster with a process that is not related to my applications, but I would gladly accept it if it proved beneficial.
So I did some research, but I still can't find anything I can't easily do using my YAML descriptors and kubectl, so for now I can't see a use for it except, maybe, for managing environments.
For example (taking these from guides I read):
you can easily install an application, e.g. helm install nginx -> I add an nginx image to my deployment descriptor, done
repositories -> I have Docker ones (where I pull my images from)
you can easily helm rollback in case of a release failure -> I just change the image version to the previous one in my Kubernetes descriptor, easy
What bothers me is that, at the level of commands, the effort is pretty much the same (helm upgrade -> kubectl apply).
In exchange, I get a lot of boilerplate from keeping the directory structure Helm wants, and I feel like I'm missing the control I have with plain deployment descriptors... What am I missing?
Your question is totally understandable. For small and simple deployments the benefit is not actually that great, but when the deployment of something is very complex, Helm helps a lot.
Imagine you have a couple of squads that develop microservices for some company. If you can make a chart that works for most of them, the deployment of each microservice would differ only in the image and the resources required. This way you get a standardized deployment that is easier for all developers.
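As a sketch of that idea (the chart layout and value names here are made up for illustration), each squad's service could be deployed from the same shared chart, differing only in a small values file:

```yaml
# values-service-a.yaml -- everything else (Deployment, Service,
# probes, labels) comes from the shared chart's templates; each
# squad only maintains a small file like this one.
image:
  repository: registry.example.com/service-a
  tag: "1.4.2"
replicaCount: 2
resources:
  requests:
    cpu: 100m
    memory: 256Mi
```

Deploying it would then be something like helm install service-a ./common-chart -f values-service-a.yaml.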
Another use case is deploying applications that require a lot of moving parts. For example, if you want to deploy a Grafana server on Kubernetes, you're probably going to need at least a Deployment and a ConfigMap, then you would need a Service that matches the Deployment, and if you want to expose it to the internet you need an Ingress too.
That's one relatively simple application, yet it would require four different YAMLs that you would have to configure manually and verify; instead, you could do a simple helm install and reuse the configuration that someone has already made, sometimes even the company that created the application.
There are a lot of other use cases, but these two are the ones that I would say are the most common.
Here are three suggestions of ways Helm can be useful:
Your continuous deployment system somewhat routinely produces new builds and wants to send them to the Kubernetes cluster. You can use templating to specify the image name and tag in a deployment, and then run helm upgrade ... --set tag=201907211931 to request a specific tag (see the template sketch after this list).
You might have various service-specific controls like the log level or external database hostnames. The Helm values mechanism gives a uniform way to specify them, without having to know the details of the Kubernetes YAML files.
There is a repository of pre-packaged application charts, so if you want replicated PostgreSQL with in-cluster persistent storage, that's already built for you and you can just depend on it, rather than figuring out the right combination of StatefulSets and PersistentVolumeClaims yourself.
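To illustrate the image-tag templating from the first suggestion, here is a minimal sketch (the chart structure and value names are assumptions):

```yaml
# templates/deployment.yaml (excerpt) -- the CD system only ever
# overrides .Values.tag, e.g. helm upgrade myapp . --set tag=201907211931
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.tag }}"
```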
You can combine these in interesting (and potentially complex) ways: use an in-cluster database for developer testing but use a cloud-hosted and backed-up database for production, for example, and compute the database host name based on what combination of settings are provided.
There are, of course, alternative ways to do all of these things. Kustomize in particular can change the image value fairly straightforwardly, and is notable for having been included in the kubectl tool since Kubernetes 1.14 (see also Declarative Management of Kubernetes Objects Using Kustomize in the Kubernetes documentation). The "operator" pattern gives an alternate path to install software in your cluster, but even more so than Helm you're trusting an arbitrary program with API access.
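For comparison, the Kustomize equivalent of the image-tag change is a built-in transformer rather than a template; a sketch (file and image names invented):

```yaml
# kustomization.yaml -- "kubectl apply -k ." rewrites the image tag
# in the referenced manifests without any templating.
resources:
  - deployment.yaml
images:
  - name: registry.example.com/myapp
    newTag: "201907211931"
```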

Operator Lifecycle Manager (OLM) vs Helm

What is the difference and benefit of the Operator Lifecycle Manager (OLM) vs Helm?
OLM - https://github.com/operator-framework/operator-lifecycle-manager
Helm - https://helm.sh/
I understand that Helm is a general-purpose package manager for Kubernetes, whereas OLM is specific to operators. But Helm can be used to deploy operators, so how is OLM different from or better than Helm for operators?
Helm provides the ability to install applications onto Kubernetes via Helm charts, which are themselves collections of templatized Kubernetes manifests. It handles only the basic lifecycle of these applications (install/delete/rollback/upgrade) by rendering the templates and feeding them to the Kubernetes API server. Depending on the version of Helm, there are limitations around dependency management and which resources can be created in which namespaces.
OLM (Operator Lifecycle Manager), as the previous user mentioned, is a declarative system meant to support the installation of Operators, which are themselves responsible for providing the logic and instructions to manage the lifecycle of an application (install/create/delete/upgrade). OLM is an opinionated approach to managing the lifecycle and packaging of these Operators. There is also an SDK that helps users create Operators from Helm/Ansible/Go to fit into this system. OLM has various components that talk to each other through the Kubernetes API server, heavily leveraging CRDs and custom resources to make this all happen.
Benefits/differences:
Both can be used to install/delete/rollback/upgrade an Operator, but OLM offers a model whereby you can craft various methods of deployment operations for your application (think alpha vs. stable) into different subscribable "channels". As you update the methods in these "channels", subscribers automatically gain the ability to upgrade to or install a newer version according to those methods. Dependencies in OLM are also handled differently, and you can have a chain of dependent Operators installed, in order, in various namespaces. Helm is a bit more restricted in this regard.
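For example, subscribing to a channel is done with a Subscription object, roughly like the following sketch (the package and catalog names are illustrative):

```yaml
# Subscribe to the "stable" channel of an operator package; OLM will
# then upgrade the operator automatically as the channel advances.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  channel: stable
  name: my-operator              # package name in the catalog
  source: operatorhubio-catalog
  sourceNamespace: olm
  installPlanApproval: Automatic
```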
Lastly, OLM makes the assumption that your container images are publicly reachable and that their use in manifests is baked into containers (CatalogSource, Operators, etc.), whereas Helm charts are much more easily modifiable, using various Helm CLI commands (or third-party tools) to override template values before creation.
Well, Helm cannot deploy itself. It only provides primitives for Helm Charts, which you can install when your infrastructure is set up accordingly. In order to deploy anything you need some sort of pipeline that puts all the pieces together.
OLM is a declarative approach to solving a kind of release management where you define different versions of "deployables," which are then upgraded. I have yet to understand how this can be used with custom services; as far as I could tell when digging into it some time ago, you could only use certain predefined applications. Also note that OLM does not replace Helm; I would assume that whatever "deployable" OLM manages can, at the end of the day, also be something installed via Helm.

Is there any benefit to using the Helm installation method when installing OpenEBS?

If the installation of OpenEBS can be completed with a single command, why would a developer use helm install? (It is probably more of a question about Helm's benefits.) I'd like to understand the additional benefits OpenEBS charts can offer a Helm user, if any.
I guess you're looking at the two currently supported options for OpenEBS installation and noting that the helm install section is much larger, with more steps, than the operator-based install option. If so, note that the helm section has two sub-sections - you only need one or the other, and the one that uses the stable Helm charts repo is just a single command. But one might still wonder why install Helm in the first place.
One of the main advantages of Helm is the availability of standard, reusable charts for a wide range of applications, including but not limited to the official charts repo. Relative to pure Kubernetes descriptors, Helm charts are easier to pass parameters into, since they work as templates from which the Kubernetes descriptor files are generated.
Often the level of parameterisation that you get from templating is needed to ensure that an app can be installed to lots of different clusters and provide the full range of installation options that the app needs - things like turning certain permissions on or off, or pointing at storage. Different apps need different levels of configurability.
If you look at the OpenEBS non-helm deployment descriptor at https://openebs.github.io/charts/openebs-operator-0.7.0.yaml, you'll see it defines a list of resources - the same resources defined in https://github.com/helm/charts/tree/master/stable/openebs/templates. Within the non-helm version, the number of replicas for maya-apiserver is set to 1; to change this, you'd need to download the file and edit it, or change it in your running Kubernetes cluster. With the helm version, it's one of a range of parameters that you can set at install time (https://github.com/helm/charts/blob/master/stable/openebs/values.yaml#L19) as options on the helm install command.
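For instance, a values override for that replica setting might look like the sketch below (the exact key path is an assumption based on the linked values.yaml, which may have changed since this was written):

```yaml
# custom-values.yaml -- override only the settings you care about,
# then: helm install stable/openebs -f custom-values.yaml
# (the apiserver.replicas key path is assumed from the linked values.yaml)
apiserver:
  replicas: 2
```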