kubernetes: deploying kong helm chart

I deploy Kong via Helm on my Kubernetes cluster, but I can't configure it the way I want.
helm install stable/kong -f values.yaml
values.yaml:
{
"persistence.size":"1Gi",
"persistence.storageClass":"my-kong-storage"
}
Unfortunately, the created PersistentVolumeClaim stays at 8Gi instead of 1Gi. Even adding "persistence.enabled":false has no effect on the deployment, so I think my whole configuration is wrong.
What would a good configuration file look like?
I am using a Rancher Kubernetes deployment on bare-metal servers.
I use Local Persistent Volumes (they work well with a mongo-replicaset deployment).

What you are trying to do is configure a dependency chart (a.k.a. subchart), which is a little different from configuring a main chart when it comes to writing values.yaml. Here is how you can do it:
First, the content of values.yaml does not need to be surrounded with curly braces, so remove them from the code you posted in the question.
Since postgresql is a dependency chart of kong, you have to use the name of the dependency chart as a key, followed by the options you need to modify, in the following form:
<dependency-chart-name>:
  <configuration-key-name>: <configuration-value>
For Rancher, you have to write it as follows:
#values.yaml for rancher
postgresql.persistence.storageClass: "my-kong-storage"
postgresql.persistence.size: "1Gi"
If instead you are using Helm itself with vanilla Kubernetes, you can write the values.yaml as below:
#values.yaml for helm
postgresql:
  persistence:
    storageClass: "my-kong-storage"
    size: "1Gi"
More about Dealing with SubChart values
More about Postgresql chart configuration

Please tell us which cluster setup you are using: a cloud-managed service, or a custom Kubernetes setup?
The problem you are facing is that there is a "minimum size" of storage to be provisioned. For example, in IBM Cloud it is 20 GB.
So even if 2 GB are requested in the PVC, you will end up with a 20 GB PV.
Please check the documentation of your NFS provisioner / storage class.
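To confirm what was actually provisioned, you can inspect the claim and the storage class directly; for example (the claim name below is hypothetical, take the real one from the kubectl get pvc output):
kubectl get storageclass
kubectl get pvc
kubectl describe pvc data-kong-postgresql-0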


How can I backup nifi processes when restarting a Kubernetes pod?

I have deployed Nifi on Kubernetes using the cetic/helm-nifi helm chart. We are facing a problem: if the nifi pod restarts, we lose all the processes that we created. Is there any way to keep a backup of the processes on the nifi canvas?
Are you deploying the chart with its default config, or are you tweaking some configs as well?
I have not used nifi, but I think enabling the PVC config for nifi might resolve your issue:
https://github.com/cetic/helm-nifi/blob/master/values.yaml#L211
You can enable the PVC by changing that section of values.yaml:
persistence:
  enabled: true
If you have already created volumes, you can also use them (see the sketch below).
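For orientation, a claim that binds to a volume you created beforehand could look roughly like this (the volume and claim names are hypothetical; check the linked values.yaml for how the chart consumes an existing claim):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nifi-conf
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # bind explicitly to a pre-created PersistentVolume
  volumeName: my-existing-nifi-pv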
Nifi keeps flow definitions in the /opt/nifi/nifi-current/conf/flow.xml.gz file in the docker image. It is strongly advised to have the /conf folder persist. There are lots of Kubernetes alternatives for this; check the official documentation.
You can also find archives in the /conf folder for backups.
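As a stop-gap while persistence is not yet enabled, you could also copy the flow definition out of the running pod by hand (the pod name and namespace are hypothetical):
kubectl cp default/nifi-0:/opt/nifi/nifi-current/conf/flow.xml.gz ./flow.xml.gz.backup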

How do I use crossplane to install helm charts (with provider-helm) into other clusters

I'm evaluating crossplane as our go-to tool to deploy our clients' different solutions and have struggled with one issue:
We want to install crossplane into one cluster on GCP (which we create manually) and use that crossplane to provision new clusters on which we can install helm charts and deploy as usual.
The main problem so far is that we haven't figured out how to tell crossplane to install the helm charts into clusters other than its own.
This is what we have tried so far:
The provider-config in the example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: helm-provider
spec:
  credentials:
    source: InjectedIdentity
...which works but installs everything into the same cluster as crossplane.
and the other example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      name: cluster-credentials
      namespace: crossplane-system
      key: kubeconfig
...which required a lot of makefile scripting to generate a kubeconfig for the new cluster, and even with that kubeconfig still gives a lot of errors (it does begin to create something in the new cluster, but it doesn't work all the way; we get errors like "PodUnschedulable Cannot schedule pods: gvisor").
I have only tried crossplane for a couple of days, so I'm aware that I might be approaching this from a completely wrong angle, but I do like the promise of crossplane and its approach compared to Terraform and the like.
So the question is: am I thinking about this completely wrong, or am I missing something obvious?
The second attempt with the kubeconfig feels quite complicated right now (many steps in the correct order to achieve it).
Thanks
As you've noticed, ProviderConfig with InjectedIdentity is for the case where provider-helm installs the helm release into the same cluster.
To deploy to other clusters, provider-helm needs a kubeconfig for the remote cluster, provided as a Kubernetes secret and referenced from the ProviderConfig. So, as long as you've provided a proper kubeconfig for an external cluster that is accessible from your Crossplane cluster (a.k.a. the control plane), provider-helm should be able to deploy the release to the remote cluster.
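For reference, the secret from the example above can be created from a kubeconfig file along these lines (the local file name is a placeholder):
kubectl -n crossplane-system create secret generic cluster-credentials \
  --from-file=kubeconfig=./remote-cluster-kubeconfig.yaml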
So, it looks like you're on the right track regarding configuring provider-helm, and since you observed something getting deployed to the external cluster, you provided a valid kubeconfig, and provider-helm could access and authenticate to the cluster.
The last error you're getting sounds like some incompatibility between your cluster and the release, e.g. the external cluster only allows pods running with gvisor and the application that you want to install with provider-helm is not labeled accordingly.
As a troubleshooting step, you might try installing that helm chart with exactly the same configuration to the external cluster via the helm CLI, using the same kubeconfig you built.
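helm accepts a kubeconfig directly, so that check could look like this (the release and chart names are placeholders):
helm install my-release my-chart --kubeconfig ./remote-cluster-kubeconfig.yaml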
Regarding the inconvenience of building the kubeconfig: provider-helm needs a way to access that external Kubernetes cluster, and a kubeconfig is the most common way to do so. However, if you see an alternative that would make things easier for some common use cases, it could be implemented, and it would be great if you could create a feature request in the repo for it.
Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well (e.g. GKE with provider-gcp), then you can compose a helm ProviderConfig together with a GKE Cluster resource, which would create the appropriate secret and ProviderConfig whenever you create a new cluster. You can check this as an example: https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147
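For completeness, a Release that deploys through the ProviderConfig holding the remote kubeconfig might look roughly like this (the chart details are placeholders; see the provider-helm examples for the exact schema):
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: my-app
spec:
  # deploy via the ProviderConfig that carries the remote cluster's kubeconfig
  providerConfigRef:
    name: default
  forProvider:
    namespace: my-app
    chart:
      name: my-app
      repository: https://charts.example.com
      version: "1.0.0"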

Common config in Kubernetes ConfigMap

Kubernetes already provides a way to manage configuration with ConfigMap.
However, I have a question/problem here.
If I have multiple applications with different needs deployed in Kubernetes, all these deployments might share and access some common config variables. Is it possible for ConfigMap to use a common config variable?
There are two ways to do that.
Kustomize - customization of Kubernetes YAML configurations (developed as a Kubernetes SIG, and since integrated into the kubectl command line). It is currently less mature than Helm charts, though.
https://github.com/kubernetes-sigs/kustomize
Helm chart - the Kubernetes package manager. Its values.yaml can define the values for the same configuration files (in your case, ConfigMaps) with variables.
https://helm.sh/
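Note that even without these tools, plain Kubernetes lets several deployments consume one shared ConfigMap, e.g. via envFrom (all names below are hypothetical):
apiVersion: v1
kind: ConfigMap
metadata:
  name: common-config
data:
  LOG_LEVEL: "info"
  REGION: "eu-west-1"
---
# fragment of each deployment's pod spec: import all keys as env variables
containers:
  - name: app
    image: my-app:1.0
    envFrom:
      - configMapRef:
          name: common-config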

Unable to make elasticsearch as persistent volume in kubernetes cluster

I want to set up Elasticsearch on a Kubernetes cluster using Helm. I can set up Elasticsearch on the cluster without persistence. I am using the helm chart below.
helm install --name elasticsearch incubator/elasticsearch \
  --set master.persistence.enabled=false \
  --set data.persistence.enabled=false \
  --set image.tag=6.4.2 \
  --namespace logging
However, I am not able to use it with persistence. Moreover, I am confused because I am using neither cloud-based storage (aws, gce) nor nfs; I am using local VM storage.
I added a disk to my VM environment and formatted it under ext4. Now I am trying to use it as a persistent disk for my elasticsearch deployment.
I have tried lots of approaches, without much success.
I am happy to provide any data you may need.
I don't believe this chart will support local storage.
Looking at the volumeClaimTemplate, such as in the master-statefulset.yaml, shows that it's missing key parameters for a local volume setup (such as path, nodeAffinity, volumeBindingMode) described here. If you are using a cloud deployment, just use a cloud volume claim. If you have deployed a cluster on-prem or just onto your own computer, then you should fork the chart and adjust the volume claims to meet the requirements for local storage, along the lines of the sketch below.
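For orientation, a local volume setup typically needs a StorageClass with delayed binding plus a PersistentVolume pinned to a node; a rough sketch (the paths, names, and hostname are hypothetical):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-0
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/es-data
  # pin the volume to the node that physically holds the disk
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1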
Either way, in your future posts you should include relevant logs. With Kubernetes errors it's helpful to see all parts of the stack, such as: Kubernetes control plane logs, object events (like the output from describing the volume claim), helm logs, elasticsearch pod logs failing to discover a volume, and so on.

What is the difference between the CoreOS projects kube-prometheus and prometheus operator?

The github repo of Prometheus Operator https://github.com/coreos/prometheus-operator/ project says that
The Prometheus Operator makes the Prometheus configuration Kubernetes native and manages and operates Prometheus and Alertmanager clusters. It is a piece of the puzzle regarding full end-to-end monitoring.
kube-prometheus combines the Prometheus Operator with a collection of manifests to help getting started with monitoring Kubernetes itself and applications running on top of it.
Can someone elaborate on this?
I've always had this exact same question and repeatedly bumped into both, but to be honest, reading the above answer didn't clarify it for me; I needed a short explanation. I found this GitHub issue that made it crystal clear to me:
https://github.com/coreos/prometheus-operator/issues/2619
Quoting nicgirault of GitHub:
At last I realized that prometheus-operator chart was packaging
kube-prometheus stack but it took me around 10 hours playing around to
realize this.
Here's my summarized explanation:
"kube-prometheus" and "Prometheus Operator Helm Chart" both do the same thing:
basically the Ingress / Ingress Controller concept, applied to Metrics / Prometheus Operator.
Both are a means of easily configuring, installing, and managing a huge distributed application (the Kubernetes Prometheus Stack) on Kubernetes.
What is the entire Kube Prometheus Stack, you ask? Prometheus, Grafana, AlertManager, CRDs (Custom Resource Definitions), Prometheus Operator (software bot app), IaC alert rules, IaC Grafana dashboards, and IaC ServiceMonitor CRDs (which auto-generate the Prometheus metric collection configuration and auto hot-import it into the Prometheus server).
(Also, when I say easily configuring, I mean 1,000-10,000++ lines of easy-for-humans-to-understand config that generates and auto-manages 10,000-100,000 lines of machine config, with sensible defaults, monitoring configuration self-service, and distributed configuration sharding with an operator/controller to combine config and generate verbose boilerplate machine-readable config from nice human-readable config.)
If they achieve the same end goal, you might ask what's the difference between them?
https://github.com/coreos/kube-prometheus
https://github.com/helm/charts/tree/master/stable/prometheus-operator
Basically, CoreOS's kube-prometheus deploys the Prometheus Stack using Ksonnet.
Prometheus Operator Helm Chart wraps kube-prometheus / achieves the same end result but with Helm.
So which one to use?
Doesn't matter + they achieve the same end result + shouldn't be crazy difficult to start with 1 and switch to the other.
Helm tends to be faster to learn/develop basic mastery of.
Ksonnet is harder to learn/develop basic mastery of, but:
it's more idempotent (better for CICD automation) (but it's only a difference of 99% idempotent vs 99.99% idempotent.)
has built-in templating, which means that if you have multiple clusters to manage that you want to keep consistent with each other, you can leverage ksonnet's templating to manage multiple instances of the Kube Prometheus Stack (for multiple envs) using a DRY code base with lots of code reuse. (If you only have a few envs and Prometheus doesn't need to change often, it's not completely unreasonable to keep 4 helm values files in sync by hand. I've also seen Jinja2 templating used to template out helm values files, but if you're going to bother with that you may as well just consider ksonnet.)
Kubernetes operators are Kubernetes-specific applications (pods) that configure, manage, and optimize other Kubernetes deployments automatically. They are implemented as custom controllers.
According to the official CoreOS website:
Operators were introduced by CoreOS as a class of software that operates other software, putting operational knowledge collected by humans into software.
The Prometheus Operator provides an easy way to deploy, configure, and monitor your Prometheus instances on a Kubernetes cluster. To do so, it introduces three types of custom resource definitions (CRDs) in Kubernetes:
Prometheus
Alertmanager
ServiceMonitor
Now, with the help of the above CRDs, you can directly create a Prometheus instance by providing kind: Prometheus, and the instance is ready to serve; likewise for the Alertmanager. Without this, you would have to set up the deployment for Prometheus with its image, configuration, and many more things. A minimal example follows below.
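For illustration, a minimal Prometheus custom resource might look roughly like this (the serviceAccountName and selector labels are assumptions taken from the operator's getting-started examples):
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
  namespace: monitoring
spec:
  replicas: 2
  serviceAccountName: prometheus
  # pick up every ServiceMonitor labeled team: frontend
  serviceMonitorSelector:
    matchLabels:
      team: frontend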
The Prometheus Operator serves to make running Prometheus on top of Kubernetes as easy as possible, while preserving Kubernetes-native configuration options.
Now, kube-prometheus implements the Prometheus Operator and provides you with minimal yaml files to create a basic setup of Prometheus, Alertmanager, and Grafana by running a single command:
git clone https://github.com/coreos/prometheus-operator.git
kubectl apply -f prometheus-operator/contrib/kube-prometheus/manifests/
By running the above command in the kube-prometheus directory, you will get a monitoring namespace with an instance of Alertmanager, Prometheus, and Grafana for the UI. This is enough for most basic implementations, and if you need anything more specific for your application, you can add more yamls for the exporters you need; a ServiceMonitor example is sketched below.
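For instance, scraping a custom application usually only takes a ServiceMonitor along these lines (the app name, label, and port name are hypothetical):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics   # must match a named port on the application's Service
      interval: 30s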
Kube-prometheus is more of a contribution to the prometheus-operator project; it implements the Prometheus Operator functionality very well and provides you with a complete monitoring setup for your Kubernetes cluster. You can start with kube-prometheus and extend the functionality of your monitoring setup according to your application from there.
You can learn more about the prometheus-operator here.
As of today, 28-09-2020, this is the way to install Prometheus in a Kubernetes cluster:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#kube-prometheus-stack
According to official documentation, kube-prometheus-stack is a rename of prometheus-operator.
As I understand it, kube-prometheus-stack also comes with preinstalled Grafana dashboards and Prometheus rules.
Note: This chart was formerly named prometheus-operator chart, now
renamed to more clearly reflect that it installs the kube-prometheus
project stack, within which Prometheus Operator is only one component.
Taken from https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack

Architecturally, the container runtime here is Docker.
Container logs are managed by Docker by default, and the default log driver is json-file:
"log-driver": "json-file"
https://docs.docker.com/config/containers/logging/configure/
If the default json-file driver is used to manage container logs, no log rotation is performed by default. For containers that generate a large amount of output, the files stored by the json-file log driver can therefore consume a large amount of disk space and eventually fill the disk.
In that case, save the logs to ES, store them separately, and periodically delete old indices using Curator, run as a scheduled task in Kubernetes (see the sketch below).
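A rough sketch of such a CronJob (the image tag, Elasticsearch host, and Curator flags are assumptions; check the Curator docs for your version):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: es-curator
spec:
  schedule: "0 1 * * *"   # run every night at 01:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: curator
              image: untergeek/curator:5.8.1
              # delete indices older than 7 days
              command:
                - curator_cli
                - --host
                - elasticsearch
                - delete_indices
                - --filter_list
                - '[{"filtertype":"age","source":"creation_date","direction":"older","unit":"days","unit_count":7}]'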
Another solution for disk space is to periodically rotate the json-file logs.
Typically we cap the size and number of log files. The daemon.json settings below allow a maximum of 10 log files, each with a maximum size of 20 MB, so a container keeps at most 200 MB of logs:
"log-driver": "json-file",
"log-opts": {
  "max-size": "20m",
  "max-file": "10"
}
Note: in general, the default Docker logs are placed under
/var/lib/docker/containers/
Kubernetes also saves logs and creates a directory structure that helps you find pod-based logs, so you can find the container logs for each pod running on a node:
/var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/
When a pod is removed, both the Docker logs under /var/lib/docker/containers/ and the Kubernetes logs under /var/log/pods/ are deleted.
For example, if a pod is restarted in production, its logs are deleted whether it stays on the original node or is rescheduled to another node.
Therefore, these logs should be saved in ES for centralized management; in most cases, R&D teams will need them for troubleshooting.