How to make Consul use a manually created PersistentVolumeClaim in Helm - kubernetes

When installing Consul using Helm, the chart expects the cluster to dynamically provision the PersistentVolumes requested by the consul-helm chart. This is the default behavior.
I have a PV and PVC created manually and need this PV to be used by the consul-helm chart. Is it possible to install Consul using Helm so that it uses a manually created PV in Kubernetes?

As @coderanger said:
For this to be directly supported the chart author would have to provide helm variables you could set. Check the docs.
As shown in the GitHub docs, there are no variables to change that.
If you have to change it, you will have to work with consul-statefulset.yaml: the chart dynamically provisions volumes for each StatefulSet pod it creates, via:
volumeMounts
volumeClaimTemplates
Use helm fetch to download the Consul chart files to your local directory:
helm fetch stable/consul --untar
I then found a GitHub answer with a good explanation and example of using one PV & PVC across all replicas of a StatefulSet, so I think it could actually work with the Consul chart.
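To point the chart at a manually created volume, one option is to pre-bind the PV to the claim that the StatefulSet's volumeClaimTemplates will generate. A minimal sketch, assuming the StatefulSet is named consul and its claim template is named data, so the first pod's claim would be data-consul-0 (check the fetched consul-statefulset.yaml for the real names):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: consul-data-0              # hypothetical PV name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""             # empty class keeps dynamic provisioning out of the way
  claimRef:                        # pre-bind this PV to the claim the StatefulSet will create
    namespace: default
    name: data-consul-0
  hostPath:                        # any volume source works; hostPath keeps the sketch self-contained
    path: /mnt/consul-data-0

With the claimRef in place, the PVC generated for the first pod binds to this PV instead of triggering a provisioner.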

Related

How to include a persistent volume claim when using Memgraph on Kubernetes?

I run Memgraph on Kubernetes using the sample service+deployment found in the memgraph/bolt-proxy repo. Unfortunately, that config doesn’t include a persistent volume claim. I'd like to keep Memgraph’s log and snapshots persistent in Kubernetes. How can I do that?
Configure a Pod to Use a PersistentVolume for Storage
This page shows you how to configure a Pod to use a PersistentVolumeClaim for storage.
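In short, the flow those docs walk through is to create a claim and then reference it from the pod spec. A minimal sketch with placeholder names and an assumed Memgraph data path (check the image docs for the real one):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: memgraph-data
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: memgraph
spec:
  containers:
    - name: memgraph
      image: memgraph/memgraph
      volumeMounts:
        - name: data
          mountPath: /var/lib/memgraph   # assumed data/snapshot path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: memgraph-data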
There are Helm charts for Memgraph that do include a PersistentVolumeClaim.
Memgraph itself is deployed as a StatefulSet since it is not stateless. The StatefulSet is then provided with three volumes: two of them are volume claims (lib and log), and the third one is the config.
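A rough sketch of that layout, with hypothetical names, sizes, and mount paths (the actual chart may differ):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: memgraph
spec:
  serviceName: memgraph
  replicas: 1
  selector:
    matchLabels:
      app: memgraph
  template:
    metadata:
      labels:
        app: memgraph
    spec:
      containers:
        - name: memgraph
          image: memgraph/memgraph
          volumeMounts:
            - name: lib                   # data directory and snapshots
              mountPath: /var/lib/memgraph
            - name: log
              mountPath: /var/log/memgraph
            - name: config                # the third, non-claim volume
              mountPath: /etc/memgraph
      volumes:
        - name: config
          configMap:
            name: memgraph-config         # hypothetical ConfigMap name
  volumeClaimTemplates:                   # lib and log become one PVC per pod
    - metadata:
        name: lib
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
    - metadata:
        name: log
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi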

Persist Grafana using Helm Charts

Using the Helm charts, is there a trick to get Grafana to run in persistence mode? I have tried PVCs and StatefulSets; neither will allow the pod to spin up. I have tried starting both with and without a pre-created PV and PVC. Any pointers?
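The Grafana chart exposes persistence settings in its values. A minimal sketch (key names as in the grafana/grafana chart at the time of writing; verify them against your chart version with helm show values):

helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana \
  --set persistence.enabled=true \
  --set persistence.size=10Gi

With persistence.enabled=true the chart creates its own PVC, so a cluster-default StorageClass (or a matching pre-created PV) has to be available for the pod to schedule.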

Installed prometheus-community / helm-charts but I can't get metrics on "default" namespace

I recently learned about helm and how easy it is to deploy the whole prometheus stack for monitoring a Kubernetes cluster, so I decided to try it out on a staging cluster at my work.
I started by creating a dedicated namespace on the cluster for monitoring:
kubectl create namespace monitoring
Then, with helm, I added the prometheus-community repo with:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Next, I installed the chart with the release name prometheus:
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
At this time I didn't pass any custom configuration because I'm still trying it out.
After the install is finished, it all looks good. I can access the prometheus dashboard with:
kubectl port-forward prometheus-prometheus-kube-prometheus-prometheus-0 9090 -n monitoring
There, I see a bunch of pre-defined alerts and rules that are being monitored, but the problem is that I don't quite understand how to create new rules to check the pods in the default namespace, where I actually have my services deployed.
I am looking at http://localhost:9090/graph to play around with the queries, and I can't seem to find any that will give me metrics on my pods in the default namespace.
I am a bit overwhelmed by the amount of information, so I would like to know: what did I miss, or what am I doing wrong here?
The Prometheus Operator includes several Custom Resource Definitions (CRDs), including ServiceMonitor (and PodMonitor). ServiceMonitors are used to tell the Operator which services to monitor.
I'm familiar with the Operator, although not with the Helm deployment, but I suspect you'll want to create ServiceMonitors to generate metrics for your apps in any namespace (including default).
See: https://github.com/prometheus-operator/prometheus-operator#customresourcedefinitions
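For example, a ServiceMonitor for a service in the default namespace might look roughly like this. The app name and port name are placeholders, and with kube-prometheus-stack the release label usually has to match your Helm release name (prometheus above) for the Operator to pick the monitor up:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
  labels:
    release: prometheus        # must match the Helm release name
spec:
  namespaceSelector:
    matchNames:
      - default                # look for Services in the default namespace
  selector:
    matchLabels:
      app: my-app              # must match the labels on your Service
  endpoints:
    - port: metrics            # named port on the Service that exposes /metrics
      interval: 30s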
ServiceMonitors and PodMonitors are CRDs for the Prometheus Operator. When working directly with the Prometheus Helm chart (without the Operator), you have to configure your targets directly in values.yaml by editing the scrape_configs section.
It is more complex to do it that way, so take a deep breath and start by reading this: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
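For reference, a bare scrape_configs entry of the kind those docs describe; the job name and target are placeholders, and exactly where it lands in values.yaml depends on the chart:

scrape_configs:
  - job_name: my-app                              # arbitrary job label
    metrics_path: /metrics
    static_configs:
      - targets: ['my-app.default.svc:8080']      # host:port of your metrics endpoint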

How to install istio mutating webhook and istiod first ahead of other pods in Helm?

I am trying to use Helm 3 to install Kubeflow 1.3 with Istio 1.9 on Kubernetes 1.16. Kubeflow does not provide an official Helm chart, so I figured it out by myself.
But Helm does not guarantee order. Pods of other Deployments and StatefulSets could be up before the Istio mutating webhook and istiod are up. For example, if pod A comes up earlier without an istio-proxy sidecar, and pod B comes up later with one, they cannot communicate with each other.
Are there any simple best practices so that this works as expected each time I deploy? That is to say, how do I make sure my installation with Helm is atomic?
Thank you in advance.
UPDATE:
I tried three ways:
mark resources as pre-install, post-install, etc.
using subcharts
decouple one chart into several charts
I adopted the third. The issue with the first is that Helm hooks are designed for Jobs: a resource can be marked as a hook, but it will not be deleted when running helm uninstall, since a resource cannot hold two helm hooks at the same time (key conflict in the annotations). The issue with the second is that Helm installs subcharts and the parent chart at the same time, and calls the hooks of subcharts and the parent chart at the same time as well.
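With the third approach the install becomes a sequence of ordered releases. A minimal sketch with hypothetical chart paths; --wait makes Helm block until each release's resources are ready, and --atomic rolls a failed release back:

helm install istio-base ./charts/istio-base -n istio-system --wait --atomic
helm install istiod ./charts/istiod -n istio-system --wait --atomic      # webhook and control plane first
helm install kubeflow ./charts/kubeflow -n kubeflow --wait --atomic     # workloads only after injection is ready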
Helm does not guarantee order.
Not completely. Helm collects all of the resources in a given chart and its dependencies, groups them by resource type, and then installs them in the following order:
Namespace
NetworkPolicy
ResourceQuota
LimitRange
PodSecurityPolicy
PodDisruptionBudget
ServiceAccount
Secret
SecretList
ConfigMap
StorageClass
PersistentVolume
PersistentVolumeClaim
CustomResourceDefinition
ClusterRole
ClusterRoleList
ClusterRoleBinding
ClusterRoleBindingList
Role
RoleList
RoleBinding
RoleBindingList
Service
DaemonSet
Pod
ReplicationController
ReplicaSet
Deployment
HorizontalPodAutoscaler
StatefulSet
Job
CronJob
Ingress
APIService
Additionally:
That is to say, how do I make sure my installation with Helm is atomic?
you should know that:
Helm does not wait until all of the resources are running before it exits.
You generally have no control over the order if you are using Helm. You can try to use init containers to hold your pods back until their dependencies are available; you can read more about it here. Another workaround is to add a health check to make sure everything is okay; if not, the pod will restart until it succeeds.
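As an illustration of the init-container workaround, a minimal sketch that blocks a pod until the istiod Service accepts connections. The service address and port 15012 are the usual Istio defaults; adjust them for your install:

spec:
  initContainers:
    - name: wait-for-istiod
      image: busybox:1.36
      command:
        - sh
        - -c
        - |
          # poll istiod's xDS port until it accepts TCP connections
          until nc -z istiod.istio-system.svc.cluster.local 15012; do
            echo waiting for istiod; sleep 5
          done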
See also:
this article about checking your helm deployments.
the question Helm Subchart order of execution in an umbrella chart, with a good explanation
this question
related topic on github

Reapply updated configuration to a statefulset, using Helm

I have a rather peculiar use case. Specifically, prior to the deployment of my StatefulSet I deploy a ConfigMap which contains an environment variable setting (namely RECREATE_DATADIR) that instructs the pod's container to create a new data directory structure on the file system.
However, during the typical lifetime of the container the data structure should NOT be recreated. Hence, right after the pod is successfully running, I change the ConfigMap and reapply it. That way, if the pod ever fails, it won't recreate the data directory structure when it respawns.
How can I achieve this same result using Helm charts?
You can create a Job as part of your Helm chart with the post-install Helm hook. Run it under a service account that has ConfigMap edit permissions, use a kubectl image (bitnami/kubectl, for example), and have it patch the value to false using kubectl commands.
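A minimal sketch of such a hook Job, assuming a ConfigMap named app-config and a pre-existing ServiceAccount with permission to patch ConfigMaps (RBAC omitted for brevity):

apiVersion: batch/v1
kind: Job
metadata:
  name: disable-recreate-datadir
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded   # clean the Job up once it has run
spec:
  template:
    spec:
      serviceAccountName: configmap-editor         # hypothetical SA with edit rights on ConfigMaps
      restartPolicy: Never
      containers:
        - name: patch-configmap
          image: bitnami/kubectl
          command:
            - kubectl
            - patch
            - configmap
            - app-config                           # hypothetical ConfigMap name
            - --type=merge
            - -p
            - '{"data":{"RECREATE_DATADIR":"false"}}'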