I used Helm to install the Prometheus operator and kube-prometheus into my Kubernetes cluster using the following commands:
helm install coreos/prometheus-operator --name prometheus-operator --namespace monitoring --set rbacEnable=false
helm install coreos/kube-prometheus --name kube-prometheus --set global.rbacEnable=false --namespace monitoring
Everything is running fine; however, I want to set up email alerts, and in order to do so I must configure the SMTP settings in the "custom.ini" file according to the Grafana website. I am fairly new to Kubernetes and to using Helm charts, so I have no idea which command I would use to access this file or make updates to it. Is it possible to do so without having to redeploy?
Can anyone provide me with a command to update custom values?
You could pass values through grafana.env to add the SMTP-related settings:
GF_SMTP_ENABLED=true, GF_SMTP_HOST, GF_SMTP_USER, and GF_SMTP_PASSWORD
should do the trick. The prometheus-operator chart relies on the upstream stable/grafana chart (although it still uses the 1.25 version).
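For example, a minimal sketch of an upgrade that injects those variables (the SMTP host, user, and password below are placeholders, and this assumes the chart forwards grafana.env entries as environment variables to the Grafana container):
helm upgrade kube-prometheus coreos/kube-prometheus \
  --namespace monitoring \
  --set global.rbacEnable=false \
  --set grafana.env.GF_SMTP_ENABLED=true \
  --set grafana.env.GF_SMTP_HOST=smtp.example.com:587 \
  --set grafana.env.GF_SMTP_USER=alerts@example.com \
  --set grafana.env.GF_SMTP_PASSWORD=changeme
Since this is an upgrade of the existing release, it reuses what is already deployed rather than requiring a redeploy from scratch.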
I am new to Helm and Kubernetes. I am currently using a list of bash commands to create a local Minikube cluster with many containers installed. In order to alleviate the manual burden, we were thinking of creating an (umbrella) Helm chart to execute the whole list of commands.
Among the commands I would need to run in the chart there are a few (cleanup) kubectl deletes, e.g.:
kubectl delete all,configmap --all -n system --force --grace-period=0
and also some helm installs, e.g.:
helm repo add bitnami https://charts.bitnami.com/bitnami && \
helm install postgres bitnami/postgresql --set postgresqlPassword=test,postgresqlDatabase=test && \
Question 1: is it possible to include kubectl commands in my Helm chart?
Question 2: is it possible to add a dependency on a chart that is only available remotely, e.g. the postgres dependency above?
Question 3: if you think Helm is not the correct tool for doing this, what would you suggest instead?
Thank you
You can't embed imperative kubectl commands in a Helm chart. An installed Helm chart keeps track of a specific set of Kubernetes resources it owns; you can helm delete the release, and that will delete that specific set of things. Similarly, if you have an installed Helm chart, you can helm upgrade it, and the new chart contents will replace the old ones.
For the workflow you describe – you're maintaining a developer environment based on Minikube, and you want to be able to start clean – there are two good approaches to take:
helm delete the release(s) that are already there, which will uninstall their managed Kubernetes resources; or
minikube delete the whole "cluster" (as a single container or VM), and then minikube start a new empty "cluster".
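For example, a minimal reset sketch (the release name postgres is just the one from the commands above; adjust to whatever releases you actually installed):
# Option 1: uninstall only the Helm-managed resources of a release
helm delete postgres
# Option 2: throw away the whole local cluster and start fresh
minikube delete
minikube start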
I followed a bunch of tutorials on how to monitor Kubernetes with Prometheus and Grafana,
all referring to a deprecated Helm chart.
According to the tutorials, Grafana comes out of the box, complete with cluster monitoring.
In practice, Grafana is not installed with the chart
helm install prometheus-operator stable/prometheus -n monitor
nor is it installed with the newer community repo
helm install prometheus-operator prometheus-community/prometheus -n monitor
I installed the Grafana chart independently:
helm install grafana-operator grafana/grafana -n monitor
Through the UI, I then tried to connect using the in-cluster URLs:
prometheus-operator-server.monitor.svc.cluster.local:80
prometheus-operator-alertmanager.monitor.svc.cluster.local:80
The UI test indicates success, but no metrics show up.
Is there a ready-made Helm chart with Grafana out of the box?
How can Grafana interact with Prometheus?
You've used the wrong charts. Currently the project is named kube-prometheus-stack:
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
If you look at values.yaml you'll notice switches for everything, including Prometheus, all the exporters, Grafana, all the standard dashboards, alerts for Kubernetes, and so on. It's all installed by one chart, and it's all linked together out of the box.
The only additional thing you might need is an Ingress/ELB for Grafana, Prometheus, and Alertmanager so you can open them without port-forwarding (don't forget to add oauth2-proxy or something similar, because everything is open with no password by default).
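For example, a minimal install sketch (the release name and namespace are placeholders; the Grafana service name below follows the chart's usual <release>-grafana naming, so double-check it in your cluster):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitor
# Grafana is enabled by default and already wired to the bundled Prometheus as a data source;
# without an Ingress, you can reach it via port-forwarding:
kubectl port-forward svc/kube-prometheus-stack-grafana -n monitor 3000:80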
I wouldn't bother; look at a PaaS like Datadog, New Relic, etc. What you are describing becomes a costly nightmare at scale. It's just not worth the hassle for what you get, imho.
While trying to install IBM MQ on Google Kubernetes Engine (GKE) using Helm charts, I got an error as shown in the figure above. Can anyone help me out with this?
Infrastructure: Google Cloud Platform
Kubectl version:
Client Version: v1.18.6
Server Version: v1.16.13-gke.1.
Helm version: v3.2.1+gfe51cd1
Helm chart:
helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/
Helm command:
$ helm install mqa ibm-charts/ibm-mqadvanced-server-dev --version 4.0.0 --set license=accept --set service.type=LoadBalancer --set queueManager.dev.secret.name=mysecret --set queueManager.dev.secret.adminPasswordKey=adminPassword --set security.initVolumeAsRoot=true
First, it appears it's not installing the right version of the Helm chart. You can follow the official installation instructions for the Chart.
Secondly, the messages are inconsistent. The error shows a GKE v1.15.12-gke.2 and also a GKE v1.16.13-gke.1. So I would make sure your client K8s context is pointing to the right cluster.
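For example, a quick way to check which cluster your client is pointing at (the context and cluster names below are placeholders):
# list the contexts kubectl knows about and see which one is active
kubectl config get-contexts
# switch to the intended GKE cluster, or refresh its credentials
kubectl config use-context gke_my-project_us-central1-a_my-cluster
gcloud container clusters get-credentials my-cluster --zone us-central1-a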
It also appears that the kubectl versions do not match.
For example, you can download the v1.16.13 client so that it matches (assuming your client is on Linux):
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.13/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ ./kubectl version
✌️
IBM has provided a new sample MQ Helm chart here. Included are a number of samples for different Kubernetes distributions, and the GKE one can be found here. It is worth highlighting that this sample deploys IBM MQ in its cloud-native high-availability topology, called NativeHA.
Is there any command that can be used to apply new changes? Because when I apply new changes with:
istioctl manifest apply --set XXX.XXXX=true
it overwrites the current values and sets them back to their defaults.
That might not work because you have used istioctl manifest apply, which is deprecated; it has been istioctl install since Istio 1.6.
Quoting from the documentation:
Note that istioctl install and istioctl manifest apply are exactly the same command. In Istio 1.6, the simpler install command replaces manifest apply, which is deprecated and will be removed in 1.7.
AFAIK there are two ways to apply new changes in Istio:
istioctl install
To enable the Grafana dashboard on top of the default profile, set the addonComponents.grafana.enabled configuration parameter with the following command:
$ istioctl install --set addonComponents.grafana.enabled=true
In general, you can use the --set flag in istioctl as you would with Helm. The only difference is you must prefix the setting paths with values. because this is the path to the Helm pass-through API in the IstioOperator API.
Istio Operator
In addition to installing any of Istio’s built-in configuration profiles, istioctl install provides a complete API for customizing the configuration.
The IstioOperator API
The configuration parameters in this API can be set individually using --set options on the command line. For example, to enable the control plane security feature in a default configuration profile, use this command:
$ istioctl install --set values.global.controlPlaneSecurityEnabled=true
Alternatively, the IstioOperator configuration can be specified in a YAML file and passed to istioctl using the -f option:
$ istioctl install -f samples/operator/pilot-k8s.yaml
For backwards compatibility, the previous Helm installation options, with the exception of Kubernetes resource settings, are also fully supported. To set them on the command line, prepend the option name with “values.”. For example, the following command overrides the pilot.traceSampling Helm configuration option:
$ istioctl install --set values.pilot.traceSampling=0.1
Helm values can also be set in an IstioOperator CR (YAML file) as described in Customize Istio settings using the Helm API, below.
If you want to set Kubernetes resource settings, use the IstioOperator API as described in Customize Kubernetes settings.
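A minimal sketch of such an overlay (the file name and the resource/sampling values are illustrative only), combining a Kubernetes resource setting with a Helm pass-through value:
# write a hypothetical IstioOperator overlay and install with it
cat > pilot-overrides.yaml <<'EOF'
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        resources:
          requests:
            cpu: 500m
            memory: 2048Mi
  values:
    pilot:
      traceSampling: 0.1
EOF
istioctl install -f pilot-overrides.yaml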
Related documentation and examples for the Istio Operator:
https://istio.io/latest/docs/setup/install/istioctl/#customizing-the-configuration
https://istio.io/latest/docs/setup/install/standalone-operator/#update
https://stackoverflow.com/a/61865633/11977760
https://github.com/istio/operator/blob/master/samples/pilot-advanced-override.yaml
The way I've managed to upgrade is this:
run istioctl upgrade to upgrade the control plane in place
apply your custom configuration over the upgraded control plane, as sketched below
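Roughly (the --set value is just an example of a customization you would re-apply; substitute your own overrides):
istioctl upgrade
istioctl install --set values.pilot.traceSampling=0.1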
It's not ideal by far, but Istio does a relatively bad job of dealing with customizations.
I am using the https://github.com/helm/charts/tree/master/stable/cert-manager chart.
It happens that I get very frequent INFO logging, and I have not yet been able to find the source of it. The problem is that the Google Cloud Platform Stackdriver feature is increasing in cost because of that high volume of logs.
Therefore, I'd love to know how I can turn down INFO logging via the Helm chart for cert-manager.
I noticed that the Helm chart for cert-manager from the community charts has been deprecated. The suggested official alternative has supported a config option to specify the log level since release v0.7.2; see pull request jetstack/cert-manager#1527.
So please use the official chart, like this:
$ helm repo add jetstack https://charts.jetstack.io
$ # Install the cert-manager Helm chart
$ helm install --name my-release --namespace cert-manager \
    jetstack/cert-manager --set global.logLevel=1