How to deploy Prometheus using the Prometheus Operator? - kubernetes

I'm trying to deploy Prometheus using the Prometheus Operator. I have used the documentation and Helm charts from https://github.com/prometheus-operator/prometheus-operator.
Since I need the charts for future reference, rather than directly installing the charts from the repository, I made a Chart.yaml file and added the repository as a dependency.
apiVersion: v2
description: kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.
icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png
engine: gotpl
type: application
maintainers:
  - name:
    email:
name: kube-prometheus-stack
sources:
  - https://github.com/prometheus-community/helm-charts
  - https://github.com/prometheus-operator/kube-prometheus
version: 32.2.1
appVersion: 0.54.0
kubeVersion: ">=1.16.0-0"
home: https://github.com/prometheus-operator/kube-prometheus
keywords:
  - operator
  - prometheus
  - kube-prometheus
annotations:
  "artifacthub.io/operator": "true"
  "artifacthub.io/links": |
    - name: Chart Source
      url: https://github.com/prometheus-community/helm-charts
    - name: Upstream Project
      url: https://github.com/prometheus-operator/kube-prometheus
dependencies:
  - name: kube-state-metrics
    version: "4.4.*"
    repository: https://prometheus-community.github.io/helm-charts
    condition: kubeStateMetrics.enabled
  - name: prometheus-node-exporter
    version: "2.5.*"
    repository: https://prometheus-community.github.io/helm-charts
    condition: nodeExporter.enabled
  - name: grafana
    version: "6.21.*"
    repository: https://grafana.github.io/helm-charts
    condition: grafana.enabled
Chart.yaml file
Then I execute the following commands:
helm dependency update
helm install <chartname> .
Everything works fine, but when I check the pods, only the operator pod is created and running, along with the other services and Grafana.
Is this the default behavior of the Prometheus Operator?
I thought it might be the default behavior of Prometheus, so I tried to deploy redis-cluster using the redis-cluster operator and also rabbitmq-cluster with the rabbitmq-cluster operator, but each one creates only the operator pod and not the cluster pods.

An operator pod acts as a controller that listens to events regarding specific custom resources. If you only deploy the operator, you have to separately deploy the custom resource you wish to be created.
With the prometheus-operator, that would be a custom resource of kind "Prometheus". Whether the Helm chart you chose can also deploy this should be indicated
in the chart's values.yaml and documented on its GitHub page.
You can also use the examples from the prometheus-operator repo to create Prometheus instances. Check out these files to do so: https://github.com/prometheus-operator/prometheus-operator/tree/main/example/rbac/prometheus
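To make that concrete, here is a minimal sketch of such a custom resource, loosely modeled on the linked examples (the namespace and serviceAccountName are assumptions, and the RBAC objects from the example directory must exist first):
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: default  # assumption: same namespace as the operator
spec:
  replicas: 1
  serviceAccountName: prometheus    # assumes the ServiceAccount/RBAC from the linked example
  serviceMonitorSelector: {}        # empty selector: pick up every ServiceMonitor
  resources:
    requests:
      memory: 400Mi
Once the operator sees this resource, it creates the actual Prometheus StatefulSet and its pods.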

Related

helm dependencies with different namespaces

Right now, I have to install multiple Helm charts in different namespaces for my product to work. I am trying to create a super Helm chart in which I plan to add the Helm charts of my tools (as mentioned above) and install them in one shot. My problem is that, as these tools are in different namespaces, I am not sure where to specify the namespace key indicating where I want a particular dependency (chart) to be installed. For example, suppose the following is the Chart.yaml of my super Helm chart:
dependencies:
  - name: first_chart
    version: 1.2.3
    repository: https://firstchart.repo
  - name: second_chart
    version: 1.5.6
    repository: https://secondchart.repo
I want my first chart to be installed in namespace foo and the second chart to be installed in namespace bar.
I was looking at using conditions, but I believe conditions only take a boolean value.
I stumbled upon this link (https://github.com/helm/helm/issues/2060), which says we can do it in Helm 3, but it is mostly about how to keep releases in different namespaces. It does not specifically answer my question.
There is no built-in way to do this with pure Helm, but there is with helmfile.
Your example as helmfile.yaml:
releases:
  - name: chart1 # name of the release (helm install <...> first_chart)
    chart: repo1/first_chart
    version: 1.2.3
    namespace: foo
  - name: chart2
    chart: repo2/second_chart
    version: 1.5.6
    namespace: bar

# in case you want helmfile to automatically update repos
repositories:
  - name: repo1
    url: https://firstchart.repo
  - name: repo2
    url: https://secondchart.repo
Then, run:
helmfile sync => run helm install/upgrade on all releases, or
helmfile apply => same as sync, but do a diff first to only upgrade/install releases that changed
There is way more to helmfile, but this is the gist.
PS: if you struggle with values or want to have something similar to how umbrella Chart values are handled, have a look at helmfile: a simple trick to handle values intuitively
The way I solved this for my clusters is with ArgoCD's App of Apps cluster bootstrapping model. Of course, it requires that ArgoCD is installed in the cluster. However, for many reasons not relevant to this answer, I would highly encourage installing ArgoCD regardless of the ease of the bootstrapping capabilities.
Assuming ArgoCD is in place, the structure is a single Helm chart containing templates for each of the child charts it will deploy, each managed via Argo's Application CRD. You will notice there is a field in the CRD, spec.destination.namespace, which governs where the chart will be deployed.
An example Application template which governs my cert-manager chart deployment to the cert-manager namespace looks like:
{{- if .Values.certManager.enabled }}
# ref: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  # The project the application belongs to.
  project: cluster-configs
  # Source of the application manifests
  source:
    repoURL: https://github.com/yourOrg/Helm
    targetRevision: {{ .Values.targetRevision }}
    path: charts/cert-manager-chart
    # helm specific config
    helm:
      # Helm values files for overriding values in the helm chart.
      # The path is relative to the spec.source.path directory defined above.
      valueFiles:
      {{- range .Values.certManager.valueFiles }}
        - {{ . }}
      {{- end }}
      # Optional Helm version to template with. If omitted it will fall back to looking at the 'apiVersion' in Chart.yaml
      # and decide which Helm binary to use automatically. This field can be either 'v2' or 'v3'.
      version: v3
  # Destination cluster and namespace to deploy the application
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
{{- end }}
The corresponding values.yaml file for this parent chart may look something like the following, with the path to the desired value file(s) in each child chart's directory specified:
targetRevision: v1.11.0
certManager:
  enabled: true
  valueFiles:
    - "values.yaml"
clusterAutoScaler:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
clusterResourceLimits:
  valueFiles:
    - "values.yaml"
externalDns:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"
ingressNginx:
  enabled: true
  valueFiles:
    - "values.yaml"

How to choose dependency release name for custom Helm 3 chart

The syntax for adding a dependency to a Helm 3 chart looks like this (inside of Chart.yaml).
How can you specify a release name if you need multiple instances of a dependency?
apiVersion: v2
name: shared
description: Ingress Controller and Certificate Manager
type: application
version: 0.1.1
appVersion: 0.1.0
dependencies:
  - name: cert-manager
    version: ~0.13
    repository: https://charts.jetstack.io
In the CLI it's just helm upgrade -i RELEASE_NAME CHART_NAME -n NAMESPACE
But inside of Chart.yaml, the option to specify a release name seems to be missing.
The next question I have is: if there's a weird way to do it, how would you write the values for each instance in the values.yaml file?
After 5 more minutes of searching, I found that there's an alias field that can be added, like so:
dependencies:
  - name: cert-manager
    alias: first-one
    version: ~0.13
    repository: https://charts.jetstack.io
  - name: cert-manager
    alias: second-one
    version: ~0.13
    repository: https://charts.jetstack.io
And in the values.yaml file
first-one:
  # values go here
second-one:
  # values go here
Reference: https://helm.sh/docs/topics/charts/#the-chartyaml-file
Using cert-manager is just an example; I can't think of a use case that would need two instances of that particular chart. I'm hoping to use it for Brigade projects.

prometheus operator - enable monitoring for everything in all namespaces

I want to monitor a couple applications running on a Kubernetes cluster in namespaces named development and production through prometheus-operator.
The installation command used (as per GitHub) is:
helm install prometheus-operator stable/prometheus-operator -n production --set prometheusOperator.enabled=true,prometheus.service.type=NodePort,prometheusOperator.service.type=NodePort,alertmanager.service.type=NodePort,grafana.service.type=NodePort,grafana.service.nodePort=30906
What parameters do I need to add to the above command to have prometheus-operator discover and monitor all apps/services/pods running in all namespaces?
With this, Service Discovery only shows some prometheus-operator-related services, but not the app that I am running within the production namespace, even though prometheus-operator is installed in the same namespace.
Anything I am missing?
Note - I am performing all actions as the same user (which uses the $HOME/.kube/config file), so I assume permissions are not an issue.
kubectl version - v1.17.3
helm version - 3.1.2
P.S. There are numerous articles on this on different forums, but I am still not finding simple and direct answers for this.
I had the same problem. After some investigation, I am answering with more details.
I've installed the Prometheus stack via Helm charts, which include the Prometheus operator chart directly as a sub-project. The Prometheus operator monitors the namespaces specified by the following Helm values:
prometheusOperator:
  namespaces: ''
  denyNamespaces: ''
  prometheusInstanceNamespaces: ''
  alertmanagerInstanceNamespaces: ''
  thanosRulerInstanceNamespaces: ''
The namespaces value specifies the monitored namespaces for the ServiceMonitor and PodMonitor CRDs. Other CRDs have their own settings, which, if not set, default to namespaces. Helm values are passed as command-line arguments to the operator. See here and here.
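As an illustration, a hedged sketch of scoping the operator to just the development and production namespaces from the question (whether namespaces takes a comma-separated string like this depends on the chart version, so treat the exact format as an assumption):
prometheusOperator:
  namespaces: 'development,production'  # assumption: maps to the operator's --namespaces flag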
Prometheus CRDs are picked up by the operator from the mentioned namespaces; by default, everywhere. However, as the operator is designed with multiple simultaneous Prometheus releases in mind, what a particular Prometheus app instance picks up is controlled by the corresponding Prometheus CRD. The CRD's selectors and the corresponding namespace selectors are controlled via the following Helm values:
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: true
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector: {}
Similar values are present for other CRDs: alertmanagerConfigXXX, ruleNamespaceXXX, podMonitorXXX, probeXXX. Setting XXXSelectorNilUsesHelmValues to true means looking only for CRDs with the particular release label, e.g. release=myrelease. See here.
An empty selector (for a namespace, CRD, or any other object) means no filtering. So for a Prometheus object to pick up ServiceMonitors from other namespaces, there are a few options (a sketch follows this list):
Set serviceMonitorSelectorNilUsesHelmValues: false. This leaves serviceMonitorSelector empty.
Apply the release label, e.g. release=myrelease, to your ServiceMonitor CRD.
Set a non-empty serviceMonitorSelector that matches your ServiceMonitor.
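A minimal sketch of the first option as a Helm values override (the file name custom-values.yaml is hypothetical; both keys appear in the chart values shown above):
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false  # empty selector: match every ServiceMonitor
    serviceMonitorNamespaceSelector: {}             # empty namespace selector: look in all namespaces
Applied with something like: helm upgrade prometheus-operator stable/prometheus-operator -n monitoring -f custom-values.yaml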
For the curious ones, here are links to the operator sources:
Enqueue of Prometheus CRD processing
Processing of Prometheus CRD
I used the values.yaml from https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml, modified the *NilUsesHelmValues parameters to false, and it seems to work fine with that.
helm install prometheus-operator stable/prometheus-operator -n monitoring -f values.yaml
Also, as https://stackoverflow.com/users/7889479/anish-kumar-mourya stated, the services do show in the Grafana dashboard even though they don't appear in the Prometheus UI under Service Discovery or Targets.
Hope this helps other newbies like me.
No, it's fine, but you can create a new namespace for monitoring and install Prometheus there; that makes it easier to manage everything related to monitoring.
helm install prometheus-operator stable/prometheus-operator -n monitoring
You need to create a Service for the pod and a ServiceMonitor custom resource to configure which services in which namespaces need to be discovered by Prometheus.
kube-state-metrics Service example
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kube-state-metrics
    k8s-app: kube-state-metrics
  annotations:
    alpha.monitoring.coreos.com/non-namespaced: "true"
  name: kube-state-metrics
spec:
  ports:
    - name: http-metrics
      port: 8080
      targetPort: metrics
      protocol: TCP
  selector:
    app: kube-state-metrics
This Service targets all Pods with the label app: kube-state-metrics (see spec.selector).
Generic ServiceMonitor example
This ServiceMonitor targets all Services that have the label k8s-app (spec.selector), whatever its value, in the namespaces kube-system and monitoring (spec.namespaceSelector).
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: k8s-apps-http
  labels:
    k8s-apps: http
spec:
  jobLabel: k8s-app
  selector:
    matchExpressions:
      - {key: k8s-app, operator: Exists}
  namespaceSelector:
    matchNames:
      - kube-system
      - monitoring
  endpoints:
    - port: http-metrics
      interval: 15s
https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/running-exporters.md

GitOps (Flux) install of standard Jenkins Helm chart in Kubernetes via HelmRelease operator

I've just started working with Weaveworks' Flux GitOps system in Kubernetes. I have regular deployments (deployments, services, volumes, etc.) working fine. I'm trying for the first time to deploy a Helm chart.
I've followed the instructions in this tutorial: https://github.com/fluxcd/helm-operator-get-started and have its sample service working after making a few small changes. So I believe that I have all the right tooling in place, including the custom HelmRelease K8s operator.
I want to deploy Jenkins via Helm, which if I do manually is as simple as this Helm command:
helm install --set persistence.existingClaim=jenkins --set master.serviceType=LoadBalancer jenkins stable/jenkins
I want to convert this to a HelmRelease object in my Flux-managed GitHub repo. Here's what I've got, per what documentation I can find:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations:
    fluxcd.io/ignore: "false"
spec:
  releaseName: jenkins
  chart:
    git: https://github.com/helm/charts/tree/master
    path: stable/jenkins
    ref: master
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
I have this in the file jenkins/jenkins.yaml relative to the root of the location in my Git repo that Flux is monitoring. Adding this file does nothing: I get no new K8s objects, no HelmRelease object, and no new Helm release when I run helm list -n jenkins.
I see some mention of having to have 'image' tags in my 'values' section, but since I don't need to specify any images in my manual call to Helm, I'm not sure what I would add in terms of 'image' tags. I've also seen examples of HelmRelease definitions that don't have 'image' tags, so it seems they aren't absolutely necessary.
I've played around with adding a few annotations to my 'metadata' section:
annotations:
  # fluxcd.io/automated: "true"
  # per: https://blog.baeke.info/2019/10/10/gitops-with-weaveworks-flux-installing-and-updating-applications/
  fluxcd.io/ignore: "false"
But none of that has helped get things rolling. Can anyone tell me what I have to do to get the equivalent of the simple Helm command at the top of this post to work with Flux/GitOps?
Have you tried checking the logs on the fluxd and flux-helm-operator pods (log commands are sketched at the end of this answer)? I would start there to see what error message you're getting. One thing that I'm seeing is that you're using HTTPS for Git. You may want to double-check, but I don't recall ever seeing any documentation configuring chart pulls via Git to use anything other than SSH. Moreover, I'd recommend just pulling that chart from the stable Helm repository anyhow:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations: # not sure what updating-applications/ was?
    fluxcd.io/ignore: "false" # pretty sure this is false by default and can be omitted
spec:
  releaseName: jenkins
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: jenkins
    version: 1.9.16
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
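As for checking the logs suggested at the start of this answer, a sketch (the flux namespace and deployment names are assumptions; they depend on how Flux was installed in your cluster):
# tail the Flux daemon and the Helm operator for errors
kubectl logs -n flux deploy/flux --tail=100
kubectl logs -n flux deploy/flux-helm-operator --tail=100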

How to set a different namespace for child helm charts?

When you install a chart with a child chart that doesn't specify a namespace, Helm will use the one specified on the command line via --namespace. Is it possible to override this flag for a specific child chart?
For example if I have chart A which depends on chart B and I specify --namespace foo, I want to be able to customize the resources of chart B to be installed into some other namespace bar instead of foo.
Update 2:
Helm 3 added support for multiple namespaces: https://github.com/helm/helm/issues/2060
Update 1:
If a resource template specifies a metadata.namespace, then it will be installed in that namespace. For example, if I have a pod with metadata.namespace: x and I run helm install mychart --namespace y, that pod will be installed in x. I guess you could use regular Helm templates to parameterize the namespace, as in the sketch below.
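A minimal sketch of that idea, assuming a hypothetical targetNamespace value in the child chart (not a standard Helm value):
# templates/configmap.yaml in the child chart (illustrative example)
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  # an explicit metadata.namespace wins over the --namespace flag for this resource
  namespace: {{ .Values.targetNamespace | default "bar" }}
data:
  key: value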
Original answer:
We do not plan on fully supporting multi-namespaced releases until Helm 3.0
https://github.com/kubernetes/helm/issues/2060#issuecomment-306847365
As a workaround, you can install for each namespace individually, using --skip-dependencies or dependency conditions; a sketch of the conditions approach follows.
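Reusing the hypothetical first_chart/second_chart dependencies from the question, and assuming each declares a condition such as first_chart.enabled in Chart.yaml, the sketch would be:
# one release per namespace, toggling each dependency on or off
helm install release-foo . -n foo --set first_chart.enabled=true --set second_chart.enabled=false
helm install release-bar . -n bar --set first_chart.enabled=false --set second_chart.enabled=true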
If you already have different charts then you can use helmfile to achieve this.
Step 1: create the following folder structure:
my-awesome-infrastructure/
  helm
  helmfile
  helmfile.yaml
where helm and helmfile are the binary executables.
Step 2: install the helm-diff plugin, which is needed by helmfile.
helm plugin install https://github.com/databus23/helm-diff
Step 3: declare your charts in the helmfile.yaml.
helmBinary: ./helm
repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx
  - name: bitnami
    url: https://charts.bitnami.com/bitnami
releases:
  - name: nginx-ingress
    namespace: nginx-ingress
    createNamespace: true
    chart: ingress-nginx/ingress-nginx
    version: ~4.1.0
  - name: jupyterhub
    namespace: jupyterhub
    createNamespace: true
    chart: bitnami/jupyterhub
    version: ~1.1.12
  - name: metrics-server
    namespace: metrics-server
    createNamespace: true
    chart: bitnami/metrics-server
    version: ~5.11.9
Step 4: run helmfile to deploy all charts:
./helmfile apply
In the above example, you are deploying three separate charts to three separate namespaces.
Under the covers, helmfile runs helm install separately for each release and creates separate releases.
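Roughly, the manual equivalent of what helmfile does here would look like this sketch (helm install accepts SemVer constraints via --version, and --create-namespace mirrors createNamespace: true):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install nginx-ingress ingress-nginx/ingress-nginx -n nginx-ingress --create-namespace --version "~4.1.0"
helm install jupyterhub bitnami/jupyterhub -n jupyterhub --create-namespace --version "~1.1.12"
helm install metrics-server bitnami/metrics-server -n metrics-server --create-namespace --version "~5.11.9"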