How to add additional scrape config to Prometheus - kubernetes

I want to add an additional scrape config to Prometheus. I have followed the method below:
https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md
First, I created a file prometheus-additional.yaml and added the new config:
- job_name: "prometheus"
  static_configs:
    - targets: ["localhost:9090"]
Second, I created a secret out of it:
kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml --dry-run -oyaml > additional-scrape-configs.yaml
Then I created the secret in the cluster using the command below:
kubectl apply -f additional-scrape-configs.yaml -n monitoring
Then in the above link it says
"Finally, reference this additional configuration in your prometheus.yaml CRD."
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  labels:
    prometheus: prometheus
spec:
  replicas: 2
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml
Where can I find the above? Do I need to create a new CRD? Can't I update the existing running deployment?

This is somewhat wrong in the documentation; you have to use additionalScrapeConfigsSecret:
additionalScrapeConfigsSecret:
  enabled: true
  name: additional-scrape-configs
  key: prometheus-additional.yaml
Otherwise you get the error: cannot unmarshal !!map into []yaml.MapSlice
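For reference, in the kube-prometheus-stack Helm chart these keys sit under prometheus.prometheusSpec; a rough sketch (double-check the exact nesting against the linked values.yaml):
prometheus:
  prometheusSpec:
    additionalScrapeConfigsSecret:
      enabled: true
      name: additional-scrape-configs
      key: prometheus-additional.yaml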
Here is better documentation:
https://github.com/prometheus-community/helm-charts/blob/8b45bdbdabd9b54766c4beb3c562b766b268a034/charts/kube-prometheus-stack/values.yaml#L2691
According to this, you could add scrape configs without packaging into a secret like this:
additionalScrapeConfigs: |
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
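To apply such values, pass them to the chart on install or upgrade; a minimal sketch, where the release name prometheus-operator and the monitoring namespace are assumptions:
helm upgrade --install prometheus-operator prometheus-community/kube-prometheus-stack -n monitoring -f values.yaml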

That document applies to prometheus-operator. If you have deployed it, you should already have a Prometheus custom resource:
kubectl get prometheus -n monitoring
Then you can edit the Prometheus resource exactly as stated above: add the additionalScrapeConfigs key to the spec (after creating the secret).
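A minimal sketch (the resource name "prometheus" is an assumption; use whatever kubectl get prometheus -n monitoring returns):
kubectl -n monitoring edit prometheus prometheus
and add under spec:
additionalScrapeConfigs:
  name: additional-scrape-configs
  key: prometheus-additional.yaml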
After editing, the new scrape configs will be reloaded and applied automatically (by the config-reloader sidecar).

What is the role of ControllerManagerConfig generated in kubebuilder

I use kubebuilder to quickly develop a k8s operator, and I save the YAML generated by kustomize to a file in the following way:
create: manifests kustomize ## Create chart
	cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
	$(KUSTOMIZE) build config/default --output yamls
I found a configmap, but it is not referenced by other resources.
apiVersion: v1
data:
  controller_manager_config.yaml: |
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: :8081
    metrics:
      bindAddress: 127.0.0.1:8080
    webhook:
      port: 9443
    leaderElection:
      leaderElect: true
      resourceName: 31568e44.ys7.com
kind: ConfigMap
metadata:
  name: myoperator-manager-config
  namespace: myoperator-system
I am a little curious about what it does. Can I delete it?
I really appreciate any help with this.
It is another way to provide the controller configuration. Check https://book.kubebuilder.io/component-config-tutorial/custom-type.html .
However, for it to take effect it has to be mounted into your deployment and the file path has to be passed via the --config flag (the name of the flag can differ in your case, but "config" is used in the tutorial - it depends on your code). See the sketch below.
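A minimal sketch of how that mount typically looks, assuming the default kubebuilder layout (the container and volume names are illustrative, not taken from your project):
# Deployment spec excerpt, not a complete manifest
spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        - --config=/controller_manager_config.yaml
        volumeMounts:
        - name: manager-config
          mountPath: /controller_manager_config.yaml
          subPath: controller_manager_config.yaml
      volumes:
      - name: manager-config
        configMap:
          name: myoperator-manager-config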

Azure Kubernetes - prometheus is deployed as a part of ISTIO not showing the deployments?

I have used the following configuration to set up Istio:
cat << EOF | kubectl apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  # Use the default profile as the base
  # More details at: https://istio.io/docs/setup/additional-setup/config-profiles/
  profile: default
  # Enable the addons that we will want to use
  addonComponents:
    grafana:
      enabled: true
    prometheus:
      enabled: true
    tracing:
      enabled: true
    kiali:
      enabled: true
  values:
    global:
      # Ensure that the Istio pods are only scheduled to run on Linux nodes
      defaultNodeSelector:
        beta.kubernetes.io/os: linux
    kiali:
      dashboard:
        auth:
          strategy: anonymous
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF
and exposed the prometheus service as mentioned below
kubectl expose service prometheus --type=LoadBalancer --name=prometheus-svc --namespace istio-system
kubectl get svc prometheus-svc -n istio-system -o json
export PROMETHEUS_URL=$(kubectl get svc prometheus-svc -n istio-system -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"):$(kubectl get svc prometheus-svc -n istio-system -o 'jsonpath={.spec.ports[0].port}')
echo http://${PROMETHEUS_URL}
curl http://${PROMETHEUS_URL}
I have deployed an application, however I couldn't see its deployments in Prometheus.
The standard Prometheus installation by Istio does not configure your pods to send metrics to Prometheus. It just collects data from the Istio resources.
To have your pods scraped, add the following annotations to the deployment.yml of your application:
apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"  # determines if a pod should be scraped. Set to true to enable scraping.
        prometheus.io/path: /metrics  # determines the path to scrape metrics at. Defaults to /metrics.
        prometheus.io/port: "80"      # determines the port to scrape metrics at. Defaults to 80.
[...]
By the way: The prometheus instance installed with istioctl should not be used for production. From docs:
[...] pass --set values.prometheus.enabled=true during installation. This built-in deployment of Prometheus is intended for new users to help them quickly getting started. However, it does not offer advanced customization, like persistence or authentication and as such should not be considered production ready.
You should set up your own Prometheus and configure Istio to report to it. See:
Reference: https://istio.io/latest/docs/ops/integrations/prometheus/#option-1-metrics-merging
The following YAML provided by Istio can be used as a reference for setting up Prometheus:
https://raw.githubusercontent.com/istio/istio/release-1.7/samples/addons/prometheus.yaml
Furthermore, if I remember correctly, installation of addons like Kiali, Prometheus, ... with istioctl will be removed in Istio 1.8 (release date December 2020). So you might want to set up your own instances with Helm anyway.
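As a minimal sketch of the Helm route (the chart, release name, and namespace here are assumptions, not the only valid choice):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/prometheus -n istio-system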

prometheus operator - enable monitoring for everything in all namespaces

I want to monitor a couple applications running on a Kubernetes cluster in namespaces named development and production through prometheus-operator.
Installation command used (as per Github) is:
helm install prometheus-operator stable/prometheus-operator -n production --set prometheusOperator.enabled=true,prometheus.service.type=NodePort,prometheusOperator.service.type=NodePort,alertmanager.service.type=NodePort,grafana.service.type=NodePort,grafana.service.nodePort=30906
What parameters do I need to add to above command to have prometheus-operator discover and monitor all apps/services/pods running in all namespaces?
With this, Service Discovery only shows some prometheus-operator related services, but not the app that I am running within the 'production' namespace, even though prometheus-operator is installed in the same namespace.
Anything I am missing?
Note - I am performing all actions using the same user (which uses the $HOME/.kube/config file), so I assume permissions are not an issue.
kubectl version - v1.17.3
helm version - 3.1.2
P.S. There are numerous articles on this on different forums, but I am still not finding a simple and direct answer for this.
I had the same problem. After some investigation, I am answering with more details.
I've installed the Prometheus stack via Helm charts, which include the Prometheus operator chart directly as a sub-project. The Prometheus operator monitors namespaces specified by the following Helm values:
prometheusOperator:
  namespaces: ''
  denyNamespaces: ''
  prometheusInstanceNamespaces: ''
  alertmanagerInstanceNamespaces: ''
  thanosRulerInstanceNamespaces: ''
The namespaces value specifies monitored namespaces for ServiceMonitor and PodMonitor CRDs. Other CRDs have their own settings, which if not set, default to namespaces. Helm values are passed as command-line arguments to the operator. See here and here.
Prometheus CRDs are picked up by the operator from the mentioned namespaces; by default - everywhere. However, as the operator is designed with multiple simultaneous Prometheus releases in mind, what is picked up by a particular Prometheus instance is controlled by the corresponding Prometheus CRD. CRD selectors and the corresponding namespace selectors are controlled via the following Helm values:
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: true
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector: {}
Similar values are present for other CRDs: alertmanagerConfigXXX, ruleNamespaceXXX, podMonitorXXX, probeXXX. XXXSelectorNilUsesHelmValues set to true means to look for CRDs with a particular release label, e.g. release=myrelease. See here.
An empty selector (for a namespace, CRD, or any other object) means no filtering. So for the Prometheus object to pick up a ServiceMonitor from other namespaces there are a few options (see the values sketch after this list):
Set serviceMonitorSelectorNilUsesHelmValues: false. This leaves serviceMonitorSelector empty.
Apply the release label, e.g. release=myrelease, to your ServiceMonitor CRD.
Set a non-empty serviceMonitorSelector that matches your ServiceMonitor.
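For the first option, a minimal values sketch (these keys live under prometheus.prometheusSpec; verify against the chart's values.yaml):
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector: {}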
For the curious ones, here are links to the operator sources:
Enqueue of Prometheus CRD processing
Processing of Prometheus CRD
I used the values.yaml from https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml, modified the *NilUsesHelmValues parameters to false, and it seems to work fine with that.
helm install prometheus-operator stable/prometheus-operator -n monitoring -f values.yaml
Also, like https://stackoverflow.com/users/7889479/anish-kumar-mourya stated, the services do show in the Grafana dashboard even though they don't appear in the Prometheus UI under Service Discovery or Targets.
Hope this helps other newbies like me.
No, it's fine, but you can create a new namespace for monitoring and install Prometheus there; that makes it easier to manage everything related to monitoring.
helm install prometheus-operator stable/prometheus-operator -n monitoring
You need to create a Service for the pod and a ServiceMonitor custom resource to configure which services in which namespaces should be discovered by Prometheus.
kube-state-metrics Service example
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kube-state-metrics
    k8s-app: kube-state-metrics
  annotations:
    alpha.monitoring.coreos.com/non-namespaced: "true"
  name: kube-state-metrics
spec:
  ports:
  - name: http-metrics
    port: 8080
    targetPort: metrics
    protocol: TCP
  selector:
    app: kube-state-metrics
This Service targets all Pods with the label app: kube-state-metrics.
Generic ServiceMonitor example
This ServiceMonitor targets all Services with the label k8s-app (spec.selector), whatever its value, in the namespaces kube-system and monitoring (spec.namespaceSelector).
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: k8s-apps-http
  labels:
    k8s-apps: http
spec:
  jobLabel: k8s-app
  selector:
    matchExpressions:
    - {key: k8s-app, operator: Exists}
  namespaceSelector:
    matchNames:
    - kube-system
    - monitoring
  endpoints:
  - port: http-metrics
    interval: 15s
https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/running-exporters.md
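Once the Service and ServiceMonitor are applied, a quick way to check the result (the manifest filename here is illustrative):
kubectl apply -f k8s-apps-http-servicemonitor.yaml -n monitoring
# then check the Prometheus UI under Status -> Targets (or Service Discovery)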

stable/prometheus-operator - adding persistent grafana dashboards

I am trying to add a new dashboard to the Helm chart below:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
The documentation is not very clear.
I have added a config map to the namespace like the one below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-grafana-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  etcd-dashboard.json: |-
    {JSON}
According to the documentation, this should just be picked up and added, but it's not.
https://github.com/helm/charts/tree/master/stable/grafana#configuration
The sidecar option in my values.yaml looks like this:
grafana:
  enabled: true
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: password
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: false
    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enable.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts: []
    ## Path for grafana ingress
    path: /
    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #     - grafana.example.com
  #dashboardsConfigMaps:
  #sidecarProvider: sample-grafana-dashboard
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
I have also tried adding this to the values.yaml:
dashboardsConfigMaps:
  - sample-grafana-dashboard
This doesn't work either.
Does anyone have any experience with adding your own dashboards to this Helm chart? I really am at my wits' end.
To sum up:
For the sidecar you need only one option set to true - grafana.sidecar.dashboards.enabled
Install prometheus-operator with the sidecar enabled:
helm install stable/prometheus-operator --name prometheus-operator --set grafana.sidecar.dashboards.enabled=true --namespace monitoring
Add a new dashboard, for example MongoDB_Overview:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Now the tricky part: you have to set a correct label for your configmap; by default grafana.sidecar.dashboards.label is set to grafana_dashboard, so:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
Now you should find your newly added dashboard in Grafana; moreover, every configmap with the label grafana_dashboard will be processed as a dashboard.
The dashboard is persisted and safe, stored in a configmap.
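To verify which configmaps the sidecar will pick up, you can list everything carrying the label (a quick check, not required):
kubectl -n monitoring get cm -l grafana_dashboard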
UPDATE:
January 2021:
The Prometheus operator chart was migrated from the stable repo to the Prometheus Community Kubernetes Helm Charts, and Helm v3 was released, so:
Create namespace:
kubectl create namespace monitoring
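If the prometheus-community repo has not been added yet (skip this if it is already configured):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update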
Install prometheus-operator from helm chart:
helm install prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring
Add the MongoDB dashboard as an example:
wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json
Lastly, label the dashboard:
kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
You have to:
define your dashboard JSON as a configmap (as you have done, but see below for an easier way)
define a provider, to tell Grafana where to load the dashboard from
map the two together
from values.yml:
dashboardsConfigMaps:
  application: application
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: application
      orgId: 1
      folder: "Application Metrics"
      type: file
      disableDeletion: true
      editable: false
      options:
        path: /var/lib/grafana/dashboards/application
Now the application config map should create files in this directory in the pod, and as has been discussed the sidecar should load them into an Application Metrics folder, seen in the GUI.
That probably answers your issue as written, but as long as your dashboards aren't too big, using kustomize means you can have the JSON on disk without needing to include it in another file, thus:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# May choose to enable this if need to refer to configmaps outside of kustomize
generatorOptions:
  disableNameSuffixHash: true
namespace: monitoring
configMapGenerator:
  - name: application
    files:
      - grafana-dashboards/application/api01.json
      - grafana-dashboards/application/api02.json
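A minimal usage sketch, assuming the above is saved as kustomization.yaml next to the grafana-dashboards directory:
kubectl apply -k .
# or render it first to inspect the generated ConfigMap:
kubectl kustomize .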
For completeness' sake, you can also load dashboards from a URL or from the Grafana site, although I don't believe mixing methods in the same folder works.
So:
dashboards:
  kafka:
    kafka01:
      url: https://raw.githubusercontent.com/kudobuilder/operators/master/repository/kafka/docs/latest/resources/grafana-dashboard.json
      folder: "KUDO Kafka"
      datasource: Prometheus
  nginx:
    nginx1:
      gnetId: 9614
      datasource: Prometheus
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: kafka
      orgId: 1
      folder: "KUDO Kafka"
      type: file
      disableDeletion: true
      editable: false
      options:
        path: /var/lib/grafana/dashboards/kafka
    - name: nginx
      orgId: 1
      folder: Nginx
      type: file
      disableDeletion: true
      editable: false
      options:
        path: /var/lib/grafana/dashboards/nginx
This creates two new folders containing a dashboard each, loaded from external sources. Alternatively, you could point this at your git repo to de-couple your dashboard commits from your deployment.
If you do not change the settings in the Helm chart, the default user/password for Grafana is:
user: admin
password: prom-operator

How to specify a GKE node pool configuration in a YAML file instead of using gcloud container node-pools create?

It seems that the only way to create node pools on Google Kubernetes Engine is with the command gcloud container node-pools create. I would like to have all the configuration in a YAML file instead. What I tried is the following:
apiVersion: v1
kind: NodeConfig
metadata:
  annotations:
    cloud.google.com/gke-nodepool: ares-pool
spec:
  diskSizeGb: 30
  diskType: pd-standard
  imageType: COS
  machineType: n1-standard-1
  metadata:
    disable-legacy-endpoints: 'true'
  oauthScopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - https://www.googleapis.com/auth/logging.write
  - https://www.googleapis.com/auth/monitoring
  - https://www.googleapis.com/auth/service.management.readonly
  - https://www.googleapis.com/auth/servicecontrol
  - https://www.googleapis.com/auth/trace.append
  serviceAccount: default
But kubectl apply fails with:
error: unable to recognize "ares-pool.yaml": no matches for kind "NodeConfig" in version "v1"
I am surprised that Google yields almost no relevant results for all my searches. The only documentation that I found was the one on Google Cloud, which is quite incomplete in my opinion.
Node pools are not Kubernetes objects, they are part of the Google Cloud API. Therefore Kubernetes does not know about them, and kubectl apply will not work.
What you actually need is a solution called "infrastructure as code" - code that tells GCP what kind of node pool you want.
If you don't strictly need YAML, you can check out Terraform that handles this use case. See: https://terraform.io/docs/providers/google/r/container_node_pool.html
You can also look into Google Deployment Manager or Ansible (it has a GCP module and uses YAML syntax); they also address your need.
I don't know if it accurately answers your needs, but if you want to do IaC in general with Kubernetes, you can use Crossplane CRDs. If you already have a running cluster, you just have to install their Helm chart and you can provision a cluster this way:
apiVersion: container.gcp.crossplane.io/v1beta1
kind: GKECluster
metadata:
  name: gke-crossplane-cluster
spec:
  forProvider:
    initialClusterVersion: "1.19"
    network: "projects/development-labs/global/networks/opsnet"
    subnetwork: "projects/development-labs/regions/us-central1/subnetworks/opsnet"
    ipAllocationPolicy:
      useIpAliases: true
    defaultMaxPodsConstraint:
      maxPodsPerNode: 110
And then you can define an associated node pool as follows:
apiVersion: container.gcp.crossplane.io/v1alpha1
kind: NodePool
metadata:
  name: gke-crossplane-np
spec:
  forProvider:
    autoscaling:
      autoprovisioned: false
      enabled: true
      maxNodeCount: 2
      minNodeCount: 1
    clusterRef:
      name: gke-crossplane-cluster
    config:
      diskSizeGb: 100
      # diskType: pd-ssd
      imageType: cos_containerd
      labels:
        test-label: crossplane-created
      machineType: n1-standard-4
      oauthScopes:
      - "https://www.googleapis.com/auth/devstorage.read_only"
      - "https://www.googleapis.com/auth/logging.write"
      - "https://www.googleapis.com/auth/monitoring"
      - "https://www.googleapis.com/auth/servicecontrol"
      - "https://www.googleapis.com/auth/service.management.readonly"
      - "https://www.googleapis.com/auth/trace.append"
    initialNodeCount: 2
    locations:
    - us-central1-a
    management:
      autoRepair: true
      autoUpgrade: true
If you want, you can find a full example of GKE provisioning with Crossplane here.