Limit prometheus to discover pods in specific namespaces ONLY - kubernetes

I am trying to run Prometheus so that it ONLY monitors pods in specific namespaces (in an OpenShift cluster).
I am getting "cannot list pods at the cluster scope", even though I have tried to configure it not to use cluster scope and only look in specific namespaces instead.
I've set:
prometheus.yml: |
  scrape_configs:
    - job_name: prometheus
      static_configs:
        - targets:
            - localhost:9090
    - job_name: kubernetes-pods
      kubernetes_sd_configs:
        - namespaces:
            names:
              - api-mytestns1
              - api-mytestns2
          role: pod
      relabel_configs:
      [cut]
I get this error even if I remove the - job_name: kubernetes-pods block entirely, so maybe it's something else in Prometheus that needs disabling?

I found that one has to overwrite server.alertmanagers with a complete copy of the settings in charts/prometheus/templates/server-configmap.yaml, in order to override the hardcoded defaults there, which try to scrape cluster-wide.
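For context, "cannot list pods at the cluster scope" is an RBAC error: any scrape job whose kubernetes_sd_configs has no namespaces restriction lists pods cluster-wide, which needs a ClusterRole. Once every job is restricted to named namespaces, a namespaced Role plus RoleBinding in each of those namespaces is enough. A minimal sketch, assuming Prometheus runs under a ServiceAccount named prometheus-server in the prometheus namespace (both names are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-pod-discovery
  namespace: api-mytestns1        # repeat for api-mytestns2
rules:
  - apiGroups: [""]
    resources: ["pods"]           # add services/endpoints if other SD roles are used
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-pod-discovery
  namespace: api-mytestns1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-pod-discovery
subjects:
  - kind: ServiceAccount
    name: prometheus-server       # assumed ServiceAccount name
    namespace: prometheus         # assumed namespace where Prometheus runs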

Related

How to access metrics which is located in another namespace in prometheus in Kubernetes

Suppose there is an application located in a namespace called "API" and a Prometheus server located in the namespace "prometheus". How can I access my application from the Prometheus server if the two are in different namespaces?
I've tried to reference the application as <application-service-name>.API.svc.cluster.local:<application-service-port>, but it does not seem to work.
Prometheus responds in the UI with Connection Refused.
scrape_configs:
  - job_name: 'some-job'
    kubernetes_sd_configs:
      namespaces:
        names: 'API'
    scrape_interval: 10s
    scrape_timeout: 5s
    static_configs:
      - targets: ['<application-service-name>.API.svc.cluster.local:<application-service-port>']
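For reference, a cross-namespace target normally only needs the full service DNS name under static_configs, with no kubernetes_sd_configs block at all; alternatively, discovery can be limited to the API namespace by giving the SD block a role. A minimal sketch, keeping the question's placeholders for the service name and port:

scrape_configs:
  - job_name: 'some-job'
    scrape_interval: 10s
    scrape_timeout: 5s
    static_configs:
      - targets: ['<application-service-name>.API.svc.cluster.local:<application-service-port>']
    # or, instead of static_configs, discover endpoints in that namespace:
    # kubernetes_sd_configs:
    #   - role: endpoints
    #     namespaces:
    #       names:
    #         - API

A Connection Refused from Prometheus usually means the name resolved but nothing accepted the connection on that port, so it is worth checking that the port in the target is the port the Service actually exposes.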

Prometheus configuration for monitoring Orleans in Kubernetes

I'm trying to get Prometheus functioning with my Orleans silos...
I use this consumer to expose Orleans metrics for Prometheus on port 8082. With a local Prometheus instance and the grafana.json from the same repository, I can see that it works.
_ = builder.AddPrometheusTelemetryConsumerWithSelfServer(port: 8082);
I followed this guide to install Prometheus on Kubernetes, in a different namespace than the one my silos are deployed in.
Following the instructions, I added the Prometheus annotations to my Orleans deployment YAML:
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mysilo
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8082'
      labels:
        app: mysilo
My job in prometheus.yml:
- job_name: "orleans"
  kubernetes_sd_configs:
    - role: pod
      namespaces:
        names:
          - orleans
      selectors:
        - role: "pod"
          label: "app=mysilo"
According to the same guide, all pod metrics get discovered if "the pod metadata is annotated with prometheus.io/scrape and prometheus.io/port annotations". I assume I don't need to install anything extra.
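The annotation convention the guide refers to is implemented with relabel rules in a kubernetes-pods style scrape job, roughly like the following sketch (this mirrors the common community example config, not necessarily the guide's exact job):

- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # keep only pods annotated with prometheus.io/scrape: 'true'
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # scrape the port given in prometheus.io/port instead of the discovered container port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__

If rules like these are missing, or the port annotation is not applied, Prometheus scrapes whichever ports it discovered on the pod spec, which matches the symptom described in the answer below.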
With all this in place, and with my Prometheus pod port-forwarded, I can see that Prometheus is working at http://localhost:9090/metrics, but no metrics are shown in my Grafana dashboard (again, I could make it work on my local machine with only one silo).
When exploring in Grafana, I find that it seems it can't find the instances:
sum(rate(process_cpu_seconds_total{job=~"orleans", instance=~"()"}[3m])) * 100
The aim is to monitor resources my orleans silos are using (not the pods metrics themselves, but orleans metrics), but I'm missing something :(
Thanks to @BozoJoe's comment I was able to debug this.
The problem was that Prometheus was trying to scrape ports 30000 and 1111 instead of 8082 as I said before. I could see this thanks to the Prometheus targets page at localhost:9090/targets.
So I went to the Prometheus config file and made sure it scrapes the correct port (I also added a restriction on the container name):
- job_name: "orleans"
  kubernetes_sd_configs:
    - role: pod
      namespaces:
        names:
          - orleans
      selectors:
        - role: "pod"
          label: "app=mysilo"
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_container_name]
      action: keep
      regex: 'my-silo-name*'
    - source_labels: [__address__]
      action: replace
      regex: ([^:]+):.*
      replacement: $1:8081
      target_label: __address__
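The keep rule drops every pod whose container name does not match, and the second rule rewrites the discovered __address__ so the remaining pods are scraped on a fixed port rather than on whatever ports their pod spec advertises.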

Getting duplicate metrics when querying the Prometheus server

I am getting metrics exposed by kube-state-metrics by querying the Prometheus server, but the issue is that I am getting duplicate metrics that differ only in the job field. I am running a query such as:
curl 'http://10.101.202.25:80/api/v1/query?query=kube_pod_status_phase'| jq
The only difference is in the job field (screenshot: metrics returned when querying the Prometheus server).
All pods running in the cluster: https://imgur.com/PKIc3ug
Any help is appreciated.
Thank You
prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
rule_files:
  # - "first.rules"
  # - "second.rules"
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
You are running (or at least ingesting) two copies of kube-state-metrics. Probably one you installed and configured yourself and another from something like kube-prometheus-stack?
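One quick way to confirm that is to count the series per job, using the metric and endpoint from the question (a sketch):
curl 'http://10.101.202.25:80/api/v1/query' --data-urlencode 'query=count by (job) (kube_pod_status_phase)' | jq
If the result contains more than one job value, more than one scrape job is ingesting kube-state-metrics.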
I was able to get what I wanted eventually. What I did was remove the scrape config for prometheus-kube-state-metrics from values.yml and define it in the config file, i.e. prometheus.yml, instead. For now it's working fine. Thank you @SYN and @coderanger for the help.
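For reference, the job moved into prometheus.yml can be as small as the sketch below; the service name, namespace and port are assumptions based on a default kube-state-metrics install:

scrape_configs:
  - job_name: kube-state-metrics
    static_configs:
      # assumed Service name/namespace; 8080 is kube-state-metrics' default metrics port
      - targets: ['kube-state-metrics.kube-system.svc:8080']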

How to configure my k8s prometheus operator to scrape a target whose IP depends on the pod deployment

I'm looking to add my own custom Prometheus exporter as a scrape target for the Prometheus Operator that's running on my k8s cluster. I've written a Prometheus resource YAML to configure Prometheus, but can't decide what to put as the target.
global:
  scrape_interval: 15s
  evaluation_interval: 15s
rule_files:
  # - "first.rules"
  # - "second.rules"
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
  - job_name: my-custom-node-exporter
    static_configs:
      - targets: ['????:8080']
Does this approach not work with the Prometheus Operator?
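With the Prometheus Operator, the idiomatic route is usually a ServiceMonitor or PodMonitor resource rather than an entry in prometheus.yml; but if plain scrape configs are used (e.g. via the Operator's additionalScrapeConfigs Secret), the usual way to handle a target whose IP is only known after deployment is to let Kubernetes service discovery resolve it instead of hard-coding an address. A sketch, assuming the exporter pods carry the label app: my-custom-node-exporter and listen on port 8080 (both are assumptions):

- job_name: my-custom-node-exporter
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # keep only the exporter pods (label name and value are assumptions)
    - source_labels: [__meta_kubernetes_pod_label_app]
      action: keep
      regex: my-custom-node-exporter
    # scrape each discovered pod IP on port 8080
    - source_labels: [__meta_kubernetes_pod_ip]
      action: replace
      regex: (.*)
      replacement: $1:8080
      target_label: __address__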

Prometheus + Heapster

I saw there is no sink configuration for Prometheus in this Heapster document. Is there any simple way to combine these two and monitor?
Prometheus uses a pull model to retrieve data, while Heapster is a tool that pushes its metrics to a configured endpoint (push model).
I assume you want to get Kubernetes metrics into Prometheus. You don't need Heapster for that, since cAdvisor has a Prometheus endpoint which can be scraped directly. The kubelet itself also provides some metrics.
The Prometheus config would look like this:
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
- job_name: 'kubernetes-cadvisor'
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    - source_labels: [__meta_kubernetes_node_address_InternalIP]
      target_label: __address__
      regex: (.*)
      replacement: $1:4194
This assumes you are using the default cAdvisor port 4194. Prometheus should also be able to detect the correct kubelet port.
Additional note: the separate job for scraping cAdvisor is only required on Kubernetes >= 1.7. Before that, the cAdvisor metrics were exposed via the kubelet itself.