"no token found" when scraping from Promethues - kubernetes

I am using Prometheus to monitor my Kubernetes cluster. All my microservices can be accessed using my HAProxy.
My base Prometheus config is:
- job_name: 'kubernetes_pods'
  tls_config:
    insecure_skip_verify: true
  kubernetes_sd_configs:
    - api_server: http://172.29.219.102:8080
      role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_host_ip]
      target_label: __address__
      regex: (.*)
      replacement: 172.29.219.110:8080
Where 172.29.219.110:8080 is the IP and port of my standalone HAProxy.
The endpoint that I am trying to monitor using Prometheus is /auth/health.
When I do a simple curl command from anywhere, I see:
# curl http://172.29.219.110:8080/auth/health
{"status":"UP"}
But when Prometheus tries to do it, the logs indicate:
level=warn ts=2017-12-15T16:40:48.301741927Z caller=scrape.go:673 component="target manager" scrape_pool=kubernetes_pods target=http://172.29.219.110:8080/auth/health msg="append failed" err="no token found"
This endpoint is publicly exposed and requires no authentication whatsoever. So why does Prometheus report this error?

{"status":"UP"}
Prometheus requires data to be in its format, and cannot handle other arbitrary data. The error you are getting is a parse error due to this.
You should instrument your code using a client library, and have it expose data in the Prometheus text format.
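For illustration, an endpoint that Prometheus can scrape would return metrics in the text exposition format rather than JSON; the metric name below is a made-up example:
# HELP auth_service_up Whether the auth service reports itself healthy (hypothetical metric).
# TYPE auth_service_up gauge
auth_service_up 1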

Related

Prometheus kubernetes-pods Get "https://xx.xx.xx:443/metrics": dial tcp xx.xx.xx:443: connect: connection refused

I have configured Prometheus on one of the Kubernetes cluster nodes using [this][1]. After that I added the following prometheus.yml file. I can list nodes and apiservers, but all the pods show as down with the error
Get "https://xx.xx.xx:443/metrics": dial tcp xx.xx.xx:443: connect: connection refused
and for some pods the status is unknown.
Can someone point out what I am doing wrong here?
cat prometheus.yml
global:
  scrape_interval: 1m

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  # metrics for default/kubernetes api's from the kubernetes master
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
        bearer_token_file: /dfgdjk/token
        api_server: https://masterapi.com:3343
        tls_config:
          insecure_skip_verify: true
    tls_config:
      insecure_skip_verify: true
    bearer_token_file: /dfgdjk/token
    scheme: https
    relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

  # metrics for default/kubernetes api's from the kubernetes master
  - job_name: 'kubernetes-apiservers'
    kubernetes_sd_configs:
      - role: endpoints
        api_server: https://masterapi.com:3343
        bearer_token_file: /dfgdjk/token
        tls_config:
          insecure_skip_verify: true
    tls_config:
      insecure_skip_verify: true
    bearer_token_file: /dfgdjk/token
    scheme: https
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
[1]: https://devopscube.com/install-configure-prometheus-linux/
It's impossible to get metrics into an external Prometheus server without having any Prometheus components inside the Kubernetes cluster. This happens because the cluster network is isolated from the host's network, so it's not possible to scrape metrics from pods directly from outside the cluster.
Please refer to the Monitoring kubernetes with prometheus from outside of k8s cluster GitHub issue.
There are a few options:
install Prometheus inside the cluster using the Prometheus operator or manually - example
use proxy solutions, for example this one from the same thread on GitHub - k8s-prometheus-proxy
on top of the Prometheus installed within the cluster, it's possible to have an external Prometheus in federation, so all metrics are stored outside of the cluster. Please refer to Prometheus federation; a config sketch follows below.
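For illustration, a minimal federation sketch for the external Prometheus, assuming the in-cluster Prometheus is reachable at prometheus.example.internal:9090 (that address and the match[] selector are assumptions):
# Pull selected series from the in-cluster Prometheus via its /federate endpoint.
- job_name: 'federate'
  honor_labels: true
  metrics_path: /federate
  params:
    'match[]':
      - '{job=~"kubernetes-.*"}'   # which in-cluster jobs to federate (assumption)
  static_configs:
    - targets: ['prometheus.example.internal:9090']   # placeholder address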
Also, an important part is that kube-state-metrics should be installed in the Kubernetes cluster as well. How to set it up.
Edit: you can also refer to another SO question/answer which confirms that this works only with additional steps, or that the OP resolved it with another proxy solution.

Prometheus targets: server returned HTTP status 403 Forbidden

I have set up Prometheus running in my Kubernetes cluster, and I configured the Kubernetes certificates in the Prometheus configuration file, but for some targets I am getting back "server returned HTTP status 403 Forbidden". This is part of my config:
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
    - role: endpoints
  scheme: https
  tls_config:
    ca_file: /etc/k8spem/ca.pem
    cert_file: /etc/k8spem/admin.pem
    key_file: /etc/k8spem/admin.key
  bearer_token_file: /etc/k8spem//token
  relabel_configs:
    - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: default;kubernetes;https
I have already configured the certificates, so why do I still get a 403?
By the way, I can get results on the CLI by executing this command: curl -k --cacert /work/deploy/kubernetes/security/ca.pem --cert /work/deploy/kubernetes/security/admin.pem --key /work/deploy/kubernetes/security/admin.key --cert-type PEM https://172.16.5.150:6443/metrics
I don't know why, but I just mounted a new directory, deleted the old ConfigMap and recreated it, and it works. I think maybe I just forgot to reapply the ConfigMap.
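For reference, recreating the ConfigMap and restarting Prometheus can be done roughly like this; the ConfigMap name prometheus-config and the monitoring namespace are assumptions for illustration:
# Replace the old ConfigMap with one built from the updated prometheus.yml
# (the ConfigMap name and namespace below are assumptions).
kubectl delete configmap prometheus-config -n monitoring
kubectl create configmap prometheus-config --from-file=prometheus.yml -n monitoring
# Restart the Prometheus deployment so the pod picks up the new ConfigMap.
kubectl rollout restart deployment prometheus -n monitoring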

kube-state-metrics - Failed to list *v1.Pod: serializer for text/html; charset=utf-8 doesn't exist

I'm trying to set up monitoring of our Kubernetes cluster, but it's not that easy. At first I tried, from a dedicated VM, to scrape all metrics following configs I could find on the Internet and on prometheus.io, but I read several times that it's not the best way to do it. I found a suggestion to use kube-state-metrics; that's done, the pod is running and the metrics are reachable from outside (Azure infra), so http://xxx.xxx.xxx.xxx:8080/metrics shows me a correct result.
When I add this to the config:
- job_name: 'Kubernetes-Nodes'
  scheme: http
  #tls_config:
  #  insecure_skip_verify: true
  kubernetes_sd_configs:
    - api_server: 'http://xxx.xxx.xxx.xxx:8080'
      role: endpoints
      namespaces:
        names: [default]
      #tls_config:
      #  insecure_skip_verify: true
      bearer_token: %VERYLONGLINE%
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name
The log I can find is:
Sep 25 06:53:59 monitoring001 prometheus[59005]: level=error ts=2018-09-25T06:53:59.636669498Z caller=main.go:234 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:288: Failed to list *v1.Pod: serializer for text/html; charset=utf-8 doesn't exist"
Does anyone have an idea?
Thank you,
Finally found the issue! My Prometheus is located on a dedicated VM outside the Kubernetes cluster.
Kube-state-metrics is exposing metrics on an IP reachable from outside the cluster; because of this, it's not necessary to scrape the metrics via Kubernetes service discovery, it's just necessary to scrape them as a simple static target.
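For example, a minimal sketch of such a static scrape job, keeping the externally reachable address as a placeholder:
# Scrape kube-state-metrics as an ordinary static target instead of
# discovering it through kubernetes_sd_configs (address is a placeholder).
- job_name: 'kube-state-metrics'
  scheme: http
  static_configs:
    - targets: ['xxx.xxx.xxx.xxx:8080']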

Prometheus Outside Kubernetes Cluster

I'm trying to configure Prometheus outside the Kubernetes cluster.
Below is my Prometheus config.
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
    - role: endpoints
      api_server: https://10.0.4.155:6443
  scheme: https
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: kube
    password: Superkube01
  relabel_configs:
    - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: default;kubernetes;https
This is how it looks:
root@master01:~# kubectl cluster-info
Kubernetes master is running at https://10.0.4.155:6443
root@master01:~# kubectl get endpoints
NAME                 ENDPOINTS                                          AGE
kubernetes           10.0.4.103:6443,10.0.4.138:6443,10.0.4.155:6443   11h
netchecker-service   10.2.0.10:8081                                     11h
root@master01:~#
But when starting Prometheus, I'm getting the error below.
level=error ts=2018-05-29T13:55:08.171451623Z caller=main.go:216 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:270: Failed to list *v1.Pod: Get https://10.0.4.155:6443/api/v1/pods?resourceVersion=0: x509: certificate signed by unknown authority"
Could anyone please tell me what I'm doing wrong here?
Thanks,
Pavanasam R
The error indicates that Prometheus is using a different certificate to sign its metric collection request than the one expected by your apiserver.
You really need to format your code in a code block so we can see the YAML formatting. kubernetes_sd_configs seems to be the wrong home for insecure_skip_verify and basic_auth according to this link; you might want to move them and try scraping again.
As of now your insecure_skip_verify is part of the job config, not of kubernetes_sd_configs. Add it in the api_server context as well:
kubernetes_sd_configs:
  - api_server: https://<ip>:6443
    role: node
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      insecure_skip_verify: true
In order to access the Kubernetes API endpoint you need to authenticate the client through basic_auth, bearer_token, or tls_config. Please go through this, it will be helpful.
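Putting that together, a minimal sketch of a job that authenticates both the service-discovery calls to the API server and the scrapes themselves; the API server address is taken from the question, while the token path is an assumption:
# Authenticate both the discovery client (inside kubernetes_sd_configs)
# and the scraper (at the job level); the token path is a placeholder.
- job_name: 'kubernetes-apiservers'
  scheme: https
  bearer_token_file: /etc/prometheus/k8s-token
  tls_config:
    insecure_skip_verify: true
  kubernetes_sd_configs:
    - role: endpoints
      api_server: https://10.0.4.155:6443
      bearer_token_file: /etc/prometheus/k8s-token
      tls_config:
        insecure_skip_verify: true
  relabel_configs:
    - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: default;kubernetes;https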

Prometheus + Heapster

I saw there is no sink configuration for Prometheus in this Heapster document. Is there any simple way to combine these two and monitor?
Prometheus uses a pull model to retrieve the data, while Heapster is a tool that pushes its metrics to a certain endpoint (push model).
I assume you want to get Kubernetes metrics into Prometheus. You don't need Heapster for that, since cAdvisor has a Prometheus endpoint which can be scraped directly. The kubelet itself also provides some metrics.
The Prometheus config would look like this:
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)

- job_name: 'kubernetes-cadvisor'
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    - source_labels: [__meta_kubernetes_node_address_InternalIP]
      target_label: __address__
      regex: (.*)
      replacement: $1:4194
This assumes you are using the default cAdvisor port 4194. Prometheus should also be able to detect the correct kubelet port.
Additional Note: The job for scraping cAdvisor is only required when using a Kubernetes version >= 1.7. Before that the cAdvisor metrics accidentally got exposed via the Kubelet.
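On clusters where the standalone cAdvisor port is no longer exposed, a common alternative is to scrape the kubelet's cAdvisor metrics through the API server proxy; the sketch below assumes Prometheus runs in-cluster with a service account mounted (CA and token paths would differ otherwise):
# Scrape cAdvisor via the API server proxy path; the service-account
# CA and token paths below assume an in-cluster Prometheus.
- job_name: 'kubernetes-cadvisor'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor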