Querying external etcd with prometheus - kubernetes

I've got Prometheus running on top of Kubernetes with the following scrape config, as described by the documentation. The .pem files are located on disk within the Prometheus container.
https://prometheus.io/docs/prometheus/latest/configuration/configuration/#
scrape_configs:
  - job_name: etcd
    static_configs:
      - targets: ['10.0.0.222:2379','10.0.0.221:2379','10.0.0.220:2379']
    tls_config:
      # CA certificate to validate API server certificate with.
      ca_file: /prometheus/ca.pem
      cert_file: /prometheus/cert.pem
      key_file: /prometheus/key.pem
I see etcd as a target in Prometheus; however, it's returning garbage.
https://i.imgur.com/rdRI4V7.png
I am able to hit the metrics endpoint with a local curl by passing in the client certificate information like so:
sudo curl --cacert /etc/ssl/etcd/ssl/ca.pem https://127.0.0.1:2379/metrics -L --cert /etc/ssl/etcd/ssl/node-kubemaster-rwva1-prod-2.pem --key /etc/ssl/etcd/ssl/node-kubemaster-rwva1-prod-2-key.pem
What am I doing wrong?

You need to add scheme: https for HTTPS scraping.
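With that change, and reusing the file paths from the question, the job would look roughly like this sketch:
scrape_configs:
  - job_name: etcd
    scheme: https   # scrape the targets over TLS instead of plain HTTP
    static_configs:
      - targets: ['10.0.0.222:2379','10.0.0.221:2379','10.0.0.220:2379']
    tls_config:
      # CA to verify etcd's serving cert, plus a client cert/key etcd accepts
      ca_file: /prometheus/ca.pem
      cert_file: /prometheus/cert.pem
      key_file: /prometheus/key.pem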

Related

Using a custom certificate for the Kubernetes api server with minikube

I have been trying to find out how to do this but so far have found nothing; I am quite new to Kubernetes, so I might just have overlooked it. I want to use my own certificate for the Kubernetes API server. Is this possible? And if so, can someone perhaps give me a link?
OK, so here is my idea. We know we cannot change the cluster certs, but there is another way to do it: we should be able to proxy through an Ingress.
First we enable the ingress addon:
➜ ~ minikube addons enable ingress
Given tls.crt and tls.key, we create a secret (you don't need to do this if you are using cert-manager, but that requires some additional steps I am not going to describe here):
➜ ~ kubectl create secret tls my-tls --cert=tls.crt --key tls.key
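If you don't have a certificate pair yet and just want to test, a self-signed one could be generated like this (purely illustrative; the hostname is the one used throughout this answer, and -addext needs OpenSSL 1.1.1 or newer):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=foo.bar.com" \
  -addext "subjectAltName=DNS:foo.bar.com"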
and an ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-k8s
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: my-tls
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes
            port:
              number: 443
Notice what the docs say about the CN and FQDN (k8s docs):
Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com.
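You can check which CN and SANs your certificate actually carries with something like this (the -ext flag assumes a reasonably recent OpenSSL):
openssl x509 -in tls.crt -noout -subject -ext subjectAltName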
The only issue with this approach is that we cannot use certificates for authentication when accessing from the outside.
But we can use tokens. Here is a page in k8s docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/ that lists all possible methods of authentication.
For testing I chose a service account token, but feel free to experiment with others.
Let's create a service account, bind a role to it, and try to access the cluster:
➜ ~ kubectl create sa cadmin
serviceaccount/cadmin created
➜ ~ kubectl create clusterrolebinding --clusterrole cluster-admin --serviceaccount default:cadmin cadminbinding
clusterrolebinding.rbac.authorization.k8s.io/cadminbinding created
Now we follow the access-cluster-api instructions from the docs to try to access the cluster with the service account token.
➜ ~ APISERVER=https://$(minikube ip)
➜ ~ TOKEN=$(kubectl get secret $(kubectl get serviceaccount cadmin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
➜ ~ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -H "Host: foo.bar.com"
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.39.210:8443"
    }
  ]
}
Note: I am testing this with invalid/self-signed certificates and I don't own the foo.bar.com domain, so I need to pass the Host header by hand. For you it may look a bit different, so don't just copy-paste; try to understand what's happening and adjust it. If you have a domain, you should be able to access it directly (no $(minikube ip) necessary).
As you should see, it worked! We got a valid response from the API server.
But we probably don't want to use curl to access k8s.
Let's create a kubeconfig with the token.
kubectl config set-credentials cadmin --token $TOKEN --kubeconfig my-config
kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com
kubectl config set-context mini --kubeconfig my-config --cluster mini --user cadmin
kubectl config use-context --kubeconfig my-config mini
And now we can access k8s with this config:
➜ ~ kubectl get po --kubeconfig my-config
No resources found in default namespace.
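Two caveats worth mentioning, both assumptions about your particular setup rather than part of the steps above: on Kubernetes v1.24+ service account token Secrets are no longer created automatically, so the token has to be requested explicitly; and if the Ingress certificate is self-signed, kubectl also needs that CA (or, for testing only, verification disabled) on the cluster entry.
# K8s >= 1.24: request a token instead of reading an auto-created Secret
TOKEN=$(kubectl create token cadmin)
# self-signed ingress cert: for testing, skip verification on the cluster entry
kubectl config set-cluster mini --kubeconfig my-config \
  --server https://foo.bar.com --insecure-skip-tls-verify=true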
Yes, you can use your own certificate and set it in the Kubernetes API server.
Suppose you have created the certificates; move and save them to a specific directory on the node:
{
  sudo mkdir -p /var/lib/kubernetes/

  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/
}
The instance internal IP address will be used to advertise the API Server to members of the cluster. Get the internal IP:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
Then you can create the API server service and set it up.
Note: the above example specifically assumes GCP instances, so you might have to change some commands, such as:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
For the above command, you can provide the bare-metal IPs manually instead of getting them from the GCP instance metadata API if you are not on GCP.
Please refer to this link: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server
There you can find all the details for creating and setting up a whole Kubernetes cluster from scratch, along with detailed documentation and commands: https://github.com/kelseyhightower/kubernetes-the-hard-way
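For orientation, a heavily trimmed sketch of the kube-apiserver systemd unit from that guide, showing where the certificates from /var/lib/kubernetes/ are wired in (the flag list is abbreviated and illustrative, not a complete unit):
[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --advertise-address=${INTERNAL_IP} \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --etcd-cafile=/var/lib/kubernetes/ca.pem \
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem
Restart=on-failure

[Install]
WantedBy=multi-user.target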

Prometheus targets: server returned HTTP status 403 Forbidden

I have set up Prometheus running in my Kubernetes cluster, and I configured the Kubernetes certificates in the Prometheus configuration file, but for some targets I am getting back "server returned HTTP status 403 Forbidden". This is part of my config:
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
  scheme: https
  tls_config:
    ca_file: /etc/k8spem/ca.pem
    cert_file: /etc/k8spem/admin.pem
    key_file: /etc/k8spem/admin.key
  bearer_token_file: /etc/k8spem//token
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https
I have already configured the certificate, so why still 403?
By the way, I can get results on the CLI by executing this command: curl -k --cacert /work/deploy/kubernetes/security/ca.pem --cert /work/deploy/kubernetes/security/admin.pem --key /work/deploy/kubernetes/security/admin.key --cert-type PEM https://172.16.5.150:6443/metrics
I don't know why, but I just mounted a new directory, deleted the old ConfigMap and recreated it, and it works. I think maybe I just forgot to reapply the ConfigMap.
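For reference, recreating the ConfigMap and restarting Prometheus would look something like the following; the resource names and namespace are assumptions, so use whatever your deployment actually uses:
kubectl delete configmap prometheus-config -n monitoring
kubectl create configmap prometheus-config -n monitoring --from-file=prometheus.yml
# restart the pods so the new ConfigMap is picked up
kubectl rollout restart deployment prometheus -n monitoring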

Access etcd metrics for Prometheus

I have set up a v1.13 Kubernetes cluster using Kubespray.
Our etcd is running as Docker containers outside the K8s cluster. If I check the etcd certificates, I can see each etcd member has its own CA, client cert and key.
If I want to scrape the /metrics endpoints of these etcd containers for Prometheus, which certificates should I use for the HTTPS endpoints?
I am not yet sure if this is the most secure way or not.
But I took the ca.pem, cert and key that one of the etcd members uses.
I created a Kubernetes secret object out of the three:
kubectl create secret generic etcd-metrics -n monitoring --from-file=etcd-secrets/
Then I mounted the secret into the Prometheus pod and added the following scrape job with my etcd targets:
- job_name: etcd
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  static_configs:
  - targets:
    - 172.xxxxx:2379
    - 172.xxxxx:2379
    - 172.xxxxx:2379
  tls_config:
    ca_file: /etc/ssl/etcd/ca.pem
    cert_file: /etc/ssl/etcd/etcd-node.pem
    key_file: /etc/ssl/etcd/etcd-key.pem
    insecure_skip_verify: false
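For those paths to exist inside the container, the etcd-metrics secret created above has to be mounted into the Prometheus pod. A minimal sketch of the relevant deployment fragment (the volume and container names are assumptions):
spec:
  containers:
  - name: prometheus
    volumeMounts:
    - name: etcd-certs
      mountPath: /etc/ssl/etcd
      readOnly: true
  volumes:
  - name: etcd-certs
    secret:
      secretName: etcd-metrics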
While not exactly what you asked, I had great success pushing that authentication down onto the actual machine by running socat in a sidecar container listening on a separate Prometheus port, :9379. Then you can just point Prometheus at http://${etcd_hostname}:9379/metrics without having to deal with authentication for those metrics endpoints.
I don't have the socat invocation in front of me, but something like:
socat tcp4-listen:9379,reuseaddr,fork \
  openssl:127.0.0.1:2379,cafile=/etc/kubernetes/pki/etcd/cacert.crt,key=/etc/kubernetes/pki/etcd/peer.key,cert=/etc/kubernetes/pki/etcd/peer.crt
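A hedged sketch of how such a sidecar might be declared next to etcd if etcd ran as a pod (the image and exact cert paths are assumptions; for plain Docker containers an equivalent docker run alongside etcd works the same way):
containers:
- name: etcd
  # ... existing etcd container ...
- name: etcd-metrics-proxy
  image: alpine/socat        # any image that ships socat
  command:
  - socat
  - tcp4-listen:9379,reuseaddr,fork
  - openssl:127.0.0.1:2379,cafile=/etc/kubernetes/pki/etcd/cacert.crt,key=/etc/kubernetes/pki/etcd/peer.key,cert=/etc/kubernetes/pki/etcd/peer.crt
  volumeMounts:
  - name: etcd-certs
    mountPath: /etc/kubernetes/pki/etcd
    readOnly: true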

Prometheus Outside Kubernetes Cluster

I'm trying to configure Prometheus outside the Kubernetes cluster.
Below is my Prometheus config.
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
    api_server: https://10.0.4.155:6443
  scheme: https
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: kube
    password: Superkube01
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https
This is how it looks:
root@master01:~# kubectl cluster-info
Kubernetes master is running at https://10.0.4.155:6443
root@master01:~# kubectl get endpoints
NAME                 ENDPOINTS                                          AGE
kubernetes           10.0.4.103:6443,10.0.4.138:6443,10.0.4.155:6443    11h
netchecker-service   10.2.0.10:8081                                     11h
root@master01:~#
But when starting Prometheus, I'm getting the below error.
level=error ts=2018-05-29T13:55:08.171451623Z caller=main.go:216 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:270: Failed to list *v1.Pod: Get https://10.0.4.155:6443/api/v1/pods?resourceVersion=0: x509: certificate signed by unknown authority"
Could anyone please tell me what I'm doing wrong here?
Thanks,
Pavanasam R
The error indicates that Prometheus is using a different certificate to sign its metric collection request than the one expected by your apiserver.
You really need to format your code in a code block so we can see the yaml formatting. kubernetes_sd_configs seems to be the wrong home for insecure_skip_verify and basic_auth according to this link. Might want to move them and try scraping again.
As of now your insecure_skip_verify applies to the scrape requests; add it in the api_server context (inside kubernetes_sd_configs) as well.
kubernetes_sd_configs:
- api_server: https://<ip>:6443
  role: node
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    insecure_skip_verify: true
In order to access the Kubernetes API endpoint you need to authenticate the client through basic_auth, bearer_token, or tls_config. Please go through this; it will be helpful.
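Putting the answers together, a sketch of the job with authentication configured both for service discovery (the api_server connection) and for the scrape requests themselves; the credentials and addresses are simply those from the question, reused for illustration:
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
    api_server: https://10.0.4.155:6443
    # auth for the discovery connection to the API server
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: kube
      password: Superkube01
  scheme: https
  # auth for the scrape requests against the discovered endpoints
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: kube
    password: Superkube01
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https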

Kubernetes: how to enable API Server Bearer Token Auth?

I've been trying to enable token auth for HTTP REST API server access from a remote client.
I installed my CoreOS/K8S cluster controller using this script: https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh
My cluster works fine. This is a TLS installation so I need to configure any kubectl clients with the client certs to access the cluster.
I then tried to enable token auth via running:
echo `dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null`
This gives me a token. I then added the token to a token file on my controller containing the token and a default user:
$> cat /etc/kubernetes/token
3XQ8W6IAourkXOLH2yfpbGFXftbH0vn,default,default
I then modified the /etc/kubernetes/manifests/kube-apiserver.yaml to add in:
- --token-auth-file=/etc/kubernetes/token
to the startup param list
I then reboot (not sure of the best way to restart the API server by itself?).
At this point, kubectl from a remote server quits working (won't connect). I then look at docker ps on the controller and see the API server. I run docker logs container_id and get no output. If I look at other Docker containers I see output like:
E0327 20:05:46.657679 1 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
So it appears that my api-server.yaml config is preventing the API server from starting properly...
Any suggestions on the proper way to configure API Server for bearer token REST auth?
It is possible to have both TLS configuration and Bearer Token Auth configured, right?
Thanks!
I think your kube-apiserver dies because it can't find /etc/kubernetes/token. That's because in your deployment the apiserver is a static pod and therefore runs in a container, which in turn means it has a different root filesystem than that of the host.
Look into /etc/kubernetes/manifests/kube-apiserver.yaml and add a volume and a volumeMount like this (I have omitted the lines that do not need changing and don't help in locating the correct section):
kind: Pod
metadata:
  name: kube-apiserver
spec:
  containers:
  - name: kube-apiserver
    command:
    - ...
    - --token-auth-file=/etc/kubernetes/token
    volumeMounts:
    - mountPath: /etc/kubernetes/token
      name: token-kubernetes
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/token
    name: token-kubernetes
One more note: the file you quoted as token should not end in . (dot) - maybe that was only a copy-paste mistake but check it anyway. The format is documented under static token file:
token,user,uid,"group1,group2,group3"
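So a full line in /etc/kubernetes/token could look like the following (the uid and group values here are placeholders for illustration; the quoted groups column is optional):
3XQ8W6IAourkXOLH2yfpbGFXftbH0vn,default,default-uid,"group1,group2"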
If your problem persists, execute the command below and post the output:
journalctl -u kubelet | grep kube-apiserver