Prometheus returns error "context deadline exceeded" - kubernetes

I deployed Prometheus with a Helm chart from Rancher. Targets such as Alertmanager, Prometheus, Grafana, Node-exporter, Kubelet etc. are configured automatically. The Alertmanager endpoint, for example, refers to the IP address of its pod. I also successfully configured several additional targets such as Jira and Confluence.
Since the external-dns service runs in the kube-system namespace, it is also configured automatically. But this service alone fails with the error context deadline exceeded.
I checked from a random pod whether those metrics are accessible by running curl -s http://<IP-ADDRESS-POD>:7979/metrics. I did the same with the service IP address (kubectl get service external-dns, then curl -s http://<IP-ADDRESS-SVC>:7979/metrics).
Both curl commands returned the metrics within a second, so increasing the scrape timeout won't help.
But when I exec into the Prometheus container and use the promtool debug metrics command, it shows the same behaviour as my browser: external-dns times out with both IP addresses, while any other target I try just returns its metrics.
I also don't think it's an SSL issue, because I already injected the correct CA bundle for the Jira and Confluence targets.
So, does anybody have an idea? :)

I had to edit the NetworkPolicy in the kube-system namespace. Containers in the cattle-monitoring-system namespace are now allowed to access containers in the kube-system namespace. You can upload your NetworkPolicies here and it visualizes which resources have access and which do not. The NetworkPolicy now looks like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-policy
  namespace: kube-system
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: cattle-monitoring-system
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress
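To verify the policy, one option is to repeat the curl check from above, but from inside the monitoring namespace that the policy now allows. A minimal sketch; the pod name and the external-dns pod IP are placeholders, and wget is used in case the image does not ship curl:

# run from a pod in cattle-monitoring-system; should now return
# the metrics instead of timing out
kubectl exec -n cattle-monitoring-system <PROMETHEUS-POD> -- \
  wget -qO- -T 5 http://<IP-ADDRESS-POD>:7979/metrics | head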

Related

istio sidecar is not restricting pod connections as desired

I want to see how an Istio sidecar can restrict a pod's connections (I am learning Istio from its reference docs),
so I am working with the bookinfo example. After installing the example (on Docker Desktop), I wrote a simple Sidecar resource that restricts the connections of ratings to the reviews and details services, as follows:
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: bookinfo-ratings-sidecar
spec:
  workloadSelector:
    labels:
      app: ratings
  egress:
  - hosts:
    - "./details.default.svc.cluster.local"
    - "./reviews.default.svc.cluster.local"
When I run the command istioctl proxy-config clusters ratings-v1-5f9699cfdf-hb2gd, I do see that it includes only details.default.svc.cluster.local and reviews.default.svc.cluster.local (from the bookinfo services).
But if I run kubectl exec ratings-v1-5f9699cfdf-hb2gd -- curl -sS productpage:9080, I get an HTML result, i.e. it doesn't refuse the connection to productpage, as if the Sidecar doesn't exist.
What am I missing?
(P.S. "The result of sidecar injection was not what I expected" didn't help.)
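A hedged side note (my assumption, not part of the original thread): with Istio's default outboundTrafficPolicy mode of ALLOW_ANY, traffic to hosts outside the Sidecar's egress list is still forwarded through the PassthroughCluster, which would explain why the curl succeeds. Setting the mode to REGISTRY_ONLY on the Sidecar should make the proxy refuse such connections; a sketch:

apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: bookinfo-ratings-sidecar
spec:
  workloadSelector:
    labels:
      app: ratings
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY   # refuse traffic to hosts not listed under egress
  egress:
  - hosts:
    - "./details.default.svc.cluster.local"
    - "./reviews.default.svc.cluster.local"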

ingress nginx how to debug 502 page even though the ports in service and Ingress are correct?

I have a web application running behind a ClusterIP service on a worker node on port 5001. I'm using k3s for the cluster deployment, and I checked the cluster connection; it's running fine.
The deployment has the container port set to 5001:
ports:
- containerPort: 5001
Here is the service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: user-ms
  name: user-ms
spec:
  ports:
  - name: http
    port: 80
    targetPort: 5001
  selector:
    io.kompose.service: user-ms
status:
  loadBalancer: {}
and here is the ingress file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-ms-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: user-ms
            port:
              number: 80
I'm getting a 502 Bad Gateway error whenever I browse to my worker or master IP address.
Expected: it should return the web application page.
I looked online and most answers mention a wrong port in the Service or Ingress, but my ports are correct; yes, I triple-checked it:
- calling the user-ms service on port 80 from another pod -> worked
- calling the cluster IP on the worker node on port 5001 -> worked
The ports are correct, so why is the ingress returning 502?
Here are the describe of the Ingress, the describe of the nginx ingress controller pod (which is running normally), and the logs of the nginx ingress pod (screenshots omitted). Sorry for the images, but I'm using a streaming machine to access the terminal, so I can't copy and paste.
How should I go about debugging this error?
OK, I managed to figure this out. In its default configuration K3s uses Traefik as the ingress controller, which is why my nginx ingress log didn't show anything about the 502 Bad Gateway.
I decided to tear down my cluster and set it up again, this time following the suggestion from this issue https://github.com/k3s-io/k3s/issues/1160#issuecomment-1058846505 to create the cluster without Traefik:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
Now when I run kubectl get pods --all-namespaces I no longer see a Traefik pod running; previously there were Traefik pods.
Once all of that was done, I applied the Ingress again and got a 404 error. The nginx ingress pod logs now showed a new error about a missing ingress class, so I added the following to my Ingress configuration file under metadata:
metadata:
  name: user-ms-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
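(A side note beyond the original post: on current Kubernetes and ingress-nginx versions, the kubernetes.io/ingress.class annotation is deprecated in favour of the spec field, so a current Ingress would instead set:)

spec:
  ingressClassName: nginx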
Now when I go to the worker node's IP, the 404 error is gone, but I get the 502 Bad Gateway error again; the logs showed connection refused errors.
I figured out that I had set a network policy for all of my microservices. I deleted the network policy and removed its settings from all my deployment files.
After checking once more, I can access my API and Swagger page normally.
TL;DR:
- If you are using nginx ingress on K3s, remember to disable Traefik when creating the cluster.
- Don't forget to set the ingress class in your Ingress configuration.
- Don't set up a network policy that blocks the nginx controller, or its pod won't be able to reach the other pods (see the sketch below).
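Instead of dropping network policies entirely, a policy that admits the controller's namespace should also work. A sketch, not from the original answer; it assumes the controller runs in a namespace named ingress-nginx:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-nginx
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx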
You can turn on access logging on nginx, which will let you see more logs on the ingress controller and trace every request routed through the ingress. Whether you are loading a UI or accessing a particular endpoint, the calls will be visible in the nginx controller logs. From this you can conclude whether incoming requests are actually routed to the proper service, and then start debugging the service itself (e.g. check whether you can curl the endpoint from any pod within the cluster).
I noticed that you are using the image k8s.gcr.io/ingress-nginx/controller:v1.2.0. If you installed via Helm, there should be a kubernetes-ingress ConfigMap for the ingress controller. By default disable-access-log is true; change it to false and you should start seeing more logs on the ingress controller. You may want to bounce the ingress controller pods if you still don't see detailed logs.
kubectl edit cm -n <namespace> kubernetes-ingress
apiVersion: v1
data:
  disable-access-log: "false"   # turn this to false
  map-hash-bucket-size: "128"
  ssl-protocols: SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2 TLSv1.3
kind: ConfigMap
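To bounce the controller pods after editing the ConfigMap, a rollout restart should do it. The deployment name and namespace here are assumptions; match them to your install:

kubectl rollout restart deployment kubernetes-ingress -n <namespace>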

Limit access of services deployed in Kubernetes namespace

Let us assume we are the owners of a Kubernetes cluster and we give other users in our organization access to individual namespaces, so they are not supposed to see what is going on in other namespaces.
If user A deploys a certain resource, such as a Grafana-Prometheus monitoring stack, to namespace A, how do we ensure that the monitoring stack cannot see anything in namespace B, to which user A should not have any access?
Of course, we will have to limit the rights of user A anyhow, but how do we automatically limit the rights of the resources they deploy in namespace A? Suggestions, ideally with some Kubernetes configuration examples, would be great.
The most important aspects of this question are controlling the access permissions of the service accounts used in the Pods and adding a network policy that limits traffic to within the namespace.
Hence we arrive at this procedure:
Prerequisite:
Creating the user and namespace
sudo useradd user-a
kubectl create ns ns-user-a
Limit user-a's access permissions to the namespace ns-user-a:
kubectl create clusterrole permission-users --verb=* --resource=*
kubectl create rolebinding permission-users-a --clusterrole=permission-users --user=user-a --namespace=ns-user-a
Limit the access permissions of all service accounts in the namespace ns-user-a:
kubectl create clusterrole permission-serviceaccounts --verb=* --resource=*
kubectl create rolebinding permission-serviceaccounts --clusterrole=permission-serviceaccounts --namespace=ns-user-a --group=system:serviceaccounts:ns-user-a
kubectl auth can-i create pods --namespace=ns-user-a --as-group=system:serviceaccounts:ns-user-a --as sa
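A quick sanity check (assuming the role bindings above are in place) that access really is confined to ns-user-a:

# expected: yes
kubectl auth can-i create pods --namespace=ns-user-a --as=user-a
# expected: no, since user-a has no binding outside ns-user-a
kubectl auth can-i create pods --namespace=default --as=user-a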
A network policy in namespace ns-user-a to limit incoming traffic from other namespaces.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: ns-user-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
Edit: Allowing traffic from selective namespaces
Assign a custom label to the monitoring namespace:
kubectl label ns monitoring nsname=monitoring
Or, use the following reserved label from Kubernetes, which nobody can edit or update; by convention this label holds the namespace's name, i.e. "monitoring" for the monitoring namespace:
https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetes-io-metadata-name
kubernetes.io/metadata.name
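To confirm the label is present (kubernetes.io/metadata.name is set automatically on every namespace since Kubernetes 1.21):

kubectl get ns monitoring --show-labels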
Apply a network policy that allows traffic from the namespace itself and from the monitoring namespace.
Note: network policies are additive, so you can keep both policies or only the new one. I am keeping both here for example purposes.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-monitoring-and-internal
  namespace: ns-user-a
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}       # allows traffic from the ns-user-a namespace (same as earlier)
    - namespaceSelector:    # allows traffic from the monitoring namespace
        matchLabels:
          kubernetes.io/metadata.name: monitoring

Whitelisting IP addresses for network traffic through Istio gateways

I tried whitelisting IP addresses for my Kubernetes cluster's incoming traffic using this example:
Although this works as expected, I wanted to go a step further and see whether I can use Istio gateways or a virtual service when I set up the Istio rule, instead of the LoadBalancer (ingressgateway).
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
  namespace: my-namespace
spec:
  match: source.labels["app"] == "my-app"
  actions:
  - handler: whitelistip.listchecker
    instances:
    - sourceip.listentry
---
Here my-app is of kind: Gateway with a certain host and port, and is labelled app=my-app.
I am using Istio version 1.1.1.
My cluster also has all the istio-system components running, with Envoy sidecars on almost all service pods.
You are confusing one thing: in the rule above, match: source.labels["app"] == "my-app" does not refer to a resource's labels but to the pod's labels.
From the OutputTemplate documentation:
sourceLabels | Refers to source pod labels.
attributebindings can refer to this field using $out.sourcelabels
And you can verify by looking for resources with "app=istio-ingressgateway" label via:
kubectl get pods,svc -n istio-system -l "app=istio-ingressgateway" --show-labels
You can check this blog post from Istio about the Mixer adapter model to understand the complete Mixer model and its handlers, instances, and rules.
Hope it helps!

Kubernetes NetworkPolicy allow loadbalancer

I have a Kubernetes cluster running on Google Kubernetes Engine (GKE) with network policy support enabled.
I created an nginx deployment and load balancer for it:
kubectl run nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
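(A note beyond the original question: on current kubectl versions, kubectl run creates a bare Pod rather than a Deployment, so the equivalent commands today would be:)

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer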
Then I created this network policy to make sure other pods in the cluster won't be able to connect to it anymore:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: TCP
      port: 80
Now other pods in my cluster can't reach it (as intended):
kubectl run busybox --rm -ti --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.63.254.50:80)
wget: download timed out
However, it surprised me that I also can no longer connect to it through the load balancer from my external browser:
open http://$(kubectl get svc nginx --output=jsonpath={.status.loadBalancer.ingress[0].ip})
If I delete the policy it starts to work again.
So, my question is: how do I block other pods from reaching nginx, but keep access through the load balancer open?
I talked about this in my Network Policy recipes repository.
"Allowing EXTERNAL load balancers while DENYING local traffic" is not a use case that makes sense, therefore it's not possible to using network policy.
For Service type=LoadBalancer and Ingress resources to work, you must allow ALL traffic to the pods selected by these resources.
If you REALLY want you can use the from.ipBlock.cidr and from.ipBlock.cidr.except resources to allow traffic from 0.0.0.0/0 (all IPv4) and then excluding 10.0.0.0/8 (or whatever private IP range GKE uses).
I recently had to do something similar. I needed a policy that didn't allow pods from other namespaces to talk to prod, but did allow the LoadBalancer services to reach pods in prod. Here's what worked (based on Ahmet's post):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-prod
  namespace: prod
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  - from:
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
I'd like to share a solution that builds on the excellent answers of @ahmetb-google and @tammer-saleh.
The situation: 1 cluster, 4 namespaces, a public HTTPS-terminating Ingress for 3 of the namespaces that allows specific traffic inbound and routes it appropriately.
Goal: Block all inter-namespace traffic, allow only public traffic coming in via the Ingress.
Problem: when deploying a "deny from other namespaces" rule, it also denied traffic from my Ingress, so the pods were not accessible from the outside.
Solution:
I created an additional policy that allows only port 80 traffic targeting pods with the label role=web. It uses the allow/except trick to keep blocking traffic from other namespaces while allowing it from the public ingresses.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-http-from-ingress
spec:
  podSelector:
    matchLabels:
      role: web
  ingress:
  - from:
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
    ports:
    - port: 80
With this deployed, traffic still flows from the public, via the Ingress, to the web-serving pods. All inter-namespace traffic is blocked, including HTTP.
This is a useful setup when, for example, you're using namespaces for different deployment stages (production, testing, edge, etc.) and you have private HTTP APIs that you would not want to hit accidentally across stages.