Kubernetes unable to retrieve logs

I have a kubeadm cluster deployed in a CentOS VM. While trying to deploy the ingress controller following the GitHub instructions, I noticed that I'm unable to see logs:
kubectl logs -n ingress-nginx nginx-ingress-controller-697f7c6ddb-x9xkh --previous
Error from server: Get https://192.168.56.34:10250/containerLogs/ingress-nginx/nginx-ingress-controller-697f7c6ddb-x9xkh/nginx-ingress-controller?previous=true: dial tcp 192.168.56.34:10250: getsockopt: connection timed out
On 192.168.56.34 (node1), netstat returns:
tcp6 0 0 :::10250 :::* LISTEN 1068/kubelet
In fact I'm unable to see any logs at all, regardless of the pod's status.
I disabled both firewalld and SELinux.
I used a proxy to let Kubernetes download images; I have since removed the proxy.
When navigating to the URL in the error above I get Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy).
I'm also able to fetch my nodes:
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 32d v1.9.3
k8s-node1 Ready <none> 30d v1.9.3
k8s-node2 NotReady <none> 32d v1.9.3

getsockopt: connection timed out
That error is 99.99999% a firewall issue. If it were "connection refused", showing the output of netstat would be meaningful, but (as you can see) the kubelet is listening on that port just fine -- it's the networking configuration between the machine that is running kubectl and 192.168.56.34 that is blocking the traffic.
The apiserver expects that anyone who wants to view logs (or use kubectl exec) can reach that port on every Node in the cluster; so be sure you don't just fix the firewall rule(s) for that one Node -- fix them for all of them.
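If firewalld is (re)enabled on the nodes, a rule along these lines would open the kubelet port; this is only a sketch, assuming firewalld is the firewall in use on these CentOS hosts, and it needs to be run on every node:
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload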

This message comes from the apiserver running on your master. The command kubectl logs, running on your local machine, fetches logs via the apiserver. So the error message reveals a firewall misconfiguration between the master and the node(s) on port 10250.
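One quick way to confirm where the block is: test the connection from the master itself, since that is where the apiserver runs. A sketch, assuming nc is installed on the master:
nc -vz 192.168.56.34 10250
If this also times out, the problem is on the path between the master and the node, not between your workstation and the cluster.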

Related

Kubernetes: Calico not working on remote worker, local ok

I set up a Kubernetes cluster with Calico.
The setup is "simple"
1x master (local network, ok)
1x node (local network, ok)
1x node (cloud server, not ok)
All Debian Buster with Docker 19.03.
On the cloud server the calico pods do not come up:
calico-kube-controllers-token-x:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 47m (x50 over 72m) kubelet Pod sandbox changed, it will be killed and re-created.
Warning FailedMount 43m kubelet MountVolume.SetUp failed for volume "calico-kube-controllers-token-x" : failed to sync secret cache: timed out waiting for the condition
Normal SandboxChanged 3m41s (x78 over 43m) kubelet Pod sandbox changed, it will be killed and re-created.
calico-node-x:
Warning Unhealthy 43m (x5 over 43m) kubelet Liveness probe failed: calico/node is not ready: Felix is not live: Get "http://localhost:9099/liveness": dial tcp [::1]:9099: connect: connection refused
Warning Unhealthy 14m (x77 over 43m) kubelet Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory
Warning BackOff 4m26s (x115 over 39m) kubelet Back-off restarting failed container
My guess is that there is something wrong with the IP/network config, but I have not figured out what.
Required ports (k8s & BGP) are forwarded from the router; I also tried connecting the master directly to the internet.
--control-plane-endpoint is a hostname and publicly resolvable.
Calico is using BGP peering (using the public IP as peer).
The entry that worries me the most: kubectl get --raw /api displays the local IP of the master.
I tried to find a way to change this to the public IP of the master, without success.
Anyone got a clue what to try next?
After spending some more time on analysis, the problem turned out to be that the apiserver address distributed to the nodes was the local IP, not the DNS name.
I created a VPN with WireGuard from the cloud node to the local master, so the local IP of the master is reachable from the cloud node.
Don't know if that is the cleanest solution, but it works.
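For reference, a minimal WireGuard config on the cloud node could look roughly like the following; this is only a sketch, and the addresses, keys, endpoint and the master's local subnet are placeholders for my real values:
# /etc/wireguard/wg0.conf on the cloud node
[Interface]
Address = 10.200.0.2/24
PrivateKey = <cloud-node-private-key>

[Peer]
PublicKey = <master-public-key>
Endpoint = <public-hostname-of-master>:51820
AllowedIPs = 10.200.0.1/32, 192.168.1.0/24   # the master's WireGuard IP plus its local subnet
PersistentKeepalive = 25
Bring it up with wg-quick up wg0 (with a mirrored config on the master side).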
Run this command to verify whether the IP_AUTODETECTION_METHOD environment variable has been set in the calico daemonset:
kubectl get daemonset/calico-node -n kube-system --output json | jq '.spec.template.spec.containers[].env[] | select(.name | startswith("IP"))'
Run this command on each of your k8s nodes to find the correct network interface:
ifconfig
Explicitly set the IP_AUTODETECTION_METHOD environment variable to make sure calico-node uses the correct network interface of the K8s node:
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=en.*
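As an alternative to matching an interface name, Calico also supports the can-reach autodetection method, which picks whichever interface can reach a given destination; a sketch, where the destination address is a placeholder:
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=can-reach=<public-IP-or-DNS-of-master>
kubectl logs -n kube-system -l k8s-app=calico-node | grep -i autodetected
The second command should show which address each calico-node instance ended up using (the exact log wording varies between Calico versions).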

Can we set up a k8s bare metal server to run a BIND DNS server (named) and have access to it from the outside on port 53?

I have set up a k8s cluster on 2 bare metal servers (1 master and 1 worker) using kubespray with default settings (kube_proxy_mode: iptables and dns_mode: coredns), and I would like to run a BIND DNS server inside it to manage a couple of domain names.
I deployed a helloworld web app with Helm 3 for testing. Everything works like a charm (HTTP, HTTPS, Let's Encrypt through cert-manager).
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:24:46Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 22d v1.16.7
k8sslave Ready <none> 21d v1.16.7
I deployed an image of my BIND DNS server (named) with a Helm 3 chart in the default namespace, with a service exposing port 53 of the bind app container.
I have tested DNS resolution against the bind service from a pod; it works well. Here is the test of the bind k8s service from the master node:
kubectl -n default get svc bind -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
bind ClusterIP 10.233.31.255 <none> 53/TCP,53/UDP 4m5s app=bind,release=bind
kubectl get endpoints bind
NAME ENDPOINTS AGE
bind 10.233.75.239:53,10.233.93.245:53,10.233.75.239:53 + 1 more... 4m12s
export SERVICE_IP=`kubectl get services bind -o go-template='{{.spec.clusterIP}}{{"\n"}}'`
nslookup www.example.com ${SERVICE_IP}
Server: 10.233.31.255
Address: 10.233.31.255#53
Name: www.example.com
Address: 176.31.XXX.XXX
So the bind DNS app is deployed and is working fine through the bind k8s service.
For the next step, I followed the https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ documentation to set up the Nginx Ingress Controller (both configmap and service) to handle TCP/UDP requests on port 53 and redirect them to the bind DNS app.
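For reference, the ConfigMaps from that guide end up looking roughly like this in my case; the namespace, service name and ports are from my own setup, so treat it as a sketch:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  53: "default/bind:53"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "default/bind:53"
Plus the corresponding 53/TCP and 53/UDP entries on the ingress-nginx Service.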
When I test the name resolution from an external computer it does not work:
nslookup www.example.com <IP of the k8s master>
;; connection timed out; no servers could be reached
I dug into the k8s configuration, logs, etc. and found an error message in the kube-proxy logs:
ps auxw | grep kube-proxy
root 19984 0.0 0.2 141160 41848 ? Ssl Mar26 19:39 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8smaster
journalctl --since "2 days ago" | grep kube-proxy
<NOTHING RETURNED>
KUBEPROXY_FIRST_POD=`kubectl get pods -n kube-system -l k8s-app=kube-proxy -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | head -n 1`
kubectl logs -n kube-system ${KUBEPROXY_FIRST_POD}
I0326 22:26:03.491900 1 node.go:135] Successfully retrieved node IP: 91.121.XXX.XXX
I0326 22:26:03.491957 1 server_others.go:150] Using iptables Proxier.
I0326 22:26:03.492453 1 server.go:529] Version: v1.16.7
I0326 22:26:03.493179 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0326 22:26:03.493647 1 config.go:131] Starting endpoints config controller
I0326 22:26:03.493663 1 config.go:313] Starting service config controller
I0326 22:26:03.493669 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0326 22:26:03.493679 1 shared_informer.go:197] Waiting for caches to sync for service config
I0326 22:26:03.593986 1 shared_informer.go:204] Caches are synced for endpoints config
I0326 22:26:03.593992 1 shared_informer.go:204] Caches are synced for service config
E0411 17:02:48.113935 1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-udp" (91.121.XXX.XXX:53/udp), skipping this externalIP: listen udp 91.121.XXX.XXX:53: bind: address already in use
E0411 17:02:48.119378 1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-tcp" (91.121.XXX.XXX:53/tcp), skipping this externalIP: listen tcp 91.121.XXX.XXX:53: bind: address already in use
Then I looked at what was already using port 53...
netstat -lpnt | grep 53
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1682/systemd-resolv
tcp 0 0 87.98.XXX.XXX:53 0.0.0.0:* LISTEN 19984/kube-proxy
tcp 0 0 169.254.25.10:53 0.0.0.0:* LISTEN 14448/node-cache
tcp6 0 0 :::9253 :::* LISTEN 14448/node-cache
tcp6 0 0 :::9353 :::* LISTEN 14448/node-cache
A look at process 14448/node-cache:
cat /proc/14448/cmdline
/node-cache-localip169.254.25.10-conf/etc/coredns/Corefile-upstreamsvccoredns
So coredns (through the node-cache process seen above) is already handling port 53, which is normal since it's the k8s internal DNS service.
The coredns documentation (https://github.com/coredns/coredns/blob/master/README.md) mentions a -dns.port option to use a different port... but when I look into kubespray (which has 3 Jinja templates https://github.com/kubernetes-sigs/kubespray/tree/release-2.12/roles/kubernetes-apps/ansible/templates for creating the coredns configmap, services, etc., similar to https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns), everything is hardcoded to port 53.
So my question is: is there a k8s cluster configuration/workaround that would let me run my own DNS server and expose it on port 53?
Maybe?
Set up coredns to use a different port than 53? Seems hard, and I'm really not sure this makes sense!
I could set up my bind k8s service to expose port 5353 and configure the nginx ingress controller to handle this 5353 port and redirect to the app's port 53. But this would require setting up iptables to route external DNS requests received on port 53 to my bind k8s service on port 5353. What would the iptables config be (INPUT / PREROUTING or FORWARD)? Would this kind of network configuration break coredns?
Regards,
Chris
I suppose your nginx-ingress doesn't work as expected here. You need a load balancer provider, such as MetalLB, on your bare metal k8s cluster to receive external connections on ports like 53. And you don't need nginx-ingress in front of bind; just change the bind Service type from ClusterIP to LoadBalancer and make sure the Service gets an external IP. Your Helm chart's documentation may help with switching to LoadBalancer.
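A rough sketch of what that could look like with the older, ConfigMap-based MetalLB configuration in layer 2 mode; the address pool below is a placeholder and should contain a routable IP of yours:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - <public-IP>/32
Then switch the bind Service type:
kubectl patch svc bind -p '{"spec": {"type": "LoadBalancer"}}'
After that, kubectl get svc bind should show an EXTERNAL-IP instead of <none>, and port 53 should be reachable on that address.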

Cannot log in to kubernetes dashboard: dial tcp 172.17.0.6:8443: connect: connection refused

I deployed Kubernetes v1.15.2 dashboard successfully. Checking the cluster info:
$ kubectl cluster-info
Kubernetes master is running at http://172.19.104.231:8080
kubernetes-dashboard is running at http://172.19.104.231:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
When I access the dashboard, the result is:
[root@ops001 ~]# curl -L http://172.19.104.231:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Error: 'dial tcp 172.17.0.6:8443: connect: connection refused'
Trying to reach: 'https://172.17.0.6:8443/'
This is dashboard status:
[root@ops001 ~]# kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-74d7cc788-mk9c7 1/1 Running 0 92m
What should I do to access the dashboard? When I use a proxy to access the dashboard UI:
$ kubectl proxy --address='localhost' --port=8086 --accept-hosts='^*$'
Starting to serve on 127.0.0.1:8086
the result is:
[root@ops001 ~]# curl -L http://127.0.0.1:8086/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Error: 'dial tcp 172.17.0.6:8443: connect: connection refused'
Trying to reach: 'https://172.17.0.6:8443/'
What should I do to fix this problem?
I finally found that the problem is that the kubernetes dashboard pod container cannot communicate with the proxy nginx container, because the proxy container was deployed before kubernetes flannel and is not in the same network. Adding the proxy nginx container to the flannel network solves the problem. Check the current flannel network:
[root@ops001 conf.d]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.224.1/21
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Generate the docker start parameters:
./mk-docker-opts.sh -d /run/docker_opts.env -c
Check the parameters:
[root@ops001 conf.d]# cat /run/docker_opts.env
DOCKER_OPTS=" --bip=172.30.224.1/21 --ip-masq=false --mtu=1450"
Add the parameters to the docker service:
# vim /lib/systemd/system/docker.service
EnvironmentFile=/run/docker_opts.env
ExecStart=/usr/bin/dockerd $DOCKER_OPTS -H fd://
Restart docker, and the container will join the flannel network and be able to communicate with the other containers:
systemctl daemon-reload
systemctl restart docker
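To verify that docker actually picked up the flannel subnet, check the bridge address afterwards; a sketch, where the expected subnet is the FLANNEL_SUBNET value shown above:
ip addr show docker0
docker0 should now carry an address from 172.30.224.1/21 instead of the default 172.17.0.1/16.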
hope this may help you!

Kube-state-metrics error: Failed to create client: ... i/o timeout

I'm running Kubernetes in virtual machines and going through the basic tutorials, currently Add logging and metrics to the PHP / Redis Guestbook example. I'm trying to install kube-state-metrics:
git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
kubectl create -f kube-state-metrics/kubernetes
but it fails.
kubectl describe pod --namespace kube-system kube-state-metrics-7d84474f4d-d5dg7
...
Warning Unhealthy 28m (x8 over 30m) kubelet, kubernetes-node1 Readiness probe failed: Get http://192.168.129.102:8080/healthz: dial tcp 192.168.129.102:8080: connect: connection refused
kubectl logs --namespace kube-system kube-state-metrics-7d84474f4d-d5dg7 -c kube-state-metrics
I0514 17:29:26.980707 1 main.go:85] Using default collectors
I0514 17:29:26.980774 1 main.go:93] Using all namespace
I0514 17:29:26.980780 1 main.go:129] metric white-blacklisting: blacklisting the following items:
W0514 17:29:26.980800 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0514 17:29:26.983504 1 main.go:169] Testing communication with server
F0514 17:29:56.984025 1 main.go:137] Failed to create client: ERROR communicating with apiserver: Get https://10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout
I'm unsure if this 10.96.0.1 IP is correct. My virtual machines are in a bridged network 10.10.10.0/24 and a host-only network 192.168.59.0/24. When initializing Kubernetes I used the argument --pod-network-cidr=192.168.0.0/16 so that's one more IP range that I'd expect. But 10.96.0.1 looks unfamiliar.
I'm new to Kubernetes, just doing the basic tutorials, so I don't know what to do now. How to fix it or investigate further?
EDIT - additional info:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kubernetes-master Ready master 15d v1.14.1 10.10.10.11 <none> Ubuntu 18.04.2 LTS 4.15.0-48-generic docker://18.9.2
kubernetes-node1 Ready <none> 15d v1.14.1 10.10.10.5 <none> Ubuntu 18.04.2 LTS 4.15.0-48-generic docker://18.9.2
kubernetes-node2 Ready <none> 15d v1.14.1 10.10.10.98 <none> Ubuntu 18.04.2 LTS 4.15.0-48-generic docker://18.9.2
The command I used to initialize the cluster:
sudo kubeadm init --apiserver-advertise-address=192.168.59.20 --pod-network-cidr=192.168.0.0/16
The reason for this is probably an overlap of the Pod network with the Node network - you set the Pod network CIDR to 192.168.0.0/16, which includes your host-only network 192.168.59.0/24.
To solve this, you can change the pod network CIDR to 192.168.0.0/24 (not recommended, as this leaves only around 254 addresses for your pod networking).
You can also use a different range for your Calico network. If you want to do it on a running cluster, here is an instruction.
Another way I tried: re-initialize with a different range and edit the Calico manifest to match it (for example 10.0.0.0/8) - sudo kubeadm init --apiserver-advertise-address=192.168.59.20 --pod-network-cidr=10.0.0.0/8 - and apply the edited manifest after the init.
Another option would be using a different CNI like Flannel (which uses 10.244.0.0/16).
You can find more information about ranges of CNI plugins here.
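As a sketch of the manifest edit mentioned above: in the calico.yaml you apply after kubeadm init, the pool is set through the CALICO_IPV4POOL_CIDR environment variable of the calico-node container, and it should match the --pod-network-cidr passed to kubeadm (the 10.0.0.0/8 value here is just the example range from above; the exact manifest layout depends on your Calico version):
# excerpt from calico.yaml, calico-node DaemonSet container env
- name: CALICO_IPV4POOL_CIDR
  value: "10.0.0.0/8"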

kubernetes service IPs not reachable

So I've got a Kubernetes cluster up and running using the Kubernetes on CoreOS Manual Installation Guide.
$ kubectl get no
NAME STATUS AGE
coreos-master-1 Ready,SchedulingDisabled 1h
coreos-worker-1 Ready 54m
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default curl-2421989462-h0dr7 1/1 Running 1 53m 10.2.26.4 coreos-worker-1
kube-system busybox 1/1 Running 0 55m 10.2.26.3 coreos-worker-1
kube-system kube-apiserver-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-controller-manager-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-proxy-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-proxy-coreos-worker-1 1/1 Running 0 58m 192.168.0.204 coreos-worker-1
kube-system kube-scheduler-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
$ kubectl get svc --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 10.3.0.1 <none> 443/TCP 1h
Following the guide, I've set up a service network 10.3.0.0/16 and a pod network 10.2.0.0/16. The pod network seems fine, as the busybox and curl containers get IPs. But the service network has problems. I originally encountered this when deploying kube-dns: the service IP 10.3.0.1 couldn't be reached, so kube-dns couldn't start all containers and DNS was ultimately not working.
From within the curl pod, I can reproduce the issue:
[ root#curl-2421989462-h0dr7:/ ]$ curl https://10.3.0.1
curl: (7) Failed to connect to 10.3.0.1 port 443: No route to host
[ root#curl-2421989462-h0dr7:/ ]$ ip route
default via 10.2.26.1 dev eth0
10.2.0.0/16 via 10.2.26.1 dev eth0
10.2.26.0/24 dev eth0 src 10.2.26.4
It seems OK that there's only a default route in the container. As I understood it, the request (to the default route) should be intercepted by the kube-proxy on the worker node, forwarded to the proxy on the master node, where the IP is translated via iptables to the master's public IP.
There seems to be a common problem with a bridge/netfilter sysctl setting, but that seems fine in my setup:
core#coreos-worker-1 ~ $ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
I'm having a really hard time troubleshooting this, as I lack an understanding of what the service IP is used for, how the service network is supposed to work in terms of traffic flow, and how best to debug this.
So here are the questions I have:
What is the 1st IP of the service network (10.3.0.1 in this case) used for?
Is above description of the traffic flow correct? If not, what steps does it take for a container to reach a service IP?
What are the best ways to debug each step in the traffic flow? (I can't get any idea what's wrong from the logs)
Thanks!
The Service network provides fixed IPs for Services. It is not a routable network (so don't expect ip ro to show anything, nor will ping work) but a collection of iptables rules managed by kube-proxy on each node (see iptables -L; iptables -t nat -L on the nodes, not in Pods). These virtual IPs act as a load balancing proxy for endpoints (kubectl get ep), which are usually ports of Pods (but not always) with a specific set of labels as defined in the Service.
The first IP on the Service network is for reaching the kube-apiserver itself. It's listening on port 443 (kubectl describe svc kubernetes).
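For example, on a node you can inspect the rules kube-proxy programs for that first Service IP; a sketch, assuming kube-proxy runs in iptables mode (chain names may differ slightly between versions):
iptables -t nat -L KUBE-SERVICES -n | grep 10.3.0.1
This should show a rule for the kubernetes Service jumping to a KUBE-SVC-... chain; if no such rule exists on the worker node, kube-proxy there is not programming the rules at all.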
Troubleshooting is different on each network/cluster setup. I would generally check:
Is kube-proxy running on each node? On some setups it's run via systemd and on others there is a DaemonSet that schedules a Pod on each node. On your setup it is deployed as static Pods created by the kubelets themselves from /etc/kubernetes/manifests/kube-proxy.yaml.
Locate the logs for kube-proxy and look for clues (can you post some?)
Change kube-proxy into userspace mode. Again, the details depend on your setup. For you it's in the file I mentioned above. Append --proxy-mode=userspace as a parameter on each node.
Is the overlay (pod) network functional?
If you leave comments I will get back to you.
I had this same problem, and the ultimate solution that worked for me was enabling IP forwarding on all nodes in the cluster, which I had neglected to do.
$ sudo sysctl net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
Service IPs and DNS started working immediately afterwards.
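To make that setting survive reboots, persist it in a sysctl config file; a sketch, where the file name is arbitrary:
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system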
I had the same issue; it turned out to be a configuration issue in kube-proxy.yaml. For the "master" parameter I had just the IP address, as in - --master=192.168.3.240, but it actually needs to be a URL, like - --master=https://192.168.3.240
FYI, my kube-proxy successfully uses --proxy-mode=iptables (v1.6.x).
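For context, that corresponds to the args list in the kube-proxy static pod manifest; this is only a sketch of the relevant fragment, with the IP as a placeholder and the binary invocation and remaining flags depending on your setup:
# /etc/kubernetes/manifests/kube-proxy.yaml (fragment)
command:
- /hyperkube
- proxy
- --master=https://192.168.3.240
- --proxy-mode=iptables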