k8s - pod can ping external IP, but cannot wget?

Running a clean install of microk8s 1.19 on Fedora 32, I am able to ping an external IP address, but when I try to wget, I get "no route to host" (this is the output of commands run from a busybox pod):
/ # wget x.x.x.x
Connecting to x.x.x.x (x.x.x.x:80)
wget: can't connect to remote host (x.x.x.x): No route to host
/ # ping x.x.x.x
PING x.x.x.x (x.x.x.x): 56 data bytes
64 bytes from x.x.x.x: seq=0 ttl=127 time=1.209 ms
64 bytes from x.x.x.x: seq=1 ttl=127 time=0.765 ms

Finally found https://github.com/ubuntu/microk8s/issues/408.
I had to enable masquerade in the firewall zone associated with the bridge interface (or, in my case, the zone of my Ethernet connection).
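For anyone hitting the same thing on Fedora with firewalld, the fix looks roughly like the following (a sketch; the zone name is whatever firewall-cmd reports for your bridge or Ethernet interface, not necessarily the one shown here):
# find the zone that holds the relevant interface
sudo firewall-cmd --get-active-zones
# enable masquerading in that zone, persist it, and reload
sudo firewall-cmd --zone=FedoraWorkstation --add-masquerade --permanent
sudo firewall-cmd --reload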

Related

ubuntu 20.04 in vagrant open port to private network

I am running 2 VMs with Vagrant on a private network:
host1: 192.168.1.1/24
host2: 192.168.1.2/24
On host1, the app listens on port 6443, but it cannot be reached from host2:
# host1
root@host1:~# ss -lntp | grep 6443
LISTEN 0 4096 *:6443 *:* users:(("kube-apiserver",pid=10537,fd=7))
# host2
root@host2:~# nc -zv -w 3 192.168.1.2 6443
nc: connect to 192.168.1.2 port 6443 (tcp) failed: Connection refused
(Actually, the app is the kube-apiserver, and joining host2 as a worker node with kubeadm fails.)
What am I missing?
Both are Ubuntu Focal (box_version '20220215.1.0') and ufw is inactive.
After changing the hosts' IPs, it works:
host1: 192.168.1.1/24 -> 192.168.1.2/24
host2: 192.168.1.2/24 -> 192.168.1.3/24
I guess it was caused by using a reserved IP: the first IP of the subnet, 192.168.1.1, is used as the gateway.
I'll add references about that here later; I have to set up the k8s cluster for now.
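A quick way to confirm a clash like this (a sketch; it assumes the standard iproute2 tools are present on the guests) is to check from host2 which MAC address answers for 192.168.1.1 and compare it with the MAC of host1's private-network interface:
# host2: populate and inspect the neighbour table for 192.168.1.1
root@host2:~# ping -c 1 192.168.1.1
root@host2:~# ip neigh show 192.168.1.1
# host1: list interfaces and MACs to compare against
root@host1:~# ip -br link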

AKS inter-communication between Pods not working

I recently created a private AKS cluster via Terraform and everything went OK. How is it possible that two pods within the same namespace are unable to communicate with each other?
AKS version= 1.19.11
coredns:1.6.6
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 5d18h
Cluster has been created with below resources:
Network type (plugin)=Kubenet
Pod CIDR=10.x.x.x/16
Service CIDR=10.x.x.0/16
DNS service IP=10.x.x.10
Docker bridge CIDR=172.x.x.1/16
Network Policy=Calico
Ping response:
/ # ping 10.x.x.89
PING 10.x.x.89 (10.x.x.89): 56 data bytes
^C
--- 10.x.x.89 ping statistics ---
25 packets transmitted, 0 packets received, 100% packet loss
/ # ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: seq=0 ttl=241 time=27.840 ms
64 bytes from 10.0.0.1: seq=1 ttl=241 time=28.790 ms
64 bytes from 10.0.0.1: seq=2 ttl=241 time=28.725 ms
^C
--- 10.0.0.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 27.840/28.451/28.790 ms
/ # ping kubernetes
ping: bad address 'kubernetes'
/ # nslookup kubernetes
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'kubernetes': Name does not resolve
/ #
The network policy was the issue. Check which policies are applied with:
kubectl get netpol -n <namespace>
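To dig further (a sketch; names in angle brackets are placeholders), inspect the policy that selects the affected pods and, only if an overly restrictive default-deny policy turns out to be the cause, allow traffic between pods in the same namespace:
kubectl describe netpol <policy-name> -n <namespace>
kubectl apply -n <namespace> -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
EOF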

"connection timed out; no servers could be reached" when connecting to the CoreDNS server

When I use the dig command to test the CoreDNS server, it shows: connection timed out; no servers could be reached:
[root@ops001 ~]# /opt/k8s/bin/kubectl exec -ti soa-user-service-5c8b744d6d-7p9hr -n dabai-fat /bin/sh
/ # dig -t A kubernetes.default.svc.cluster.local. @10.254.0.2
; <<>> DiG 9.12.4-P2 <<>> -t A kubernetes.default.svc.cluster.local. @10.254.0.2
;; global options: +cmd
;; connection timed out; no servers could be reached
When I ping the server, it succeeds:
[root@ops001 ~]# /opt/k8s/bin/kubectl exec -ti soa-user-service-5c8b744d6d-7p9hr -n dabai-fat /bin/sh
/ # ping 10.254.0.2
PING 10.254.0.2 (10.254.0.2): 56 data bytes
64 bytes from 10.254.0.2: seq=0 ttl=64 time=0.100 ms
64 bytes from 10.254.0.2: seq=1 ttl=64 time=0.071 ms
64 bytes from 10.254.0.2: seq=2 ttl=64 time=0.094 ms
64 bytes from 10.254.0.2: seq=3 ttl=64 time=0.087 ms
Why can't dig reach the DNS server even though the network seems fine?
When connecting to the CoreDNS server from the azshara-k8s03 node:
/ # telnet 10.254.0.2 53
Connection closed by foreign host
When connecting to the CoreDNS server from the azshara-k8s02 and azshara-k8s01 nodes:
/ # telnet 10.254.0.2 53
telnet: can't connect to remote host (10.254.0.2): Connection refused
I am just confused why port 53 is not open here, while the same scan from the host shows port 53 as open.
I finally found that kube-proxy was not running on some servers, so the forwarding rules were never refreshed. Starting kube-proxy fixed the problem:
systemctl start kube-proxy
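For reference, a way to spot the same condition on a node (a hedged sketch; the unit name and proxy mode depend on how kube-proxy was installed, and the chain below only exists in iptables mode):
systemctl status kube-proxy
# in iptables mode, the DNS Service IP should show up in the Service NAT rules
iptables -t nat -L KUBE-SERVICES -n | grep 10.254.0.2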

Wanted the pod to communicate with the cluster to run some kubectl commands

I am trying to run kubectl commands from inside a pod, to talk to the cluster and delete the pod with --grace-period=0 from a monitoring script, after holding the pod in the deleting state with an extended grace period via a preStop hook. But I was not able to connect to the cluster IP, nor even to ping the pod itself.
[root@pod01 /]# kubectl exec dnsutils cat /etc/resolv.conf
Unable to connect to the server: dial tcp 196.19.0.1:443: connect: network is unreachable
[root@pod01 /]# cat /etc/resolv.conf
nameserver 196.19.0.2
search namespace.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
[root@pod01 /]# kubectl exec -ti dnsutils -- nslookup kubernetes.default
Unable to connect to the server: dial tcp 196.19.0.1:443: connect: network is unreachable
[root@pod01 /]# ping namespace.svc.cluster.local
ping: namespace.svc.cluster.local: Name or service not known
[root@pod01 /]# ^C
[root@pod01 /]# nslookup Cluster_IP
bash: nslookup: command not found
[root@pod01 /]# ping Cluster_IP port
connect: Network is unreachable
[root@pod01 /]# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
196.18.9.8 pod01
[root@pod01 /]# ping 196.18.9.8
connect: Network is unreachable
[root@pod01 /]# nslookup 196.18.9.8
bash: nslookup: command not found
[root@pod01 /]# ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.014 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.013 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.015 ms
64 bytes from localhost (127.0.0.1): icmp_seq=4 ttl=64 time=0.015 ms
^C
--- localhost ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3065ms
Any help? I am not able to run any of these commands to get the pod to talk outside itself, and I am on CentOS 7.
The network components are down in a pod that is in a terminated state, so it will not be able to communicate with the cluster.
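For completeness, while a pod is still healthy and running (not once it is already terminating), kubectl inside a pod would normally reach the API server through the in-cluster service account credentials, assuming the service account has the required RBAC permissions (a sketch; the namespace is a placeholder):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl --server=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --token="$TOKEN" get pods -n <namespace>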

How can I access the Openshift service through ClusterIP from nodes

I am trying to access a Flask server running in one OpenShift pod from another.
For that I created a service as below.
$ oc get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-web-app ClusterIP 172.30.216.112 <none> 8080/TCP 8m
$ oc describe svc my-web-app
Name: my-web-app
Namespace: neo4j-sys
Labels: app=my-web-app
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=my-web-app,deploymentconfig=my-web-app
Type: ClusterIP
IP: 172.30.216.112
Port: 8080-tcp 8080/TCP
TargetPort: 8080/TCP
Endpoints: 172.20.203.104:5000,172.20.49.150:5000
Session Affinity: None
Events: <none>
1)
First, I pinged from one pod to the other pod and got a response.
(app-root) sh-4.2$ ping 172.20.203.104
PING 172.20.203.104 (172.20.203.104) 56(84) bytes of data.
64 bytes from 172.20.203.104: icmp_seq=1 ttl=64 time=5.53 ms
64 bytes from 172.20.203.104: icmp_seq=2 ttl=64 time=0.527 ms
64 bytes from 172.20.203.104: icmp_seq=3 ttl=64 time=3.10 ms
64 bytes from 172.20.203.104: icmp_seq=4 ttl=64 time=2.12 ms
64 bytes from 172.20.203.104: icmp_seq=5 ttl=64 time=0.784 ms
64 bytes from 172.20.203.104: icmp_seq=6 ttl=64 time=6.81 ms
64 bytes from 172.20.203.104: icmp_seq=7 ttl=64 time=18.2 ms
^C
--- 172.20.203.104 ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6012ms
rtt min/avg/max/mdev = 0.527/5.303/18.235/5.704 ms
But when I tried curl, it did not respond.
(app-root) sh-4.2$ curl 172.20.203.104
curl: (7) Failed connect to 172.20.203.104:80; Connection refused
(app-root) sh-4.2$ curl 172.20.203.104:8080
curl: (7) Failed connect to 172.20.203.104:8080; Connection refused
2)
After that, I tried to reach the cluster IP from one pod. In this case, neither ping nor curl could reach it.
(app-root) sh-4.2$ ping 172.30.216.112
PING 172.30.216.112 (172.30.216.112) 56(84) bytes of data.
From 172.20.49.1 icmp_seq=1 Destination Host Unreachable
From 172.20.49.1 icmp_seq=4 Destination Host Unreachable
From 172.20.49.1 icmp_seq=2 Destination Host Unreachable
From 172.20.49.1 icmp_seq=3 Destination Host Unreachable
^C
--- 172.30.216.112 ping statistics ---
7 packets transmitted, 0 received, +4 errors, 100% packet loss, time 6002ms
pipe 4
(app-root) sh-4.2$ curl 172.30.216.112
curl: (7) Failed connect to 172.30.216.112:80; No route to host
Please let me know where I am going wrong here. Why are cases #1 and #2 above failing? How do I access ClusterIP services?
I am completely new to services and how to access them, so I might be missing some basics.
I have gone through other answers such as How can I access the Kubernetes service through ClusterIP, but that one is about NodePort, which does not help me.
Updates based on the comment below from Graham Dumpleton; here are my observations.
For information, this is the log of the Flask server that I am running in the pods:
* Serving Flask app "wsgi" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [14/Nov/2019 04:54:53] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Nov/2019 04:55:05] "GET / HTTP/1.1" 200 -
Is your pod listening on external interfaces on port 8080?
If I understand the question correctly, my intention here is just to communicate between pods via the ClusterIP service. I am not looking to access the pods from external interfaces (other projects, or through web URLs as a load balancer service).
If you get into the pod, can you do curl $HOSTNAME:8080?
Yes, if I run it against localhost or 127.0.0.1, I get a response from the same pod where I run it, as expected.
(app-root) sh-4.2$ curl http://127.0.0.1:5000/
Hello World!
(app-root) sh-4.2$ curl http://localhost:5000/
Hello World!
But if I try with my-web-app or the service IP (ClusterIP), I get no response.
(app-root) sh-4.2$ curl http://172.30.216.112:5000/
curl: (7) Failed connect to 172.30.216.112:5000; No route to host
(app-root) sh-4.2$ curl my-web-app:8080
curl: (7) Failed connect to my-web-app:8080; Connection refused
(app-root) sh-4.2$ curl http://my-web-app:8080/
curl: (7) Failed connect to my-web-app:8080; Connection refused
With the pod IP I also get no response.
(app-root) sh-4.2$ curl http://172.20.49.150:5000/
curl: (7) Failed connect to 172.20.49.150:5000; Connection refused
(app-root) sh-4.2$ curl 172.20.49.150
curl: (7) Failed connect to 172.20.49.150:80; Connection refused
I am answering my own question. Here is how my issue got resolved, based on inputs from Graham Dumpleton.
Initially, I started the Flask server as below:
from flask import Flask
application = Flask(__name__)
if __name__ == "__main__":
    application.run()
This binds the server to http://127.0.0.1:5000/ by default.
As part of the resolution, I changed the bind address to 0.0.0.0 as below:
from flask import Flask
application = Flask(__name__)
if __name__ == "__main__":
    application.run(host='0.0.0.0')
And the log after that change:
* Serving Flask app "wsgi" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
After that, the pods successfully communicated via the ClusterIP. Below are the service details (I scaled up by one more pod).
$ oc describe svc my-web-app
Name: my-web-app
Namespace: neo4j-sys
Labels: app=my-web-app
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=my-web-app,deploymentconfig=my-web-app
Type: ClusterIP
IP: 172.30.4.250
Port: 8080-tcp 8080/TCP
TargetPort: 5000/TCP
Endpoints: 172.20.106.184:5000,172.20.182.118:5000,172.20.83.40:5000
Session Affinity: None
Events: <none>
Below is the successful response.
(app-root) sh-4.2$ curl http://172.30.4.250:8080 // with the ClusterIP, which is my expectation
Hello World!
(app-root) sh-4.2$ curl http://172.20.106.184:5000 // with pod IP
Hello World!
(app-root) sh-4.2$ curl $HOSTNAME:5000 // with $HOSTNAME
Hello World!
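As a side note on the fix: the Flask log itself warns that this is a development server. In a production image the usual approach is to bind a WSGI server to 0.0.0.0 instead, for example with gunicorn (a sketch, assuming the module is wsgi.py exposing application, as the log suggests):
gunicorn --bind 0.0.0.0:5000 wsgi:application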