Connecting a pod to the external world - Kubernetes

Newbie to Kubernetes, so this might be a silly question; bear with me.
I created a cluster with one node and applied a sample deployment like the one below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffedep
spec:
  selector:
    matchLabels:
      app: coffedepapp
  template:
    metadata:
      labels:
        app: coffedepapp
    spec:
      containers:
      - name: coffepod
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80
Now I want to ping/connect to an external website from this pod, so I was expecting the ping to fail, since I thought one needs a Service such as a NodePort or LoadBalancer to connect to the outside world. But surprisingly, the ping passed? I know I am horribly wrong somewhere; please correct my understanding here.
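For reference, the output below was captured from a shell opened inside the pod; a minimal sketch of how such a shell can be opened (the generated pod name will differ):
$ kubectl get pods -l app=coffedepapp
$ kubectl exec -it <coffedep-pod-name> -- sh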
Pod's interfaces and traceroute -
/ # traceroute google.com
traceroute to google.com (172.217.194.138), 30 hops max, 46 byte packets
1 * * *
2 10.244.0.1 (10.244.0.1) 0.013 ms 0.006 ms 0.004 ms
3 178.128.80.254 (178.128.80.254) 1.904 ms 178.128.80.253 (178.128.80.253) 0.720 ms 178.128.80.254 (178.128.80.254) 5.185 ms
4 138.197.250.254 (138.197.250.254) 0.995 ms 138.197.250.248 (138.197.250.248) 0.634 ms 138.197.250.252 (138.197.250.252) 0.523 ms
5 138.197.245.12 (138.197.245.12) 5.295 ms 138.197.245.14 (138.197.245.14) 0.956 ms 138.197.245.0 (138.197.245.0) 1.160 ms
6 103.253.144.255 (103.253.144.255) 1.396 ms 0.857 ms 0.763 ms
7 108.170.254.226 (108.170.254.226) 1.391 ms 74.125.242.35 (74.125.242.35) 0.963 ms 108.170.240.164 (108.170.240.164) 1.679 ms
8 66.249.95.248 (66.249.95.248) 2.136 ms 72.14.235.152 (72.14.235.152) 1.727 ms 66.249.95.248 (66.249.95.248) 1.821 ms
9 209.85.243.180 (209.85.243.180) 2.813 ms 108.170.230.73 (108.170.230.73) 1.831 ms 74.125.252.254 (74.125.252.254) 2.293 ms
10 209.85.246.17 (209.85.246.17) 2.758 ms 209.85.245.135 (209.85.245.135) 2.448 ms 66.249.95.23 (66.249.95.23) 4.538 ms
11^Z[3]+ Stopped traceroute google.com
/ #
/ #
/ #
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether ee:97:21:eb:98:bc brd ff:ff:ff:ff:ff:ff
inet 10.244.0.183/32 brd 10.244.0.183 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::ec97:21ff:feeb:98bc/64 scope link
valid_lft forever preferred_lft forever
Node's interfaces -
root@pool-3mqi2tbi6-b3dc:~# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 3a:c1:6f:8d:0f:45 brd ff:ff:ff:ff:ff:ff
inet 178.128.82.251/20 brd 178.128.95.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.15.0.5/16 brd 10.15.255.255 scope global eth0:1
valid_lft forever preferred_lft forever
inet6 fe80::38c1:6fff:fe8d:f45/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 06:88:c4:23:4b:cc brd ff:ff:ff:ff:ff:ff
inet 10.130.227.173/16 brd 10.130.255.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::488:c4ff:fe23:4bcc/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:61:08:39:8a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
5: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 9a:3c:d3:35:b3:35 brd ff:ff:ff:ff:ff:ff
inet6 fe80::983c:d3ff:fe35:b335/64 scope link
valid_lft forever preferred_lft forever
6: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:13:c5:6e:52:bf brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/32 scope link cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::5013:c5ff:fe6e:52bf/64 scope link
valid_lft forever preferred_lft forever
7: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 4a:ab:3b:3b:0d:b5 brd ff:ff:ff:ff:ff:ff
inet6 fe80::48ab:3bff:fe3b:db5/64 scope link
valid_lft forever preferred_lft forever
9: cilium_health@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether b6:2f:45:83:e0:44 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::b42f:45ff:fe83:e044/64 scope link
valid_lft forever preferred_lft forever
11: lxc1408c930131e@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 8e:45:4d:7b:94:e5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::8c45:4dff:fe7b:94e5/64 scope link
valid_lft forever preferred_lft forever
13: lxc0cef46c3977c@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 16:eb:36:8b:fb:45 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::14eb:36ff:fe8b:fb45/64 scope link
valid_lft forever preferred_lft forever
15: lxca02c5de95d1c@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 62:9d:0c:34:0f:11 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::609d:cff:fe34:f11/64 scope link
valid_lft forever preferred_lft forever
17: lxc32eddb70fa07@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:1a:08:95:fb:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::d81a:8ff:fe95:fbf2/64 scope link
valid_lft forever preferred_lft forever

You don't need Services, NodePorts, or LoadBalancers for a pod to connect to the outside world. As long as your network policies allow egress, outbound traffic from the pod is routed via the node (hop 2 in your traceroute, 10.244.0.1, is the cilium_host address on the node) and is typically masqueraded to the node's IP, so it reaches the internet just like traffic from the node itself.
You need a Service to reach your pods from within the cluster, and a NodePort or LoadBalancer to reach them from outside the cluster.
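For the inbound direction, a minimal sketch of exposing the sample deployment from the question via a NodePort (the assigned node port will differ):
$ kubectl expose deployment coffedep --type=NodePort --port=80
$ kubectl get svc coffedep
$ curl http://<node-public-ip>:<assigned-node-port>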

Related

Why do K8s pods have a 32-bit subnet mask?

On pod A, ip a shows:
3: eth0@if82: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
link/ether ee:15:e4:1e:02:b6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.42.1.134/32 scope global eth0
valid_lft forever preferred_lft forever
On another pod B, ip a shows:
3: eth0@if685: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
link/ether 3e:0d:ce:77:b0:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.42.0.178/32 scope global eth0
valid_lft forever preferred_lft forever
However, I can ping B using ping 10.42.0.178. This is weird, since pod A and pod B are in different networks based on the CIDR notation.
Why is the pod IP's network mask prefix 32 bits instead of, say, 24 bits?

Accessing istio-service-mesh from outside

I have installed minikube and Docker to run a Kubernetes cluster on a virtual machine (Ubuntu 20.04). After that, I installed Istio and deployed the sample Bookinfo application. So far, so good. The problem is that I can't access the application from outside the virtual machine (from my local computer). I've done everything the Istio tutorial says, but at the point where it says the application is now accessible, I simply can't reach it.
The istio-ingressgateway receives an IP address, but it's the same as the cluster IP, so of course my local computer can't find it, as it only knows the address of the node (VM). And the minikube IP is also different from the VM IP.
istio-ingressgateway:
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-egressgateway ClusterIP 10.110.61.129 <none> 80/TCP,443/TCP,15443/TCP 12m
istio-ingressgateway LoadBalancer 10.106.72.253 10.106.72.253 15021:32690/TCP,80:32155/TCP,443:30156/TCP,31400:31976/TCP,15443:31412/TCP 12m
istiod ClusterIP 10.98.94.248 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 13m
minikube ip:
$ echo $(minikube ip)
192.168.49.2
Node addresses:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:95:be:75 brd ff:ff:ff:ff:ff:ff
inet 192.168.164.131/24 brd 192.168.164.255 scope global dynamic ens33
valid_lft 1069sec preferred_lft 1069sec
inet6 fe80::20c:29ff:fe95:be75/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e2:d8:bc:fc brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:e2ff:fed8:bcfc/64 scope link
valid_lft forever preferred_lft forever
6: br-1bb350499bef: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:19:18:56:1b brd ff:ff:ff:ff:ff:ff
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-1bb350499bef
valid_lft forever preferred_lft forever
inet6 fe80::42:19ff:fe18:561b/64 scope link
valid_lft forever preferred_lft forever
12: vethec1def3@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-1bb350499bef state UP group default
link/ether 56:bd:8c:fe:4c:6b brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::54bd:8cff:fefe:4c6b/64 scope link
valid_lft forever preferred_lft forever
The address that should work according to the Istio documentation is http://192.168.49.2:32155/productpage (in my configuration), but this ends in a connection timeout.
Am I missing something?

CentOS VM (managed by OpenStack): added a secondary IP, but the secondary IP cannot ping another host's IP

I'd like to add a secondary IP address to 'eth0' on a CentOS VM managed by OpenStack. The result: I cannot ping another VM's IP from the secondary IP. Could you help?
Steps to reproduce:
ip -f inet addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 172.22.42.220/24 brd 172.22.42.255 scope global noprefixroute dynamic eth0
valid_lft 83609sec preferred_lft 83609sec
ping -I 172.22.42.220 172.22.42.1 is OK
Add a secondary IP with: ip -f inet addr add 172.22.42.222/32 brd 172.22.42.255 dev eth0
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 172.22.42.220/24 brd 172.22.42.255 scope global noprefixroute dynamic eth0
valid_lft 83368sec preferred_lft 83368sec
inet 172.22.42.222/32 brd 172.22.42.255 scope global eth0
valid_lft forever preferred_lft forever
ping -I 172.22.42.220 172.22.42.222 and ping -I 172.22.42.222 172.22.42.220 are OK (-I means source IP)
ping -I 172.22.42.220 172.22.42.1 is OK but ping -I 172.22.42.222 172.22.42.1 fails
You have to add the additional (secondary) IP address to the same port ID in OpenStack first, as an allowed address pair.
Here is an example:
neutron port-update a5e93de7-927a-5402-b545-17f79538d3a6 --allowed_address_pairs list=true type=dict mac_address=ce:9e:5d:ad:6d:80,ip_address=10.101.11.5 ip_address=10.101.11.6
Then you can check with:
neutron port-show a5e93de7-927a-5402-b545-17f79538d3a6
If you know your OpenStack server instance name, you can find the port ID with:
openstack port list --server testserver01
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| a5e93de7-927a-5402-b545-17f79538d3a6| | fa:16:3e:5d:73:24 | ip_address='10.10.0.1', subnet_id='89387a48-5c5e-4dd0-8e0a-2181c97ec19a' | ACTIVE |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
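With the newer unified openstack CLI, the same allowed-address-pairs update can, as far as I know, be done directly; a sketch using the example port ID above and the secondary IP from the question:
$ openstack port set --allowed-address ip-address=172.22.42.222 a5e93de7-927a-5402-b545-17f79538d3a6
$ openstack port show a5e93de7-927a-5402-b545-17f79538d3a6 -c allowed_address_pairs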

I can't access a pod scheduled on another node, but I can access a pod scheduled on the current node

I can't access a pod scheduled on another node, but I can access a pod scheduled on the current node. And vice versa: when I am on the other node, I can only access the pod scheduled on that node and can't access the pod scheduled on the first one. The route rules on the current node are also different from those on the other nodes (in fact, all three nodes in my cluster have different route rules). Some info is listed below.
On the master node 172.16.5.150:
[root@localhost test-deploy]# kubectl get node
NAME STATUS ROLES AGE VERSION
172.16.5.150 Ready <none> 9h v1.16.2
172.16.5.151 Ready <none> 9h v1.16.2
172.16.5.152 Ready <none> 9h v1.16.2
[root@localhost test-deploy]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-controller-5qvwn 1/1 Running 0 46m
default nginx-controller-kgjwm 1/1 Running 0 46m
kube-system calico-kube-controllers-6dbf77c57f-kcqtt 1/1 Running 0 33m
kube-system calico-node-5zdt7 1/1 Running 0 33m
kube-system calico-node-8vqhv 1/1 Running 0 33m
kube-system calico-node-w9tq8 1/1 Running 0 33m
kube-system coredns-7b6b59774c-lzfh7 1/1 Running 0 9h
[root@localhost test-deploy]#
[root@localhost test-deploy]# kcp -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-controller-5qvwn 1/1 Running 0 23m 192.168.102.135 172.16.5.151 <none> <none>
nginx-controller-kgjwm 1/1 Running 0 23m 192.168.102.134 172.16.5.150 <none> <none>
[root@localhost test-deploy]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens32
172.0.0.0 0.0.0.0 255.0.0.0 U 100 0 0 ens32
192.168.102.128 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.102.129 0.0.0.0 255.255.255.255 UH 0 0 0 calia42aeb87aa8
192.168.102.134 0.0.0.0 255.255.255.255 UH 0 0 0 caliefbc513267b
[root@localhost test-deploy]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.10.0.1 <none> 443/TCP 9h
nginx-svc ClusterIP 10.10.189.192 <none> 8088/TCP 23m
[root@localhost test-deploy]# curl 192.168.102.135
curl: (7) Failed to connect to 192.168.102.135: Invalid argument
[root@localhost test-deploy]# curl 192.168.102.134
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@localhost test-deploy]# curl 10.10.189.192:8088
curl: (7) Failed connect to 10.10.189.192:8088; No route to host
[root@localhost test-deploy]# curl 10.10.189.192:8088
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@localhost test-deploy]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:4b:76:b7 brd ff:ff:ff:ff:ff:ff
inet 172.16.5.150/8 brd 172.255.255.255 scope global noprefixroute ens32
valid_lft forever preferred_lft forever
inet6 fe80::92f8:9957:1651:f41/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 12:00:37:16:be:95 brd ff:ff:ff:ff:ff:ff
4: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether b2:9f:49:ff:31:3f brd ff:ff:ff:ff:ff:ff
inet 10.10.0.1/32 brd 10.10.0.1 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.10.0.200/32 brd 10.10.0.200 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.10.189.192/32 brd 10.10.189.192 scope global kube-ipvs0
valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.102.128/32 brd 192.168.102.128 scope global tunl0
valid_lft forever preferred_lft forever
6: calia42aeb87aa8@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
7: caliefbc513267b@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
On the other node, 172.16.5.151:
[root@localhost ~]# curl 10.10.189.192:8088
curl: (7) Failed connect to 10.10.189.192:8088; No route to host
[root@localhost ~]# curl 10.10.189.192:8088
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@localhost ~]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens192
172.16.5.0 0.0.0.0 255.255.255.0 U 100 0 0 ens192
192.168.102.128 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.102.135 0.0.0.0 255.255.255.255 UH 0 0 0 cali44ab0f7df0f
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:38:a2:95 brd ff:ff:ff:ff:ff:ff
inet 172.16.5.151/24 brd 172.16.5.255 scope global noprefixroute ens192
valid_lft forever preferred_lft forever
inet6 fe80::e24a:6e5c:3a44:a7ee/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 76:91:46:b1:06:a7 brd ff:ff:ff:ff:ff:ff
4: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 1a:0d:f4:cf:ab:69 brd ff:ff:ff:ff:ff:ff
inet 10.10.0.1/32 brd 10.10.0.1 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.10.0.200/32 brd 10.10.0.200 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.10.189.192/32 brd 10.10.189.192 scope global kube-ipvs0
valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.102.128/32 brd 192.168.102.128 scope global tunl0
valid_lft forever preferred_lft forever
8: cali44ab0f7df0f@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
The route table doesn't have a route for the tunl0 interface. You can set the environment variable IP_AUTODETECTION_METHOD in the calico.yaml file, under the calico-node container section.
Example:
containers:
  - name: calico-node
    image: xxxxxxx
    env:
      - name: IP_AUTODETECTION_METHOD
        value: interface=ens192
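If Calico is already deployed, the same variable can also be set on the running DaemonSet instead of editing and re-applying calico.yaml; a sketch, assuming the DaemonSet is named calico-node in kube-system (as the pod names above suggest):
$ kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=interface=ens192
$ kubectl -n kube-system rollout status daemonset/calico-node
Since the master here uses ens32 while the other nodes use ens192, a regex value such as 'interface=ens.*' (which Calico accepts) may be needed to cover all nodes.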

How to map a container's interface name in docker-compose to the network created?

I am setting up three Docker containers and two networks with docker-compose (version 1.21.1):
version: '2.1'
services:
  app1:
    build:
      context: .
      dockerfile: "Dockerfile"
    depends_on:
      - redis
    networks:
      - pub
      - default
  redis:
    build:
      context: "tests/redis"
    networks:
      - default
  app2:
    build:
      context: "tests/app2"
    networks:
      - pub
      - default
networks:
  pub:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
        - subnet: "fe80::42:acff:fe10:ee04/64"
  default:
In app1:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
696: eth0@if697: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:10:ee:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.16.238.3/24 brd 172.16.238.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe10:ee03/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::3/64 scope link nodad
valid_lft forever preferred_lft forever
698: eth1@if699: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:f0:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.240.4/20 brd 192.168.255.255 scope global eth1
valid_lft forever preferred_lft forever
However, I want eth1 to support IPv6 as well, or both eth0 and eth1.
The documentation doesn't mention anything about that, nor could I find an option among the network options.
Is there a way to do this?
I had to enable IPv6 for both networks and subnet the IPv6 range.
For the CIDR part I used an online IPv6 subnet calculator; however, I am not sure yet why this worked :p
This is the configuration that worked:
networks:
  default:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "2001:db8:8000::/33"
  pub:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "2001:db8::/33"