Having trouble configuring static IPs for pods attached with a MACVLAN interface - Kubernetes

Here is the scenario.
There is a Deployment through which 2 pods are created. I am attaching a MACVLAN interface to these pods for external communication.
Macvlan definition
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: test-macvlandef01
spec:
config: '{
"cniVersion": "0.3.0",
"name": "test-macvlandef01",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "whereabouts",
"datastore": "kubernetes",
"kubernetes": { "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig" },
"range": "192.168.0.0/24",
"range_start": "192.168.0.44",
"range_end": "192.168.0.45"
}
}'
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: centos
  template:
    metadata:
      labels:
        app: centos
      annotations:
        k8s.v1.cni.cncf.io/networks: "test-macvlandef01"
    spec:
      nodeSelector:
        test: "true"
      containers:
      - name: centos
        image: centos
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash", "-c", "sleep 100000"]
        ports:
        - containerPort: 80
Result: both pods get IPs from the allocated pool.
[master1 ~]# kubectl exec -it centos-test-64f8fbf47f-wrjr7 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether 72:ef:ca:2c:31:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.20.14.176/32 scope global eth0
valid_lft forever preferred_lft forever
5: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default
link/ether 52:2f:bd:f9:03:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.44/24 brd 192.168.0.255 scope global net1
valid_lft forever preferred_lft forever
[master1 ~]# kubectl exec -it centos-test-64f8fbf47f-vtkst ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ae:e6:4e:95:2a:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.20.14.175/32 scope global eth0
valid_lft forever preferred_lft forever
5: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default
link/ether 72:fb:b5:90:d0:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.0.45/24 brd 192.168.0.255 scope global net1
valid_lft forever preferred_lft forever
Now what I need is a bigger allocation pool in the macvlan definition, while having only two specific IPs assigned to the pods.
I tried the configuration below.
Macvlan definition
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: test-macvlandef01
spec:
config: '{
"cniVersion": "0.3.0",
"name": "test-macvlandef01",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "whereabouts",
"datastore": "kubernetes",
"kubernetes": { "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig" },
"range": "192.168.0.0/24",
"range_start": "192.168.0.40",
"range_end": "192.168.0.50"
}
}'
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: centos
  template:
    metadata:
      labels:
        app: centos
      annotations:
        k8s.v1.cni.cncf.io/networks: '[{ "name": "test-macvlandef01","ips": "192.168.0.44"},{"name": "test-macvlandef01","ips": "192.168.0.45"}]'
    spec:
      nodeSelector:
        test: "true"
      containers:
      - name: centos
        image: centos
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash", "-c", "sleep 100000"]
        ports:
        - containerPort: 80
The pods come up without the MACVLAN interface, and I also see no errors associated with the pods.
[master1 ~]# kubectl exec -it centos-test-b59db89f7-2vvqx ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether 62:31:fc:64:8f:5b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.20.14.180/32 scope global eth0
valid_lft forever preferred_lft forever
[master1 ~]# kubectl exec -it centos-test-b59db89f7-6c75h ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether e6:23:30:ff:bf:c3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.20.14.179/32 scope global eth0
valid_lft forever preferred_lft forever
Please suggest any modifications or additions that would help with the requirement.
Thanks in advance.

I want to draw your attention to the two points below. This is a partial answer.
From your post I see that you want to use specific IP addresses. For such functionality, according to the Extension conventions from CNI, you may need to add the "capabilities": {"ips": true} capability to your macvlan definition. Something like this:
spec:
  config: '{
    "cniVersion": "0.3.0",
    "name": "test-macvlandef01",
    "type": "macvlan",
    "capabilities": {"ips": true},
    "master": "eth0",
    "mode": "bridge",
You can also find a good explanation with examples in the Attaching a pod to an additional network documentation.
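For reference, the Multus convention is that "ips" in the annotation is a JSON list of addresses attached to a single network entry, and a static IP is naturally a per-pod setting, since every replica of a Deployment inherits the same template annotation. A minimal sketch on a plain Pod (the pod name here is only illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: centos-static-ip          # illustrative name
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      { "name": "test-macvlandef01", "ips": ["192.168.0.44"] }
    ]'
spec:
  containers:
  - name: centos
    image: centos
    command: ["/bin/bash", "-c", "sleep 100000"]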
I suppose you are using the Whereabouts plugin, since "type": "whereabouts" is present in your macvlan definition. It supports exclusions:
You can also specify ranges to exclude from assignment, so if for example you'd like to assign IP addresses within the range 192.168.2.0/24, you can exclude IP addresses within it by adding them to an exclude list. For example, if you decide to exclude the range 192.168.2.0/28, the first IP address assigned in the range will be 192.168.2.16.
Knowing this, you can specify ranges of IPs to exclude from assignment, in accordance with the Whereabouts IPAM config example. Try adding an exclude field to the macvlan definition with the IPs/subnets that should be excluded.
Possible solution for your particular case:
spec:
  config: '{
    "cniVersion": "0.3.0",
    "name": "test-macvlandef01",
    "type": "macvlan",
    "capabilities": {"ips": true},
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "192.168.0.0/24",
      "range_start": "192.168.0.40",
      "range_end": "192.168.0.50",
      "exclude": [
        "192.168.0.40/32",
        "192.168.0.41/32",
        ...
      ]
    }
  }'
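For your particular requirement (only 192.168.0.44 and 192.168.0.45 assignable out of the .40-.50 pool), a sketch of what a complete exclude list could look like, assuming Whereabouts accepts arbitrary CIDR blocks in exclude:

"exclude": [
  "192.168.0.40/30",
  "192.168.0.46/31",
  "192.168.0.48/31",
  "192.168.0.50/32"
]

These blocks cover .40-.43, .46-.47, .48-.49 and .50, so only .44 and .45 remain available within the configured range.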

Related

Why do K8s pods have a 32-bit subnet mask?

On pod A, ip a shows:
3: eth0@if82: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
link/ether ee:15:e4:1e:02:b6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.42.1.134/32 scope global eth0
valid_lft forever preferred_lft forever
On another pod B, ip a shows:
3: eth0@if685: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
link/ether 3e:0d:ce:77:b0:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.42.0.178/32 scope global eth0
valid_lft forever preferred_lft forever
However, I can ping B using ping 10.42.0.178. This is weird, since pod A and pod B are in different networks based on the CIDR notation.
Why is the pod IP's network mask prefix 32 bits instead of 24 bits?

Single-node MicroK8s: Multus master interface cannot be reached

I have a single-node MicroK8s cluster with Calico.
I have deployed Multus successfully and I can create pods with the 2nd network interface created successfully in the pod, because I can see the interfaces and the IP addresses correctly assigned. The pods can reach each other on the 2nd interface successfully, but I cannot reach the host's eno8 (IP address 10.128.1.244), the Multus master interface, from the pods. I also cannot reach the pods from outside.
I am new to this kind of deployment and need help figuring out where the problem is.
Thanks.
Here are some details about my environment:
ubuntu@test:$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
test Ready <none> 9d v1.21.4-3+e5758f73ed2a04
Ip a on HOST
ubuntu@test:$ ip a
8: eno8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 3c:ec:ef:6c:2c:ff brd ff:ff:ff:ff:ff:ff
inet 10.128.1.244/24 brd 10.128.1.255 scope global eno8
valid_lft forever preferred_lft forever
inet6 fe80::3eec:efff:fe6c:2cff/64 scope link
valid_lft forever preferred_lft forever
ubuntu@test:$ kubectl get pods --all-namespaces | grep -i multus
kube-system kube-multus-ds-amd64-dz42s 1/1 Running 0 175m
Network attachment definition (Helm template):
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: test-network
spec:
config: '{
"cniVersion": "{{ .Values.Multus_cniVersion}}",
"name": "test-network",
"type": "{{ .Values.Multus_driverType}}",
"master": "{{ .Values.Multus_master_interface}}",
"mode": "{{ .Values.Multus_interface_mode}}",
"ipam": {
"type": "{{ .Values.Multus_ipam_type}}",
"subnet": "{{ .Values.Multus_ipam_subnet}}",
"rangeStart": "{{ .Values.Multus_ipam_rangeStart}}",
"rangeEnd": "{{ .Values.Multus_ipam_rangeStop}}",
"routes": [
{ "dst": "{{ .Values.Multus_defaultRoute}}" }
],
"dns": {"nameservers": ["{{ .Values.Multus_DNS}}"]},
"gateway": "{{ .Values.Multus_ipam_gw}}"
}
}'
Multus_cniVersion: 0.3.1
Multus_driverType: macvlan
Multus_master_interface: eno8
Multus_interface_mode: bridge
Multus_ipam_type: host-local
Multus_ipam_subnet: 10.128.1.0/24
Multus_ipam_rangeStart: 10.128.1.147
Multus_ipam_rangeStop: 10.128.1.156
Multus_defaultRoute: 0.0.0.0/0
Multus_DNS: 10.128.1.1
Multus_ipam_gw: 10.128.1.1
ubuntu@test:$ kubectl get network-attachment-definitions
NAME AGE
test-network 8m39s
Network description:
ubuntu@test:$ kubectl describe network-attachment-definitions.k8s.cni.cncf.io test-network
Name: test-network
Namespace: default
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: test-demo
meta.helm.sh/release-namespace: default
API Version: k8s.cni.cncf.io/v1
Kind: NetworkAttachmentDefinition
Metadata:
Creation Timestamp: 2021-09-24T12:15:08Z
Generation: 1
Managed Fields:
API Version: k8s.cni.cncf.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:meta.helm.sh/release-name:
f:meta.helm.sh/release-namespace:
f:labels:
.:
f:app.kubernetes.io/managed-by:
f:spec:
.:
f:config:
Manager: Go-http-client
Operation: Update
Time: 2021-09-24T12:15:08Z
Resource Version: 1062851
Self Link: /apis/k8s.cni.cncf.io/v1/namespaces/default/network-attachment-definitions/test-network
UID: c96f3a0f-b30f-4972-9271-6b2871adf299
Spec:
Config: { "cniVersion": "0.3.1", "name": "test-network", "type": "macvlan", "master": "eno8", "mode": "bridge", "ipam": { "type": "host-local", "subnet": "10.128.1.0/24", "rangeStart": "10.128.1.147", "rangeEnd": "10.128.1.156", "routes": [ { "dst": "0.0.0.0/0" } ], "dns": {"nameservers": ["10.128.1.1"]}, "gateway": "10.128.1.1" } }
Events: <none>
ip a in POD
root@test-deployment-6465bdfccc-k2sst:# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if505: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether 22:a8:17:13:35:39 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.19.149/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20a8:17ff:fe13:3539/64 scope link
valid_lft forever preferred_lft forever
4: eth1@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether de:c1:d7:67:08:93 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.128.1.149/24 brd 10.128.1.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::dcc1:d7ff:fe67:893/64 scope link
valid_lft forever preferred_lft forever
Ping to eno8 in POD
root@test-deployment-6465bdfccc-g8bd4:# ping 10.128.1.244
PING 10.128.1.244 (10.128.1.244) 56(84) bytes of data.
^X^C
--- 10.128.1.244 ping statistics ---
14 packets transmitted, 0 received, 100% packet loss, time 13313ms
Ping to multus gateway
root@test-deployment-6465bdfccc-k2sst:# ping 10.128.1.1
PING 10.128.1.1 (10.128.1.1) 56(84) bytes of data.
From 10.128.1.149 icmp_seq=1 Destination Host Unreachable
From 10.128.1.149 icmp_seq=2 Destination Host Unreachable
From 10.128.1.149 icmp_seq=3 Destination Host Unreachable
From 10.128.1.149 icmp_seq=4 Destination Host Unreachable
From 10.128.1.149 icmp_seq=5 Destination Host Unreachable
From 10.128.1.149 icmp_seq=6 Destination Host Unreachable
^C
--- 10.128.1.1 ping statistics ---
8 packets transmitted, 0 received, +6 errors, 100% packet loss, time 7164ms
pipe 4
Netstat in the POD
root@test-deployment-6465bdfccc-k2sst:# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 169.254.1.1 0.0.0.0 UG 0 0 0 eth0
10.128.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
169.254.1.1 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
ip r in the POD
root@test-deployment-6465bdfccc-g8bd4:# ip r
default via 169.254.1.1 dev eth0
10.128.1.0/24 dev eth1 proto kernel scope link src 10.128.1.149
169.254.1.1 dev eth0 scope link
Your problem may stem from the fact that MACVLAN interfaces cannot be reached from the parent interface on the same host. Say your PC has interface eth0 with IP 10.0.0.2 and you use MACVLAN to give a container an interface whose parent is eth0 (or a sub-interface such as eth0.1), with IP 10.0.0.3. You won't be able to reach services running on 10.0.0.3 from the same host, but you will from another host. Note also that you can't do port forwarding to reach the container, because MACVLAN separates the communication at a lower layer. To resolve this, either use IPVLAN in Layer-3 mode to get a fully routable plane, or use a sub-interface with 802.1Q trunking (but you will need a switch that supports promiscuous mode on the ports to be able to pass VLAN-tagged traffic).
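If you try the IPVLAN approach, a minimal sketch of what the attachment definition could look like in this environment, assuming the ipvlan CNI plugin is installed on the node (the name is illustrative; the IPAM values are taken from the question's setup):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: test-network-ipvlan       # illustrative name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "test-network-ipvlan",
    "type": "ipvlan",
    "master": "eno8",
    "mode": "l3",
    "ipam": {
      "type": "host-local",
      "subnet": "10.128.1.0/24",
      "rangeStart": "10.128.1.147",
      "rangeEnd": "10.128.1.156"
    }
  }'

Keep in mind that in L3 mode traffic is routed rather than bridged, so the rest of the network still needs routes back to the pod addresses.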

Accessing istio-service-mesh from outside

I have installed minikube and Docker to run a Kubernetes cluster on a virtual machine (Ubuntu 20.04). After that, I installed Istio and deployed the sample Bookinfo application. So far, so good. The problem I have is that I can't access the application from outside the virtual machine (my local computer). I've done everything as the Istio tutorial says, but at the point where it says the application is accessible, I simply can't reach it.
The istio-ingressgateway receives an IP address, but it's the same as the cluster IP, so of course my local computer can't find it, as it only knows the address of the node (VM). And the minikube IP is also different from the VM IP.
istio-ingressgateway:
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-egressgateway ClusterIP 10.110.61.129 <none> 80/TCP,443/TCP,15443/TCP 12m
istio-ingressgateway LoadBalancer 10.106.72.253 10.106.72.253 15021:32690/TCP,80:32155/TCP,443:30156/TCP,31400:31976/TCP,15443:31412/TCP 12m
istiod ClusterIP 10.98.94.248 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 13m
minikube ip:
$ echo $(minikube ip)
192.168.49.2
Node addresses:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:95:be:75 brd ff:ff:ff:ff:ff:ff
inet 192.168.164.131/24 brd 192.168.164.255 scope global dynamic ens33
valid_lft 1069sec preferred_lft 1069sec
inet6 fe80::20c:29ff:fe95:be75/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e2:d8:bc:fc brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:e2ff:fed8:bcfc/64 scope link
valid_lft forever preferred_lft forever
6: br-1bb350499bef: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:19:18:56:1b brd ff:ff:ff:ff:ff:ff
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-1bb350499bef
valid_lft forever preferred_lft forever
inet6 fe80::42:19ff:fe18:561b/64 scope link
valid_lft forever preferred_lft forever
12: vethec1def3@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-1bb350499bef state UP group default
link/ether 56:bd:8c:fe:4c:6b brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::54bd:8cff:fefe:4c6b/64 scope link
valid_lft forever preferred_lft forever
The address which should be working according to the istio documentation is http://192.168.49.2:32155/productpage (in my configuration), but this ends in a connection timeout.
Am I missing something?

Connecting pod to external world

Newbie to Kubernetes, so this might be a silly question; bear with me.
I created a cluster with one node and applied a sample deployment like the one below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffedep
spec:
  selector:
    matchLabels:
      app: coffedepapp
  template:
    metadata:
      labels:
        app: coffedepapp
    spec:
      containers:
      - name: coffepod
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80
Now I want to ping/connect to an external website/entity from this pod, so I was expecting my ping to fail, as I thought one needs Services like NodePort/LoadBalancer applied to connect to the outside world. But surprisingly, the ping passed? I know I am horribly wrong somewhere; please correct my understanding here.
Pod's interfaces and traceroute:
/ # traceroute google.com
traceroute to google.com (172.217.194.138), 30 hops max, 46 byte packets
1 * * *
2 10.244.0.1 (10.244.0.1) 0.013 ms 0.006 ms 0.004 ms
3 178.128.80.254 (178.128.80.254) 1.904 ms 178.128.80.253 (178.128.80.253) 0.720 ms 178.128.80.254 (178.128.80.254) 5.185 ms
4 138.197.250.254 (138.197.250.254) 0.995 ms 138.197.250.248 (138.197.250.248) 0.634 ms 138.197.250.252 (138.197.250.252) 0.523 ms
5 138.197.245.12 (138.197.245.12) 5.295 ms 138.197.245.14 (138.197.245.14) 0.956 ms 138.197.245.0 (138.197.245.0) 1.160 ms
6 103.253.144.255 (103.253.144.255) 1.396 ms 0.857 ms 0.763 ms
7 108.170.254.226 (108.170.254.226) 1.391 ms 74.125.242.35 (74.125.242.35) 0.963 ms 108.170.240.164 (108.170.240.164) 1.679 ms
8 66.249.95.248 (66.249.95.248) 2.136 ms 72.14.235.152 (72.14.235.152) 1.727 ms 66.249.95.248 (66.249.95.248) 1.821 ms
9 209.85.243.180 (209.85.243.180) 2.813 ms 108.170.230.73 (108.170.230.73) 1.831 ms 74.125.252.254 (74.125.252.254) 2.293 ms
10 209.85.246.17 (209.85.246.17) 2.758 ms 209.85.245.135 (209.85.245.135) 2.448 ms 66.249.95.23 (66.249.95.23) 4.538 ms
11^Z[3]+ Stopped traceroute google.com
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether ee:97:21:eb:98:bc brd ff:ff:ff:ff:ff:ff
inet 10.244.0.183/32 brd 10.244.0.183 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::ec97:21ff:feeb:98bc/64 scope link
valid_lft forever preferred_lft forever
Node's interfaces -
root@pool-3mqi2tbi6-b3dc:~# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 3a:c1:6f:8d:0f:45 brd ff:ff:ff:ff:ff:ff
inet 178.128.82.251/20 brd 178.128.95.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.15.0.5/16 brd 10.15.255.255 scope global eth0:1
valid_lft forever preferred_lft forever
inet6 fe80::38c1:6fff:fe8d:f45/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 06:88:c4:23:4b:cc brd ff:ff:ff:ff:ff:ff
inet 10.130.227.173/16 brd 10.130.255.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::488:c4ff:fe23:4bcc/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:61:08:39:8a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
5: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 9a:3c:d3:35:b3:35 brd ff:ff:ff:ff:ff:ff
inet6 fe80::983c:d3ff:fe35:b335/64 scope link
valid_lft forever preferred_lft forever
6: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:13:c5:6e:52:bf brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/32 scope link cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::5013:c5ff:fe6e:52bf/64 scope link
valid_lft forever preferred_lft forever
7: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 4a:ab:3b:3b:0d:b5 brd ff:ff:ff:ff:ff:ff
inet6 fe80::48ab:3bff:fe3b:db5/64 scope link
valid_lft forever preferred_lft forever
9: cilium_health@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether b6:2f:45:83:e0:44 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::b42f:45ff:fe83:e044/64 scope link
valid_lft forever preferred_lft forever
11: lxc1408c930131e@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 8e:45:4d:7b:94:e5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::8c45:4dff:fe7b:94e5/64 scope link
valid_lft forever preferred_lft forever
13: lxc0cef46c3977c@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 16:eb:36:8b:fb:45 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::14eb:36ff:fe8b:fb45/64 scope link
valid_lft forever preferred_lft forever
15: lxca02c5de95d1c@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 62:9d:0c:34:0f:11 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::609d:cff:fe34:f11/64 scope link
valid_lft forever preferred_lft forever
17: lxc32eddb70fa07@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:1a:08:95:fb:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::d81a:8ff:fe95:fbf2/64 scope link
valid_lft forever preferred_lft forever
You don't need Services, NodePorts, or LoadBalancers for a pod to connect to the outside world. If your network policies allow pods to talk to the outside, you can.
You need Services to access your pods from within your cluster. You need LoadBalancers or NodePorts to connect to your cluster from outside.
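For completeness, a minimal sketch of a NodePort Service for the deployment above, in case you later want to reach the pod from outside the cluster (the Service name and nodePort are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: coffedep-svc              # illustrative name
spec:
  type: NodePort
  selector:
    app: coffedepapp
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080               # illustrative; must fall within the cluster's NodePort range

This exposes the pod on <node-IP>:30080 for inbound traffic; it has no effect on the pod's own outbound connectivity, which is why your ping worked without it.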

How to map a container's interface name in docker-compose to the created network?

I am setting up 3 Docker containers and two networks with docker-compose (version 1.21.1):
version: '2.1'
services:
  app1:
    build:
      context: .
      dockerfile: "Dockerfile"
    depends_on:
      - redis
    networks:
      - pub
      - default
  redis:
    build:
      context: "tests/redis"
    networks:
      - default
  app2:
    build:
      context: "tests/app2"
    networks:
      - pub
      - default
networks:
  pub:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
        - subnet: "fe80::42:acff:fe10:ee04/64"
  default:
In app1:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
696: eth0@if697: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:10:ee:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.16.238.3/24 brd 172.16.238.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe10:ee03/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::3/64 scope link nodad
valid_lft forever preferred_lft forever
698: eth1@if699: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:f0:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.240.4/20 brd 192.168.255.255 scope global eth1
valid_lft forever preferred_lft forever
However, I want eth1 to support IPv6, or both eth0 and eth1.
The documentation doesn't mention anything about that, nor could I find an option among the network options.
Is there a way to do this?
I had to enable IPv6 for both networks and subnet the IPv6 range.
For the CIDR part I used an online IPv6 subnet calculator; however, I am not sure yet why this worked :p
This is the configuration that worked:
networks:
  default:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "2001:db8:8000::/33"
  pub:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "2001:db8::/33"