minikube ingress remote access - kubernetes

I have an Nginx service running in the minikube VM, which has IP 192.168.99.106
kubectl get ingress
NAME              CLASS    HOSTS   ADDRESS          PORTS   AGE
ingress-service   <none>   *       192.168.99.106   80      153m
kubectl describe ingress
Name: ingress-service
Namespace: default
Address: 192.168.99.106
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/ fe-cluster-ip-service:3000 (172.17.0.20:3000)
/login/ login-cluster-ip-service:9090 (172.17.0.18:9090)
Annotations: kubernetes.io/ingress.class: nginx
Events: <none>
I want to expose 192.168.99.106:80 to the outside world so that I can access the app from 10.105.230.34:8888
enp129s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.105.230.34 netmask 255.255.255.0 broadcast 10.105.230.255
inet6 fe80::2be:75ff:fee1:57ce prefixlen 64 scopeid 0x20<link>
ether 00:be:75:e1:57:ce txqueuelen 1000 (Ethernet)
RX packets 3441670 bytes 4623846194 (4.3 GiB)
RX errors 0 dropped 38 overruns 0 frame 0
TX packets 971511 bytes 235934965 (225.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xfbc00000-fbcfffff
Is it possible to achieve this? I tried tunneling but could not make it work.

I sorted it out by setting up an Nginx reverse proxy on the host that forwards to the minikube IP.
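For reference, a minimal sketch of such a reverse proxy on the host, assuming Nginx runs directly on the machine that owns 10.105.230.34 (the config file path is illustrative):
# /etc/nginx/conf.d/minikube-proxy.conf (illustrative path)
server {
    listen 8888;                                 # port exposed on the host network
    location / {
        proxy_pass http://192.168.99.106:80;     # minikube ingress address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Requests arriving at 10.105.230.34:8888 are then forwarded to the minikube ingress at 192.168.99.106:80, which routes them to the ClusterIP services configured above.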

Related

Single node Microk8s multus master interface cannot be reached

I have a single-node Microk8s with Calico.
I have deployed Multus successfully and I can create pods with the 2nd network interface created successfully in the pod; I can see the interfaces and the IP addresses correctly assigned. The pods can reach each other on the 2nd interface successfully, but from the pods I cannot reach the host's eno8 (IP address 10.128.1.244), the Multus master interface. I also cannot reach the pods from outside.
I am new to this kind of deployment and need help figuring out where the problem is.
Thanks.
Here are some details about my environment:
ubuntu@test:$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
test Ready <none> 9d v1.21.4-3+e5758f73ed2a04
Ip a on HOST
ubuntu@test:$ ip a
8: eno8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 3c:ec:ef:6c:2c:ff brd ff:ff:ff:ff:ff:ff
inet 10.128.1.244/24 brd 10.128.1.255 scope global eno8
valid_lft forever preferred_lft forever
inet6 fe80::3eec:efff:fe6c:2cff/64 scope link
valid_lft forever preferred_lft forever
ubuntu@test:$ kubectl get pods --all-namespaces | grep -i multus
kube-system kube-multus-ds-amd64-dz42s 1/1 Running 0 175m
Network Deployment:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: test-network
spec:
config: '{
"cniVersion": "{{ .Values.Multus_cniVersion}}",
"name": "test-network",
"type": "{{ .Values.Multus_driverType}}",
"master": "{{ .Values.Multus_master_interface}}",
"mode": "{{ .Values.Multus_interface_mode}}",
"ipam": {
"type": "{{ .Values.Multus_ipam_type}}",
"subnet": "{{ .Values.Multus_ipam_subnet}}",
"rangeStart": "{{ .Values.Multus_ipam_rangeStart}}",
"rangeEnd": "{{ .Values.Multus_ipam_rangeStop}}",
"routes": [
{ "dst": "{{ .Values.Multus_defaultRoute}}" }
],
"dns": {"nameservers": ["{{ .Values.Multus_DNS}}"]},
"gateway": "{{ .Values.Multus_ipam_gw}}"
}
}'
Multus_cniVersion: 0.3.1
Multus_driverType: macvlan
Multus_master_interface: eno8
Multus_interface_mode: bridge
Multus_ipam_type: host-local
Multus_ipam_subnet: 10.128.1.0/24
Multus_ipam_rangeStart: 10.128.1.147
Multus_ipam_rangeStop: 10.128.1.156
Multus_defaultRoute: 0.0.0.0/0
Multus_DNS: 10.128.1.1
Multus_ipam_gw: 10.128.1.1
ubuntu@test:$ kubectl get network-attachment-definitions
NAME AGE
test-network 8m39s
Network description:
ubuntu@test:$ kubectl describe network-attachment-definitions.k8s.cni.cncf.io test-network
Name: test-network
Namespace: default
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: test-demo
meta.helm.sh/release-namespace: default
API Version: k8s.cni.cncf.io/v1
Kind: NetworkAttachmentDefinition
Metadata:
Creation Timestamp: 2021-09-24T12:15:08Z
Generation: 1
Managed Fields:
API Version: k8s.cni.cncf.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:meta.helm.sh/release-name:
f:meta.helm.sh/release-namespace:
f:labels:
.:
f:app.kubernetes.io/managed-by:
f:spec:
.:
f:config:
Manager: Go-http-client
Operation: Update
Time: 2021-09-24T12:15:08Z
Resource Version: 1062851
Self Link: /apis/k8s.cni.cncf.io/v1/namespaces/default/network-attachment-definitions/test-network
UID: c96f3a0f-b30f-4972-9271-6b2871adf299
Spec:
Config: { "cniVersion": "0.3.1", "name": "test-network", "type": "macvlan", "master": "eno8", "mode": "bridge", "ipam": { "type": "host-local", "subnet": "10.128.1.0/24", "rangeStart": "10.128.1.147", "rangeEnd": "10.128.1.156", "routes": [ { "dst": "0.0.0.0/0" } ], "dns": {"nameservers": ["10.128.1.1"]}, "gateway": "10.128.1.1" } }
Events: <none>
ip a in POD
root@test-deployment-6465bdfccc-k2sst:# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if505: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether 22:a8:17:13:35:39 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.1.19.149/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20a8:17ff:fe13:3539/64 scope link
valid_lft forever preferred_lft forever
4: eth1@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether de:c1:d7:67:08:93 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.128.1.149/24 brd 10.128.1.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::dcc1:d7ff:fe67:893/64 scope link
valid_lft forever preferred_lft forever
Ping to eno8 in POD
root@test-deployment-6465bdfccc-g8bd4:# ping 10.128.1.244
PING 10.128.1.244 (10.128.1.244) 56(84) bytes of data.
^X^C
--- 10.128.1.244 ping statistics ---
14 packets transmitted, 0 received, 100% packet loss, time 13313ms
Ping to multus gateway
root@test-deployment-6465bdfccc-k2sst:# ping 10.128.1.1
PING 10.128.1.1 (10.128.1.1) 56(84) bytes of data.
From 10.128.1.149 icmp_seq=1 Destination Host Unreachable
From 10.128.1.149 icmp_seq=2 Destination Host Unreachable
From 10.128.1.149 icmp_seq=3 Destination Host Unreachable
From 10.128.1.149 icmp_seq=4 Destination Host Unreachable
From 10.128.1.149 icmp_seq=5 Destination Host Unreachable
From 10.128.1.149 icmp_seq=6 Destination Host Unreachable
^C
--- 10.128.1.1 ping statistics ---
8 packets transmitted, 0 received, +6 errors, 100% packet loss, time 7164ms
pipe 4
Netstat in the POD
root@test-deployment-6465bdfccc-k2sst:# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 169.254.1.1 0.0.0.0 UG 0 0 0 eth0
10.128.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
169.254.1.1 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
ip r in the POD
root@test-deployment-6465bdfccc-g8bd4:# ip r
default via 169.254.1.1 dev eth0
10.128.1.0/24 dev eth1 proto kernel scope link src 10.128.1.149
169.254.1.1 dev eth0 scope link
Your problem may stem from the fact that MACVLAN interfaces cannot be reached from the host interface that serves as their parent. Say your PC has interface eth0 with IP 10.0.0.2 and you use MACVLAN to map an interface into a container, using eth0 (or a sub-interface such as eth0.1) as the parent and assigning the container IP 10.0.0.3. You won't be able to reach services running on 10.0.0.3 from the same host, but you will from another host. Note that you can't simply port-forward to reach the container either, because MACVLAN separates the communication at a lower layer. To resolve this, either use IPVLAN in layer-3 mode to get a fully routable plane, or use a sub-interface with 802.1q trunking (but then you will need a switch that supports promiscuous mode on the ports so the VLAN-tagged traffic can pass).
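If you go the IPVLAN route, a NetworkAttachmentDefinition along these lines could replace the macvlan one (a sketch only; the name is made up and the master/IPAM values just mirror the ones from the question):
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: test-network-ipvlan
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "test-network-ipvlan",
    "type": "ipvlan",
    "master": "eno8",
    "mode": "l3",
    "ipam": {
      "type": "host-local",
      "subnet": "10.128.1.0/24",
      "rangeStart": "10.128.1.147",
      "rangeEnd": "10.128.1.156",
      "routes": [ { "dst": "0.0.0.0/0" } ]
    }
  }'
In l3 mode the parent interface routes traffic for the pod interfaces at layer 3 instead of bridging them at layer 2, which is what the answer above relies on for reachability.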

netcat listening pod in kubernetes namespace unable to connect

I am running Kubernetes v1.19.4 with Weave Net (image: weaveworks/weave-npc:2.7.0).
There are no network policies active in the default namespace.
I want to run a netcat listener on port 8080 of pod1 and connect to it from pod2.
[root@node01 ~]# kubectl run pod1 -i -t --image=ubuntu -- /bin/bash
If you don't see a command prompt, try pressing enter.
root@pod1:/# apt update ; apt install netcat-openbsd -y
........
root@pod1:/# nc -l -p 8080
I verify the port is listening on pod1 by:
[root@node01 ~]# kubectl exec -i -t pod1 -- /bin/bash
root@pod1:/# apt install net-tools -y
...........
root@pod1:/# netstat -tulpen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 0 213960 263/nc
root@pod1:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet 10.32.0.3 netmask 255.240.0.0 broadcast 10.47.255.255
ether a2:b9:3e:bc:6e:25 txqueuelen 0 (Ethernet)
RX packets 8429 bytes 17438639 (17.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4217 bytes 284639 (284.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I install pod2 with netcat on it:
[root@node01 ~]# kubectl run pod2 -i -t --image=ubuntu -- /bin/bash
If you don't see a command prompt, try pressing enter.
root@pod2:/# apt update ; apt install netcat-openbsd -y
I test my netcat listener on pod1 from pod2:
root@pod2:/# nc 10.32.0.3 8080
....times out
So I decided to create a service for port 8080 on pod1:
kubectl expose pod pod1 --port=8080 ; kubectl get svc ; kubectl get netpol
[root@node01 ~]# kubectl expose pod pod1 --port=8080 ; kubectl get svc
service/pod1 exposed
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apache ClusterIP 10.104.218.123 <none> 80/TCP 20d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21d
nginx ClusterIP 10.98.221.196 <none> 80/TCP 13d
pod1 ClusterIP 10.105.194.196 <none> 8080/TCP 2s
No resources found in default namespace.
Retry from pod2, now via the service:
ping pod1
PING pod1.default.svc.cluster.local (10.105.194.196) 56(84) bytes of data.
root@pod2:/# nc pod1 8080
....times out
I also tried this with the regular netcat package.
For good measure I try to expose port 8080 on the pod as a NodePort:
[root@node01 ~]# kubectl delete svc pod1 ; kubectl expose pod pod1 --port=8080 --type=NodePort ; kubectl get svc
When I try to access that port from outside Kubernetes I am unable to connect; for good measure I also test the SSH port to verify that my base connectivity is OK:
user@DESKTOP-7TIH9:~$ nc -zv 10.10.70.112 30743
nc: connect to 10.10.70.112 port 30743 (tcp) failed: Connection refused
user@DESKTOP-7TIH9:~$ nc -zv 10.10.70.112 22
Connection to 10.10.70.112 22 port [tcp/ssh] succeeded!
Can anybody tell me whether I am doing something wrong or have the wrong expectation, or advise me how to proceed?
Thank you in advance.
While trying to solve this I had at some point enabled the firewall on the k8s hosts, which left me with a broken cluster. I decided to re-init the cluster and make sure all the required firewall ports were opened, including the ones listed here: https://www.weave.works/docs/net/latest/faq#ports
All is working now!
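For reference, the ports that FAQ lists for Weave Net are TCP 6783 plus UDP 6783-6784 between the nodes; on a firewalld-based host opening them could look roughly like this (a sketch, assuming firewalld is the firewall in use):
# run on every node
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --permanent --add-port=6783-6784/udp
firewall-cmd --reload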

How to add IP route(s) so Kubernetes cluster addresses go through the appropriate adapter

I have installed a Kubernetes cluster (one master and one worker node) on separate stand-alone CentOS 8 servers as per the instructions in the link below.
https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/
The Weave Net CNI plugin was installed as per the above link. Now I can see the new network adapter below on our K8s master and worker-node servers.
weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1376
inet 10.32.0.1 netmask 255.240.0.0 broadcast 10.47.255.255
inet6 fe80::a07d:21ff:fef1:4656 prefixlen 64 scopeid 0x20<link>
ether a2:7d:21:f1:46:56 txqueuelen 1000 (Ethernet)
RX packets 141 bytes 13322 (13.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 48 bytes 4896 (4.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
But the problem is that from the host server we are unable to ping or access any of our remote site/location IPs (ping response given below), whereas local IPs are pinging and accessible.
ping -c 4 120.121.5.48
PING 120.121.5.48 (120.121.5.48) 56(84) bytes of data.
From 10.32.0.1 icmp_seq=1 Destination Host Unreachable
From 10.32.0.1 icmp_seq=2 Destination Host Unreachable
From 10.32.0.1 icmp_seq=3 Destination Host Unreachable
From 10.32.0.1 icmp_seq=4 Destination Host Unreachable
--- 120.121.5.48 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 2999ms
pipe 4
Also, from the host server we tried to connect to our remote LDAP server through telnet; it shows the error message below.
# telnet 120.121.5.48 389
Trying 120.121.5.48...
telnet: connect to address 120.121.5.48: No route to host
Our K8s master and worker-node servers have 23 network adapters with statically configured IPs. Does any additional configuration need to be done so that the K8s CNI is reachable via the default routing?
The ip route show and route -n output is as follows.
# ip route show
default via 45.46.47.1 dev ens1f0 proto static metric 100
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
45.46.47.0/24 dev ens1f0 proto kernel scope link src 45.46.47.48 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 45.46.47.1 0.0.0.0 UG 100 0 0 ens1f0
10.32.0.0 0.0.0.0 255.255.255.0 U 10 0 0 ens1f0
10.32.0.0 0.0.0.0 255.240.0.0 U 0 0 0 weave
45.46.47.0 0.0.0.0 255.255.255.0 U 100 0 0 ens1f0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
I tried to change the weave route to the default with the command below. It executed successfully, but the problem remains the same.
ip route add 10.32.0.0/24 via 45.46.47.1 dev ens1f0 metric 100
If I run ifconfig weave down, everything works fine, but to use the Kubernetes cluster I need the Weave Net network adapter. So please help me add IP route(s) so that my Kubernetes cluster addresses go through the appropriate adapter and I am able to access both our local and remote location servers.
I changed the CNI plugin from Weave Net to Flannel, and now it is working as expected.
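For anyone making the same swap, a rough outline of what it involves (a sketch only; the manifest URLs and cleanup paths are the commonly documented ones and may differ for your versions, and Flannel by default expects the cluster pod CIDR 10.244.0.0/16, i.e. kubeadm init --pod-network-cidr=10.244.0.0/16):
# Remove Weave Net using the same manifest URL it was installed from
kubectl delete -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# Clean up leftover CNI state on every node (illustrative paths)
sudo rm -f /etc/cni/net.d/*weave*
sudo ip link delete weave
# Install Flannel
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml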

Pods on different nodes can't ping each other

I set up a 1-master, 2-node k8s cluster according to the documentation. A pod can ping the other pod on the same node but can't ping the pod on the other node.
To demonstrate the problem I deployed the deployment below, which has 3 replicas. Two of them sit on the same node; the other pod sits on the other node.
$ cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-21-115.us-west-2.compute.internal Ready master 20m v1.11.2
ip-172-31-26-62.us-west-2.compute.internal Ready 19m v1.11.2
ip-172-31-29-204.us-west-2.compute.internal Ready 14m v1.11.2
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-deployment-966857787-22qq7 1/1 Running 0 11m 10.244.2.3 ip-172-31-29-204.us-west-2.compute.internal
nginx-deployment-966857787-lv7dd 1/1 Running 0 11m 10.244.1.2 ip-172-31-26-62.us-west-2.compute.internal
nginx-deployment-966857787-zkzg6 1/1 Running 0 11m 10.244.2.2 ip-172-31-29-204.us-west-2.compute.internal
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 443/TCP 21m
nginx-svc ClusterIP 10.105.205.10 80/TCP 11m
Everything looks fine.
Let me show you the containers.
# docker exec -it 489b180f512b /bin/bash
root@nginx-deployment-966857787-zkzg6:/# ifconfig
eth0: flags=4163 mtu 8951
inet 10.244.2.2 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::cc4d:61ff:fe8a:5aeb prefixlen 64 scopeid 0x20
root@nginx-deployment-966857787-zkzg6:/# ping 10.244.2.3
PING 10.244.2.3 (10.244.2.3) 56(84) bytes of data.
64 bytes from 10.244.2.3: icmp_seq=1 ttl=64 time=0.066 ms
64 bytes from 10.244.2.3: icmp_seq=2 ttl=64 time=0.055 ms
^C
So it pings its neighbor pod on the same node.
root@nginx-deployment-966857787-zkzg6:/# ping 10.244.1.2
PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.
^C
--- 10.244.1.2 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1059ms
And it can't ping its replica on the other node.
Here are the host interfaces:
# ifconfig
cni0: flags=4163 mtu 8951
inet 10.244.2.1 netmask 255.255.255.0 broadcast 0.0.0.0
docker0: flags=4099 mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
eth0: flags=4163 mtu 9001
inet 172.31.29.204 netmask 255.255.240.0 broadcast 172.31.31.255
flannel.1: flags=4163 mtu 8951
inet 10.244.2.0 netmask 255.255.255.255 broadcast 0.0.0.0
lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
veth09fb984a: flags=4163 mtu 8951
inet6 fe80::d819:14ff:fe06:174c prefixlen 64 scopeid 0x20
veth87b3563e: flags=4163 mtu 8951
inet6 fe80::d09c:d2ff:fe7b:7dd7 prefixlen 64 scopeid 0x20
# ifconfig
cni0: flags=4163 mtu 8951
inet 10.244.1.1 netmask 255.255.255.0 broadcast 0.0.0.0
docker0: flags=4099 mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
eth0: flags=4163 mtu 9001
inet 172.31.26.62 netmask 255.255.240.0 broadcast 172.31.31.255
flannel.1: flags=4163 mtu 8951
inet 10.244.1.0 netmask 255.255.255.255 broadcast 0.0.0.0
lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
veth9733e2e6: flags=4163 mtu 8951
inet6 fe80::8003:46ff:fee2:abc2 prefixlen 64 scopeid 0x20
Processes on the nodes:
# ps auxww|grep kube
root 4059 0.1 2.8 43568 28316 ? Ssl 00:31 0:01 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
root 4260 0.0 3.4 358984 34288 ? Ssl 00:31 0:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 4455 1.1 9.6 760868 97260 ? Ssl 00:31 0:14 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
Because of this network problem the ClusterIP is also unreachable:
$ curl 10.105.205.10:80
Any suggestion?
Thanks.
I found the problem.
Flannel uses UDP ports 8285 and 8472, which were being blocked by AWS security groups; I had only opened TCP ports.
I enabled UDP ports 8285 and 8472 as well as TCP 6443, 10250 and 10256.
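For reference, opening those ports with the AWS CLI could look roughly like this (a sketch only; sg-0123456789abcdef0 is a placeholder and it assumes all nodes share that one security group):
SG=sg-0123456789abcdef0
# flannel backends: 8285 (udp backend), 8472 (vxlan backend)
for p in 8285 8472; do
  aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol udp --port "$p" --source-group "$SG"
done
# API server, kubelet and kube-proxy health TCP ports mentioned above
for p in 6443 10250 10256; do
  aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port "$p" --source-group "$SG"
done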
The Docker virtual bridge interface docker0 currently has IP 172.17.0.1 on both hosts, but as per the docker/flannel integration guide, the docker0 virtual bridge should be inside the flannel network on each host.
A high-level workflow of the flannel/docker networking integration (see the sketch after the reference link):
Flannel creates /run/flannel/subnet.env from the etcd network configuration during flanneld startup.
Docker reads /run/flannel/subnet.env, sets the --bip flag during dockerd startup, and assigns docker0 an IP from the flannel network.
Refer to the docker/flannel integration doc for more details:
http://docker-k8s-lab.readthedocs.io/en/latest/docker/docker-flannel.html#restart-docker-daemon-with-flannel-network
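For illustration, the file flanneld writes and the docker daemon flags derived from it look roughly like this (the subnet values are examples, not taken from this cluster):
# /run/flannel/subnet.env as written by flanneld (example values)
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.2.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=true
# dockerd started so that docker0 gets its address from the flannel subnet
source /run/flannel/subnet.env
dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}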

expose kubernetes api to the rest of the network

ss -tnulp|grep 8443
tcp LISTEN 0 128 172.16.1.4:8443 *:* users:(("kube-apiserver",pid=29513,fd=5))
I have my API server running and I want to expose it to the rest of the network. This is the network config on my cluster:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.1.4 netmask 255.255.255.0 broadcast 172.16.1.255
inet6 fe80::f816:3eff:feb5:93a3 prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:b5:93:a3 txqueuelen 1000 (Ethernet)
RX packets 218935 bytes 2518654013 (2.3 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 160281 bytes 33994810 (32.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 139.54.130.39 netmask 255.255.254.0 broadcast 139.54.131.255
inet6 3ffe:302:11:2:f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x0<global>
inet6 fd12:1f4b:e0bf:10:f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x0<global>
inet6 fd12:1f4b:e0bf:1:f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x0<global>
inet6 fe80::f816:3eff:fe46:ab28 prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:46:ab:28 txqueuelen 1000 (Ethernet)
RX packets 3227129 bytes 845879874 (806.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1072031 bytes 132806957 (126.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The VM has an external IP, 139.54.130.39.
Any leads on how to do that?
Did you try using this option?
- --apiserver-advertise-address=139.54.130.39
kubectl over this network will then be able to handshake with 139.54.130.39.
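If the cluster was set up with kubeadm, that option is normally passed when the control plane is initialized; a sketch, assuming a fresh kubeadm init:
kubeadm init --apiserver-advertise-address=139.54.130.39
For an already-running cluster, the address has to be changed on the running apiserver instead, as described in the next answer.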
You can apply this depending on your installation:
.......
In case you installed the apiserver as a pod, you can simply change the --advertise-address parameter in
/etc/kubernetes/manifests/kube-apiserver.yaml
or
check/list the kube-system pods to get the actual apiserver pod name and edit it (carefully):
kubectl get pod -n kube-system
kubectl edit pod -n kube-system kube-apiserver
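For reference, the relevant lines in a kubeadm-generated kube-apiserver.yaml look roughly like the excerpt below (only the two flags of interest are shown; the addresses come from the question and the rest of the manifest is omitted):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=139.54.130.39
    - --bind-address=0.0.0.0
The kubelet watches this manifest directory, so saving the change restarts the static pod automatically.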
........
In case you installed the apiserver as a service, edit the systemd unit, e.g.:
vim /etc/systemd/system/kube-apiserver.service
Edit:
ExecStart=/usr/local/bin/kube-apiserver \
  --bind-address=0.0.0.0 \
  --advertise-address=139.54.130.39
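After editing the unit, reload systemd and restart the service (standard systemd commands):
systemctl daemon-reload
systemctl restart kube-apiserver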