We currently have the following Kubernetes setup (v1.13.1, set up with kubeadm), with connectivity established between the nodes:
Master node (bare metal)
5 worker nodes (bare metal)
2 worker nodes (cloud)
There is no proxy in between to access the cluster; currently we access services via hostname:NodePort.
We are experiencing an issue with accessing services via NodePort on the 2 cloud worker nodes: the service is accessible via IPv6, but not via IPv4:
IPv6:
telnet localhost6 30005
Trying ::1...
Connected to localhost6.
Escape character is '^]'.
IPv4:
telnet localhost4 30005
Trying 127.0.0.1...
The thing is that both work on the bare metal nodes. If I run netstat -napl | grep 30005, I can see kube-proxy listening on this port (tcp6). I presumed this means it does not listen on plain tcp (IPv4), but apparently that is not the case (I see the same picture on the bare metal worker nodes):
tcp6 7 0 :::30005 :::* LISTEN 24658/kube-proxy
I have also read that services use IPv6, but judging by the bare metal worker nodes, it seems there should not be a problem using IPv4 as well.
Any idea what would cause that issue and how to solve it?
Thank you and best regards,
Bostjan
In case someone stumbles upon the same issue: the problem was unopened firewall ports for the flannel network overlay (example commands for opening them follow the list):
8285 UDP - flannel UDP backend
8472 UDP - flannel vxlan backend
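For reference, a minimal sketch of opening those ports, assuming the nodes run firewalld (adjust to whatever firewall you actually use):
# Open the flannel overlay ports (example assumes firewalld)
firewall-cmd --permanent --add-port=8285/udp   # flannel UDP backend
firewall-cmd --permanent --add-port=8472/udp   # flannel vxlan backend
firewall-cmd --reload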
I have a machine X with a lot of IPs. podman-compose with OpenSearch and OpenSearch Dashboards binds the containers to the wrong (unexposed) IP. I tried to force the IP, but if I do so, podman-compose breaks. How can I do this correctly?
I tried to add an IPv4 address in the docker-compose.yml, and I tried to modify the images and force the right IP wherever I found 0.0.0.0, but it keeps breaking.
Docker / Podman container IPs are not accessible from external clients.
You need to expose TCP or UDP ports from your container to the host system, and then clients will connect to <host IP>:<host port>.
The host port and the container port do not need to be the same port.
E.g. you can run multiple web server containers all using port 80; however, you will need to pick unique ports on your host OS that are not used by other services to port-map to the containers, e.g. 80->80, 81->80, 8080->80, etc.
Once you create the port definitions in your container configuration Podman will handle the port forwarding from the host to the container.
You might need to open the ports on the host firewall to allow clients to connect. 0.0.0.0 is a wildcard address meaning "listen on all IPv4 interfaces of the host", not a specific client-reachable IP.
Let's say your host is 10.1.1.20, your OpenSearch Dashboards container is 172.16.8.4, and the dashboard web app is configured to listen on port 5001/TCP.
You will need a ports directive in your docker-compose.yml file to map host port 5001 to container port 5001, similar to the below.
services:
  opensearch-dashboard:
    ports:
      - "5001:5001"
As long as port 5001 is permitted on your host firewall, the client should be able to connect using https://10.1.1.20:5001/
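If the host happens to use firewalld, here is a hedged sketch of permitting that port and checking connectivity (the commands are an assumption, not taken from the question):
# Allow clients to reach the published port on the host (assumes firewalld)
sudo firewall-cmd --permanent --add-port=5001/tcp
sudo firewall-cmd --reload
# Quick connectivity check from a client machine
curl -vk https://10.1.1.20:5001/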
According to the reference, two of the options kube-apiserver takes are --bind-address and --advertise-address. It appears to me that they conflict with each other.
What is/are the actual difference(s) between the two?
Is --bind-address the address that the kube-apiserver process will listen on?
Is --advertise-address the address that kube-apiserver will advertise as the address that it will be listening on? If so, how does it advertise? Does it do some kind of a broadcast over the network?
According to the kube-apiserver reference that you are referencing:
--advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
and
--bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
Those parameters are configurable, but please keep in mind they should be specified during cluster bootstrapping.
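For illustration, a minimal, hedged sketch of setting them at bootstrap time through a kubeadm configuration file (the addresses are placeholders; the field names follow the kubeadm v1beta1 API used around v1.13, so verify them against your version):
# kubeadm-config.yaml (sketch only; addresses are examples)
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.10   # becomes --advertise-address
  bindPort: 6443                # becomes --secure-port
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    bind-address: 0.0.0.0       # passed through as --bind-address
EOF
kubeadm init --config kubeadm-config.yaml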
API server ports and IP addresses
The default secure port is 6443, but it can be changed with the --secure-port flag. As described in the documentation, the master node should expose the secure port for other cluster components to communicate with the Kubernetes API server.
The default IP is the first non-localhost network interface, but it can be changed with the --bind-address flag.
The above-mentioned parameters (--secure-port and --bind-address) allow you to configure the network interface and secure port for the Kubernetes API.
As stated before, if you don't specify any values:
by default, the first non-localhost network interface and port 6443 will be used.
Please note that:
--advertise-address is used by kube-apiserver to advertise this address to the Kubernetes controller that is responsible for preparing the endpoints for kubernetes.default.svc (the core Service responsible for communication between in-cluster applications and the API server). This Kubernetes Service VIP is configured for per-node load balancing by kube-proxy.
More information on kubernetes.default.svc and the Kubernetes controller can be found here.
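A quick, hedged way to check what address the API server actually advertised is to look at the endpoints behind kubernetes.default.svc (assuming a working kubeconfig):
# The ENDPOINTS column should show <advertise-address>:<secure-port>
kubectl get endpoints kubernetes -n default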
Cluster <-> Master communication
All communication paths from the cluster to the master terminate at the apiserver (none of the other master components are designed to expose remote services). In a typical deployment, the apiserver is configured to listen for remote connections on a secure HTTPS port (443).
The kubernetes service is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.
There are two primary communication paths from the master (apiserver) to the cluster. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver’s proxy functionality.
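As an illustration of that proxy functionality, a Service can be reached through the API server's proxy subresource (everything in angle brackets is a placeholder, not a value from the question):
# Access a Service via the API server proxy (adjust placeholders to your cluster)
kubectl get --raw "/api/v1/namespaces/<namespace>/services/<service-name>:<port-name>/proxy/"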
Additionally, you can find out more about communication within the cluster by reading master-node-communication and control-plane-node-communication.
My cluster is running on-prem. Currently, when I try to ping the external IP of a service of type LoadBalancer assigned by MetalLB, I get a reply from one of the VMs hosting the pods: Destination Host Unreachable. Is this because the pods are on an internal Kubernetes network (I am using Calico) and cannot be pinged? A detailed explanation of the scenario would help me understand it better. Also, all the services are performing as expected; I am just curious to know the exact reason behind this, since I am new at this. Any help will be much appreciated. Thank you.
The LoadBalancer IP, or external service IP, will never be pingable.
When you define a Service of type LoadBalancer, you are saying, for example, "I would like to listen on TCP port 8080 on this Service."
That is the only thing your external service IP will respond to.
A ping uses ICMP echo packets, which do not match the destination of TCP port 8080.
You can do an nc -v <ExternalIP> 8080 to test it.
OR
use a tool like mtr and pass --tcp --port 8080 to do your tests
Actually, during installation of MetalLB, you need to assign an IP range from which MetalLB can allocate addresses. Those IPs must be in the range of your DHCP network. For example, in VirtualBox, if you use a host-only adapter, network IPs are assigned from the VirtualBox host-only adapter's DHCP server. (A sketch of such an address pool is shown below.)
The components of MetalLB are:
The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
When you change the Service type to LoadBalancer, MetalLB will assign an IP address from its IP pools, which is basically a mapping of the Kubernetes internal IP to the MetalLB-assigned IP.
You can see this with:
kubectl get svc -n <namespace>
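For completeness, a minimal sketch of defining such an address pool, assuming an older, ConfigMap-based MetalLB release in layer2 mode; the address range is an example and must be free in your network:
# Example MetalLB address pool (sketch; the range is an assumption)
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.56.240-192.168.56.250
EOF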
For more details, please check this document.
We are running a Kubernetes cluster in AWS. We set up the cluster with kubeadm without the cloud provider option (i.e. like bare metal).
The nginx-ingress controller is exposed as a NodePort service on port 32000. We have configured an AWS ALB to pass external requests to the K8s worker nodes over port 32000.
We have been noticing that worker nodes turn up unhealthy. On investigating further, it looks like the NodePort connection is inconsistent. As you can see below, connecting to the same IP on port 32000 works most of the time, but often it just sits at "Trying ...". I am not able to see any error message related to this. Any help is highly appreciated.
[root@ip-10-35-2-205 ~]# telnet 10.35.3.76 32000
Trying 10.35.3.76...
Connected to 10.35.3.76.
Escape character is '^]'.
^CConnection closed by foreign host.
[root@ip-10-35-2-205 ~]# telnet 10.35.3.76 32000
Trying 10.35.3.76...
^C
[root@ip-10-35-2-205 ~]# telnet 10.35.3.76 32000
Trying 10.35.3.76...
Connected to 10.35.3.76.
Escape character is '^]'.
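Not an answer, but a few hedged checks that often help narrow this down (the namespace and label below are assumptions based on a default kubeadm/nginx-ingress install; adjust them to your cluster):
# Confirm the NodePort service still has healthy endpoints behind it
kubectl get svc,endpoints -n ingress-nginx
# Watch kube-proxy on the affected node for errors while reproducing the hang
kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=100
# Repeat the TCP probe to quantify how often the port hangs
for i in $(seq 1 20); do nc -vz -w 2 10.35.3.76 32000; done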
I am trying to create a k8s cluster. Is it necessary to establish SSH connections between the hosts?
If so, should we enable passwordless SSH between them?
Kubernetes does not use SSH, as far as I know. It's possible your deployment tool could require it, but I don't know of any that works that way. It's generally recommended that you have some process for logging in to the underlying machines in case you need to debug very low-level failures, but this is usually very rare. For my team, we need to log in to a node about once every month or two.
The required ports are listed here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
They are as below (a quick reachability check follows the tables):
Control-plane node(s)
Protocol  Direction  Port Range   Purpose                   Used By
TCP       Inbound    6443*        Kubernetes API server     All
TCP       Inbound    2379-2380    etcd server client API    kube-apiserver, etcd
TCP       Inbound    10250        Kubelet API               Self, Control plane
TCP       Inbound    10251        kube-scheduler            Self
TCP       Inbound    10252        kube-controller-manager   Self
Worker node(s)
Protocol  Direction  Port Range    Purpose              Used By
TCP       Inbound    10250         Kubelet API          Self, Control plane
TCP       Inbound    30000-32767   NodePort Services†   All
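A hedged way to verify these ports are actually reachable between hosts (substitute real node IPs; nc being installed is an assumption):
# From a worker, check the API server port on the control-plane node
nc -vz -w 2 <control-plane-ip> 6443
# From the control plane, check the kubelet port on a worker
nc -vz -w 2 <worker-ip> 10250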
You don't need SSH access between hosts.