I am trying to create a k8s cluster. Is it necessary to establish an SSH connection between the hosts?
If so, should we enable passwordless SSH between them?
Kubernetes does not use SSH as far as I know. It's possible your deployment tool requires it, but I don't know of any that works that way. It's generally recommended that you have some process for logging in to the underlying machines in case you need to debug very low-level failures, but this is usually very rare. For my team, we need to log in to a node about once every month or two.
The required ports are listed here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
They are as follows:
Control-plane node(s)

Protocol   Direction   Port Range    Purpose                    Used By
TCP        Inbound     6443*         Kubernetes API server      All
TCP        Inbound     2379-2380     etcd server client API     kube-apiserver, etcd
TCP        Inbound     10250         Kubelet API                Self, Control plane
TCP        Inbound     10251         kube-scheduler             Self
TCP        Inbound     10252         kube-controller-manager    Self

Worker node(s)

Protocol   Direction   Port Range    Purpose              Used By
TCP        Inbound     10250         Kubelet API          Self, Control plane
TCP        Inbound     30000-32767   NodePort Services†   All
You don't need SSH access between hosts.
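If you want a quick sanity check that the required ports are reachable from another host, something like the following works; 192.168.1.10 is a placeholder for a control-plane node's IP:
nc -vz 192.168.1.10 6443    # Kubernetes API server
nc -vz 192.168.1.10 10250   # Kubelet API
nc -vz 192.168.1.10 2379    # etcd client API (control-plane nodes only)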
I am looking for detailed information on how a Kubernetes Service handles TCP connections. Does the Service terminate the TCP connection locally, i.e. establish one connection towards the client and another towards the application pod? I couldn't find any official Kubernetes documentation on this; any reference or input would help.
Also, how does a Kubernetes Service handle HTTP persistent connections? I found one article covering the iptables case, but a Service can also be configured to use the IPVS proxy mode. Is there any article covering TCP processing in a Kubernetes Service in detail?
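For reference, the two proxy modes can be inspected directly on a node; a rough sketch, assuming a kubeadm-managed cluster (the ConfigMap name below is the kubeadm default):
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -w mode   # "iptables" or "ipvs"
sudo iptables -t nat -L KUBE-SERVICES -n | head   # iptables mode: Services are DNAT rules, no local TCP termination
sudo ipvsadm -Ln                                  # ipvs mode: virtual servers and their pod backends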
According to the reference, two of the options kube-apiserver takes are --bind-address and --advertise-address. It appears to me that they conflict with each other.
What is/are the actual difference(s) between the two?
Is --bind-address the address that the kube-apiserver process will listen on?
Is --advertise-address the address that kube-apiserver will advertise as the address that it will be listening on? If so, how does it advertise? Does it do some kind of a broadcast over the network?
According to the kube-apiserver reference that you are citing:
--advertise-address ip The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. If --bind-address is unspecified, the host's default interface will be used.
and
--bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0)
Those parameters are configurable, but please keep in mind they should be specified during cluster bootstrapping.
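For example, on a kubeadm-built cluster the flags end up in the API server's static pod manifest; a quick way to inspect them (the path is the kubeadm default, and 192.168.1.10 below is a hypothetical node IP):
grep -E 'advertise-address|bind-address|secure-port' /etc/kubernetes/manifests/kube-apiserver.yaml
# Typical relationship between the flags:
#   --bind-address=0.0.0.0            -> interface(s) the HTTPS listener binds to
#   --advertise-address=192.168.1.10  -> routable IP published to the rest of the cluster
#   --secure-port=6443                -> port used by both of the above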
API server ports and IP addresses
The default secure port is 6443, but it can be changed with the --secure-port flag. As described in the documentation, the master node should expose the secure port so that other cluster components can communicate with the Kubernetes API server.
The default IP is the first non-localhost network interface, but it can be changed with the --bind-address flag.
The parameters mentioned above (--secure-port and --bind-address) let you configure the network interface and the secure port for the Kubernetes API.
As stated before, if you don't specify any values, the defaults are the first non-localhost network interface and port 6443.
Please note that --advertise-address is used by kube-apiserver to advertise this address to the kubernetes controller responsible for preparing the endpoints for kubernetes.default.svc (the core Service used for communication between in-cluster applications and the API server). This Kubernetes Service VIP is configured for per-node load balancing by kube-proxy.
More information on kubernetes.default.svc and kubernetes controller can be found here.
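A quick way to see where the advertised address actually ends up (any kubectl context with cluster access will do):
kubectl get svc kubernetes -n default        # the ClusterIP (VIP) handled by kube-proxy
kubectl get endpoints kubernetes -n default  # the --advertise-address:--secure-port of the API server(s)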
Cluster <-> Master communication
All communication paths from the cluster to the master terminate at the apiserver (none of the other master components are designed to expose remote services). In a typical deployment, the apiserver is configured to listen for remote connections on a secure HTTPS port (443).
The kubernetes service is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver.
There are two primary communication paths from the master (apiserver) to the cluster. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver’s proxy functionality.
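As an illustration, these everyday commands exercise those paths (pod names are placeholders):
kubectl logs <pod>              # apiserver -> kubelet (port 10250)
kubectl exec -it <pod> -- sh    # apiserver -> kubelet (port 10250)
kubectl get --raw /api/v1/namespaces/default/pods/<pod>/proxy/   # apiserver proxy to a pod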
Additionally, you can find out more about communication within the cluster by reading master-node-communication and control-plane-node-communication.
My cluster is running on-prem. When I try to ping the external IP of a Service of type LoadBalancer, assigned by MetalLB, I get a reply from one of the VMs hosting the pods: Destination Host Unreachable. Is this because the pods are on an internal Kubernetes network (I am using Calico) and cannot be pinged? A detailed explanation of the scenario would help me understand it better. All the services are performing as expected; I am just curious to know the exact reason behind this, since I am new to this. Any help will be much appreciated. Thank you
The LoadBalancer IP (the external Service IP) will never be pingable.
When you define a Service of type LoadBalancer, you are saying, for example, "I would like to listen on TCP port 8080 on this Service."
And that is the only thing your external Service IP will respond to.
A ping sends ICMP echo packets, which do not match the destination of TCP port 8080.
You can do an nc -v <ExternalIP> 8080 to test it.
OR
use a tool like mtr and pass --tcp --port 8080 to do your tests
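A minimal sketch of those two checks, assuming your Service is named my-svc and exposes TCP 8080 (both are placeholders):
EXTERNAL_IP=$(kubectl get svc my-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
nc -vz "$EXTERNAL_IP" 8080             # plain TCP connect test against the exposed port
mtr --tcp --port 8080 "$EXTERNAL_IP"   # path trace using TCP probes instead of ICMP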
Actually, during installation of MetalLB you need to assign an IP range from which MetalLB can allocate addresses. Those IPs must be within your network's range; for example, in VirtualBox they would come from the same network as the host-only adapter's DHCP server if you use a host-only adapter. A configuration sketch follows the component list below.
The components of MetalLB are:
The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
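As a sketch, the address pool for the older, ConfigMap-based MetalLB configuration looks roughly like this; the layer2 protocol and the 192.168.56.240-250 range are assumptions matching a VirtualBox host-only network:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.56.240-192.168.56.250
EOF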
When you change the Service type to LoadBalancer, MetalLB assigns an IP address from its IP pool, which is basically a mapping of the Kubernetes-internal IP to the MetalLB-assigned IP.
You can see this with:
kubectl get svc -n <namespace>
For more details, please check this document.
I am taking my first steps with Kubernetes and I have some difficulties.
I have a pod with its Service defined as NodePort on port 30010.
I have a load balancer configured in front of this Kubernetes cluster, where port 8443 directs traffic to port 30010.
When I try to access this pod from outside the cluster on port 8443, the pod does not receive any connections, but I can see the incoming connections on port 30010 via tcptrack on the host, which means the load balancer is doing its job.
When I do curl -k https://127.0.0.1:30010 on the host, I get a response from the pods.
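To summarise the situation as commands (10.0.0.5 stands in for a node IP and lb.example.com for the load balancer; both are placeholders):
curl -k https://127.0.0.1:30010        # on the node itself: the pod answers
curl -k https://10.0.0.5:30010         # from outside, straight to the NodePort: worth testing to bypass the LB
curl -k https://lb.example.com:8443    # from outside, via the load balancer: no response, even though tcptrack shows the connection on 30010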
What am I missing?
How can I debug it?
thanks
We currently have the following Kubernetes setup (v1.13.1, set up with kubeadm), with connectivity established between the nodes:
Master node (bare metal)
5 worker nodes (bare metal)
2 worker nodes (cloud)
There is no proxy in between to access the cluster; currently we are accessing services via hostname:NodePort.
We are experiencing an issue with accessing services via NodePort on the 2 cloud worker nodes. What is happening is that the service is accessible via IPv6, but not via IPv4:
IPv6:
telnet localhost6 30005
Trying ::1...
Connected to localhost6.
Escape character is '^]'.
IPv4:
telnet localhost4 30005
Trying 127.0.0.1...
The thing is that both work on the bare metal nodes. If I use netstat -napl | grep 30005, I can see that kube-proxy is listening on this port (tcp6). I presumed this means it does not listen on tcp (IPv4), but apparently that is not the case (I see the same picture on the bare metal worker nodes):
tcp6 7 0 :::30005 :::* LISTEN 24658/kube-proxy
I have also read that services use IPv6, but judging by the bare metal worker nodes, it seems there should not be a problem using IPv4 as well.
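For what it's worth, a tcp6 socket bound to :::30005 normally also accepts IPv4 connections (as IPv4-mapped IPv6 addresses) unless the bindv6only sysctl is set, which can be checked directly on the node:
sysctl net.ipv6.bindv6only   # 0 means the tcp6 listener also accepts IPv4
nc -vz 127.0.0.1 30005       # quick local check over IPv4
nc -vz ::1 30005             # quick local check over IPv6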
Any idea what would cause that issue and how to solve it?
Thank you and best regards,
Bostjan
In case someone stumbles upon the same issue: the problem was unopened firewall ports for the flannel network overlay:
8285 UDP - flannel UDP backend
8472 UDP - flannel vxlan backend
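For reference, a sketch of opening those ports with firewalld, assuming firewalld is what manages the firewall on the affected nodes (use equivalent iptables or cloud security group rules otherwise):
sudo firewall-cmd --permanent --add-port=8285/udp   # flannel UDP backend
sudo firewall-cmd --permanent --add-port=8472/udp   # flannel VXLAN backend
sudo firewall-cmd --reload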