K8s service functioning at TCP level - kubernetes

I am looking for detailed info on how K8s Services handle TCP connections. Does a K8s Service handle the TCP connection locally, i.e., establish one TCP connection towards the client and a separate one towards the application pod? I couldn't find any official K8s documentation on this; any reference / input would help.
Also, how do K8s Services handle HTTP persistent connections? I found one article for the iptables use case, but a Service can also be configured to use IPVS proxy mode. Is there any article capturing TCP processing in a K8s Service in detail?
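For what it's worth, in both iptables and IPVS proxy modes kube-proxy does not terminate TCP at all: it programs kernel NAT rules that rewrite the destination of a connection's packets (DNAT) from the Service's virtual IP to a chosen pod IP, so there is a single end-to-end TCP connection between the client and the pod. Only the long-deprecated userspace proxy mode actually accepted connections and opened a second one to the pod. A minimal sketch for inspecting this on a node (the Service name my-service is a placeholder):

    # iptables mode: list the NAT rules kube-proxy programs for Services;
    # rule comments include the namespace/name of each Service
    sudo iptables -t nat -L KUBE-SERVICES -n | grep my-service

    # follow one Service chain (name copied from the output above) to see
    # the per-endpoint DNAT targets
    sudo iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n

    # IPVS mode: list virtual servers (Service IPs) and their real servers (pod IPs)
    sudo ipvsadm -Ln

Because the Service is just NAT, an HTTP persistent connection is simply a long-lived TCP connection pinned to whichever pod the first packet was DNATed to; traffic only rebalances when a new connection is opened.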

Related

Kubernetes service proxy

Kubernetes newbie here: we have a JMS server outside the cluster that's only accessible through our cluster. How can I create a port-forward proxy on the cluster so I can connect to it via my local PC?
A proxy is an application-layer function or feature, whereas port forwarding is really just a manual entry in one of the NAPT tables. A proxy understands the application protocol and can be used as a single entry point for multiple exposed servers.
The NGINX Ingress Controller for Kubernetes is such a proxy, built on the NGINX web server. If you want to access workloads that are already running in your cluster from outside of it, creating an Ingress resource is the standard procedure. You also need an ingress controller installed in your workload cluster; for installation instructions, see this page.
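If it helps, a minimal Ingress resource for an HTTP workload looks roughly like this (the host, names, and ingress class are placeholders, and this applies to HTTP traffic only, not raw TCP protocols like JMS):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress            # placeholder name
    spec:
      ingressClassName: nginx          # assumes the NGINX ingress controller is installed
      rules:
      - host: app.example.com          # placeholder host
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app           # placeholder in-cluster Service
                port:
                  number: 80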
Kubernetes port forwarding:
This is especially useful when you want to communicate directly with a specific port on a Pod from your local machine, according to the official Kubernetes Connect with Port Forwarding documentation. Additionally, you don't have to manually expose Services to accomplish this. kubectl port-forward forwards connections from a local port to a pod port. It is more general than kubectl proxy because it can forward arbitrary TCP traffic, while kubectl proxy can only forward HTTP traffic. kubectl makes port forwarding simple, but it should only be used for debugging.
You can learn more about how to use port-forward to access applications in a cluster; another similar article and this SO thread also aid comprehension.
Finally, for more information, see Port-Forwarding and Proxy Server and Client Deployment.
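Since an Ingress won't carry a raw TCP protocol like JMS, one common workaround is to run a small socat relay pod inside the cluster and kubectl port-forward to it from your PC. A minimal sketch, assuming the JMS broker is reachable from the cluster at 10.0.0.50:61616 (address and port are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jms-proxy                    # placeholder name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: jms-proxy
      template:
        metadata:
          labels:
            app: jms-proxy
        spec:
          containers:
          - name: socat
            image: alpine/socat          # small image whose entrypoint is socat
            # relay every inbound connection to the external JMS broker
            args: ["TCP-LISTEN:61616,fork,reuseaddr", "TCP:10.0.0.50:61616"]
            ports:
            - containerPort: 61616

Then, from the local PC, run kubectl port-forward deploy/jms-proxy 61616:61616 and point the JMS client at localhost:61616.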

Expose TCP and UDP on a k8s cluster with one LoadBalancer

I've got a single-node k8s cluster running on a VPS with Traefik configured as its Ingress controller and MetalLB as the LoadBalancer.
This is working great for all my TCP servers, however, I would like to host a dedicated game server in the cluster, which needs to be exposed over UDP.
Now, I know Traefik supports UDP as well as TCP, but the problem is getting the UDP traffic to Traefik in the first place.
I cannot expose multiple protocols over one Service of type LoadBalancer, so that option will not work.
I could try exposing the service through NodePorts, but that would change the port mapping, which I want to prevent. Using kubectl port-forward is also not an option, as it does not support UDP.
What other options do I have?
I just found out that MetalLB supports IP sharing under some conditions!
I'll try this out.
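For reference, the sharing mechanism is a pair of Services that carry the same sharing-key annotation and request the same address. A sketch under those assumptions (the address, ports, and labels are placeholders; MetalLB additionally requires the Services to use different ports and either select the exact same pods or use the Cluster external traffic policy):

    apiVersion: v1
    kind: Service
    metadata:
      name: game-tcp
      annotations:
        metallb.universe.tf/allow-shared-ip: "game-server"   # same key on both Services
    spec:
      type: LoadBalancer
      loadBalancerIP: 203.0.113.10       # placeholder address from the MetalLB pool
      selector:
        app: game                        # both Services select the same pods
      ports:
      - name: game-tcp
        protocol: TCP
        port: 25565
        targetPort: 25565
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: game-udp
      annotations:
        metallb.universe.tf/allow-shared-ip: "game-server"
    spec:
      type: LoadBalancer
      loadBalancerIP: 203.0.113.10       # same address as above
      selector:
        app: game
      ports:
      - name: game-udp
        protocol: UDP
        port: 19132
        targetPort: 19132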

How does Kubernetes port-forward work? Is it a secure and responsive method to view a GUI?

I have a Kubernetes cluster which doesn't need to expose ports to the public. I am installing monitoring and logging (Prometheus & Loki or Elastic) for in-house use and would like to use their GUIs. I could provision an HTTPS ingress and limit IP access, but port forwarding seems to work.
How Does port forwarding work, under the hood?
Is port forwarding as secure as my kubectl connection?
Is the connection as fast as an ingress load-balancer-based HTTPS connection?
In the Kubernetes documentation you can find that the port-forward command allows you to access and interact with internal Kubernetes cluster processes from your localhost. It's also one of the best debugging tools.
Forward one or more local ports to a pod. This command requires the node to have 'socat' installed.
Use resource type/name such as deployment/mydeployment to select a pod. Resource type defaults to 'pod' if omitted.
If there are multiple pods matching the criteria, a pod will be selected automatically. The forwarding session ends when the selected pod terminates, and rerun of the command is needed to resume forwarding.
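Typical invocations look like this (resource names and ports are placeholders):

    # forward local port 8080 to port 80 of a pod selected by the deployment
    kubectl port-forward deployment/mydeployment 8080:80

    # forward to a specific pod, letting kubectl pick a free local port
    kubectl port-forward pod/mypod :5000

    # forward to a pod behind a Service
    kubectl port-forward service/myservice 8443:443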
1. How Does port forwarding work, under the hood?
This information can be found in the How Does Kubernetes Port Forwarding Work? article.
The whole process is simplified by the fact that kubectl already has a built-in port forwarding functionality.
A user interacts with Kubernetes using the kubectl command-line on their local machine.
The port-forward command specifies the cluster resource name and defines the port number to port-forward to.
As a result, the Kubernetes API server establishes a single HTTP connection between your localhost and the resource running on your cluster.
The user is now able to interact with that specific pod directly, either to diagnose an issue or debug if necessary.
Port forwarding is a work-intensive method. However, in some cases, it is the only way to access internal cluster resources.
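You can watch this happen by raising kubectl's verbosity: the client calls the pod's portforward subresource on the API server and then upgrades that HTTP connection into a streaming tunnel. A sketch (the pod name is a placeholder):

    # -v=8 prints the API requests kubectl makes; before the tunnel opens you
    # should see a POST to .../api/v1/namespaces/default/pods/mypod/portforward
    kubectl port-forward pod/mypod 8080:80 -v=8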
2. Is port forwarding as secure as my kubectl connection?
For this question, you can find the answer in Is kubectl port-forward encrypted?. As pointed out by @iomv:
As far as I know when you port-forward the port of choice to your machine kubectl connects to one of the masters of your cluster so yes, normally communication is encrypted. How your master communicate to the pod though is dependent on how you set up internal comms.
or @neokyle:
kubectl port-forward uses socat to make an encrypted TLS tunnel with port forwarding capabilities. The tunnel goes from you to the kube api-server to the pod so it may actually be 2 tunnels with the kube api-server acting as a pseudo router.
So yes, kubectl port-forward is encrypted.
3. Is the connection as fast as an ingress load-balancer-based HTTPS connection?
The port-forward traffic is tunneled through the kube-apiserver and kubelet rather than going straight to the pod, so it adds hops and is generally not as fast as a direct ingress/load-balancer HTTPS connection; for occasionally viewing a GUI, though, it is responsive enough.
In addition, there is a similar Stack Overflow thread about kubectl port-forward.

Is ssh connection between hosts necessary to create kubernetes cluster?

I am trying to create a k8s cluster. Is it necessary to establish SSH connections between the hosts?
If so, should we make them passwordless-SSH enabled?
Kubernetes itself does not use SSH, as far as I know. It's possible your deployment tool could require it, but I don't know of any that works that way. It's generally recommended to have some way of logging in to the underlying machines in case you need to debug very low-level failures, but this is usually very rare. For my team, we need to log in to a node about once every month or two.
The required ports are listed here: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
They are as below:
Control-plane node(s)

    Protocol   Direction   Port Range    Purpose                    Used By
    TCP        Inbound     6443*         Kubernetes API server      All
    TCP        Inbound     2379-2380     etcd server client API     kube-apiserver, etcd
    TCP        Inbound     10250         Kubelet API                Self, Control plane
    TCP        Inbound     10251         kube-scheduler             Self
    TCP        Inbound     10252         kube-controller-manager    Self

Worker node(s)

    Protocol   Direction   Port Range    Purpose                Used By
    TCP        Inbound     10250         Kubelet API            Self, Control plane
    TCP        Inbound     30000-32767   NodePort Services†     All
You don't need SSH access between hosts.
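What you do need is plain TCP reachability on the ports above between the machines. A quick check with netcat (the addresses are placeholders):

    # is the API server reachable from a worker node?
    nc -zv 192.168.1.10 6443

    # is the kubelet API reachable from the control plane?
    nc -zv 192.168.1.20 10250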

How do we debug networking issues within istio pods?

I am working on setting up istio in my kubernetes cluster.
I downloaded istio-1.4.2, installed the demo profile, and did manual sidecar injection.
But when I check the sidecar pod logs, I am getting the below error.
2019-12-26T08:54:17.694727Z error k8s.io/client-go#v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:98: Failed to list *v1beta1.MutatingWebhookConfiguration: Get https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations?fieldSelector=metadata.name%3Distio-sidecar-injector&limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
It seems to be a networking issue, but could you please let me know what it is trying to do exactly?
Is there a way to get more logs than just 'connection refused'?
How do we verify networking issues between Istio pods? It seems I cannot run 'wget', 'curl', 'tcpdump', 'netstat', etc. within the Istio sidecar pod to debug further.
All the pods in kube-system namespace are working fine.
Check which port your API server is serving HTTPS traffic on (controlled by the flag --secure-port int, default 6443). It may be 6443 instead of 443.
Check what the value of server is in your kubeconfig, and whether you are able to connect to your Kubernetes API via kubectl using that kubeconfig.
Another thing to check is whether you have a NetworkPolicy attached to the namespace which blocks egress traffic. A few quick commands for those checks follow below.
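(The istio-demo namespace is a placeholder.)

    # which address does kubectl itself use for the API server?
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

    # the in-cluster 'kubernetes' Service (the 10.96.0.1:443 from the error)
    # and the real endpoint it forwards to, often <node-ip>:6443
    kubectl get service kubernetes -n default
    kubectl get endpoints kubernetes -n default

    # any NetworkPolicies in the workload namespace that could block egress?
    kubectl get networkpolicy -n istio-demo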
You could also use an ephemeral container to debug issues with the sidecar:
https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/
https://github.com/aylei/kubectl-debug
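For example, on clusters where the EphemeralContainers feature is available (alpha around the Kubernetes versions of this question, GA in later releases), kubectl debug can attach a throwaway container with networking tools to the running pod; the pod name, namespace, and the netshoot image here are assumptions:

    # attach a debug container targeting the istio-proxy container,
    # giving you curl, tcpdump, netstat, etc. inside the pod's network namespace
    kubectl debug -it mypod -n istio-demo --image=nicolaka/netshoot --target=istio-proxy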