I know that in Kubernetes we can't use a Service NodePort below 30000, because those ports are reserved for Kubernetes. Can I use "kubectl port-forward svc/someservice 80:80", for instance, without causing a conflict with the Kubernetes ports below 30000?
In short - yes, you can.
In your question, though, it's clear that you're missing an understanding of the NodePort type of Service and of what kubectl port-forward essentially does.
kubectl port-forward doesn't send traffic through the port defined in the .spec.ports[].nodePort stanza of the Service resource. In fact, using kubectl port-forward you can target a ClusterIP type of Service (which has no nodePort stanza by definition).
Could you please describe what is the reason to have such a setup?
kubectl port-forward svc/someservice 80:80 merely forwards your local_machine:80 to port 80 of the endpoints for someservice.
In other words, connections made to local port 80 are forwarded to port 80 of the pod that is running your app. With this connection in place you can use your local workstation to debug the app that is running in the pod.
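For example (a minimal sketch: someservice and its port 80 come from the question above, while the local port 8080 is an assumption), you can forward a local port and poke the app with curl:

kubectl port-forward svc/someservice 8080:80   # forward localhost:8080 to port 80 of a backing pod
curl http://localhost:8080/                    # the request is tunneled to the pod behind someservice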
Due to known limitations, port forwarding today only works for the TCP protocol. Support for the UDP protocol is being tracked in issue 47862.
As of now (Feb-2020) the issue is still open.
NodePort serves a completely different purpose. It is used when you need to reach pods by sending traffic to a particular port on any node in your cluster.
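For illustration, once a NodePort Service is in place, any node accepts the traffic (the node IP 10.0.0.11 and nodePort 31000 below are hypothetical):

curl http://10.0.0.11:31000/   # any node's IP works; kube-proxy routes to a backend pod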
That is why the answer to your question is "definitely you can do that"; however, as I said before, it is not clear why you would want to. Without that information it is hard to provide guidance on the best way to achieve the required functionality.
Hope that helps.
Related
I have a Kubernetes cluster which doesn't need to expose ports to the public. I am installing monitoring and logging (Prometheus & Loki, or Elastic) for in-house use and would like to use their GUIs. I could provision HTTPS ingress and limit IP access, but port forwarding seems to work.
How does port forwarding work, under the hood?
Is port forwarding as secure as my kubectl connection?
Is the connection as fast as an ingress load balancer based HTTPS connection?
In the Kubernetes documentation you can find information that the port-forward command allows you to access and interact with internal Kubernetes cluster processes from your localhost. It's also one of the best tools for debugging.
Forward one or more local ports to a pod. This command requires the node to have 'socat' installed.
Use resource type/name such as deployment/mydeployment to select a pod. Resource type defaults to 'pod' if omitted.
If there are multiple pods matching the criteria, a pod will be selected automatically. The forwarding session ends when the selected pod terminates, and rerun of the command is needed to resume forwarding.
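As a quick illustration of the resource selection described above (deployment/mydeployment is the placeholder name from the quoted docs; the port numbers are assumptions):

kubectl port-forward deployment/mydeployment 8080:80   # kubectl picks one pod from the deployment and forwards localhost:8080 to its port 80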
1. How does port forwarding work, under the hood?
This information can be found in How Does Kubernetes Port Forwarding Work? article.
The whole process is simplified by the fact that kubectl already has a built-in port forwarding functionality.
A user interacts with Kubernetes using the kubectl command-line on their local machine.
The port-forward command specifies the cluster resource name and defines the port number to port-forward to.
As a result, the Kubernetes API server establishes a single HTTP connection between your localhost and the resource running on your cluster.
The user is now able to engage that specific pod directly, either to diagnose an issue or debug if necessary.
Port forwarding is a work-intensive method. However, in some cases, it is the only way to access internal cluster resources.
2. Is port forwarding as secure as my kubectl connection?
For this question, you can find the answer in Is kubectl port-forward encrypted?. As pointed out by @iomv:
As far as I know when you port-forward the port of choice to your machine kubectl connects to one of the masters of your cluster so yes, normally communication is encrypted. How your master communicate to the pod though is dependent on how you set up internal comms.
or by @neokyle:
kubectl port-forward uses socat to make an encrypted TLS tunnel with port forwarding capabilities. The tunnel goes from you to the kube api-server to the pod so it may actually be 2 tunnels with the kube api-server acting as a pseudo router.
kubectl port-forward is encrypted.
3. Is the connection as fast as an ingress load balancer based HTTPS connection?
As the connection stays inside the cluster, it should be faster than a connection made from outside the cluster.
In addition, there is a similar Stack Overflow thread about kubectl port-forward.
My cluster is running on-prem. When I try to ping the external IP of a LoadBalancer-type Service assigned to it by MetalLB, I get a reply from one of the VMs hosting the pods: Destination Host Unreachable. Is this because the pods are on an internal Kubernetes network (I am using Calico) and cannot be pinged? A detailed explanation of the scenario would help me understand it better. Also, all the services are performing as expected; I am just curious to know the exact reason behind this since I am new at this. Any help will be much appreciated. Thank you.
The LoadBalancer IP (the external Service IP) will never be pingable.
When you define a Service of type LoadBalancer, you are saying, for example, "I would like to listen on TCP port 8080 on this Service."
And that is the only thing your External SVC IP will respond to.
A ping sends ICMP packets, which do not match the destination of TCP port 8080.
You can do an nc -v <ExternalIP> 8080 to test it.
OR
use a tool like mtr and pass --tcp --port 8080 to do your tests
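For example (192.168.1.240 stands in for your MetalLB-assigned external IP; both commands are just test sketches):

nc -v 192.168.1.240 8080              # plain TCP connect test against the Service port
mtr --tcp --port 8080 192.168.1.240   # traceroute-style probing over TCP instead of ICMP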
Actually, during installation of MetalLB, we need to assign an IP range from which MetalLB can allocate addresses. Those IPs must be within the range of your network. For example, with VirtualBox's host-only adapter, network IPs are assigned by the VirtualBox host-only adapter's DHCP server.
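As a minimal sketch, this is what the legacy ConfigMap-based configuration looks like (the address range is illustrative, matching a VirtualBox host-only subnet; newer MetalLB releases configure pools via IPAddressPool custom resources instead):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.56.100-192.168.56.120   # range MetalLB may hand out to LoadBalancer Services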
The components of MetalLB are:
The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
When you change a Service's type to LoadBalancer, MetalLB will assign an IP address from its IP pools, which is basically a mapping of the Kubernetes internal IP to the MetalLB-assigned IP.
You can see this with:
kubectl get svc -n <namespace>
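Illustrative output (names and addresses are hypothetical); note how the internal CLUSTER-IP sits next to the MetalLB-assigned EXTERNAL-IP:

NAME     TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
my-svc   LoadBalancer   10.96.12.34   192.168.56.101   8080:31234/TCP   5m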
For more details, please check this document.
What's the difference between kubectl port-forwarding (which forwards a port from localhost to a pod in the cluster to gain access to cluster resources) and the NodePort Service type?
You are comparing two completely different things. You should compare ClusterIP, NodePort, LoadBalancer and Ingress.
The first and most important difference is that a NodePort Service is persistent, while with port-forwarding you always have to run kubectl port-forward ... and keep it active.
kubectl port-forward is meant for testing, labs, and troubleshooting, not for long-term solutions. It creates a tunnel between your machine and Kubernetes, so this solution serves traffic from/to your machine only.
NodePort can give you a long-term solution, and it can serve traffic from/to anywhere inside the network your nodes reside in.
If you use port forwarding (kubectl port-forward svc/{your_service} -n {service_namespace}), you just need a ClusterIP; kubectl will handle the traffic for you. kubectl will be the proxy for your traffic.
If you use NodePort to access your service, it means you need to open a port on the worker nodes.
When you use port forwarding, that is going to cause your cluster to essentially behave as if it has a NodePort Service running inside of it, without creating a Service. This is strictly for the development setting. With one command you get the equivalent of a NodePort Service.
# find the name of the pod running the NATS streaming server
kubectl get pods
kubectl port-forward nats-Pod-5443532542c8-5mbw9 4222:4222
kubectl will set up a proxy that forwards any traffic on your local machine to a port on that specific pod.
However, to create a NodePort you need to write a YAML config file to set up a Service, as sketched below. It exposes the port permanently and performs load balancing.
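A minimal sketch of such a manifest (the app: nats label and the port numbers are assumed to match the NATS pod above; nodePort must fall in the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: nats-nodeport
spec:
  type: NodePort
  selector:
    app: nats          # assumed label on the NATS pods
  ports:
  - port: 4222
    targetPort: 4222   # port the NATS container listens on
    nodePort: 30222    # reachable on this port on every node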
I couldn't find any information on whether the connection created between a cluster's pod and localhost is encrypted when running the "kubectl port-forward" command.
It seems like it uses the "socat" tool, which supports encryption, but I'm not sure if Kubernetes actually uses that.
As far as I know when you port-forward the port of choice to your machine kubectl connects to one of the masters of your cluster so yes, normally communication is encrypted. How your master communicate to the pod though is dependent on how you set up internal comms.
kubectl port-forward uses socat to make an encrypted TLS tunnel with port forwarding capabilities.
The tunnel goes from you to the kube api-server to the pod so it may actually be 2 tunnels with the kube api-server acting as a pseudo router.
An example of where I've found it useful: I was doing a quick PoC of a Jenkins Pipeline hosted on Azure Kubernetes Service, and earlier in my Kubernetes studies I didn't know how to set up an Ingress. I could reach the server via port 80 unencrypted, but I knew my traffic could be snooped on. So I just used kubectl port-forward to temporarily log in securely and debug my PoC. It's also really helpful with a RabbitMQ cluster hosted on Kubernetes: you can open the management web page with kubectl port-forward and make sure it's clustering the way you wanted it to.
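For instance, the RabbitMQ case boils down to one command (svc/rabbitmq is a hypothetical Service name; 15672 is the default RabbitMQ management port):

kubectl port-forward svc/rabbitmq 15672:15672   # then open http://localhost:15672 in a browser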
I have a Kubernetes cluster (node01-03).
There is a service with nodeport to access a pod (nodeport 31000).
The pod is running on node03.
I can access the service with http://node03:31000 from any host. On every node I can access the service like http://[name_of_the_node]:31000, but I cannot access the service via http://node01:31000, even though there is a listener (kube-proxy) on node01 at port 31000. The iptables rules look okay to me. Is this how it's intended to work? If not, how can I troubleshoot further?
NodePort is exposed on every node in the cluster. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport clearly says:
each Node will proxy that port (the same port number on every Node) into your Service
So, from both inside and outside the cluster, the service can be accessed using NodeIP:NodePort on any node in the cluster and kube-proxy will route using iptables to the right node that has the backend pod.
However, if the service is accessed using NodeIP:NodePort from outside the cluster, we need to first make sure that NodeIP is reachable from where we are hitting NodeIP:NodePort.
If NodeIP:NodePort cannot be accessed on a node that is not running the backend pod, it may be caused by the default DROP rule on the FORWARD chain (which in turn is caused by Docker 1.13 for security reasons). Here is more info about it. Also see step 8 here. A solution for this is to add the following rule on the node:
iptables -A FORWARD -j ACCEPT
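To check whether the DROP policy is what's biting you, a quick diagnostic sketch (run on the node that fails to serve the NodePort):

iptables -nvL FORWARD | head -1   # shows the chain policy, e.g. "Chain FORWARD (policy DROP ...)"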
The k8s issue for this is here and the fix is here (the fix should be there in k8s 1.9).
Three other options to enable external access to a service are:
ExternalIPs: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
LoadBalancer with an external, cloud-provider's load-balancer: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
If you are accessing pods from within the Kubernetes cluster, you don't need to use the NodePort. Use the Service's own port instead. Say podA needs to access podB through a Service called serviceB. Assuming HTTP, all you need is http://serviceB:<servicePort>/ (the Service's port, which forwards to the pod's targetPort).
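A quick way to verify this from inside the cluster (podA, serviceB, and port 80 are placeholders from the example above, and podA's image is assumed to have curl):

kubectl exec podA -- curl -s http://serviceB:80/   # serviceB resolves via cluster DNS to its ClusterIP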