Access nodeport via kube-proxy from another machine - kubernetes

I have a Kubernetes cluster (node01-03).
There is a Service of type NodePort to access a pod (NodePort 31000).
The pod is running on node03.
I can access the service at http://node03:31000 from any host. Locally on every node I can also reach it as http://[name_of_the_node]:31000. But from another machine I cannot access the service via http://node01:31000, even though there is a listener (kube-proxy) on node01 at port 31000. The iptables rules look okay to me. Is this how it's intended to work? If not, how can I troubleshoot further?

NodePort is exposed on every node in the cluster. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport clearly says:
each Node will proxy that port (the same port number on every Node) into your Service
So, from both inside and outside the cluster, the service can be reached at NodeIP:NodePort on any node, and kube-proxy will route the request via iptables to a node that runs a backend pod.
However, if the service is accessed at NodeIP:NodePort from outside the cluster, first make sure that the NodeIP is reachable from the machine you are connecting from.
If NodeIP:NodePort cannot be accessed on a node that is not running the backend pod, the cause may be the default DROP policy on the iptables FORWARD chain (introduced by Docker 1.13 for security reasons). A workaround is to add the following rule on the node:
iptables -A FORWARD -j ACCEPT
There is a Kubernetes issue tracking this, and the fix should be included in k8s 1.9.
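Before adding that rule, it may help to confirm that the FORWARD chain is actually the culprit. A minimal check, assuming root shell access on the node that fails:
# Show the FORWARD chain's default policy and rules;
# "policy DROP" in the chain header is the symptom described above.
iptables -L FORWARD -n -v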
Three other options to enable external access to a service are:
ExternalIPs: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
LoadBalancer with an external, cloud-provider load balancer (a minimal sketch follows this list): https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
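As an illustration of the LoadBalancer option, a minimal Service manifest might look like this (a sketch only; the service name, label, and ports are assumptions, not from the question):
apiVersion: v1
kind: Service
metadata:
  name: my-service      # assumed name
spec:
  type: LoadBalancer
  selector:
    app: my-app         # assumed pod label
  ports:
  - port: 80            # port exposed by the cloud load balancer
    targetPort: 8080    # port the pods listen on
The cloud provider then provisions an external IP that stays stable regardless of which node the pod lands on.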

If you are accessing pods from within the Kubernetes cluster, you don't need the NodePort at all. Use the Kubernetes Service name and Service port instead. Say podA needs to reach podB through a Service called serviceB: assuming HTTP, all you need is http://serviceB:<servicePort>/ (the Service forwards that to the pods' targetPort).
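To make the port mapping concrete, here is a hedged sketch of such a serviceB (the label and port numbers are assumptions): port is what callers use, targetPort is what the container listens on.
apiVersion: v1
kind: Service
metadata:
  name: serviceB
spec:
  selector:
    app: serviceB        # assumed pod label on podB
  ports:
  - port: 80             # callers use http://serviceB:80/ (or just http://serviceB/)
    targetPort: 8080     # the container's actual listening port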

Related

Can't access kubernetes service which have externalTrafficPolicy as "Local"

I'm following this guide to preserve the source IP for a Service of type NodePort.
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=nodeport --type=NodePort --port=80 --target-port=8080
At this point my service is accessible externally with nodeip:nodeport
When I change the service traffic policy,
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
my service is not accessible.
I found a similar issue, but the solution there wasn't very helpful or understandable to me. I saw some GitHub threads saying it has something to do with the hostname override in kube-proxy, but I'm not clear on that either.
I'm using Kubernetes v1.15.3 with kube-proxy running in iptables mode. I have a single master node and a few worker nodes.
I'm facing the same issue on minikube too.
Any help would be greatly appreciated.
From the Kubernetes docs:
If there are no local endpoints, packets sent to the node are dropped
So you need to use the correct node IP to access the service: the IP of the node where the pod is scheduled.
This is not necessary if you can make sure every node (master and workers) runs a replica of the pod.
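A quick way to find the right node, as a sketch (the app label comes from the deployment created above; the IP and port are placeholders):
# Find the node where the backend pod is scheduled
kubectl get pods -l app=source-ip-app -o wide
# Look up the NodePort assigned to the service
kubectl get svc nodeport
# Hit the NodePort on THAT node's IP
curl http://<ip-of-node-running-the-pod>:<nodeport>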

Difference between kubectl port-forwarding and NodePort service

What's the difference between kubectl port-forward (which forwards a port from the local host to a pod in the cluster to access cluster resources) and a Service of type NodePort?
You are comparing two completely different things. You should compare ClusterIP, NodePort, LoadBalancer and Ingress.
The first and most important difference is that a NodePort Service is persistent, while with port-forwarding you always have to run kubectl port-forward ... and keep it active.
kubectl port-forward is meant for testing, labs, and troubleshooting, not for long-term solutions. It creates a tunnel between your machine and Kubernetes, so it only serves traffic to and from your machine.
A NodePort Service is a long-term solution and can serve traffic from anywhere inside the network your nodes reside in.
If you use port forwarding, kubectl port-forward svc/{your_service} -n {service_namespace}, you only need a ClusterIP service; kubectl handles the traffic for you and acts as the proxy.
If you use a NodePort to access your service, you need to open a port on the worker nodes.
When you use port forwarding, your cluster essentially behaves as if it had a NodePort service running inside it, without you creating a Service. This is strictly for a development setting: with one command you get the equivalent of a NodePort service.
# find the name of the pod running the NATS streaming server
kubectl get pods
kubectl port-forward nats-Pod-5443532542c8-5mbw9 4222:4222
kubectl sets up a proxy that forwards any traffic sent to that port on your local machine to the corresponding port on that specific pod.
However, to create a NodePort you need to write a YAML config file to set up a Service. It exposes the port permanently and load-balances across the pods.
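For comparison, a NodePort Service for the same NATS pod might look like the following sketch (the label and the nodePort value are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: nats-nodeport
spec:
  type: NodePort
  selector:
    app: nats            # assumed pod label
  ports:
  - port: 4222           # service port inside the cluster
    targetPort: 4222     # port the NATS container listens on
    nodePort: 30422      # assumed value in the default 30000-32767 range
After kubectl apply -f nats-nodeport.yaml, the pod is reachable at <AnyNodeIP>:30422 without keeping a kubectl process running.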

Accessing micro service end point from deployed micro service using Kubernetes orchestration

I am trying to deploy my sample microservice Docker image in a Kubernetes cluster with two nodes. I have explored Pods, Services, Deployments, StatefulSets, DaemonSets, etc.
Now I am trying to create a sample Deployment and a Service for it. I have explored how a Deployment provides scalability and load balancing, and how a Service provides service discovery via its ClusterIP.
I have two questions:
My scenario is that I am deploying the microservice on my on-premise Ubuntu machine, which has the IP address 192.168.1.15. In Kubernetes, the Service will also have a ClusterIP.
If my microservice endpoint is /api/v1/loadCustomers, how can I call it? Do I also need to use the ClusterIP? Can I simply call 192.168.1.15:8080/api/v1/loadCustomers?
What is the role of the ClusterIP when I call my endpoint? Can I use the port directly?
I am referring to the following link for exploration:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
tl;dr:
You cannot access the application using the ClusterIP from outside the cluster. You can access it using either the load balancer's IP (type=LoadBalancer) or a node's IP (type=NodePort).
Benefit of the ClusterIP:
Pods are created and terminated over their life-cycle, so their endpoint IPs come and go with them. The ClusterIP, in contrast, is static: it does not depend on the life-cycle of the pods.
Long Answer
In a Kubernetes cluster, an application or pod has the following layers of abstraction.
Endpoint IP and port: provided by a CNI plugin such as flannel or calico.
Each pod has its own unique IP, and the container listens on a targetPort.
You can list and watch the endpoints with the following command:
kubectl get endpoints --all-namespaces
ClusterIP and port: provided by the kube-proxy component.
The replicated pods share one ClusterIP and port.
Requests to the ClusterIP are load-balanced across the replicated pods.
It is exposed internally, so that other pods can discover the service.
You can list and watch ClusterIPs and ports with the following command:
kubectl get services --all-namespaces
ExternalIP and port: this can be a layer 3/4 load balancer's IP and port, or a node's IP and NodePort.
If you want to use a load balancer's IP and port, set type=LoadBalancer in the Service manifest.
If you want to use a node's IP, set type=NodePort in the Service manifest, as sketched below.
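Putting this together for the question's scenario, a hedged sketch (the service name, label, and nodePort are assumptions): with a NodePort Service like the one below, the endpoint would be reachable at http://192.168.1.15:30080/api/v1/loadCustomers from outside the cluster, and at http://customer-service:8080/api/v1/loadCustomers from other pods.
apiVersion: v1
kind: Service
metadata:
  name: customer-service   # assumed name
spec:
  type: NodePort
  selector:
    app: customer-api      # assumed pod label
  ports:
  - port: 8080             # ClusterIP port, used inside the cluster
    targetPort: 8080       # container port
    nodePort: 30080        # assumed value in the default 30000-32767 range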

Access NodePort through PortForward from the kubernetes master

What I have: a Kubernetes cluster on GCE with three nodes. Let's suppose the master has the IP <MasterIP>. Additionally, I have a Service of type NodePort in the cluster which listens on port <PORT>. I can access the service using <NodeIP>:<PORT>.
What I would like to do: access the service with <MasterIP>:<PORT>. How can I forward the port from <MasterIP> into the cluster? In other words: <MasterIP>:<PORT> --> <NodeIP>:<PORT>
The reason is simply that I don't want to rely on a specific NodeIP, since the pod can be rescheduled to another node.
Thank you
A Kubernetes Service with type: NodePort opens the same port on every Node and sends the traffic to wherever the pod is currently scheduled using internal IP routing. You should be able to access the service at <AnyNodeIP>:<PORT>.
In Google's Kubernetes cluster, the master machines are not available for routing traffic. The way to reliably expose your services is to use a Service with type: LoadBalancer which will provide a single IP that resolves to your service regardless of which Nodes are up or what their IPs are.
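A minimal way to try this, assuming an existing deployment (the deployment name and ports here are assumptions):
# Expose an existing deployment via a cloud load balancer
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
# Watch until the cloud provider assigns an EXTERNAL-IP, then use <EXTERNAL-IP>:80
kubectl get service my-app --watch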

Kubernetes - service exposed via NodePort not available on all nodes

I have an nginx service exposed via NodePort. According to the documentation, I should now be able to hit the service at $NODE_IP:$NODE_PORT for all my K8s worker IPs. However, I can access the service via curl only on the node that hosts the actual nginx pod. Any idea why?
I did verify using netstat that kube-proxy is listening on $NODE_PORT on all the hosts. Somehow, the request is not being forwarded to the actual pod by kube-proxy.
This turned out to be an issue with the security group associated with the workers. I had opened only the ports in the --service-node-port-range. That was not enough: nginx was deployed on port 80, and when kube-proxy on another node forwarded the request to the pod's IP on port 80, that inter-node traffic was blocked.
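A sketch of how one might confirm this kind of problem (the port values are placeholders):
# On a node that does NOT host the pod, confirm kube-proxy holds the NodePort
sudo netstat -tlnp | grep <node-port>
# Find the pod's IP and the node it runs on
kubectl get pods -o wide
# From that same non-hosting node, test direct pod connectivity on the container port;
# a timeout suggests inter-node traffic is blocked (e.g. by a security group)
curl --max-time 5 http://<pod-ip>:80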