I'm following this guide to preserve the source IP for a Service of type NodePort.
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
At this point my service is accessible externally with nodeip:nodeport
When I change the service traffic policy,
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
my service is not accessible.
I found a similar issue, but the solution there is not very helpful, or at least not understandable to me. I also saw some GitHub threads saying it has something to do with the hostname override in kube-proxy, but I'm not clear on that either.
I'm using Kubernetes version v1.15.3. kube-proxy is running in iptables mode. I have a single master node and a few worker nodes.
I'm facing the same issue in my minikube too.
Any help would be greatly appreciated.
From the docs here
If there are no local endpoints, packets sent to the node are dropped
So you need to use the correct node IP to access the service. With externalTrafficPolicy: Local, the correct node IP is the IP of the node where the pod is actually scheduled.
This is not necessary if you can make sure every node (master and workers) has a replica of the pod, for example by running it as a DaemonSet.
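For example (a sketch using the deployment and service names from the question; the node IP and port are whatever the commands report), you can check where the pod landed and target that node directly:
kubectl get pods -l app=source-ip-app -o wide                        # the NODE column shows where the pod runs
kubectl get nodes -o wide                                            # shows each node's INTERNAL-IP / EXTERNAL-IP
kubectl get svc nodeport -o jsonpath='{.spec.ports[0].nodePort}'     # the assigned NodePort
curl http://<ip-of-node-running-the-pod>:<nodeport>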
I am deploying the Kubernetes UI (dashboard) using this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
And it responds with "Unable to connect to the server: dial tcp 185.199.110.133:443: i/o timeout".
I am behind a proxy; how can I fix it?
None of the Services you deployed via the supplied URL has a type specified. This means they will use the default service type, which is ClusterIP.
Services of type ClusterIP are only accessible from inside your Kubernetes cluster.
If you want the dashboard to be accessible from outside your cluster, you will need a Service of type NodePort. A NodePort Service allocates a port (by default from the 30000-32767 range) on all your nodes, on which your application, in this case the k8s dashboard, will be accessible via ${ip-of-any-node}:${assigned-nodeport}.
For more information, please take a look at the official k8s documentation.
If your cluster is behind a proxy, also make sure that you can reach your cluster nodes' external IPs from wherever you are sending the requests.
In order to find out what port number has been assigned to your NodePort service use kubectl describe service ${servicename} or kubectl get service ${servicename} -o yaml
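For example, one way to do this (a sketch, assuming the service and namespace names created by the v2.2.0 recommended.yaml, i.e. kubernetes-dashboard in the kubernetes-dashboard namespace) is to patch the dashboard Service to type NodePort and read back the assigned port:
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard      # PORT(S) column shows 443:<nodeport>/TCP
The dashboard serves HTTPS, so you would then browse to https://${ip-of-any-node}:${assigned-nodeport}.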
What's the difference between kubectl port-forwarding (which forwards a port from localhost to a pod in the cluster to gain access to cluster resources) and the NodePort Service type?
You are comparing two completely different things. You should compare ClusterIP, NodePort, LoadBalancer and Ingress.
The first and most important difference is that a NodePort exposure is persistent, while with port-forwarding you always have to run kubectl port-forward ... and keep it active.
kubectl port-forward is meant for testing, labs and troubleshooting, not for long-term solutions. It creates a tunnel between your machine and Kubernetes, so it only serves traffic from/to your machine.
A NodePort gives you a long-term solution, and it can serve traffic from anywhere inside the network your nodes reside in.
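As a rough illustration (my-service and my-app are placeholder names, not from the question):
kubectl port-forward svc/my-service 8080:80                                         # temporary tunnel from this machine only, runs until you stop it
kubectl expose deployment my-app --type=NodePort --port=80 --name=my-service-np     # persistent exposure on a NodePort-range port on every node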
If you use port forwarding, kubectl port-forward svc/{your_service} {local_port}:{service_port} -n {service_namespace}, you just need a ClusterIP service; kubectl will handle the traffic for you and act as the proxy for it.
If you use a NodePort to access your service, it means you need to open a port on the worker nodes.
When you use port forwarding, your cluster essentially behaves as if it had a NodePort service running inside of it, without actually creating a Service. This is strictly for a development setting; with one command you get the equivalent of a NodePort service.
# find the name of the pod running the NATS streaming server
kubectl get pods
kubectl port-forward nats-pod-5443532542c8-5mbw9 4222:4222
kubectl will set up a proxy that forwards any traffic from that port on your local machine to the port on that specific pod.
However, to create a NodePort you need to write a YAML config file to set up a Service. It will expose the port permanently and perform load balancing.
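A minimal NodePort Service manifest for the NATS example might look like this (a sketch; the app: nats selector and the 4222 ports are assumptions about how the NATS pods are labelled and configured):
apiVersion: v1
kind: Service
metadata:
  name: nats-nodeport
spec:
  type: NodePort
  selector:
    app: nats            # must match the labels on the NATS pods
  ports:
    - port: 4222         # port of the Service inside the cluster
      targetPort: 4222   # containerPort on the NATS pod
      nodePort: 30222    # optional; omit to let Kubernetes pick one from 30000-32767
Apply it with kubectl apply -f nats-nodeport.yaml, and the pod becomes reachable on <any-node-ip>:30222.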
I was wondering how pods are accessed when no service is defined for that specific pod. If it's through the environment variables, how does the cluster retrieve these?
Also, when services are defined, where on the master node are they stored?
Kind regards,
Charles
If you define a service for your app, you can access it outside the cluster using that service.
Services are of several types, including NodePort, where you can access that port on any cluster node and you will have access to the service regardless of the actual location of the pod.
You can access the endpoints, i.e. the actual pod ports, from inside the cluster as well, but not from outside.
All of the above uses Kubernetes service discovery.
There are two types of service discovery though:
Internal service discovery
External service discovery
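To make the two kinds of access concrete (a sketch; my-service and my-namespace are placeholder names, and the external part assumes the service is of type NodePort):
curl http://my-service.my-namespace.svc.cluster.local/     # internal: from any pod, resolve the service by its DNS name
kubectl get svc my-service -n my-namespace                 # external: note the 80:<nodeport>/TCP mapping
curl http://<any-node-ip>:<nodeport>/                      # then hit any node on that NodePort from outside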
You cannot "access" a pod's container port(s) without a service. Services are objects that define the desired state of an ultimate set of iptables rules.
Also, services, like all other objects, are stored in etcd and maintained through your master(s).
You could, however, manually create an iptables rule forwarding traffic to the local container port that Docker has exposed.
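For illustration only (a sketch, not something you would normally do instead of a Service; the node port, pod IP and container port are placeholders), such a rule could look like:
iptables -t nat -A PREROUTING -p tcp --dport 30080 -j DNAT --to-destination 10.244.1.23:8080    # forward TCP traffic arriving on the node's port 30080 to a specific pod IP and port
This is essentially the kind of rule that kube-proxy generates for you when you create a Service.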
Hope this helps! If you still have any questions drop them here.
Just for debugging purposes, you can forward a port from your machine to one in the pod:
kubectl port-forward POD_NAME HOST_PORT:POD_PORT
If you have to access it from anywhere else, you should use a Service, but you have to have a Deployment created first.
Create deployment
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/service/networking/run-my-nginx.yaml
Expose the deployment with a NodePort service
kubectl expose deployment my-nginx --type=NodePort --name=nginx-service
Then list the services and get the port of the service
kubectl get services | grep nginx-service
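Then (a sketch; the node IP and port are whatever the previous commands report) you can reach nginx from outside the cluster:
kubectl get nodes -o wide            # pick any node's INTERNAL-IP or EXTERNAL-IP
curl http://<node-ip>:<node-port>    # <node-port> is the 3xxxx port shown for nginx-service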
All cluster data is stored in etcd, which is a distributed key-value store. If etcd goes down, the cluster becomes unstable and no new pods can come up.
Kubernetes has a way to access any pod within the cluster. A Service is a logical way to access a set of pods bound by a selector. An individual pod can still be accessed irrespective of the service. Furthermore, a service can be created to access the pods from outside the cluster (a NodePort service).
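For example (a sketch with placeholder names; it assumes curl is available in the other pod's image), you can reach a single pod directly by its pod IP from inside the cluster, even without a Service:
kubectl get pod my-pod -o wide                                    # shows the pod's IP
kubectl exec -it some-other-pod -- curl http://<pod-ip>:8080/     # works from inside the cluster only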
I've deployed a hello-world application on my Kubernetes cluster. When I access the app via <cluster ip>:<port> in my browser I get the following webpage: hello-kuleuven app webpage.
I understand that from outside the cluster you have to access the app via the cluster IP and the port specified in the deployment file (which in my case is 30001). From inside the cluster you have to contact the master node with its local IP and another port number, in my case 10.111.152.164:8080.
My question is about the last line of the webpage:
Kubernetes listening in 443 available at tcp://10.96.0.1:443
Since, the service is already accessible from inside and outside the cluster by other ports and IP's, I'm not sure what this does.
The IP 10.96.0.1 is the ClusterIP of the kubernetes service in the default namespace, which is the in-cluster address of the Kubernetes API server. You can see it using
kubectl get svc kubernetes
Every pod gets this address injected through environment variables (KUBERNETES_SERVICE_HOST, KUBERNETES_PORT, ...) and through the kubernetes DNS name, so applications running inside the cluster can reach the API server without knowing any node IPs.
So that last line of the webpage only tells you that the app can see the Kubernetes API at tcp://10.96.0.1:443; it is not another way to reach your application.
Read more about accessing the API from within a pod in the official Kubernetes documentation.
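For example, from inside any pod (a sketch; it assumes the default service account token is mounted at the standard path and that curl exists in the container image):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $TOKEN" https://10.96.0.1:443/version     # talks to the API server via the kubernetes ClusterIP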
I have a Kubernetes cluster (node01-03).
There is a service with a NodePort to access a pod (NodePort 31000).
The pod is running on node03.
I can access the service with http://node03:31000 from any host. On every node I can access the service like http://[name_of_the_node]:31000. But I cannot access the service via http://node01:31000, even though there is a listener (kube-proxy) on node01 at port 31000. The iptables rules look okay to me. Is this how it's intended to work? If not, how can I troubleshoot further?
NodePort is exposed on every node in the cluster. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport clearly says:
each Node will proxy that port (the same port number on every Node) into your Service
So, from both inside and outside the cluster, the service can be accessed using NodeIP:NodePort on any node in the cluster and kube-proxy will route using iptables to the right node that has the backend pod.
However, if the service is accessed using NodeIP:NodePort from outside the cluster, we need to first make sure that NodeIP is reachable from where we are hitting NodeIP:NodePort.
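A quick sanity check (a sketch; my-service is a placeholder for the service name, and 31000 is the NodePort from the question):
kubectl get nodes -o wide          # lists every node's INTERNAL-IP / EXTERNAL-IP
kubectl get svc my-service         # shows the allocated NodePort, e.g. 80:31000/TCP
curl http://<node01-ip>:31000      # should work from any machine that can reach node01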
If NodeIP:NodePort cannot be accessed on a node that is not running the backend pod, it may be caused by the default DROP rule on the FORWARD chain (which in turn is caused by Docker 1.13 for security reasons). Here is more info about it. Also see step 8 here. A solution for this is to add the following rule on the node:
iptables -A FORWARD -j ACCEPT
The k8s issue for this is here and the fix is here (the fix should be there in k8s 1.9).
Three other options to enable external access to a service are:
ExternalIPs: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
LoadBalancer with an external, cloud-provider's load-balancer: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
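For illustration, the ExternalIPs option can be as simple as adding externalIPs to a normal Service (a sketch; the address must actually route to one of your nodes, and the name, selector and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - 192.0.2.10    # an IP that routes to a cluster node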
If you are accessing pods from within the Kubernetes cluster, you don't need to use the NodePort. Use the Kubernetes Service name and its port instead. Say podA needs to access podB through a service called serviceB; assuming HTTP, all you need is http://serviceB:<servicePort>/ (the service forwards that to the targetPort on the pod).
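For instance (a sketch; note that real Service names must be lowercase DNS labels, so serviceB becomes service-b here, and the selector and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: pod-b          # must match the labels on podB
  ports:
    - port: 80          # the port podA talks to: http://service-b:80/
      targetPort: 8080  # the containerPort of podB
From podA in the same namespace, curl http://service-b/ is enough; from another namespace use http://service-b.<namespace>/.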