From reading about accessing the cluster API, I know that a pod in the cluster can use the ClusterIP service kubernetes.default.svc to access the API server, but I am curious about how this works.
A pod in the cluster only tries to access the ClusterIP defined in kubernetes.default.svc, and that ClusterIP is no different from any other ClusterIP except for the service's name.
So how can an HTTP request to that specific ClusterIP be routed to the API server? Is this configured by the API server proxy when kubernetes.default.svc is created?
A pod in the cluster only tries to access the ClusterIP defined in kubernetes.default.svc, and that ClusterIP is no different from any other ClusterIP except for the service's name.
Absolutely correct.
So how can an HTTP request to that specific ClusterIP be routed to the API server? Is this configured by the API server proxy when kubernetes.default.svc is created?
This magic happens via kube-proxy, which usually delegates down to iptables, although more recent Kubernetes installs use IPVS to give a lot more control over ... well, almost everything. kube-proxy receives its instructions from the API server, which informs it of any changes, and it applies them on the individual Nodes to keep the world in sync.
If you have access to the Nodes, you can run sudo iptables -t nat -L -n and see all the KUBE-SERVICE-* rules that are defined -- usually with helpful comments, even -- and see how the ClusterIP is mapped down to the IPs of the Pods that match the selector on the Service.
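For a concrete picture, here is roughly what those rules look like for kubernetes.default.svc on a kubeadm-style cluster (a sketch only: the chain hashes and the 10.96.0.1 ClusterIP are cluster-specific, so your output will differ):
sudo iptables -t nat -L KUBE-SERVICES -n | grep 'default/kubernetes'
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0  10.96.0.1  /* default/kubernetes:https cluster IP */ tcp dpt:443
sudo iptables -t nat -L KUBE-SVC-NPX46M4PTMTKRN6Y -n
KUBE-SEP-...  all  --  0.0.0.0/0  0.0.0.0/0  /* default/kubernetes:https */
The KUBE-SEP-* ("service endpoint") rule at the bottom is the one that finally DNATs the packet from the ClusterIP to the API server's real host IP and port.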
I have run a Hello World application using the below command.
kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
Created a service as below
kubectl expose deployment hello-world --type=NodePort --name=example-service
The pods are running
NAME READY STATUS RESTARTS AGE
hello-world-68ff65cf7-dn22t 1/1 Running 0 2m20s
hello-world-68ff65cf7-llvjt 1/1 Running 0 2m20s
Service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service NodePort 10.XX.XX.XX <none> 8080:32023/TCP 66s
Here, I am able to test it through curl inside the cluster.
curl http://10.XX.XX.XX:8080
Hello Kubernetes!
How can I access this service outside my cluster? (example, through laptop browser)
You should try
http://IP_OF_KUBERNETES:32023
where IP_OF_KUBERNETES can be your master's IP or a worker's IP.
When you expose a port via NodePort in Kubernetes, that port is exposed on every server in the cluster. Imagine you have two worker nodes with IP1 and IP2, and one pod running on the worker with IP1; there are no pods on worker2, but you can still access your pod through either:
http://IP1:32023
http://IP2:32023
You should be able to access it outside the cluster using the assigned NodePort (32023). Paste http://<IP>:<Port> into your browser and you will be able to access your app:
http://<MASTER/WORKER_IP>:32023
There are answers already provided, but I felt like this topic needed some consolidation.
This seems to be fairly easy. NodePort, as the name says, exposes your application on a port of each node. So all you have to do is find the IP address or name of the node on which the pod runs:
kubectl get pods -o wide
then just follow what the previous answers state: http://<MASTER/WORKER_IP>:PORT
There are more methods:
You can deploy an Ingress Controller and configure Ingress so the application will be reachable through the internet.
You can also use kubectl proxy to expose a ClusterIP service outside of the cluster, as in this example with the Dashboard (see the sketch after this list).
Another way is to use LoadBalancer type, which requires underlying cloud infrastructure.
If you are using minikube you can try to run minikube service list to check your exposed services and their IP.
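As a quick sketch of the kubectl proxy option mentioned above, using the example-service and port 8080 from the question (the /proxy/ path is the API server's standard service-proxy URL format):
kubectl proxy --port=8001
curl http://localhost:8001/api/v1/namespaces/default/services/example-service:8080/proxy/
Note this only exposes the service on the machine where kubectl proxy runs, so it is better suited to debugging than to real external access.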
You can access your service using the master's IP or a worker's IP. If you are planning to use it in production or in a more reliable way, you should create a service of type LoadBalancer and use the load balancer's IP to access it.
If you are using any cloud environment, make sure the firewall rules allow incoming traffic.
The load balancer will take care of redirecting each request to whichever node the pod is running on. Otherwise you will have to manually hit the master or worker IP depending on where the pod is running, and if the pod gets moved to a different node, you will have to change the IP you are hitting.
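As a sketch of the LoadBalancer route (assuming a cloud provider that provisions load balancers, and reusing the hello-world deployment from the earlier question; you would need to delete the existing example-service or pick another name first):
kubectl expose deployment hello-world --type=LoadBalancer --name=example-service
kubectl get service example-service
Wait for the EXTERNAL-IP column to be populated, then browse to http://<EXTERNAL-IP>:8080.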
I've deployed the Rancher Helm chart to my Kubernetes cluster and want to access the Rancher API/UI from another pod (i.e. a pod running an ingress-controller).
When I list the services and the endpoints, the IP addresses differ:
$ kubectl get ep | grep rancher
release-name-rancher 10.200.23.13:80 18h
and
$ kubectl get services | grep rancher
release-name-rancher ClusterIP 10.100.200.253 <none> 80/TCP 18h
Within the container of the client (i.e. the ingress controller), I see the service being represented by the service's ClusterIP:
$ env | grep RELEASE_NAME_RANCHER_SERVICE_HOST
RELEASE_NAME_RANCHER_SERVICE_HOST=10.100.200.253
Trying to reach the backend via the IP address in the environment variable does not work (curl 10.100.200.253 delivers no response and blocks forever).
Trying to reach the backend via the endpoint address works:
$ curl 10.200.23.13
Found.
I'm quite confused why the endpoint IP address and the ClusterIP address differ and why is it not possible to connect to the ClusterIP address. Any hints to polish my understanding?
In Kubernetes, every Pod and Service gets its own IP address. The kubectl get services IP address is the Kubernetes-internal address of the Service; the get ep address is the address of the Pod behind it. The Service actually acts like a load balancer, and there can be multiple Pods attached to it. The Kubernetes Service documentation goes into a lot of detail about what exactly is happening here.
Kubernetes also provides an internal DNS service that can resolve Service names. You generally shouldn't use any of these IP addresses directly; instead, use the host name release-name-rancher.default.svc.cluster.local (or replace "default" if you're running in some other Kubernetes namespace).
While the ..._SERVICE_HOST environment variable you reference is supported and documented, I'd avoid using it. Of particular note, if you helm install or kubectl apply a large set of resources at once and the Pod gets created before the Service, you'll be in a consistent state except that the Pod won't actually have this environment variable. In Helm land, where Services don't have fixed names, the environment variable name won't be fixed either. Prefer the DNS name.
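For instance, from inside the ingress-controller pod you could reach Rancher by name rather than by either IP (assuming the default namespace, as above):
curl http://release-name-rancher.default.svc.cluster.local
curl http://release-name-rancher
The second, short form works because pods in the same namespace can resolve a Service by its bare name.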
I am trying to deploy my sample microservice Docker image in a Kubernetes cluster with two nodes. I have explored Pods, Services, Deployments, StatefulSets, DaemonSets, etc.
I am trying to create a sample Deployment and a Service for it. I have explored how a Deployment provides scalability and load-balancing functionality, and I am exploring service discovery via the Service's ClusterIP.
I have two questions:
My scenario is that I am trying to deploy a microservice on my on-premise Ubuntu machine, which has the IP address 192.168.1.15. In Kubernetes, the service will also have a ClusterIP.
If my microservice endpoint is /api/v1/loadCustomer, how can I call this endpoint? Do I need to use the ClusterIP, or can I simply call 192.168.1.15:8080/api/v1/loadCustomers?
What is the role of the ClusterIP when I am calling my endpoint? Can I directly use the port?
I am referring to the following link for exploration:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
tl;dr:
You cannot access the application using the ClusterIP from outside the cluster. You can access it using either the load balancer's IP (type=LoadBalancer) or a node's IP (type=NodePort).
Benefit of the ClusterIP:
As you know, pods can be created and terminated during the application's life-cycle, so their (endpoint) IP addresses are created and terminated with them. The ClusterIP, by contrast, is static and does not depend on the life-cycle of the pods.
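You can see this for yourself: delete one replicated pod and watch the endpoint IPs change while the ClusterIP stays fixed (the service and pod names here are hypothetical):
kubectl get endpoints my-service      # note the pod IPs
kubectl delete pod my-service-abc123  # a replacement pod comes up with a new IP
kubectl get endpoints my-service      # the endpoint IPs have changed
kubectl get service my-service        # the ClusterIP has not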
Long answer:
In a Kubernetes cluster, an application or pod has the following abstractions.
Endpoint IP and port: provided by the CNI plugin, such as flannel or calico.
Each pod has an IP and targetPort, which are unique.
You can list and watch the endpoints with the following command:
kubectl get endpoints --all-namespaces
ClusterIP and port: provided by the kube-proxy component.
The replicated pods share a ClusterIP and port.
Requests are load-balanced across the replicated pods.
The service is exposed internally so that other pods can discover it.
You can list and watch ClusterIPs and ports with the following command:
kubectl get services --all-namespaces
ExternalIP and port: can be a layer 3-4 load balancer's IP and port, or a node's IP and NodePort.
If you want to use a load balancer's IP and port, use type=LoadBalancer in the service file.
If you want to use a node's IP, use type=NodePort in the service file.
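Putting those three layers together, a minimal NodePort service file might look like this (a sketch; the names and ports are illustrative, not from any of the questions above):
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
spec:
  type: NodePort            # adds a node-level port on top of the clusterIP
  selector:
    app: my-app             # must match the replicated pods' labels
  ports:
  - port: 80                # the clusterIP port
    targetPort: 8080        # the pod's (endpoint) port
    nodePort: 31000         # the port opened on every node (default range 30000-32767)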
I have a Kubernetes cluster (node01-03).
There is a service with a NodePort (31000) to access a pod.
The pod is running on node03.
I can access the service with http://node03:31000 from any host. On every node I can access the service like http://[name_of_the_node]:31000. But I cannot access the service via http://node01:31000, even though there is a listener (kube-proxy) on node01 at port 31000. The iptables rules look okay to me. Is this how it's intended to work? If not, how can I troubleshoot further?
NodePort is exposed on every node in the cluster. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport clearly says:
each Node will proxy that port (the same port number on every Node) into your Service
So, from both inside and outside the cluster, the service can be accessed using NodeIP:NodePort on any node in the cluster and kube-proxy will route using iptables to the right node that has the backend pod.
However, if the service is accessed using NodeIP:NodePort from outside the cluster, we need to first make sure that NodeIP is reachable from where we are hitting NodeIP:NodePort.
If NodeIP:NodePort cannot be accessed on a node that is not running the backend pod, it may be caused by the default DROP rule on the FORWARD chain (which in turn is caused by Docker 1.13 for security reasons). Here is more info about it. Also see step 8 here. A solution for this is to add the following rule on the node:
iptables -A FORWARD -j ACCEPT
The k8s issue for this is here and the fix is here (the fix should be there in k8s 1.9).
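To check whether this is what you are hitting, inspect the FORWARD chain's default policy on the node that fails (if the first line printed is -P FORWARD DROP, you are seeing the Docker 1.13 behavior described above):
sudo iptables -S FORWARD | head -1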
Three other options to enable external access to a service are:
ExternalIPs: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
LoadBalancer with an external, cloud-provider's load-balancer: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
If accessing pods from within the Kubernetes cluster, you don't need the NodePort; refer to the Kubernetes Service's port instead. Say podA needs to access podB through a Service called serviceB. Assuming HTTP, all you need is http://serviceB:port/
I run the CoreOS k8s cluster on Mac OSX, which means it's running inside VirtualBox + Vagrant
I have in my service.yaml file:
spec:
  type: NodePort
When I type:
kubectl get services
I see:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR
kubernetes 10.100.0.1 <none> 443/TCP <none>
my-frontend 10.100.250.90 nodes 8000/TCP name=my-app
What is the "nodes" external IP? How do I access my-frontend externally?
In addition to "NodePort" types of services there are some additional ways to be able to interact with kubernetes services from outside of cluster:
Use service type "LoadBalancer". It works only for some cloud providers and will not work for virtualbox, but I think it will be good to know about that feature. Link to the documentation
Use one of the latest features called "ingress". Here is description from manual "An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.". Link to the documentation
If kubernetes is not strict requirements and you can switch to latest openshift origin (which is "kubernetes on steroids") you can use origin feature called "router".
Information about OpenShift Origin.
Information about OpenShift Origin routes.
I assume you are using Minikube for Kubernetes. In that case, to identify your node's IP address, use the following command:
.\minikube.exe ip
If the exposed service is of type=NodePort, check the exposed port with the following command:
.\kubectl.exe describe service <service-name>
Check for Node port in the result. Also, if you want to have all these details via nice UI, then you can launch the Kubernetes Dashboard present at the following address:
<Node-ip>:30000
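With Minikube specifically, you can also let the CLI open the dashboard for you instead of remembering the address:
.\minikube.exe dashboard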
The easiest way to get the host ports is kubectl describe services my-frontend.
The node port will be displayed.
Also, you can check the API:
api/v1/namespaces/{namespace_name}/services/{service_name}
or list all:
api/v1/namespaces/default/services
Last, you can choose a fixed nodePort in the service.yaml.
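For example, the service.yaml from the question could pin the port like this (30080 is an arbitrary value inside the default 30000-32767 NodePort range):
spec:
  type: NodePort
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30080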
Here is the doc on node addresses: http://kubernetes.io/docs/admin/node/#addresses
You can specify the port number of nodePort when you specify the service. If you didn't manually specify a port, system will allocate one for you. You can kubectl get services -o yaml and find the port at spec.ports[*].nodePort, as suggested in the doc here: https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport
And you can access your front-end at {nodes' external addresses}:{nodePort}
Hope this helps.