I was wondering how pods are accessed when no service is defined for that specific pod. If it's through the environment variables, how does the cluster retrieve these?
Also, when services are defined, where on the master node are they stored?
Kind regards,
Charles
If you define a service for your app, you can access it from outside the cluster through that service.
Services come in several types, including NodePort, where the service's port is opened on every cluster node, so you can reach the app regardless of which node the pod actually runs on (a minimal NodePort manifest is sketched below).
You can also reach the endpoints, i.e. the actual pod ports, from inside the cluster, but not from outside.
All of the above relies on Kubernetes service discovery.
There are two types of service discovery, though:
Internal service discovery
External service discovery.
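A minimal sketch of such a NodePort Service, assuming a hypothetical app labelled app: my-app that listens on container port 8080 (all names and ports below are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport        # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: my-app                # must match your pod labels
  ports:
    - port: 80                 # port of the Service inside the cluster
      targetPort: 8080         # container port on the pods
      nodePort: 30080          # port opened on every node (default range 30000-32767)
With this applied, http://<any-node-ip>:30080 reaches the app no matter which node the pod is scheduled on.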
You cannot "access" a pods container port(s) without a service. Services are objects that define the desired state of an ultimate set of iptable rule(s).
Also, services, like all other objects, are stored in etcd and maintained through your master(s).
You could however manually create an iptable rule forwarding traffic to the local container port that docker has exposed.
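Purely as an illustration of that manual approach (the pod IP and ports below are made up, and this is not something you would normally do; Kubernetes manages these rules for you):
iptables -t nat -A PREROUTING -p tcp --dport 30080 -j DNAT --to-destination 10.244.1.5:8080   # send traffic hitting node port 30080 to the pod
iptables -A FORWARD -p tcp -d 10.244.1.5 --dport 8080 -j ACCEPT                                # allow the forwarded traffic through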
Hope this helps! If you still have any questions drop them here.
Just for debugging purposes, you can forward a port from your machine to one in the pod:
kubectl port-forward POD_NAME HOST_PORT:POD_PORT
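For example, assuming a pod named my-pod serving on port 80 (both placeholders):
kubectl port-forward my-pod 8080:80
curl http://localhost:8080/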
If you need to access it from anywhere, you should use services, but you have to have a deployment created first.
Create deployment
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/service/networking/run-my-nginx.yaml
Expose the deployment with a NodePort service
kubectl expose deployment my-nginx --type=NodePort --name=nginx-service
Then list the services and get the port of the service
kubectl get services | grep nginx-service
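Once you know the assigned node port (the number after the colon in the PORT(S) column), you can reach nginx from outside the cluster through any node, for example:
kubectl get nodes -o wide          # shows the node IPs
curl http://<node-ip>:<node-port>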
All cluster data is stored in etcd, which is a distributed key-value store. If etcd goes down, the cluster becomes unstable and no new pods can come up.
Kubernetes has a way to access any pod within the cluster. A service is a logical way to access a set of pods bound by a selector. An individual pod can still be accessed directly by its pod IP, irrespective of any service. Furthermore, a service can be created to access the pods from outside the cluster (a NodePort service).
Related
I see in an article that I can access pods through kube-proxy, so what is the role of the Kubernetes service here? What is the difference between kube-proxy and a service? Finally,
is kube-proxy part of the service?
As far as I understand:
A Service is a Kubernetes object that has a stable name and a stable IP and sits in front of a set of pods. All requests sent to the pods should go through the Service.
Kube-proxy is a networking component that runs on every cluster node (typically as a DaemonSet). It implements the low-level rules that allow communication to pods from inside as well as outside the Kubernetes cluster. In that sense, kube-proxy is part of how a Service is implemented.
So when a user tries to reach an application deployed on Kubernetes, the request first reaches the Service and is then forwarded to one of the underlying pods, using the rules that kube-proxy created.
For more understanding, refer to this video: Kube proxy, and this blog:
Closer look at Kube proxy
From my understanding:
If you are only accessing the pod ports inside the cluster, then no Service is involved, as you need Service objects to expose your pods outside of your cluster.
A Service exposes your pods outside of your cluster and provides a stable virtual IP address. A controller keeps track of the pods that are associated with the Service. Kube-proxy, meanwhile, is a daemon running on each node; it watches the Service resources defined in the cluster and manages the rules that route requests on a Service to its backend pods.
Kube-proxy watches the Service objects so it can change the iptables rules when Services change. Hence they are separate entities.
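You can see both sides of this on a running cluster. The endpoints a Service balances across are visible via kubectl, and when kube-proxy runs in its default iptables mode, the rules it programs live in the KUBE-SERVICES chain on each node (my-service is a placeholder name):
kubectl get endpoints my-service                               # the pod IPs behind the Service
sudo iptables -t nat -L KUBE-SERVICES -n | grep my-service     # on a node: the rules kube-proxy created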
We could discuss this for a while, but to make a long story short:
Requests come to the Service (its virtual IP).
The Service is implemented by rules that kube-proxy programs on every node.
Those rules decide which Pod the request goes to.
So how are requests forwarded from Service to Pod? Kube-proxy forwards the request; it is responsible for maintaining the mapping of Service IPs to the corresponding Pod IPs.
Check this section for more details...
I have a Kubernetes cluster with 1 master and 1 worker. I have a Postgres DB service running in the namespace "PG" and another service, config-server, running in the default namespace, and I am unable to access Postgres from the config-server service in the default namespace.
Kubernetes version: 1.13
Overlay network: Calico
As per the articles I read, if pods don't have any network policy defined, then a pod can reach pods in any other namespace without restriction. I need help figuring out how to achieve this.
You should be able to reach any pod from any other pod on the same cluster.
One quick way to check is to resolve the service DNS name from another pod.
Get into the config-server pod and try to run the command below:
ping <postgres-service-name>.<namespace>.svc.cluster.local
The name should at least resolve to the service's ClusterIP. Note that a ClusterIP is a virtual IP and often does not answer ICMP, so if the ping itself gets no replies, test the actual port instead (for example with nc or psql against port 5432).
I was using a Kubernetes cluster with Calico as the overlay network. If no network policy is created, then by default Kubernetes CoreDNS will resolve the service, but you have to use the namespace-qualified name (<service-name>.<namespace>) in the application or in the env variable wherever you call a service in another namespace. That allows cross-namespace communication.
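For illustration, a fragment of the config-server Deployment's pod spec could carry the namespace-qualified name in an env variable (the service name, namespace and port are assumptions based on the question):
env:
  - name: POSTGRES_HOST
    value: postgres.pg.svc.cluster.local   # <service-name>.<namespace>.svc.cluster.local
  - name: POSTGRES_PORT
    value: "5432"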
Everywhere it is mentioned that "a ClusterIP type of service makes a pod accessible within a Kubernetes cluster".
Does that mean that after adding a ClusterIP service for a pod, the pod can only be reached via the service's ClusterIP, and we will no longer be able to connect to the pod using the pod IP it had before the service was added?
Please help me understand this; I am still learning Kubernetes.
When a service is created with type ClusterIP, that service is accessible only inside the cluster, because service IPs are virtual IPs.
The pod keeps its own IP and can still be reached directly by that pod IP from inside the cluster; the service just adds a stable virtual IP in front of it.
If you want to access the pod from outside, you can use a NodePort or LoadBalancer type service, which lets you access the pod using a node's IP or the load balancer's IP.
The main reason to access pods through a service is that it gives you a fixed location (the ClusterIP or the service name). Pods can come and go, but the service IP remains the same.
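A minimal ClusterIP Service sketch (names and ports are placeholders; ClusterIP is also the default type if you omit it):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP
  selector:
    app: my-app          # must match the pod labels
  ports:
    - port: 80           # stable port other pods use
      targetPort: 8080   # container port on the pods
Other pods in the cluster can then simply call http://my-app (or http://my-app.<namespace>) and never need to care which pod IPs are currently behind it.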
I am trying to deploy my sample microservice Docker image in a Kubernetes cluster with 2 nodes. I have explored Pods, Services, Deployments, StatefulSets, DaemonSets, etc.
I am trying to create a sample Deployment and a Service for it. I have seen how a Deployment provides scalability and load-balancing functionality, and I am exploring service discovery via the Service's ClusterIP.
I have two questions:
My scenario is that I am trying to deploy the microservice on my on-premise Ubuntu machine. The machine has the IP address 192.168.1.15. When I read about Kubernetes, the Service will also have a ClusterIP.
If my microservice endpoint is /api/v1/loadCustomer, how can I call this endpoint? Do I need to use the ClusterIP? Can I simply call 192.168.1.15:8080/api/v1/loadCustomers?
What is the role of the ClusterIP when I am calling my endpoint? Can I use the port directly?
I am referring to the following link for exploration:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
tl;dr:
You cannot access the application using the ClusterIP from outside the cluster. You can access the application using either the load balancer's IP (type=LoadBalancer) or a node's IP (type=NodePort).
Benefit of the ClusterIP:
As you know, pods can be created and terminated during the application's life cycle, so their endpoint IP addresses come and go with them. The ClusterIP, in contrast, is static and does not depend on the life cycle of the pods.
Long answer
In a Kubernetes cluster, an application or pod has the following layers of abstraction.
Endpoint IP and port: provided by the CNI plugin, such as Flannel or Calico.
Each pod has its own unique IP and targetPort.
You can list and watch the endpoints with the following command:
kubectl get endpoints --all-namespaces
ClusterIP and port: provided by the kube-proxy component.
The replicated pods share a ClusterIP and port.
Requests are load-balanced across the replicated pods.
The Service is exposed internally so that other pods can discover it.
You can list and watch ClusterIPs and ports with the following command:
kubectl get services --all-namespaces
External IP and port: this can be a layer 3-4 load balancer's IP and port, or a node's IP and NodePort.
If you want to use a load balancer's IP and port, set type=LoadBalancer in the Service file.
If you want to use a node's IP, set type=NodePort in the Service file.
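As a sketch, the only difference between the two in the Service manifest is the type field (the app name and ports are placeholders; a LoadBalancer additionally needs a cloud provider or something like MetalLB to allocate the external IP):
apiVersion: v1
kind: Service
metadata:
  name: my-app-external
spec:
  type: LoadBalancer     # or NodePort to use the nodes' own IPs
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080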
Is there a way to discover all the endpoints of a headless service from outside the cluster?
Preferably using DNS or Static IPs
By watching changes to a list of Endpoints:
GET /api/v1/watch/namespaces/{namespace}/endpoints
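The kubectl equivalent, with a placeholder service name:
kubectl get endpoints my-headless-svc --watch -o yaml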
Headless Services are a group of Pod IPs. Pod IPs are not (generally) available outside the cluster/cloud-provider.
Are you trying to get external IPs for a headless service or are you within the same network (e.g. in the GCE project) but not in the cluster?
The DNS addon is exactly what you're after. From the docs:
For example, if you have a Service called "my-service" in Kubernetes
Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods
which exist in the "my-ns" Namespace should be able to find it by
simply doing a name lookup for "my-service". Pods which exist in other
Namespaces must qualify the name as "my-service.my-ns". The result of
these name lookups is the cluster IP.
And in the case of a headless service:
DNS is configured to return multiple A records (addresses) for the
Service name, which point directly to the Pods backing the Service.
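For reference, a headless Service is just a Service with clusterIP set to None; a rough sketch (names are placeholders), followed by how a pod inside the cluster would resolve it:
apiVersion: v1
kind: Service
metadata:
  name: my-headless
spec:
  clusterIP: None        # headless: no virtual IP; DNS returns the backing pod IPs directly
  selector:
    app: my-app
  ports:
    - port: 5432
nslookup my-headless.my-ns.svc.cluster.local   # from a pod: one A record per backing pod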
However, this DNS service is only available inside the cluster. But KubeDNS is just another pod:
kubectl get po --namespace=kube-system
kubectl describe po kube-dns-pod-name --namespace=kube-system
Which means you can create a Service with an externally accessible address to expose the DNS service itself. Just use a selector matching your kube-dns pod labels.
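A rough sketch of such a Service; the k8s-app: kube-dns selector below is the label the standard DNS addon uses, but verify the labels in your own cluster before relying on it:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-external
  namespace: kube-system
spec:
  type: NodePort                 # or LoadBalancer, depending on your environment
  selector:
    k8s-app: kube-dns            # assumed label; check with: kubectl get po -n kube-system --show-labels
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP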
http://kubernetes.io/v1.1/docs/user-guide/services.html#dns
https://github.com/kubernetes/kubernetes/blob/release-1.1/cluster/addons/dns/README.md