I've deployed a hello-world application on my Kubernetes cluster. When I access the app via <cluster ip>:<port> in my browser, I get the hello-kuleuven app's webpage.
I understand that from outside the cluster you have to access the app via the cluster IP and the port specified in the deployment file (which in my case is 30001). From inside the cluster you have to contact the master node with its local IP and another port number, in my case 10.111.152.164:8080.
My question is about the last line of the webpage:
Kubernetes listening in 443 available at tcp://10.96.0.1:443
Since the service is already accessible from inside and outside the cluster via other ports and IPs, I'm not sure what this does.
The IP 10.96.0.1 is the cluster IP of the kube-dns service. You can see it using
kubectl get svc -n kube-system
Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service’s IP to resolve DNS names.
So every pod you deploy uses the kube-dns service (ClusterIP 10.96.0.1) to resolve DNS names.
Read more about kube-dns in the official Kubernetes documentation.
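If you want to see this in action, a quick check is to resolve a cluster name from a throwaway pod; the pod name and the busybox image tag below are arbitrary choices, and the Server line in nslookup's output shows the DNS service IP the kubelet put into the pod's /etc/resolv.conf:
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default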
I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
Possible magic I wish existed: a Kubernetes service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that points to the node's external IP address.
That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
  - port: 80
Then you can access the service from within Kubernetes as my-service.${namespace}.svc.cluster.local.
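As a quick sanity check (assuming the service above was created in the default namespace), you can confirm from a temporary pod that the name resolves; it is served as a CNAME to the externalName you configured:
kubectl run dns-check --rm -it --image=busybox:1.36 --restart=Never -- nslookup my-service.default.svc.cluster.local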
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service
through the external-service.default.svc.cluster.local domain name (or
even external-service) instead of using the service’s actual FQDN.
This hides the actual service name and its location from pods
consuming the service, allowing you to modify the service definition
and point it to a different service any time later, by only changing
the externalName attribute or by changing the type back to ClusterIP
and creating an Endpoints object for the service—either manually or by
specifying a label selector on the service and having it created
automatically.
ExternalName services are implemented solely at the DNS level—a simple
CNAME DNS record is created for the service. Therefore, clients
connecting to the service will connect to the external service
directly, bypassing the service proxy completely. For this reason,
these types of services don’t even get a cluster IP.
This relies on using a resolvable hostname of your machine. On minikube there's a DNS alias host.minikube.internal that is set up to resolve to an IP address that routes to your host machine; I don't know if k3s supports something similar.
Thanks @GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for each host it's running on. This only seems to be the case for K3s.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals there is the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain docker you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>
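Putting the two answers together, here is a sketch of what this could look like on k3s. The service name host-services is made up for this example, and my-hostname stands for whatever name appears in the NodeHosts entry above:
apiVersion: v1
kind: Service
metadata:
  name: host-services
spec:
  type: ExternalName
  externalName: my-hostname   # resolved by CoreDNS through the k3s NodeHosts entry
Pods can then reach the host as host-services (or host-services.<namespace>.svc.cluster.local), and if the host's IP ever changes, only the NodeHosts entry has to change.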
Everywhere it is mentioned that "a ClusterIP type of service makes a Pod accessible within a Kubernetes cluster."
Does that mean that, after adding a ClusterIP service for a Pod, the Pod can be reached only through the service's cluster IP, and we will no longer be able to connect to the Pod using the Pod IP it had before the service was added?
Please help me understand; I am still learning Kubernetes.
When a service is created with type ClusterIP, that service is accessible only from inside the cluster, since service IPs are virtual IPs.
If you want to reach the pod from outside the cluster through a service, use a NodePort or LoadBalancer type service, which lets you access the pod via a node's IP or the load balancer's IP.
The main reason for using a service to access pods is that it gives you a fixed location (a cluster IP or service name) to target. Pods can come and go, but the service IP stays the same. Creating a service does not remove the Pod's own IP; you can still reach a Pod directly by its Pod IP from inside the cluster, but that IP changes whenever the Pod is recreated.
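A minimal sketch of such a service (the name, the app: hello selector and the port numbers are placeholders; match them to your own Deployment):
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: ClusterIP          # the default, so this line can be omitted
  selector:
    app: hello             # must match the labels on your Pods
  ports:
  - port: 80               # the port exposed on the cluster IP
    targetPort: 8080       # the port the container actually listens on
Other pods can then call hello-svc:80 (or hello-svc.<namespace>.svc.cluster.local) no matter which Pod IPs currently back the service.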
What is the use of the external IP address option in a Kubernetes service when the service is of type ClusterIP?
An ExternalIP is just an endpoint through which services can be accessed from outside the cluster, so a ClusterIP type service with an ExternalIP can still be accessed inside the cluster using its service.namespace DNS name, but now it can also be accessed from its external endpoint. For instance, you could set the ExternalIP to the IP of one of your k8s nodes, or create an ingress to your cluster on that IP.
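A hedged sketch of what that can look like; the service name, selector and 192.0.2.10 are placeholders, and in practice the external IP would be an address (for example a node's IP) that actually routes to the cluster:
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-svc
spec:
  type: ClusterIP
  selector:
    app: my-app              # placeholder Pod labels
  ports:
  - port: 80
    targetPort: 8080
  externalIPs:
  - 192.0.2.10               # traffic hitting this IP on port 80 is routed to the service endpoints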
ClusterIP is the default service type in Kubernetes; it allows you to reach your service only from within the cluster.
If your service type is set to LoadBalancer or NodePort, a ClusterIP is automatically created, and the LoadBalancer or NodePort service will route to this ClusterIP address.
New external IP addresses are only allocated with the LoadBalancer type.
You can also use the nodes' external IP addresses when you set your service type to NodePort. But in this case you will need extra firewall rules for your nodes to allow ingress traffic to your exposed node ports.
When you use a service with type: ClusterIP, it has only a cluster IP and no external IP address (<none>).
A ClusterIP is a unique IP assigned to a service from the cluster's service IP pool, and it can be used to reach the service's pods only from inside the cluster. ClusterIP is the default service type in Kubernetes.
kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer
The above example will create a service with both an external IP and a cluster IP.
With LoadBalancer and NodePort services, the service can also be accessed from outside the cluster through the external IP.
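To see the difference, kubectl get svc shows the two kinds of IPs side by side; the output below is only illustrative and the addresses are made up:
kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1      <none>         443/TCP        7d
nginx        LoadBalancer   10.96.121.55   203.0.113.20   80:31453/TCP   1m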
Just to add to coolinuxoid's answer: I'm working with GKE, and based on their documentation, when you add a Service of type ClusterIP they suggest accessing it as follows:
Accessing your Service
List your running Pods:
kubectl get pods
In the output, copy one of the Pod names that begins with
my-deployment.
NAME READY STATUS RESTARTS AGE
my-deployment-dbd86c8c4-h5wsf 1/1 Running 0 2m51s
Get a shell into one of your running containers:
kubectl exec -it pod-name -- sh
where pod-name is the name of one of the Pods in my-deployment.
In your shell, install curl:
apk add --no-cache curl
In the container, make a request to your Service by using your cluster
IP address and port 80. Notice that 80 is the value of the port field
of your Service. This is the port that you use as a client of the
Service.
curl cluster-ip:80
So, while you might find a way to route to such a Service from outside, it's not a recommended/ordinary way to do it.
As a better alternative either use:
LoadBalancer if you are working on a project with a lot of services and detailed requirements.
NodePort if you are working on something smaller where cloud-native load balancing is not needed and you don't mind using your node's IP directly to map it to the Service. Incidentally, the same document does suggest an option to do so (tested it personally; works like a charm):
If the nodes in your cluster have external IP addresses, find the
external IP address of one of your nodes:
kubectl get nodes --output wide
The output shows the external IP addresses of your nodes:
NAME STATUS ROLES AGE VERSION EXTERNAL-IP
gke-svc-... Ready none 1h v1.9.7-gke.6 203.0.113.1
Not all clusters have external IP addresses for nodes. For example,
the nodes in private clusters do not have external IP addresses.
Create a firewall rule to allow TCP traffic on your node port:
gcloud compute firewall-rules create test-node-port --allow tcp:node-port
where: node-port is the value of the nodePort field of your Service.
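Once the firewall rule exists, you can reach the Service from outside the cluster with something along these lines, reusing the values from the outputs above (203.0.113.1 is the example node's external IP, and node-port is your Service's nodePort placeholder):
curl http://203.0.113.1:node-port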
I am trying to deploy my sample microservice Docker image in a Kubernetes cluster with 2 nodes. I have explored Pods, Services, Deployments, StatefulSets, DaemonSets, etc.
I am trying to create a sample Deployment and a Service for it. I have explored how a Deployment provides scalability and load balancing, and how a Service provides service discovery via its ClusterIP.
I have two questions:
My scenario is that I am trying to deploy the microservice on my on-premise Ubuntu machine, whose IP address is 192.168.1.15. In Kubernetes, the Service will also have a ClusterIP.
If my microservice endpoint is /api/v1/loadCustomer, how can I call this endpoint? Do I need to use the ClusterIP as well? Can I simply call 192.168.1.15:8080/api/v1/loadCustomers?
What is the role of the ClusterIP when I am calling my endpoint? Can I use the port directly?
I am referring to the following link for exploration:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
tldr:
You cannot access the application using the clusterIP from outside the cluster. You can access the application using either the load balancer's IP (type=LoadBalancer) or a node's IP (type=NodePort).
Benefit of clusterIP:
Pods can be created and terminated during their life-cycle, and consequently their IPs (endpoint IPs) are created and removed as well. The clusterIP, on the other hand, is static and does not depend on the life-cycle of the pods.
Long Answer
In a Kubernetes cluster, an application or pod has the following abstractions.
Endpoint IP and port: provided by the CNI plugin, such as Flannel or Calico.
Each pod has a unique IP and a targetPort.
You can list and watch the endpoints with the following command:
kubectl get endpoints --all-namespaces
clusterIP and port: provided by the kube-proxy component.
The replicated pods share a clusterIP and port.
Requests are load-balanced across the replicated pods.
The service is exposed internally so that other pods can discover it.
You can list and watch the clusterIP and port with the following command:
kubectl get services --all-namespaces
externalIP and port: this can be a layer 3/4 load balancer's IP and port, or a node's IP and NodePort.
If you want to use the load balancer's IP and port, use type=LoadBalancer in the service file.
If you want to use a node's IP, use type=NodePort in the service file.
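A minimal NodePort sketch tying these layers together; the name, the app: my-api selector, the ports and 30080 are placeholders chosen to fit the question's example (192.168.1.15 is the Ubuntu host from the question):
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: NodePort
  selector:
    app: my-api              # matches the Pod labels (the endpoint IPs)
  ports:
  - port: 8080               # clusterIP port, used inside the cluster
    targetPort: 8080         # container port on each Pod
    nodePort: 30080          # exposed on every node's IP, e.g. 192.168.1.15:30080
With this in place, other pods would call my-api:8080/api/v1/loadCustomers, while from the host's network you would use 192.168.1.15:30080/api/v1/loadCustomers rather than 192.168.1.15:8080.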
I have an nginx service exposed via NodePort. According to the documentation, I should now be able to hit the service on $NODE_IP:$NODE_PORT for all my K8s worker IPs. However, I'm able to access the service via curl only on the node that hosts the actual nginx pod. Any idea why?
I did verify using netstat that kube-proxy is listening on $NODE_PORT on all the hosts. Somehow, the request is not being forwarded to the actual pod by kube-proxy.
This turned out to be an issue with the security group associated with the workers. I had opened only the ports in the --service-node-port-range. This was not enough, because I was deploying nginx on port 80, and kube-proxy tried to forward the request to the pod's IP on port 80 but was blocked.
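For anyone hitting the same thing on AWS (or a similar cloud where nodes sit behind a security group), a sketch of the kind of rule that was missing; the security group ID below is a placeholder, and the idea is simply to let nodes talk to each other on the port kube-proxy forwards to:
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 \
  --source-group sg-0123456789abcdef0   # allow node-to-node traffic on the pod's port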