How to connect to a GKE service from GCE using internal IPs - kubernetes

I have an Nginx service deployed in GKE with a NodePort exposed, and I want to connect to it from my Compute Engine instances through an internal IP address only. When I try to connect to the Nginx with the cluster IP, I only receive a timeout.
I think the ClusterIP is only reachable inside the cluster, but I hoped that since I enabled a NodePort it might work.
I don't know the difference between NodePort and ClusterIP well.

Background
You can expose your application outside the cluster using NodePort or LoadBalancer. ClusterIP allows connections only inside the cluster and is the default Service type.
ClusterIP:
Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType
NodePort:
Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer:
Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
In short, when you are using NodePort you need to use NodePublicIP:NodePort. When you are using LoadBalancer it will create Network LB with ExternalIP.
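For reference, a minimal NodePort Service manifest might look like this (the name, selector and ports here are placeholders for illustration, not values from your cluster):
apiVersion: v1
kind: Service
metadata:
  name: nginx                # placeholder service name
spec:
  type: NodePort
  selector:
    app: nginx               # must match your Pod labels
  ports:
  - port: 80                 # the ClusterIP port, reachable inside the cluster
    targetPort: 80           # the container port
    nodePort: 30080          # optional; omit to let Kubernetes pick one from 30000-32767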
In your GKE cluster you have something called a VPC (Virtual Private Cloud), which provides global, scalable, and flexible networking for your cloud-based resources and services.
Solution
Using a VPC-native cluster
With VPC-native clusters you'll be able to reach Pod IPs directly. You will need to create a subnet in order to do it. The full guide can be found here.
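As a sketch, creating a VPC-native cluster comes down to enabling IP aliases; the cluster name and CIDR ranges below are placeholders:
$ gcloud container clusters create my-vpc-native-cluster \
    --enable-ip-alias \
    --cluster-ipv4-cidr=10.4.0.0/14 \
    --services-ipv4-cidr=10.8.0.0/20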
Using VPC Peering
If you would like to connect from 2 different projects in GKE, you will need to use VPC Peering.
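Roughly, the peering is created with something like the following (the network, project and peering names are placeholders, and the peering must be configured from both sides before it becomes active):
$ gcloud compute networks peerings create peer-a-to-b \
    --network=vpc-a \
    --peer-project=project-b \
    --peer-network=vpc-b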
Access from outside the cluster using NodePort
If you would like to reach your nginx service from outside you can use NodeIP:NodePort.
Use a node's external IP (keep in mind that this node must have an application Pod on it: if you have 3 nodes and only 1 application replica, you must use the external IP of the node where that Pod was deployed). You also need to allow access to the NodePort in the firewall; a sketch of such a rule follows the example output below.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-cluster-1-default-pool-faec7b51-n5hm Ready <none> 3h23m v1.17.14-gke.1600 10.128.0.26 23.236.50.249 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.8.9.10 <none> 80:30785/TCP 39m
$ curl 23.236.50.249:30785
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
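If the NodePort is blocked, a firewall rule along these lines opens it (the rule name is a placeholder; 30785 is the NodePort from the output above):
$ gcloud compute firewall-rules create allow-nginx-nodeport \
    --allow=tcp:30785 \
    --source-ranges=0.0.0.0/0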

The Cluster IP address is only accessible within the cluster; that's why it is giving a timeout. NodePort exposes a port on the public IP of every node of the cluster, so it should work.

Related

External ip always <none> or <pending> in kubernetes

Recently I started building my very own Kubernetes cluster using a few Raspberry Pis.
I have gotten to the point where I have a cluster up and running!
Some background info on how I set up the cluster: I used this guide.
But now, when I want to deploy and expose an application, I encounter some issues...
Following the Kubernetes tutorials I have made a deployment of nginx, and it is running fine. When I do a port-forward I can see the default nginx page on my localhost.
Now the tricky part: creating a service and routing the traffic from the internet through an ingress to the service.
I have executed the following commands:
kubectl expose deployment/nginx --type="NodePort" --port 80
kubectl expose deployment/nginx --type="Loadbalancer" --port 80
And these result in the following.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
nginx NodePort 10.103.77.5 <none> 80:30106/TCP 7m50s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
nginx LoadBalancer 10.107.233.191 <pending> 80:31332/TCP 4s
The external IP address never shows, which makes it impossible for me to access the application from outside the cluster by doing curl some-ip:80, which in the end is the whole reason for me to set up this cluster.
If any of you have some clear guides or advice I can work with, it would be really appreciated!
Note:
I have read things about LoadBalancer: it is supposed to be provided by the cloud host, and since I run on RPi I don't think this will work for me. But I believe NodePort should be just fine to route with an ingress.
Also, I am aware of the fact that I need an ingress controller of some sort for ingress to work.
Edit
So I now have the following for the NodePort - 30168
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h
nginx NodePort 10.96.125.112 <none> 80:30168/TCP 6m20s
and for the IP address I have either 192.168.178.102 or 10.44.0.1
$ kubectl describe pod nginx-688b66fb9c-jtc98
Node: k8s-worker-2/192.168.178.102
IP: 10.44.0.1
But when I enter either of these IP addresses in the browser with the NodePort, I still don't see the nginx page. Am I doing something wrong?
Any of your worker nodes' IP addresses will work for a NodePort (or LoadBalancer) service. From the description of NodePort services:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service.
If you don't know those IP addresses kubectl get nodes can tell you; if you're planning on calling them routinely then setting up a load balancer in front of the cluster or configuring DNS (or both!) can be helpful.
In your example, say some node has the IP address 10.20.30.40 (you log into the Raspberry Pi directly, run ifconfig, and that's the host's address); you can reach the nginx from the second example at http://10.20.30.40:31332.
The EXTERNAL-IP field will never fill in for a NodePort service, or when you're not in a cloud environment that can provide an external load balancer for you. That doesn't affect this case: for either of these service types you can still call the port on the node directly.
Since you are not in a cloud provider, you need to use MetalLB to have the LoadBalancer features working.
Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.
MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible
The MetalLB setup is very easy:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
This will deploy MetalLB to your cluster, under the metallb-system namespace
You need to create a ConfigMap with the IP range you want to use. Create a file named metallb-cf.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250  # select the range you want
kubectl apply -f metallb-cf.yaml
That's all.
To use it in your services, just create them with type LoadBalancer and MetalLB will do the rest. If you want to customize the configuration, see here.
MetalLB will assign an IP for your service/ingress, but if you are in a NAT network you need to configure your router to forward requests for your ingress/service IP.
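For example, a minimal Service of type LoadBalancer that MetalLB would pick up and assign an address to from the pool above (the name and selector are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: nginx                # placeholder
spec:
  type: LoadBalancer         # MetalLB assigns an IP from the configured pool
  selector:
    app: nginx               # must match your Pod labels
  ports:
  - port: 80
    targetPort: 80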
EDIT:
If you have problems getting an external IP with MetalLB running on Raspberry Pi, try switching iptables to the legacy version:
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
Reference: https://www.shogan.co.uk/kubernetes/building-a-raspberry-pi-kubernetes-cluster-part-2-master-node/
I hope that helps.

How to expose a service in Kubernetes?

My organization offers Containers as a Service through Rancher. I started a RabbitMQ service using a web interface, and the service started OK. However, I'm having trouble accessing this service through an external IP.
Using kubectl, I tried to get the list of the running services:
$ kubectl get services -n flash
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq-ha ClusterIP XX.XX.X.XXX <none> 15672/TCP,5672/TCP,4369/TCP 46m
rabbitmq-ha-discovery ClusterIP None <none> 15672/TCP,5672/TCP,4369/TCP 46m
How do I expose the 'rabbitmq-ha' service to the external world so I can access it via IP address:15672, etc.? Right now, the external IP is none. I'm not sure how to get Kubernetes to assign one.
If you are in a supported cloud environment (AWS, GCP, Azure, etc.) then you can create a service of type LoadBalancer; an external load balancer will be provisioned and an external IP or DNS name will be assigned by your cloud provider. Here are the docs on this.
If you are on bare metal on-prem then you can use MetalLB, which provides an implementation of LoadBalancer.
Apart from the above, you can also use a NodePort type service to expose a service outside your Kubernetes cluster. Here is a guide on how to do that.
One disadvantage of the LoadBalancer type is that an external load balancer is provisioned for every service, which is costly; as an alternative you can use the Ingress abstraction. Ingress is implemented by many pieces of software such as NGINX, HAProxy, and Traefik.
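As a sketch of the Ingress alternative (this assumes an ingress controller such as ingress-nginx is already installed; the hostname is a placeholder, while the service name, namespace and port come from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbitmq-ui
  namespace: flash
spec:
  ingressClassName: nginx          # assumes the ingress-nginx controller
  rules:
  - host: rabbitmq.example.com     # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rabbitmq-ha
            port:
              number: 15672        # the RabbitMQ management UI port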

Access EKS DNS from worker nodes and other EC2 instances in same VPC

I have created an EKS cluster by following the AWS getting started guide, with k8s version 1.11. I have not changed any configs for kube-dns.
If I create a service, let's say myservice, I would like to access it from some other EC2 instance which is not part of this EKS cluster but is in the same VPC.
Basically, I want this DNS to work as my DNS server for instances outside the cluster as well. How will I be able to do that?
I have seen that the kube-dns service gets a cluster IP but doesn't get an external IP; is that necessary for me to be able to access it from outside the cluster?
This is the current response:
[ec2-user@ip-10-0-0-149 ~]$ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 4d
My VPC subnet is 10.0.0.0/16
I am trying to reach this 172.20.0.10 IP from other instances in my VPC and I am not able to, which I think is expected because my VPC is not aware of any subnet range that includes 172.20.0.10. But then how do I make this DNS service accessible to all my instances in the VPC?
The problem you are facing is mostly not related to DNS. As you said, you cannot reach the ClusterIP from your other instances because it is an internal cluster network and is unreachable from outside of Kubernetes.
Instead of going in the wrong direction, I recommend you make use of Nginx Ingress, which allows you to create an Nginx deployment backed by an AWS Load Balancer and expose your services on that Nginx.
You can further integrate your Ingresses with the External-DNS addon, which will allow you to dynamically create DNS records in Route 53.
This will take some time to configure but this is the Kubernetes way.
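As a rough sketch of that setup: once External-DNS is deployed and granted access to Route 53, annotating a LoadBalancer Service (or an Ingress) is enough for it to create the record. The hostname and names below are placeholders, not values from your cluster:
apiVersion: v1
kind: Service
metadata:
  name: myservice
  annotations:
    # External-DNS creates a Route 53 record for this hostname (placeholder)
    external-dns.alpha.kubernetes.io/hostname: myservice.internal.example.com
spec:
  type: LoadBalancer
  selector:
    app: myservice
  ports:
  - port: 80
    targetPort: 8080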

what is the use of external IP address with a ClusterIP service type in kubernetes

What is the use of the external IP address option in a Kubernetes service when the service is of type ClusterIP?
An ExternalIP is just an endpoint through which services can be accessed from outside the cluster, so a ClusterIP type service with an ExternalIP can still be accessed inside the cluster using its service.namespace DNS name, but now it can also be accessed from its external endpoint, too. For instance, you could set the ExternalIP to the IP of one of your k8s nodes, or create an ingress to your cluster on that IP.
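For illustration, a ClusterIP Service with an external IP might look like this (the IP and names are placeholders; traffic arriving at a node on that IP and port gets routed to the service):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  externalIPs:
  - 203.0.113.10            # e.g. the IP of one of your k8s nodes (placeholder)
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080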
ClusterIP is the default service type in Kubernetes which allows you to reach your service only within the cluster.
If your service type is set as LoadBalancer or NodePort, ClusterIP is automatically created and LoadBalancer or NodePort service will route to this ClusterIP IP address.
The new external IP addresses are only allocated with LoadBalancer type.
You can also use the node's external IP addresses when you set your service as NodePort. But in this case you will need extra firewall rules for your nodes to allow ingress traffic for your exposed node ports.
When you are using a service with type: ClusterIP it has only a cluster IP and no external IP address (<none>).
A ClusterIP is a unique IP given from the IP pool to a service; it can be used to reach the service's pods only from inside the cluster. ClusterIP is the default service type in Kubernetes.
kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer
The example above will create a service with an external IP and a cluster IP.
In the case of LoadBalancer and NodePort services, the service can be accessed from outside the cluster through the external IP.
Just to add to coolinuxoid's answer: I'm working with GKE and, based on their documentation, when you add a Service of type ClusterIP they suggest accessing it via:
Accessing your Service
List your running Pods:
kubectl get pods
In the output, copy one of the Pod names that begins with my-deployment.
NAME READY STATUS RESTARTS AGE
my-deployment-dbd86c8c4-h5wsf 1/1 Running 0 2m51s
Get a shell into one of your running containers:
kubectl exec -it pod-name -- sh
where pod-name is the name of one of the Pods in my-deployment.
In your shell, install curl:
apk add --no-cache curl
In the container, make a request to your Service by using your cluster IP address and port 80. Notice that 80 is the value of the port field of your Service. This is the port that you use as a client of the Service.
curl cluster-ip:80
So, while you might find a way to route to such a Service from outside, it's not a recommended/ordinary way to do it.
As a better alternative either use:
LoadBalancer if you are working on a project with a lot of services and detailed requirements.
NodePort if you are working on something smaller where cloud-native load balancing is not needed and you don't mind using your node's IP directly to map it to the Service. Incidentally, the same document does suggest an option to do so (tested it personally; works like a charm):
If the nodes in your cluster have external IP addresses, find the external IP address of one of your nodes:
kubectl get nodes --output wide
The output shows the external IP addresses of your nodes:
NAME STATUS ROLES AGE VERSION EXTERNAL-IP
gke-svc-... Ready none 1h v1.9.7-gke.6 203.0.113.1
Not all clusters have external IP addresses for nodes. For example, the nodes in private clusters do not have external IP addresses.
Create a firewall rule to allow TCP traffic on your node port:
gcloud compute firewall-rules create test-node-port --allow tcp:node-port
where node-port is the value of the nodePort field of your Service.

What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?

Question 1 - I'm reading the documentation and I'm slightly confused with the wording. It says:
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
Does the NodePort service type still use the ClusterIP but just at a different port, which is open to external clients? So in this case is <NodeIP>:<NodePort> the same as <ClusterIP>:<NodePort>?
Or is the NodeIP actually the IP found when you run kubectl get nodes and not the virtual IP used for the ClusterIP service type?
Question 2 - Also in the diagram from the link below:
Is there any particular reason why the Client is inside the Node? I assumed it would need to be inside a Cluster in the case of a ClusterIP service type?
If the same diagram was drawn for NodePort, would it be valid to draw the client completely outside both the Node and Cluster, or am I completely missing the point?
A ClusterIP exposes the following:
spec.clusterIp:spec.ports[*].port
You can only access this service while inside the cluster. It is accessible from its spec.clusterIp port. If a spec.ports[*].targetPort is set it will route from the port to the targetPort. The CLUSTER-IP you get when calling kubectl get services is the IP assigned to this service within the cluster internally.
A NodePort exposes the following:
<NodeIP>:spec.ports[*].nodePort
spec.clusterIp:spec.ports[*].port
If you access this service on a nodePort from the node's external IP, it will route the request to spec.clusterIp:spec.ports[*].port, which will in turn route it to your spec.ports[*].targetPort, if set. This service can also be accessed in the same way as ClusterIP.
Your NodeIPs are the external IP addresses of the nodes. You cannot access your service from spec.clusterIp:spec.ports[*].nodePort.
A LoadBalancer exposes the following:
spec.loadBalancerIp:spec.ports[*].port
<NodeIP>:spec.ports[*].nodePort
spec.clusterIp:spec.ports[*].port
You can access this service from your load balancer's IP address, which routes your request to a nodePort, which in turn routes the request to the clusterIP port. You can access this service as you would a NodePort or a ClusterIP service as well.
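To tie those fields to a manifest, here is a sketch (all names and ports are placeholders) showing where each of them surfaces:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer        # NodePort and ClusterIP services use the same fields, minus the outer layers
  selector:
    app: my-app
  ports:
  - port: 80                # spec.ports[*].port: reachable at <clusterIP>:80 (and at the LB's IP)
    targetPort: 8080        # spec.ports[*].targetPort: the container's port
    nodePort: 30080         # spec.ports[*].nodePort: reachable at <NodeIP>:30080 (NodePort/LoadBalancer only)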
To clarify for anyone looking for the difference between the 3 on a simpler level: you can expose your service with minimal exposure using ClusterIP (within the k8s cluster), with larger exposure using NodePort (within the internal network, external to the k8s cluster), or with LoadBalancer (the external world, or whatever you defined in your LB).
ClusterIP exposure < NodePort exposure < LoadBalancer exposure
ClusterIp
Expose service through k8s cluster with ip/name:port
NodePort
Exposes the service to VMs on the internal network, external to the k8s cluster, via ip/name:port
LoadBalancer
Exposes the service to the external world, or whatever you defined in your LB.
ClusterIP: Services are reachable by pods/services in the Cluster
If I make a service called myservice in the default namespace of type: ClusterIP then the following predictable static DNS address for the service will be created:
myservice.default.svc.cluster.local (or just myservice.default, or by pods in the default namespace just "myservice" will work)
And that DNS name can only be resolved by pods and services inside the cluster.
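One quick way to verify this from inside the cluster is a throwaway pod (busybox:1.28 is commonly used here since its nslookup behaves well; the service name is the example above):
$ kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 \
    -- nslookup myservice.default.svc.cluster.local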
NodePort: Services are reachable by clients on the same LAN, i.e. clients who can reach the K8s host nodes (and by pods/services in the cluster). (Note: for security, your k8s host nodes should be on a private subnet, so clients on the internet won't be able to reach this service.)
If I make a service called mynodeportservice in the mynamespace namespace of type: NodePort on a 3 Node Kubernetes Cluster. Then a Service of type: ClusterIP will be created and it'll be reachable by clients inside the cluster at the following predictable static DNS address:
mynodeportservice.mynamespace.svc.cluster.local (or just mynodeportservice.mynamespace)
For each port that mynodeportservice listens on, a nodePort in the range of 30000 - 32767 will be randomly chosen, so that external clients outside the cluster can hit the ClusterIP service that exists inside the cluster.
Let's say that our 3 K8s host nodes have IPs 10.10.10.1, 10.10.10.2, and 10.10.10.3, the Kubernetes service is listening on port 80, and the NodePort picked at random was 31852.
A client that exists outside of the cluster could visit 10.10.10.1:31852, 10.10.10.2:31852, or 10.10.10.3:31852 (as the NodePort is listened for by every Kubernetes host node), and kube-proxy will forward the request to mynodeportservice's port 80.
LoadBalancer: Services are reachable by everyone connected to the internet* (a common architecture puts the L4 LB in a DMZ, or gives it both a private and a public IP, so it is publicly accessible on the internet while the k8s host nodes are on a private subnet)
(Note: This is the only service type that doesn't work in 100% of Kubernetes implementations, like bare metal Kubernetes, it works when Kubernetes has cloud provider integrations.)
If you make mylbservice, then an L4 LB VM will be spawned (a ClusterIP service and a NodePort service will be implicitly spawned as well). This time our NodePort is 30222. The idea is that the L4 LB will have a public IP of 1.2.3.4 and it will load balance and forward traffic to the 3 K8s host nodes that have private IP addresses (10.10.10.1:30222, 10.10.10.2:30222, 10.10.10.3:30222), and then kube-proxy will forward it to the service of type ClusterIP that exists inside the cluster.
You also asked:
Does the NodePort service type still use the ClusterIP? Yes*
Or is the NodeIP actually the IP found when you run kubectl get nodes? Also Yes*
Let's draw a parallel between fundamentals:
A container is inside a pod. A pod is inside a ReplicaSet. A ReplicaSet is inside a Deployment.
Well, similarly:
A ClusterIP Service is part of a NodePort Service. A NodePort Service is part of a LoadBalancer Service.
In that diagram you showed, the Client would be a pod inside the cluster.
Let's assume you created an Ubuntu VM on your local machine. Its IP address is 192.168.1.104.
You log into the VM and install Kubernetes. Then you create a pod where an nginx image is running.
1- If you want to access this nginx pod inside your VM, you will create a ClusterIP bound to that pod, for example:
$ kubectl expose deployment nginxapp --name=nginxclusterip --port=80 --target-port=8080
Then in your browser you can type the IP address of nginxclusterip with port 80, like:
http://10.152.183.2:80
2- If you want to access this nginx pod from your host machine, you will need to expose your deployment with NodePort. For example:
$ kubectl expose deployment nginxapp --name=nginxnodeport --port=80 --target-port=8080 --type=NodePort
Now from your host machine you can access nginx like:
http://192.168.1.104:31865/
Below is a comparison showing the basic relationship.
ClusterIP
Exposition: Exposes the Service on an internal IP in the cluster.
Cluster: This type makes the Service only reachable from within the cluster.
Accessibility: It is the default service type; internal clients send requests to a stable internal IP address.
YAML config: type: ClusterIP
Port range: Any public IP from the cluster
Use cases: For internal communication.
NodePort
Exposition: Exposes the service to external clients.
Cluster: Each cluster node opens a port on the node itself (hence the name) and redirects traffic received on that port to the underlying service.
Accessibility: The service is accessible at the internal cluster IP and port, and also through a dedicated port on all nodes.
YAML config: type: NodePort
Port range: 30000 - 32767
Use cases: Best for testing public or private access, or providing access for a small amount of time.
LoadBalancer
Exposition: Exposes the service to external clients.
Cluster: Accessible through a dedicated load balancer, provisioned from the cloud infrastructure Kubernetes is running on.
Accessibility: Clients connect to the service through the load balancer's IP.
YAML config: type: LoadBalancer
Port range: Any public IP from the cluster
Use cases: Widely used for external communication.
Sources:
Kubernetes in Action
Kubernetes.io Services
Kubernetes Services simply visually explained
clusterIP: an IP accessible inside the cluster (across nodes within the cluster).
nodeA: pod1 => clusterIP1, pod2 => clusterIP2
nodeB: pod3 => clusterIP3.
pod3 can talk to pod1 via their clusterIP network.
nodeport: to make pods accessible from outside the cluster via nodeIP:nodeport, it will create/keep the clusterIP above as its clusterIP network.
nodeA => nodeIPA : nodeportX
nodeB => nodeIPB : nodeportX
You might access the service on pod1 via either nodeIPA:nodeportX or nodeIPB:nodeportX. Either way will work, because kube-proxy (which is installed on each node) will receive your request and distribute it [redirect it (iptables term)] across nodes using the clusterIP network.
Load balancer
Basically just putting an LB in front, so that inbound traffic is distributed to nodeIPA:nodeportX and nodeIPB:nodeportX, then continuing with the process flow of number 2 above.
Practical understanding:
I have created 2 services, 1 of type NodePort and the other of type ClusterIP.
If I want to access a service inside the cluster (from the master or any worker node), then both are accessible.
If I want to access the services from outside the cluster, then only the NodePort service is accessible, not the ClusterIP one.
Here you can see that localhost isn't listening on port 80, even though my nginx container is listening on port 80.
Yes, this is the only difference.
ClusterIP. Exposes a service which is only accessible from within the cluster.
NodePort. Exposes a service via a static port on each node’s IP.
LoadBalancer. Exposes the service via the cloud provider’s load balancer.
ExternalName. Maps a service to a predefined externalName field by returning a value for the CNAME record.
Practical Use Case
Let's assume you have to create the architecture below in your cluster; I guess it's pretty common.
Now, the user is only going to communicate with the frontend on some port. The backend and DB services are always hidden from the external world.
Summary:
There are five types of Services:
ClusterIP (default): Internal clients send requests to a stable internal IP address.
NodePort: Clients send requests to the IP address of a node on one or more nodePort values that are specified by the Service.
LoadBalancer: Clients send requests to the IP address of a network load balancer.
ExternalName: Internal clients use the DNS name of a Service as an alias for an external DNS name.
Headless: You can use a headless service when you want a Pod grouping, but don't need a stable IP address.
The NodePort type is an extension of the ClusterIP type. So a Service of type NodePort has a cluster IP address.
The LoadBalancer type is an extension of the NodePort type. So a Service of type LoadBalancer has a cluster IP address and one or more nodePort values.
Illustrated through an image
Details
ClusterIP
ClusterIP is the default and most common service type.
Kubernetes will assign a cluster-internal IP address to ClusterIP service. This makes the service only reachable within the cluster.
You cannot make requests to the Service (Pods) from outside the cluster.
You can optionally set the cluster IP in the service definition file (see the sketch below).
Use Cases
Inter-service communication within the cluster. For example, communication between the front-end and back-end components of your app.
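As noted above, the cluster IP can optionally be pinned in the service definition file. A minimal sketch (the names are placeholders and the address must fall inside your cluster's service CIDR):
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  clusterIP: 10.96.100.100  # optional; must be within the cluster's service CIDR
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080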
NodePort
NodePort service is an extension of ClusterIP service. A ClusterIP Service, to which the NodePort Service routes, is automatically created.
It exposes the service outside of the cluster by adding a cluster-wide port on top of ClusterIP.
NodePort exposes the service on each Node’s IP at a static port (the NodePort). Each node proxies that port into your Service. So, external traffic has access to fixed port on each Node. It means any request to your cluster on that port gets forwarded to the service.
You can contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
The node port must be in the range of 30000-32767. Manually allocating a port to the service is optional; if it is undefined, Kubernetes will automatically assign one.
If you are going to choose a node port explicitly, ensure that the port is not already used by another service.
Use Cases
When you want to enable external connectivity to your service.
Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes' IPs directly.
Prefer to place a load balancer above your nodes to avoid node failure.
LoadBalancer
LoadBalancer service is an extension of NodePort service. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
It integrates NodePort with cloud-based load balancers.
It exposes the Service externally using a cloud provider’s load balancer.
Each cloud provider (AWS, Azure, GCP, etc) has its own native load balancer implementation. The cloud provider will create a load balancer, which then automatically routes requests to your Kubernetes Service.
Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.
The actual creation of the load balancer happens asynchronously.
Every time you want to expose a service to the outside world, you have to create a new LoadBalancer and get an IP address.
Use Cases
When you are using a cloud provider to host your Kubernetes cluster.
ExternalName
Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service.
You specify these Services with the spec.externalName parameter.
It maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value.
No proxying of any kind is established.
Use Cases
This is commonly used to create a service within Kubernetes to represent an external datastore like a database that runs externally to Kubernetes.
You can use that ExternalName service (as a local service) when Pods from one namespace talk to a service in another namespace.
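A minimal sketch of such a Service (the names are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: external-db              # in-cluster clients resolve this name
spec:
  type: ExternalName
  externalName: db.example.com   # placeholder external hostname, returned as a CNAME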
Here is the answer to Question 2 about the diagram, since it still doesn't seem to have been answered directly:
Is there any particular reason why the Client is inside the Node? I assumed it would need to be inside a Cluster in the case of a ClusterIP service type?
In the diagram, the Client is placed inside the Node to highlight the fact that ClusterIP is only accessible on a machine which has a running kube-proxy daemon. Kube-proxy is responsible for configuring iptables according to the data provided by the apiserver (which is also visible in the diagram). So if you create a virtual machine, put it into the network where the Nodes of your cluster are, and properly configure networking on that machine so that individual cluster pods are accessible from there, even then ClusterIP services will not be accessible from that VM unless the VM has its iptables configured properly (which doesn't happen without kube-proxy running on that VM).
If the same diagram was drawn for NodePort, would it be valid to draw the client completely outside both the Node and Cluster, or am I completely missing the point?
It would be valid to draw the client outside the Node and the Cluster, because a NodePort is accessible from any machine that has access to a cluster Node and the corresponding port, including machines outside the cluster.
And do not forget the "new" service type (from the k8s docs):
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Note: You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the ExternalName type.