GKE: one load balancer with multiple apps on different assigned ports - Kubernetes

I want to be able to deploy several single-pod apps and access them on a single IP address, leaning on Kubernetes to assign the ports as it does when you use a NodePort service.
Is there a way to use NodePort with a load balancer?
Honestly, NodePort might work by itself, but GKE seems to block direct access to the nodes; there don't seem to be firewall controls like on their unmanaged VMs.
Here's a service to base an answer on. In this case, I want to deploy 10 of these services, each a different application, on the same IP, each publicly accessible on a different port, and each proxying port 80 of its nginx container.
---
apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  selector:
    app: nginx
  ports:
    - name: foo
      protocol: TCP
      port: 80
  type: NodePort

GKE seems to block direct access to the nodes.
GCP allows creating firewall rules that allow incoming traffic either to 'All Instances in the Network' or to 'Specified Target Tags/Service Account' in your VPC Network.
Rules are persistent unless the opposite is specified under the organization's policies.
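For example, opening the default NodePort range with a firewall rule could look like the following (a hedged sketch; the rule name and target tag are placeholders for your cluster's actual node tag):
$ gcloud compute firewall-rules create allow-nodeports \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:30000-32767 \
    --target-tags=my-gke-node-tag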
A node's external IP address can be checked in Cloud Console (Compute Engine --> VM Instances) or with kubectl get nodes -o wide.
I run GKE (managed k8s) and can access all my assets externally. I have opened all the needed ports in my setup; below is the quickest example:
$ kubectl get nodes -o wide
NAME        AGE   VERSION           INTERNAL-IP   EXTERNAL-IP
gke--mnnv   43d   v1.14.10-gke.27   10.156.0.11   34.89.x.x
gke--nw9v   43d   v1.14.10-gke.27   10.156.0.12   35.246.x.x

$ kubectl get svc -o wide
NAME     TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                         SELECTOR
knp-np   NodePort   10.0.11.113   <none>        8180:30008/TCP,8180:30009/TCP   app=server-go
$ curl 35.246.x.x:30008/test
Hello from ServerGo. You requested: /test
That is why it looks like a bunch of NodePort type Services would be sufficient (each one serving requests for a particular selector).
If for some reason it's not possible to set up the FW rules to allow traffic directly to your Nodes, it's possible to configure a GCP TCP LoadBalancer:
Cloud Console --> Network Services --> Load Balancing --> Create LB --> TCP Load Balancing.
There you can select your GKE Nodes (or pool of nodes) as a 'Backend' and specify all the needed ports for the 'Frontend'. For the Frontend you can Reserve Static IP right during the configuration and specify 'Port' range as two port numbers separated by a dash (assuming you have multiple ports to be forwarded to your node pool). Additionally, you can create multiple 'Frontends' if needed.
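If you prefer the CLI, the console steps above map roughly onto these gcloud commands (a sketch only; the names, region, zone, and port range are placeholder assumptions):
$ gcloud compute addresses create my-lb-ip --region=europe-west3
$ gcloud compute target-pools create my-pool --region=europe-west3
$ gcloud compute target-pools add-instances my-pool --instances=gke--mnnv,gke--nw9v --instances-zone=europe-west3-a
$ gcloud compute forwarding-rules create my-frontend --region=europe-west3 --address=my-lb-ip --target-pool=my-pool --ports=30000-30010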
I hope that helps.

Is there a way to use NodePort with a load balancer?
The Kubernetes LoadBalancer service type builds on top of NodePort: when a LoadBalancer type service is created, it automatically allocates a NodePort and the cloud load balancer forwards to it. Although tricky, it is also possible to create a NodePort type service and manually configure the Google-provided load balancer to point at the NodePorts.
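You can observe this layering yourself: exposing a deployment as type LoadBalancer also allocates a node port (a hedged illustration; the deployment name and the IPs/ports shown are made up):
$ kubectl expose deployment myapp --type=LoadBalancer --port=80
$ kubectl get svc myapp
NAME    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
myapp   LoadBalancer   10.0.5.10    34.89.x.x     80:31677/TCP   1m
Here 31677 is the automatically allocated NodePort that the cloud load balancer forwards to.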

Related

Access the Kubernetes cluster/node from outside

I am new to Kubernetes. I have created a database cluster on Kubernetes with 2 nodes. I can access those pods from a thin client like DBeaver to check the data, but I cannot access the nodes externally. I am currently trying to run a thick client which will load the data into the cluster.
kubectl describe svc <svc>
I can see a Cluster-IP assigned to the service. The type of my service is LoadBalancer. I tried to use that, but it is still not connecting. I read about using NodePort, but without an IP address, how do I access it?
So what is the best way to connect to any node or cluster from outside?
Thank you in advance.
Regards
@KrishnaChaurasia is right, but I would like to explain it in more detail with the help of the official docs. I strongly recommend going through the following sources:
NodePort Type Service: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>. Here is an example of the NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
      # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
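To try it out, you could apply the manifest and curl a node (a sketch; <NodeIP> stands for any node's reachable address, and it assumes a Deployment labelled app: MyApp is running and firewall rules permit the traffic):
$ kubectl apply -f my-service.yaml
$ curl http://<NodeIP>:30007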
Accessing services running on the cluster: You have several options for connecting to nodes, pods and services from outside the cluster:
- Access services through public IPs.
  - Use a service with type NodePort or LoadBalancer to make the service reachable outside the cluster. See the services and kubectl expose documentation.
  - Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication?
- Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, place a unique label on the pod and create a new service which selects this label.
In most cases, it should not be necessary for an application developer to directly access nodes via their nodeIPs.
A supplementary example: Use a Service to Access an Application in a Cluster. This page shows how to create a Kubernetes Service object that external clients can use to access an application running in a cluster.
These will help you better understand the concepts of the different Service types, and how to expose them and access them from outside the cluster.

Kubernetes: how to access service if nodePort is random?

I'm new to K8s and am currently using Minikube to play around with the platform. How do I configure a public (i.e. outside the cluster) port for the service? I followed the nginx example, and K8s service tutorials. In my case, I created the service like so:
kubectl expose deployment/mysrv --type=NodePort --port=1234
The service's port is 1234 for anyone trying to access it from INSIDE the cluster. The minikube tutorials say I need to access the service directly through its random nodePort, which works for manual testing purposes:
kubectl describe service mysrv | grep NodePort
...
NodePort: <unset> 32387/TCP
# curl "http://`minikube ip`:32387/"
But I don't understand how, in a real cluster, the service could have a fixed world-accessible port. The nginx examples describe something about using the LoadBalancer service kind, but they don't even specify ports there...
Any ideas how to fix the external port for the entire service?
The minikube tutorials say I need to access the service directly through its random nodePort, which works for manual testing purposes:
When you create a service object of type NodePort with the $ kubectl expose command, you cannot choose its NodePort port. To choose one, you need to write a YAML definition.
You can manually specify the port in a service object of type NodePort as in the example below:
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    app: hello          # selector for deployment
  ports:
    - name: example-port
      protocol: TCP
      port: 1234        # CLUSTERIP PORT
      targetPort: 50001 # POD PORT WHICH APPLICATION IS RUNNING ON
      nodePort: 32222   # HERE!
You can apply the above YAML definition by invoking the command:
$ kubectl apply -f FILE_NAME.yaml
The above service object will be created only if the nodePort port is available for use.
But I don't understand how, in a real cluster, the service could have a fixed world-accessible port.
In clusters managed by cloud providers (for example GKE) you can use a service object of type LoadBalancer which will have a fixed external IP and fixed port.
Clusters that have nodes with public IP's can use service object of type NodePort to direct traffic into the cluster.
In a minikube environment you can use a service object of type LoadBalancer, but it will have some caveats, described in the last paragraph.
A little bit of explanation:
NodePort
NodePort exposes the service on each node's IP at a static port. It allows external traffic to enter through the NodePort port, which is automatically assigned from the range 30000 to 32767.
You can change the default NodePort port range by following this manual.
You can check what exactly happens when creating a service object of type NodePort by looking at this answer.
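For reference, the range is controlled by a kube-apiserver flag; a sketch of what that looks like (the range shown here is just an example):
kube-apiserver --service-node-port-range=20000-22767 ...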
Imagine that:
Your nodes have IPs:
  192.168.0.100
  192.168.0.101
  192.168.0.102
Your pods respond on port 50001 with hello and they have IPs:
  10.244.1.10
  10.244.1.11
  10.244.1.12
Your Service is:
  NodePort (port 32222) with:
    ClusterIP:
      IP: 10.96.0.100
      port: 7654
      targetPort: 50001
A word about targetPort: it defines the port on the pod that the application (for example, a web server) is listening on.
According to the above example you will get a hello response with:
NodeIP:NodePort (all the pods could respond with hello):
  192.168.0.100:32222
  192.168.0.101:32222
  192.168.0.102:32222
ClusterIP:port (all the pods could respond with hello):
  10.96.0.100:7654
PodIP:targetPort (only the pod the request is sent to can respond with hello):
  10.244.1.10:50001
  10.244.1.11:50001
  10.244.1.12:50001
You can check access with curl command as below:
$ curl http://NODE_IP:NODEPORT
In the example you mentioned:
$ kubectl expose deployment/mysrv --type=NodePort --port=1234
What will happen:
It will assign a random port from the range 30000 to 32767 on your minikube instance, directing traffic entering this port to the pods.
Additionally, it will create a ClusterIP with a port of 1234.
In the example above there was no targetPort parameter; if targetPort is not provided, it will be the same as the port in the command (see the sketch below).
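For reference, the expose command above generates roughly the following Service object (a sketch; the selector is an assumption, since in practice it is copied from the deployment's labels, and the actual nodePort is randomly allocated):
apiVersion: v1
kind: Service
metadata:
  name: mysrv
spec:
  type: NodePort
  selector:
    app: mysrv         # assumed label; copied from the deployment in practice
  ports:
    - port: 1234       # ClusterIP port
      targetPort: 1234 # defaults to port when not specified
      # nodePort omitted, so one is allocated from 30000-32767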
Traffic entering a NodePort will be routed directly to pods and will not go to the ClusterIP.
From the minikube perspective, a NodePort will be a port on your minikube instance. Its IP address will depend on the hypervisor used. Exposing it outside your local machine will be heavily dependent on the operating system.
LoadBalancer
There is a difference between a service object of type LoadBalancer(1) and an external LoadBalancer(2):
A service object of type LoadBalancer(1) allows you to expose a service externally using a cloud provider's LoadBalancer(2). It's a service within the Kubernetes environment that, through the service controller, can schedule the creation of an external LoadBalancer(2).
An external LoadBalancer(2) is a load balancer provided by the cloud provider. It operates at Layer 4.
Example definition of service of type LoadBalancer(1):
apiVersion: v1
kind: Service
metadata:
  name: example-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
    - port: 1234        # LOADBALANCER PORT
      targetPort: 50001 # POD PORT WHICH APPLICATION IS RUNNING ON
      nodePort: 32222   # PORT ON THE NODE
Applying the above YAML will create a service of type LoadBalancer(1).
Take a specific look at:
  ports:
    - port: 1234 # LOADBALANCER PORT
This definition will simultaneously:
specify external LoadBalancer(2) port as 1234
specify ClusterIP port as 1234
Imagine that:
Your external LoadBalancer(2) has:
  ExternalIP: 34.88.255.5
  port: 7654
Your nodes have IPs:
  192.168.0.100
  192.168.0.101
  192.168.0.102
Your pods respond on port 50001 with hello and they have IPs:
  10.244.1.10
  10.244.1.11
  10.244.1.12
Your Service is:
  NodePort (port 32222) with:
    ClusterIP:
      IP: 10.96.0.100
      port: 7654
      targetPort: 50001
According to the above example you will get a hello response with:
ExternalIP:port (all the pods could respond with hello):
  34.88.255.5:7654
NodeIP:NodePort (all the pods could respond with hello):
  192.168.0.100:32222
  192.168.0.101:32222
  192.168.0.102:32222
ClusterIP:port (all the pods could respond with hello):
  10.96.0.100:7654
PodIP:targetPort (only the pod the request is sent to can respond with hello):
  10.244.1.10:50001
  10.244.1.11:50001
  10.244.1.12:50001
ExternalIP can be checked with command: $ kubectl get services
Flow of the traffic:
Client -> LoadBalancer:port(2) -> NodeIP:NodePort -> Pod:targetPort
Minikube: LoadBalancer
Note: This feature is only available for cloud providers or environments which support external load balancers.
-- Kubernetes.io: Create external LoadBalancer
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
-- Kubernetes.io: Hello minikube
Minikube can create a service object of type LoadBalancer(1), but it will not create an external LoadBalancer(2).
The EXTERNAL-IP in the output of $ kubectl get services will stay in a <pending> status.
To address the fact that there is no external LoadBalancer(2), you can invoke $ minikube tunnel, which creates a route from the host to the minikube environment to access the CIDR of the ClusterIP directly.
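Usage is straightforward (run it in a separate terminal, as it stays in the foreground; the service name here is from the earlier example):
$ minikube tunnel
$ kubectl get svc example-loadbalancer   # EXTERNAL-IP should now be populated instead of <pending>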
There is a small mistake in Dawid Kruk’s answer,
Traffic entering a NodePort will be routed directly to pods and will
not go to the ClusterIP.
But as the k8s docs state here:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Traffic entering a NodePort does go to the ClusterIP.

How to access kubernetes services externally on bare-metal cluster

I have a api-service with type 'ClusterIp' which is working fine and is accessible on the node with clusterip. I want to access it externally . It's a baremetal installation with kubeadm . I cannot use Loadbalancer or Nodeport.
If I use nginx-ingress that too I will use as 'ClusterIP' so how to get the service externally accessible in either api service or nginx-ingress case .
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
api             ClusterIP   10.97.48.17     <none>        80/TCP    41s
ingress-nginx   ClusterIP   10.107.76.178   <none>        80/TCP    3h49m
Changes to solve the issue:
nginx configuration on the node, in /etc/nginx/sites-available:
upstream backend {
    server node1:8001;
    server node2:8001;
    server node3:8001;
}

server {
    server_name _;

    location / {
        # Proxy every request to the backend pool.
        # (The default try_files directive was dropped here, since it would
        # serve local files and return 404 instead of reaching proxy_pass.)
        proxy_pass http://backend;
    }
}
I ran my two services as a DaemonSet.
ClusterIP services are accessible only within the cluster.
For bare-metal clusters, you can use any of the following approaches to make a service available externally. Suggestions are ordered from most recommended to least recommended:
Use metallb to implement LoadBalancer service type support - https://metallb.universe.tf/. You will need a pool of IP addresses for metallb to hand out. It also supports an IP sharing mode where you can use the same IP for multiple LoadBalancer services.
Use a NodePort service. You can access your service from any node's IP:node_port address. A NodePort service selects a random port in the node port range by default; you can choose a custom port in that range using the spec.ports.nodePort field in the service specification.
Disadvantage: The node port range is 30000-32767 by default, so you cannot bind to an arbitrary custom port like 8080. Although you can change the node port range with the --service-node-port-range flag of kube-apiserver, it is not recommended for low port ranges.
Use hostPort to bind a port on the node.
Disadvantage: You don't have a fixed IP address, because you don't know which node your pod will get scheduled to, unless you use nodeAffinity. You can make your pod a DaemonSet if you want it to be accessible from all nodes on the given port.
If you are dealing with HTTP traffic, another option is installing an IngressController like nginx or Traefik and using an Ingress resource (see the sketch after this list). As part of their installation, they use one of the approaches mentioned above to make themselves available externally.
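For that HTTP case, a minimal Ingress for the api service above might look like this (a sketch assuming the nginx ingress controller is installed; the hostname is a placeholder):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api          # the ClusterIP service from the question
                port:
                  number: 80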
Well, as you can guess by reading the name, ClusterIP is only accessible from inside the cluster.
To make a service accessible from outside the cluster, you have 3 options:
NodePort Service type
LoadBalancer Service type (you still have to manage your LoadBalancer manually though)
Ingress
There is a fourth option, hostPort (which is not a service type), but I'd rather reserve it for the special case when you're absolutely sure that your pod will always be located on the same node (or for debugging).
Having said this, and given your constraints, that leaves us with only one solution offered by Kubernetes: Ingress.

Kubernetes Service not being assigned an (external) IP address

There are various answers to very similar questions around SO that all show what I expect my deployment to look like; however, mine does not.
I am running Minikube 0.25, with Kubernetes 1.9, on Windows 10.
I have successfully created a node, a replication controller, and a single pod template that has been replicated 10 times.
The node is Minikube, and it is assigned the IP address 10.49.106.251.
The dashboard is available at 10.49.106.251:30000.
I am deploying a service with a YAML file, but the service is never assigned an external IP; the result is the same if I use kubectl expose.
The YAML file that I am using:
kind: Service
apiVersion: v1
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 8080
I can also use the YAML file to assign an external IP - I assign it the same value as the node IP address. Either way results in no possible connection to the service. I should also point out that the 10 replicated pods all match the selector.
The results of running kubectl get svc with the default configuration, and after updating the external IP, are below:
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
hello-service   NodePort   10.108.61.233   <none>          8080:32406/TCP   1m
hello-service   NodePort   10.108.61.233   10.49.106.251   8080:32406/TCP   1m
The tutorial I have been following, and the other answers on SO show a result similar to:
hello-service   NodePort   10.108.61.233   <nodes>   8080:32406/TCP   1m
Where the difference is that the external IP is set to <nodes>
I have encountered a number of issues when running locally - is this just another case of doing so, or has someone else identified a way to get around the external IP assignment issue?
For local development purposes, I have also run into the problem of exposing a 'public IP' for my local development cluster.
Fortunately, I found a kubectl command which can help:
kubectl port-forward service/service-name 9092
where 9092 is the service port to forward, so that I can access applications inside the cluster from my local development environment.
The important note is that it is not a 'production' grade solution; it works well as a temporary hack to get at the cluster's insides.
Using NodePort means it will open a port on all nodes of your cluster. In your example above, the port exposed to the outside world is 32406.
In order to access hello-service (if it is HTTP) it will be http://[node ip]:32406/. This will hit your minikube node, and the request will be routed to your pods in round-robin fashion.
I had the same problem when trying to deploy a simple hello-world image locally with Kubernetes v1.9.2.
After two weeks of attempts, it turned out that the nginx web server application listens internally on port 80, not 8080.
So this should work: kubectl expose deployment hello-service --type=NodePort --port=80

What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?

Question 1 - I'm reading the documentation and I'm slightly confused with the wording. It says:
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
Does the NodePort service type still use the ClusterIP but just at a different port, which is open to external clients? So in this case is <NodeIP>:<NodePort> the same as <ClusterIP>:<NodePort>?
Or is the NodeIP actually the IP found when you run kubectl get nodes and not the virtual IP used for the ClusterIP service type?
Question 2 - Also in the diagram from the link below:
Is there any particular reason why the Client is inside the Node? I assumed it would need to be inside a Cluster in the case of a ClusterIP service type?
If the same diagram was drawn for NodePort, would it be valid to draw the client completely outside both the Node and Cluster, or am I completely missing the point?
A ClusterIP exposes the following:
spec.clusterIp:spec.ports[*].port
You can only access this service while inside the cluster. It is accessible from its spec.clusterIp port. If a spec.ports[*].targetPort is set it will route from the port to the targetPort. The CLUSTER-IP you get when calling kubectl get services is the IP assigned to this service within the cluster internally.
A NodePort exposes the following:
<NodeIP>:spec.ports[*].nodePort
spec.clusterIp:spec.ports[*].port
If you access this service on a nodePort from the node's external IP, it will route the request to spec.clusterIp:spec.ports[*].port, which will in turn route it to your spec.ports[*].targetPort, if set. This service can also be accessed in the same way as ClusterIP.
Your NodeIPs are the external IP addresses of the nodes. You cannot access your service from spec.clusterIp:spec.ports[*].nodePort.
A LoadBalancer exposes the following:
spec.loadBalancerIp:spec.ports[*].port
<NodeIP>:spec.ports[*].nodePort
spec.clusterIp:spec.ports[*].port
You can access this service from your load balancer's IP address, which routes your request to a nodePort, which in turn routes the request to the clusterIP port. You can access this service as you would a NodePort or a ClusterIP service as well.
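To make the three layers concrete, suppose a LoadBalancer service has port 80, targetPort 8080, an allocated nodePort of 31000, a clusterIP of 10.0.171.239, and a load balancer IP of 104.198.205.71 (all made-up placeholder values). Then each of these reaches the same pods:
$ curl http://104.198.205.71:80   # load balancer IP : port
$ curl http://<NodeIP>:31000      # any node's external IP : nodePort
$ curl http://10.0.171.239:80     # cluster IP : port (from inside the cluster only)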
To clarify, for anyone looking for the difference between the 3 at a simpler level: you can expose your service with minimal exposure using ClusterIP (within the k8s cluster), with larger exposure using NodePort (within the internal network, external to the k8s cluster), or with the largest exposure using LoadBalancer (the external world, or whatever you defined in your LB).
ClusterIP exposure < NodePort exposure < LoadBalancer exposure
ClusterIP
Exposes the service within the k8s cluster at ip/name:port.
NodePort
Exposes the service to the internal network's VMs, also external to the k8s cluster, at ip/name:port.
LoadBalancer
Exposes the service to the external world, or whatever you defined in your LB.
ClusterIP: Services are reachable by pods/services in the Cluster
If I make a service called myservice in the default namespace of type: ClusterIP then the following predictable static DNS address for the service will be created:
myservice.default.svc.cluster.local (or just myservice.default, or by pods in the default namespace just "myservice" will work)
And that DNS name can only be resolved by pods and services inside the cluster.
NodePort: Services are reachable by clients on the same LAN / clients who can ping the K8s host nodes (and by pods/services in the cluster). (Note: for security, your k8s host nodes should be on a private subnet, so clients on the internet won't be able to reach this service.)
If I make a service called mynodeportservice in the mynamespace namespace of type: NodePort on a 3-node Kubernetes cluster, then a Service of type: ClusterIP will be created and it'll be reachable by clients inside the cluster at the following predictable static DNS address:
mynodeportservice.mynamespace.svc.cluster.local (or just mynodeportservice.mynamespace)
For each port that mynodeportservice listens on, a nodePort in the range of 30000 - 32767 will be randomly chosen, so that external clients outside the cluster can hit the ClusterIP service that exists inside the cluster.
Let's say that our 3 K8s host nodes have IPs 10.10.10.1, 10.10.10.2, 10.10.10.3, the Kubernetes service is listening on port 80, and the nodePort picked at random was 31852.
A client that exists outside of the cluster could visit 10.10.10.1:31852, 10.10.10.2:31852, or 10.10.10.3:31852 (as the NodePort is listened for by every Kubernetes host node), and kube-proxy will forward the request to mynodeportservice's port 80.
LoadBalancer: Services are reachable by everyone connected to the internet* (Common architecture is L4 LB is publicly accessible on the internet by putting it in a DMZ or giving it both a private and public IP and k8s host nodes are on a private subnet)
(Note: This is the only service type that doesn't work in 100% of Kubernetes implementations, like bare metal Kubernetes, it works when Kubernetes has cloud provider integrations.)
If you make mylbservice, then an L4 LB VM will be spawned (a ClusterIP service and a NodePort service will be implicitly spawned as well). This time our NodePort is 30222. The idea is that the L4 LB will have a public IP of 1.2.3.4, and it will load balance and forward traffic to the 3 K8s host nodes that have private IP addresses (10.10.10.1:30222, 10.10.10.2:30222, 10.10.10.3:30222), and then kube-proxy will forward it to the service of type ClusterIP that exists inside the cluster.
You also asked:
Does the NodePort service type still use the ClusterIP? Yes*
Or is the NodeIP actually the IP found when you run kubectl get nodes? Also Yes*
Let's draw a parallel between fundamentals:
A container is inside a pod. A pod is inside a replicaset. A replicaset is inside a deployment.
Well, similarly:
A ClusterIP Service is part of a NodePort Service. A NodePort Service is part of a LoadBalancer Service.
In that diagram you showed, the Client would be a pod inside the cluster.
Let's assume you created an Ubuntu VM on your local machine. Its IP address is 192.168.1.104.
You log in to the VM and install Kubernetes. Then you create a pod running an nginx image.
1- If you want to access this nginx pod inside your VM, you will create a ClusterIP bound to that pod, for example:
$ kubectl expose deployment nginxapp --name=nginxclusterip --port=80 --target-port=8080
Then in your browser you can type the IP address of nginxclusterip with port 80, like:
http://10.152.183.2:80
2- If you want to access this nginx pod from your host machine, you will need to expose your deployment with NodePort. For example:
$ kubectl expose deployment nginxapp --name=nginxnodeport --port=80 --target-port=8080 --type=NodePort
Now from your host machine you can access nginx like:
http://192.168.1.104:31865/
In my dashboard they appear as: [screenshot omitted]
Below is a table that shows the basic relationship.
| Feature | ClusterIP | NodePort | LoadBalancer |
| --- | --- | --- | --- |
| Exposition | Exposes the Service on an internal IP in the cluster. | Exposes the Service to external clients. | Exposes the Service to external clients. |
| Cluster | This type makes the Service only reachable from within the cluster. | With a NodePort Service, each cluster node opens a port on the node itself (hence the name) and redirects traffic received on that port to the underlying Service. | A LoadBalancer Service is accessible through a dedicated load balancer, provisioned from the cloud infrastructure Kubernetes is running on. |
| Accessibility | The default Service type; internal clients send requests to a stable internal IP address. | The Service is accessible at the internal cluster IP:port, and also through a dedicated port on all nodes. | Clients connect to the Service through the load balancer's IP. |
| YAML config | type: ClusterIP | type: NodePort | type: LoadBalancer |
| Port range | Any port (cluster-internal IP) | 30000 - 32767 | Any port (on the load balancer's IP) |
| Use cases | Internal communication. | Best for testing public or private access, or providing access for a small amount of time. | Widely used for external communication. |
Sources:
Kubernetes in Action
Kubernetes.io Services
Kubernetes Services simply visually explained
clusterIP: an IP accessible inside the cluster (across nodes within the cluster).
  nodeA: pod1 => clusterIP1, pod2 => clusterIP2
  nodeB: pod3 => clusterIP3
pod3 can talk to pod1 via their clusterIP network.
nodePort: to make pods accessible from outside the cluster via nodeIP:nodeport, it creates/keeps the clusterIP above as its clusterIP network.
  nodeA => nodeIPA:nodeportX
  nodeB => nodeIPB:nodeportX
You might access the service on pod1 via either nodeIPA:nodeportX or nodeIPB:nodeportX. Either way will work, because kube-proxy (which is installed on each node) will receive your request and distribute it [redirect it (the iptables term)] across nodes using the clusterIP network.
Load balancer
Basically just puts an LB in front, so that inbound traffic is distributed to nodeIPA:nodeportX and nodeIPB:nodeportX, then continues with the process flow in number 2 above.
Practical understanding.
I have created 2 services, one of type NodePort and the other of type ClusterIP.
If I want to access a service from inside the cluster (from the master or any worker node), then both are accessible.
Now if I want to access the services from outside the cluster, then only the NodePort service is accessible, not the ClusterIP one.
Here you can see that localhost is not listening on port 80, even though my nginx containers are listening on port 80.
Yes, this is the only difference.
ClusterIP. Exposes a service which is only accessible from within the cluster.
NodePort. Exposes a service via a static port on each node’s IP.
LoadBalancer. Exposes the service via the cloud provider’s load balancer.
ExternalName. Maps a service to a predefined externalName field by returning a value for the CNAME record.
Practical Use Case
Let's assume you have to create the architecture below in your cluster; I guess it's pretty common.
Now, the user is only going to communicate with the frontend on some port. The backend and DB services are always hidden from the external world.
Summary:
There are five types of Services:
ClusterIP (default): Internal clients send requests to a stable internal IP address.
NodePort: Clients send requests to the IP address of a node on one or more nodePort values that are specified by the Service.
LoadBalancer: Clients send requests to the IP address of a network load balancer.
ExternalName: Internal clients use the DNS name of a Service as an alias for an external DNS name.
Headless: You can use a headless service when you want a Pod grouping, but don't need a stable IP address.
The NodePort type is an extension of the ClusterIP type. So a Service of type NodePort has a cluster IP address.
The LoadBalancer type is an extension of the NodePort type. So a Service of type LoadBalancer has a cluster IP address and one or more nodePort values.
Details
ClusterIP
ClusterIP is the default and most common service type.
Kubernetes will assign a cluster-internal IP address to a ClusterIP service. This makes the service only reachable within the cluster.
You cannot make requests to service (pods) from outside the cluster.
You can optionally set cluster IP in the service definition file.
Use Cases
Inter-service communication within the cluster. For example, communication between the front-end and back-end components of your app.
NodePort
NodePort service is an extension of ClusterIP service. A ClusterIP Service, to which the NodePort Service routes, is automatically created.
It exposes the service outside of the cluster by adding a cluster-wide port on top of ClusterIP.
NodePort exposes the service on each Node’s IP at a static port (the NodePort). Each node proxies that port into your Service. So, external traffic has access to fixed port on each Node. It means any request to your cluster on that port gets forwarded to the service.
You can contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Node port must be in the range of 30000–32767. Manually allocating a port to the service is optional. If it is undefined, Kubernetes will automatically assign one.
If you are going to choose node port explicitly, ensure that the port was not already used by another service.
Use Cases
When you want to enable external connectivity to your service.
Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes' IPs directly.
Prefer to place a load balancer in front of your nodes so that a single node failure does not take the service down.
LoadBalancer
LoadBalancer service is an extension of NodePort service. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
It integrates NodePort with cloud-based load balancers.
It exposes the Service externally using a cloud provider’s load balancer.
Each cloud provider (AWS, Azure, GCP, etc) has its own native load balancer implementation. The cloud provider will create a load balancer, which then automatically routes requests to your Kubernetes Service.
Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.
The actual creation of the load balancer happens asynchronously.
Every time you want to expose a service to the outside world, you have to create a new LoadBalancer and get an IP address.
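Because the creation is asynchronous, the external IP shows as pending right after you create the service (an illustrative sketch; the service name and addresses are placeholders):
$ kubectl get service my-lb-service
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
my-lb-service   LoadBalancer   10.0.171.23   <pending>     80:31000/TCP   10s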
Use Cases
When you are using a cloud provider to host your Kubernetes cluster.
ExternalName
Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service.
You specify these Services with the spec.externalName parameter.
It maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value.
No proxying of any kind is established.
Use Cases
This is commonly used to create a service within Kubernetes to represent an external datastore like a database that runs externally to Kubernetes.
You can use that ExternalName service (as a local service) when Pods from one namespace talk to a service in another namespace.
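A minimal example of such a Service might look like this (a sketch; the service name and external hostname are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  externalName: db.example.com   # pods resolving my-database get a CNAME to this name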
Here is the answer to Question 2 about the diagram, since it still doesn't seem to have been answered directly:
Is there any particular reason why the Client is inside the Node? I assumed it would need to be inside a Cluster in the case of a ClusterIP service type?
In the diagram, the Client is placed inside the Node to highlight the fact that ClusterIP is only accessible on a machine which has a running kube-proxy daemon. Kube-proxy is responsible for configuring iptables according to the data provided by the apiserver (which is also visible in the diagram). So if you create a virtual machine, put it into the network where the Nodes of your cluster are, and properly configure networking on that machine so that individual cluster pods are accessible from there, even then ClusterIP services will not be accessible from that VM, unless the VM has its iptables configured properly (which doesn't happen without kube-proxy running on that VM).
If the same diagram was drawn for NodePort, would it be valid to draw the client completely outside both the Node and Cluster, or am I completely missing the point?
It would be valid to draw the client outside the Node and the Cluster, because a NodePort is accessible from any machine which has access to a cluster Node and the corresponding port, including machines outside the cluster.
And do not forget the "new" service type (from the k8s docs):
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Note: You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the ExternalName type.