Kubernetes NodePort Service

I have a bare-metal Kubernetes cluster with a NodePort Service, and 2 HAProxies balance traffic to these nodes.
When I send a request to one of these nodes, it balances the traffic again to other nodes in the cluster. Is it possible to change this behavior? I don't want the traffic to be re-balanced.
Update:
We can use externalTrafficPolicy: Local:
spec:
  selector:
    app: nginx
  type: NodePort
  externalTrafficPolicy: Local

NodePort traffic is intercepted by kube-proxy, which then redirects it to a node that contains the Pod, chosen at random. It's advisable to use a LoadBalancer Service instead of NodePort. This applies if you are using the userspace or iptables proxy modes.
You may use IPVS to change the behavior
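For reference, a minimal sketch of switching kube-proxy to IPVS mode through its configuration (on kubeadm-style clusters this typically lives in the kube-proxy ConfigMap in kube-system; the rr scheduler is just an illustrative choice):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; IPVS also supports other schedulers such as lc or sh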


2-Node Cluster, Master goes down, Worker fails

We have a 2-node K3s cluster with one master and one worker node and would like "reasonable availability": if one node or the other goes down, the cluster should still work, i.e. ingress should still reach the services and pods, which we have replicated across both nodes. We have an external load balancer (F5) which does active health checks on each node and only sends traffic to nodes that are up.
Unfortunately, if the master goes down the worker will not serve any traffic (ingress).
This is strange because all the service pods (which ingress feeds) on the worker node are running.
We suspect the reason is that key services such as the traefik ingress controller and coredns are only running on the master.
Indeed when we simulated a master failure, restoring it from a backup, none of the pods on the worker could do any DNS resolution. Only a reboot of the worker solved this.
We've tried to increase the number of replicas of the traefik and coredns deployment which helps a bit BUT:
This gets lost on the next reboot
The worker still functions when the master is down but every 2nd ingress request fails
It seems the worker still blindly (round-robin) sends traffic to a non-existent master.
We would appreciate some advice and explanation:
Should not key services such as traefik and coredns be DaemonSets by default?
How can we change the service description (e.g. replica count) in a persistent way that does not get lost?
How can we get intelligent traffic routing with ingress to only "up" nodes?
Would it make sense to make this a 2-master cluster?
UPDATE: Ingress Description:
kubectl describe ingress -n msa
Name: msa-ingress
Namespace: msa
Address: 10.3.229.111,10.3.229.112
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
tls-secret terminates service.ourdomain.com,node1.ourdomain.com,node2.ourdomain.com
Rules:
Host Path Backends
---- ---- --------
service.ourdomain.com
/ gateway:8443 (10.42.0.100:8443,10.42.1.115:8443)
node1.ourdomain.com
/ gateway:8443 (10.42.0.100:8443,10.42.1.115:8443)
node2.ourdomain.com
/ gateway:8443 (10.42.0.100:8443,10.42.1.115:8443)
Annotations: kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.middlewares: msa-middleware#kubernetescrd
Events: <none>
Your goals seem achievable with a few Kubernetes features (not specific to Traefik):
Make sure you have one replica of the Ingress Controller's Pod on each Node => use a DaemonSet as the installation method (see the sketch after the Service example below).
To fix the error in the Ingress description, set the correct load balancer IP of the Ingress Controller's Service.
Set externalTrafficPolicy to "Local" - this ensures that traffic is routed to local endpoints only (controller Pods running on the Node that accepts traffic from the load balancer).
externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type Services, but risks potentially imbalanced traffic spreading.
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  externalTrafficPolicy: Local
  type: LoadBalancer
The Service backing your Ingress should use externalTrafficPolicy: Local as well.
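As a rough illustration of the first point, a minimal sketch of running the ingress controller as a DaemonSet, so one replica lands on every node (the image, labels and ports are placeholders; on K3s the bundled Traefik is normally managed through a HelmChart resource, so adapt this to your installation method):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      containers:
        - name: traefik
          image: traefik:v2.10     # placeholder image tag
          ports:
            - containerPort: 80    # HTTP entrypoint
            - containerPort: 443   # HTTPS entrypoint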
Running a single master or two masters in a Kubernetes cluster is not recommended, as it does not tolerate failure of the master components. Consider running 3 masters in your Kubernetes cluster.
Following link would be helpful -->
https://netapp-trident.readthedocs.io/en/stable-v19.01/dag/kubernetes/kubernetes_cluster_architecture_considerations.html

what is the network structure like in a cluster?

I have a very hard time understanding what the Kubernetes network architecture really looks like.
My basic understanding is "there's a machine behind each IP", but that breaks down with containers inside pods inside nodes inside a cluster hosted somewhere.
Adding Services, Deployments and other Kubernetes objects makes it even more confusing. The documentation is not super clear on this, and I'm just lost and throwing my hands in the air.
Could I ask for a brief explanation of which network is inside which network, and which elements have IPs and/or ports?
"there's a machine behind each IP"
I am not sure which IP you are talking about.
There are multiple components in Kubernetes; focusing on the main ones:
Pod (it runs your containers)
Deployment
Service
Ingress
Now, if we talk about how traffic is managed, it flows like this:
Ingress > Ingress controller > Service > Deployment > Pod > Container
Each Pod (workload) gets its own IP.
But it's not useful in the normal case; it is auto-managed by Kubernetes and there is nothing you need to do with it.
It is an internal IP, so you cannot connect to a Pod's workload from outside Kubernetes.
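A quick way to see these Pod IPs yourself (output trimmed; the names and addresses are purely illustrative):
$ kubectl get pods -o wide
NAME                     READY   STATUS    IP           NODE
nginx-7c8f6f5d9b-abcde   1/1     Running   10.42.1.17   worker-1
nginx-7c8f6f5d9b-fghij   1/1     Running   10.42.2.5    worker-2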
Now we have these types of Services:
ClusterIP
LoadBalancer
NodePort
ClusterIP is, again, an internal IP managed by Kubernetes.
The LoadBalancer type is exposed to the internet; it's like attaching an LB to your workload or application so that it is exposed to the internet.
In this case, you will get an external IP open to the internet.
That was the internal architecture.
If we talk about a simple cluster architecture:
There are master nodes and worker nodes.
Worker nodes have internal and/or external IPs depending on whether your Kubernetes cluster is private or public.
Each of your containers or Pods runs on a worker node and, in the ideal scenario, has an internal IP.
Multiple workloads or containers can run on a single machine or a single VM node.
Ports are used the same way we use them generally.
For example, this is my test Service:
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
    - name: http
      port: 80
      targetPort: 9595
    - name: https
      port: 9595
      targetPort: 9595
  selector:
    app: test
    tier: frontend
It exposes two ports, 80 and 9595. If you look carefully at targetPort: 9595, in both cases traffic is diverted to port 9595, on which my container or workload is running.
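For completeness, a Deployment whose Pods this Service would select might look roughly like this (a minimal sketch; the image and replica count are assumptions, while the labels and container port match the Service above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
      tier: frontend
  template:
    metadata:
      labels:
        app: test            # must match the Service's selector
        tier: frontend
    spec:
      containers:
        - name: test
          image: nginx       # placeholder image
          ports:
            - containerPort: 9595   # the port the Service's targetPort points at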

ignite CommunicationSpi questions in PAAS environment

My environment is that the ignite client is on kubernetes and the ignite server is running on a normal server.
In such an environment, TCP connections are not allowed from the server to the client.
For this reason, CommunicationSpi (server -> client) connections cannot be established.
What I'm curious about is: what issues can occur when CommunicationSpi is not available?
In this environment, is there a way to make a CommunicationSpi (server -> client) connection?
In Kubernetes, the service is used to communicate with pods.
The default service type in Kubernetes is ClusterIP
ClusterIP is an internal IP address reachable from inside of the Kubernetes cluster only. The ClusterIP enables the applications running within the pods to access the service.
To expose the pods outside the kubernetes cluster, you will need k8s service of NodePort or LoadBalancer type.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort> .
Please note that you need an external IP address assigned to one of the nodes in the cluster and a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the service.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
Alternatively it is possible to use Ingress
There is a very good article on accessing Kubernetes Pods from outside of the cluster.
Hope that helps.
Edited on 09-Dec-2019
Upon your comment, I recall that it's possible to use the hostNetwork and hostPort methods.
hostNetwork
The hostNetwork setting applies to the Kubernetes pods. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
    - name: nginx
      image: nginx
You can check that the application is running with: curl -v http://kubenode01.example.com
Note that every time the pod is restarted Kubernetes can reschedule the pod onto a different node and so the application will change its IP address. Besides that two applications requiring the same port cannot run on the same node. This can lead to port conflicts when the number of applications running on the cluster grows.
What is host networking good for? For cases where direct access to the host networking is required.
hostPort
The hostPort setting applies to the Kubernetes containers. The container port will be exposed to the external network at <hostIP>:<hostPort>, where the hostIP is the IP address of the Kubernetes node where the container is running and the hostPort is the port requested by the user.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 8086
          hostPort: 443
The hostPort feature allows you to expose a single container port on the host IP. Using hostPort to expose an application to the outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section: the host IP can change when the container is restarted, and two containers using the same hostPort cannot be scheduled on the same node.
What is the hostPort used for? For example, the nginx based Ingress controller is deployed as a set of containers running on top of Kubernetes. These containers are configured to use hostPorts 80 and 443 to allow the inbound traffic on these ports from the outside of the Kubernetes cluster.
To support such a deployment configuration you would need to dance a lot around a network configuration - setting up K8 Services, Ignite AddressResolver, etc. The Ignite community is already aware of this inconvenience and working on an out-of-the-box solution.
Updated
If you run Ignite thick clients in a K8 environment and the servers are on VMs, then you need to enable the TcpCommunicationSpi.forceClientToServerConnections mode to avoid connectivity issues.
If you run Ignite thin clients, then just provide the IPs of the servers in the configuration, as described here.

How to merge ingress-nginx with existing nginx on worker node?

One worker node already has nginx installed and listening on port 80. I want to leverage ingress-nginx and keep the former service on the worker node working. Is there any way to merge ingress-nginx with the existing nginx on the worker node?
I'm working in a bare-metal environment.
Having multiple pods listening on port 80 should not be an issue as they should be in their own network namespaces, unless you explicitly run them with hostNetwork: true which in most cases you should not.
For running ingress-nginx on bare metal you should expose it with a NodePort Service on predefined ports, e.g. 32080 and 32443, which makes your ingress available on all the nodes on these ports, and then configure your network so that incoming traffic on ports 80/443 is directed by your load balancer to the kube nodes on these predefined ports.
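A minimal sketch of such a NodePort Service for the controller (namespace, labels and node ports are assumptions; match them to your actual ingress-nginx installation):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # adjust to the labels your controller pods use
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 32080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 32443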
ingress-nginx runs its own nginx: it watches resources on the api-server and updates its nginx configuration dynamically, while your standalone nginx uses a static configuration, so they can't be merged together. I guess you could configure an Ingress so that the existing nginx is reached through ingress-nginx.

What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?

Question 1 - I'm reading the documentation and I'm slightly confused with the wording. It says:
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
Does the NodePort service type still use the ClusterIP but just at a different port, which is open to external clients? So in this case is <NodeIP>:<NodePort> the same as <ClusterIP>:<NodePort>?
Or is the NodeIP actually the IP found when you run kubectl get nodes and not the virtual IP used for the ClusterIP service type?
Question 2 - Also in the diagram from the link below:
Is there any particular reason why the Client is inside the Node? I assumed it would need to be inside a Cluster in the case of a ClusterIP service type?
If the same diagram was drawn for NodePort, would it be valid to draw the client completely outside both the Node and Cluster, or am I completely missing the point?
A ClusterIP exposes the following:
spec.clusterIp:spec.ports[*].port
You can only access this service while inside the cluster. It is accessible from its spec.clusterIp port. If a spec.ports[*].targetPort is set it will route from the port to the targetPort. The CLUSTER-IP you get when calling kubectl get services is the IP assigned to this service within the cluster internally.
A NodePort exposes the following:
<NodeIP>:spec.ports[*].nodePort
spec.clusterIp:spec.ports[*].port
If you access this service on a nodePort from the node's external IP, it will route the request to spec.clusterIp:spec.ports[*].port, which will in turn route it to your spec.ports[*].targetPort, if set. This service can also be accessed in the same way as ClusterIP.
Your NodeIPs are the external IP addresses of the nodes. You cannot access your service from spec.clusterIp:spec.ports[*].nodePort.
A LoadBalancer exposes the following:
spec.loadBalancerIp:spec.ports[*].port
<NodeIP>:spec.ports[*].nodePort
spec.clusterIp:spec.ports[*].port
You can access this service from your load balancer's IP address, which routes your request to a nodePort, which in turn routes the request to the clusterIP port. You can access this service as you would a NodePort or a ClusterIP service as well.
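To tie the three address forms together, here is a hedged sketch of a single LoadBalancer Service (the name, selector, ports and the nodePort value are illustrative; nodePort is normally auto-assigned if omitted):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 8765        # spec.ports[*].port: reachable at clusterIP:8765 and at the load balancer's IP on 8765
      targetPort: 9376  # spec.ports[*].targetPort: the container port the traffic ends up on
      nodePort: 30123   # spec.ports[*].nodePort: reachable at <NodeIP>:30123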
To clarify for anyone looking for the difference between the three at a simpler level: you can expose your service with minimal exposure using ClusterIP (within the k8s cluster), broader exposure with NodePort (within the cluster network but external to k8s), or LoadBalancer (the external world, or whatever you defined in your LB).
ClusterIp exposure < NodePort exposure < LoadBalancer exposure
ClusterIP
Exposes the service inside the k8s cluster at ip/name:port.
NodePort
Exposes the service to the internal network (VMs external to k8s) at nodeIP:port.
LoadBalancer
Exposes the service to the external world, or whatever you defined in your LB.
ClusterIP: Services are reachable by pods/services in the Cluster
If I make a service called myservice in the default namespace of type: ClusterIP then the following predictable static DNS address for the service will be created:
myservice.default.svc.cluster.local (or just myservice.default, or by pods in the default namespace just "myservice" will work)
And that DNS name can only be resolved by pods and services inside the cluster.
NodePort: Services are reachable by clients on the same LAN/clients who can ping the K8s Host Nodes (and pods/services in the cluster) (Note for security your k8s host nodes should be on a private subnet, thus clients on the internet won't be able to reach this service)
If I make a service called mynodeportservice in the mynamespace namespace of type: NodePort on a 3-node Kubernetes cluster, then a Service of type: ClusterIP will be created and it'll be reachable by clients inside the cluster at the following predictable static DNS address:
mynodeportservice.mynamespace.svc.cluster.local (or just mynodeportservice.mynamespace)
For each port that mynodeportservice listens on, a nodePort in the range 30000 - 32767 will be randomly chosen, so that external clients outside the cluster can hit the ClusterIP service that exists inside the cluster.
Let's say that our 3 K8s host nodes have IPs 10.10.10.1, 10.10.10.2, 10.10.10.3, the Kubernetes service is listening on port 80, and the nodePort picked at random was 31852.
A client that exists outside of the cluster could visit 10.10.10.1:31852, 10.10.10.2:31852, or 10.10.10.3:31852 (as the NodePort is listened for by every Kubernetes host node), and kube-proxy will forward the request to mynodeportservice's port 80.
LoadBalancer: Services are reachable by everyone connected to the internet* (Common architecture is L4 LB is publicly accessible on the internet by putting it in a DMZ or giving it both a private and public IP and k8s host nodes are on a private subnet)
(Note: This is the only service type that doesn't work in 100% of Kubernetes implementations, like bare metal Kubernetes, it works when Kubernetes has cloud provider integrations.)
If you make mylbservice, then an L4 LB VM will be spawned (a ClusterIP service and a NodePort service will be implicitly spawned as well). This time our NodePort is 30222. The idea is that the L4 LB will have a public IP of 1.2.3.4 and it will load balance and forward traffic to the 3 K8s host nodes that have private IP addresses (10.10.10.1:30222, 10.10.10.2:30222, 10.10.10.3:30222), and then kube-proxy will forward it to the Service of type ClusterIP that exists inside the cluster.
You also asked:
Does the NodePort service type still use the ClusterIP? Yes*
Or is the NodeIP actually the IP found when you run kubectl get nodes? Also Yes*
Let's draw a parallel between fundamentals:
A container is inside a Pod. A Pod is inside a ReplicaSet. A ReplicaSet is inside a Deployment.
Well similarly:
A ClusterIP Service is part of a NodePort Service. A NodePort Service is Part of a Load Balancer Service.
In that diagram you showed, the Client would be a pod inside the cluster.
Let's assume you created an Ubuntu VM on your local machine. Its IP address is 192.168.1.104.
You log into the VM and install Kubernetes. Then you create a pod running the nginx image.
1- If you want to access this nginx pod inside your VM, you will create a ClusterIP bound to that pod for example:
$ kubectl expose deployment nginxapp --name=nginxclusterip --port=80 --target-port=8080
Then in your browser you can type the IP address of nginxclusterip with port 80, like:
http://10.152.183.2:80
2- If you want to access this nginx pod from your host machine, you will need to expose your deployment with NodePort. For example:
$ kubectl expose deployment nginxapp --name=nginxnodeport --port=80 --target-port=8080 --type=NodePort
Now from your host machine you can access nginx like:
http://192.168.1.104:31865/
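If you are not sure which node port was assigned, something like the following shows it (output trimmed; the values simply mirror the example above):
$ kubectl get service nginxnodeport
NAME            TYPE       CLUSTER-IP      PORT(S)
nginxnodeport   NodePort   10.152.183.15   80:31865/TCP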
Below is a table showing the basic relationships.
| Feature | ClusterIP | NodePort | LoadBalancer |
| --- | --- | --- | --- |
| Exposition | Exposes the Service on an internal IP in the cluster. | Exposes the service to external clients. | Exposes the service to external clients. |
| Cluster | Makes the Service only reachable from within the cluster. | Each cluster node opens a port on the node itself (hence the name) and redirects traffic received on that port to the underlying service. | Accessible through a dedicated load balancer, provisioned from the cloud infrastructure Kubernetes is running on. |
| Accessibility | The default type; internal clients send requests to a stable internal IP address. | The service is accessible at the internal cluster IP:port, and also through a dedicated port on all nodes. | Clients connect to the service through the load balancer's IP. |
| Yaml Config | type: ClusterIP | type: NodePort | type: LoadBalancer |
| Port Range | Any port from the cluster | 30000 - 32767 | Any port from the cluster |
| Use Cases | For internal communication. | Best for testing public or private access, or providing access for a small amount of time. | Widely used for external communication. |
Sources:
Kubernetes in Action
Kubernetes.io Services
Kubernetes Services simply visually explained
clusterIP: IP accessible inside the cluster (across nodes within the cluster).
nodeA: pod1 => clusterIP1, pod2 => clusterIP2
nodeB: pod3 => clusterIP3.
pod3 can talk to pod1 via their clusterIP network.
nodePort: to make pods accessible from outside the cluster via nodeIP:nodePort, it creates/keeps the clusterIP above as its clusterIP network.
nodeA => nodeIPA : nodeportX
nodeB => nodeIPB : nodeportX
You can access the service on pod1 either via nodeIPA:nodeportX or nodeIPB:nodeportX. Either way works because kube-proxy (which is installed on each node) receives your request and distributes it [redirects it, in iptables terms] across nodes using the clusterIP network.
Load balancer:
basically just puts an LB in front, so that inbound traffic is distributed to nodeIPA:nodeportX and nodeIPB:nodeportX, and then it continues with process flow number 2 above.
Practical understanding.
I have created 2 services, one of type NodePort and the other of type ClusterIP.
If I want to access the service from inside the cluster (from the master or any worker node), then both are accessible.
If I want to access the services from outside the cluster, then only the NodePort one is accessible, not the ClusterIP one.
Here you can see that localhost is not listening on port 80, even though my nginx container is listening on port 80.
Yes, this is the only difference.
ClusterIP. Exposes a service which is only accessible from within the cluster.
NodePort. Exposes a service via a static port on each node’s IP.
LoadBalancer. Exposes the service via the cloud provider’s load balancer.
ExternalName. Maps a service to a predefined externalName field by returning a value for the CNAME record.
Practical Use Case
Let's assume you have to create the architecture below in your cluster. I guess it's pretty common.
Now, the user is only going to communicate with the frontend on some port. The backend and DB services are always hidden from the external world.
Summary:
There are five types of Services:
ClusterIP (default): Internal clients send requests to a stable internal IP address.
NodePort: Clients send requests to the IP address of a node on one or more nodePort values that are specified by the Service.
LoadBalancer: Clients send requests to the IP address of a network load balancer.
ExternalName: Internal clients use the DNS name of a Service as an alias for an external DNS name.
Headless: You can use a headless service when you want a Pod grouping, but don't need a stable IP address (see the sketch after this summary).
The NodePort type is an extension of the ClusterIP type. So a Service of type NodePort has a cluster IP address.
The LoadBalancer type is an extension of the NodePort type. So a Service of type LoadBalancer has a cluster IP address and one or more nodePort values.
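For the headless type mentioned above, a minimal sketch looks like an ordinary Service with clusterIP set to None (the name and selector are placeholders); DNS then returns the individual Pod IPs instead of a single virtual IP:
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None       # this is what makes the Service headless
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080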
Details
ClusterIP
ClusterIP is the default and most common service type.
Kubernetes will assign a cluster-internal IP address to ClusterIP service. This makes the service only reachable within the cluster.
You cannot make requests to service (pods) from outside the cluster.
You can optionally set cluster IP in the service definition file.
Use Cases
Inter-service communication within the cluster. For example, communication between the front-end and back-end components of your app.
NodePort
NodePort service is an extension of ClusterIP service. A ClusterIP Service, to which the NodePort Service routes, is automatically created.
It exposes the service outside of the cluster by adding a cluster-wide port on top of ClusterIP.
NodePort exposes the service on each Node’s IP at a static port (the NodePort). Each node proxies that port into your Service. So, external traffic has access to fixed port on each Node. It means any request to your cluster on that port gets forwarded to the service.
You can contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Node port must be in the range of 30000–32767. Manually allocating a port to the service is optional. If it is undefined, Kubernetes will automatically assign one.
If you are going to choose node port explicitly, ensure that the port was not already used by another service.
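A minimal sketch of a NodePort Service with an explicitly chosen node port (the name, selector and the 30080 value are assumptions; omit nodePort to let Kubernetes assign one):
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # cluster-internal port
      targetPort: 8080  # container port
      nodePort: 30080   # must be within 30000-32767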
Use Cases
When you want to enable external connectivity to your service.
Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes' IPs directly.
Prefer to place a load balancer above your nodes to protect against node failure.
LoadBalancer
LoadBalancer service is an extension of NodePort service. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
It integrates NodePort with cloud-based load balancers.
It exposes the Service externally using a cloud provider’s load balancer.
Each cloud provider (AWS, Azure, GCP, etc) has its own native load balancer implementation. The cloud provider will create a load balancer, which then automatically routes requests to your Kubernetes Service.
Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.
The actual creation of the load balancer happens asynchronously.
Every time you want to expose a service to the outside world, you have to create a new LoadBalancer and get an IP address.
Use Cases
When you are using a cloud provider to host your Kubernetes cluster.
ExternalName
Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service.
You specify these Services with the spec.externalName parameter.
It maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value.
No proxying of any kind is established.
Use Cases
This is commonly used to create a service within Kubernetes to represent an external datastore like a database that runs externally to Kubernetes.
You can use that ExternalName service (as a local service) when Pods from one namespace talk to a service in another namespace.
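A minimal ExternalName sketch along those lines (the names and the external DNS name are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-database
  namespace: prod
spec:
  type: ExternalName
  externalName: db.example.com   # in-cluster lookups of my-database.prod resolve to this CNAME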
Here is the answer to Question 2 about the diagram, since it still doesn't seem to have been answered directly:
Is there any particular reason why the Client is inside the Node? I
assumed it would need to be inside a Clusterin the case of a ClusterIP
service type?
In the diagram the Client is placed inside the Node to highlight the fact that ClusterIP is only accessible on a machine which has a running kube-proxy daemon. Kube-proxy is responsible for configuring iptables according to the data provided by the apiserver (which is also visible in the diagram). So if you create a virtual machine, put it into the network where the Nodes of your cluster are, and properly configure networking on that machine so that individual cluster pods are accessible from there, even then ClusterIP services will not be accessible from that VM, unless the VM has its iptables configured properly (which doesn't happen without kube-proxy running on that VM).
If the same diagram was drawn for NodePort, would it be valid to draw
the client completely outside both the Node andCluster or am I
completely missing the point?
It would be valid to draw client outside the Node and Cluster, because NodePort is accessible from any machine which has access to a cluster Node and the corresponding port, including machines outside the cluster.
And do not forget the "new" service type (from the k8s docu):
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Note: You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the ExternalName type.