Does NodePort load-balance requests between deployments? - kubernetes

So I am setting up an entire stack on Google Cloud with several components that need to talk to each other, and I came up with the following flow:
Ingress -> Apache Service -> Apache Deployment (2 instances) -> App Service -> App Deployment (2 instances)
So the Ingress divides the requests nicely among my 2 Apache instances, but the Apache instances don't divide them nicely among my 2 App instances.
The services (Apache and App) are in both cases NodePort services.
What I am trying to achieve is that the services (Apache and App) load-balance the requests they receive among their linked deployments, but I don't know if a NodePort service can even do that, so I was wondering how I could achieve this.
App service yaml looks like this:
apiVersion: v1
kind: Service
metadata:
  name: preprocessor-service
  labels:
    app: preprocessor
spec:
  type: NodePort
  selector:
    app: preprocessor
  ports:
  - port: 80
    targetPort: 8081

If you are going through the clusterIP with kube-proxy in iptables proxy mode (available since Kubernetes 1.1 and the default since 1.2), then the NodePort service picks a backend pod at random for each connection. The older userspace proxy mode (the default up to Kubernetes 1.1) does round robin. If you want control over the balancing algorithm, you can use the ipvs proxy mode, which supports round robin, least connections, and other schedulers.
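For illustration, switching kube-proxy to ipvs with a round-robin scheduler could look like the minimal sketch below. How you hand this configuration to kube-proxy depends on how the cluster was provisioned (on GKE you cannot change the proxy mode yourself), and ipvs requires the IPVS kernel modules on every node:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  # rr = round robin; alternatives include lc (least connection)
  scheduler: "rr"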
When I say clusterIP I mean the IP address that is only understood by the cluster such as the one below:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
http-svc     NodePort    10.109.87.179   <none>        80:30723/TCP   5d
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        69d
When you specify NodePort, the service is also exposed as a mesh across all of your cluster nodes. In other words, every node in your cluster will listen on that particular port on its external IP, and kube-proxy will forward the traffic to one of the backing pods, even if that pod happens to run on a different node (with the default externalTrafficPolicy: Cluster). So you can set up an external load balancer whose backends point at that specific NodePort on each node, with traffic forwarded according to a health check on the port.
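As a quick sanity check, you can hit the NodePort on any node directly; a sketch, where the node IP is a placeholder taken from kubectl get nodes -o wide and 30723 is the port allocated to http-svc above:
$ kubectl get nodes -o wide          # shows each node's EXTERNAL-IP
$ curl http://203.0.113.10:30723/    # placeholder node IP; any node should answer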
I'm not sure about your case: is it possible that you are not going through the clusterIP?

Related

K8s even load balancing among pods not happening in load test [duplicate]


To expose the LoadBalancer with static IP

I understand that we can expose the service as a LoadBalancer.
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
kubectl get services my-service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   LoadBalancer   10.3.245.137   104.198.205.71   8080/TCP   54s
kubectl describe services my-service
Namespace:                default
Labels:                   app.kubernetes.io/name=load-balancer-example
Annotations:              <none>
Selector:                 app.kubernetes.io/name=load-balancer-example
Type:                     LoadBalancer
IP:                       10.3.245.137
LoadBalancer Ingress:     104.198.205.71
I have created a static IP.
Is it possible to replace the LoadBalancer Ingress with static IP?
tl;dr = yes, but trying to edit the IP in that Service resource won't do what you expect -- it's just reporting the current state of the world to you
Is it possible to replace the LoadBalancer Ingress with static IP?
First, the LoadBalancer is whatever your cloud provider created when kubernetes asked it to create one; you have a lot of annotations (that one is for AWS, but there should be ones for your cloud provider, too) that influence the creation, and it appears EIPs for NLBs is one of them, but I doubt that does what you're asking
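For example, on AWS the EIP-on-NLB route is driven by Service annotations; a sketch, assuming a recent enough Kubernetes and pre-allocated Elastic IPs (the allocation IDs below are placeholders, one per subnet/availability zone):
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # request an AWS Network Load Balancer instead of a classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # attach pre-allocated Elastic IPs (placeholder allocation IDs)
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-0abc123,eipalloc-0def456"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: load-balancer-example
  ports:
  - port: 8080
    targetPort: 8080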
Second, the type: LoadBalancer is merely convenience -- it's not required to expose your Service outside of the cluster. It's a replacement for creating a Service of type: NodePort, then creating an external load balancer resource, associating all the Nodes in your cluster with that load balancer, and pointing it at the NodePort on each Node to get traffic from the outside world into the cluster. If you already have a load balancer with a static IP, you can update its registration to point at the NodePort allocations for your existing my-service and you'll be back in business
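A sketch of that approach: pin the nodePort yourself (30080 below is a hypothetical value from the default 30000-32767 range) so the external load balancer's target registration never has to change:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: load-balancer-example
  ports:
  - port: 8080
    targetPort: 8080
    # fixed, so the static-IP load balancer can target node-ip:30080 permanently
    nodePort: 30080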

expose private kubernetes cluster with NodePort type service

I have created a VPC-native cluster on GKE with master authorized networks disabled.
I think I did everything correctly, but I still can't access the app externally.
Below is my service manifest.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
  - name: '3000'
    port: 80
    targetPort: 3000
    protocol: TCP
    nodePort: 30382
  selector:
    io.kompose.service: app
  type: NodePort
The app's container port is 3000, and I checked from the logs that it is working.
I added a firewall rule to open port 30382 in my VPC network too.
I still can't reach the node on the specified nodePort.
Is there anything I am missing?
kubectl get ep:
NAME         ENDPOINTS          AGE
app          10.20.0.10:3000    6h17m
kubernetes   34.69.50.167:443   29h
kubectl get svc:
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
app          NodePort    10.24.6.14   <none>        80:30382/TCP   6h25m
kubernetes   ClusterIP   10.24.0.1    <none>        443/TCP        29h
In Kubernetes, a Service is used to communicate with pods.
To expose pods outside the Kubernetes cluster, you need a Service of type NodePort.
The NodePort setting applies to Kubernetes Services. By default, a Service is accessible at its ClusterIP, an internal IP address reachable only from inside the cluster; the ClusterIP lets applications running within the pods reach the Service. To make a Service accessible from outside the cluster, you create one of type NodePort.
Please note that you need an external IP address assigned to one of the nodes in the cluster, and a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node the external IP address is attached to will proxy that port to the pods selected by the Service.
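On GKE that firewall rule can be created with gcloud; a sketch, where the rule name and network are placeholders and 30382 is the nodePort from the manifest above:
gcloud compute firewall-rules create allow-app-nodeport \
    --network default \
    --allow tcp:30382
After that, curl http://NODE_EXTERNAL_IP:30382/ from outside the cluster should reach the app.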

Expose deployment via service type Node Port on digital Ocean Kubernetes

I'm implementing a solution in Kubernetes for several clients, and I want to monitor my cluster with Prometheus. However, because this can scale quickly and I want to reduce costs, I will use Prometheus federation to have one Prometheus scrape different Kubernetes clusters, but I need to expose my Prometheus deployment.
I already have this working with a Service of type LoadBalancer exposing my Prometheus deployment, but this approach adds an extra expense to my infrastructure (a Digital Ocean LB).
Is it possible to do this with a Service of type NodePort, exposing a port on my cluster IP, something like this:
XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090
where my master Prometheus can use this URL to scrape all the "slave" Prometheus instances?
I already tried, but I can't reach my cluster on that port; something is blocking it. I even deleted my firewall to make sure nothing interferes with this implementation, but nothing.
This is my service:
Name:                     my-nodeport-service
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-nodeport-service","namespace":"default"},"spec":{"ports":[{"na...
Selector:                 app=nginx
Type:                     NodePort
IP:                       10.245.162.125
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  30800/TCP
Endpoints:                10.244.2.220:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Can anybody help me please?
You can then set up the host XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090 to act as your load balancer with Nginx.
Try setting up an Nginx TCP load balancer.
Note: you will be using the Nginx stream module, and if you want to use open source Nginx rather than Nginx Plus, you might have to compile your own Nginx with the --with-stream option.
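Roughly like this (a sketch, run from an unpacked Nginx source tree; whether you need it at all depends on your distribution, since many packaged builds already include the stream module):
# build open source Nginx with the stream module enabled
./configure --with-stream
make && sudo make install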
Example config file:
events {
    worker_connections 1024;
}

stream {
    upstream stream_backend {
        # backend hosts; Nginx balances between them (round robin by default)
        server dhcp-180.example.com:446;
        server dhcp-185.example.com:446;
        server dhcp-186.example.com:446;
        server dhcp-187.example.com:446;
    }

    server {
        listen 446;
        proxy_pass stream_backend;
    }
}
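Adapted to the Service described above, the upstream would list your worker nodes' public IPs on NodePort 30800, listening on 9090 for the master Prometheus; a sketch with placeholder node IPs:
stream {
    upstream prometheus_nodeport {
        # Digital Ocean worker node public IPs (placeholders) + the NodePort
        server 203.0.113.11:30800;
        server 203.0.113.12:30800;
    }

    server {
        listen 9090;
        proxy_pass prometheus_nodeport;
    }
}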
After running Nginx, the test results should show the host lb.example.com acting as a load balancer with Nginx.
In this example Nginx is configured to use round robin, so every new connection ends up at a different host/container.
Note: the container hostname is the same as the node hostname; this is due to hostNetwork.
There are some drawbacks to this solution:
defining hostNetwork reserves the host's port(s) for all the containers running in the pod
with one load balancer you have a single point of failure
every time a node is added to or removed from the cluster, the load balancer configuration should be updated
This way, one can set up a Kubernetes cluster that routes ingress and egress TCP connections from/to outside of the cluster.
Useful post: load-balancer-tcp.
NodePort documentation: nodePort.

How to have a static ELB endpoint for Kubernetes deployments

Every time I deploy a new build in Kubernetes, I get a different EXTERNAL-IP, which in the case below is afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com
$ kubectl get services -o wide -l appname=${APP_FULLNAME_SYSTEST},stage=${APP_SYSTEST_ENV}
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP                                                           PORT(S)         AGE   SELECTOR
test-systest-lb-https   LoadBalancer   123.45.xxx.21   afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com   443:30316/TCP   9d    appname=test-systest,stage=systest
How can I have a static external endpoint (ELB) so that I can link it to Route 53? Do I have to include something in my Kubernetes deployment yml file?
Additional details: I am using the load balancer spec below:
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 443
    targetPort: 8080
    protocol: TCP
  selector:
    appname: %APP_FULL_NAME%
    stage: %APP_ENV%
If you are just doing new builds of a single Deployment, then you should check what your pipeline is doing to the Service. You want to do a kubectl apply and a rolling update on the Deployment (provided the strategy is set on the Deployment) without modifying the Service, so not a delete and a create. If you do kubectl get services you should see its age (your output shows 9d, so that's all good), and kubectl describe service <service_name> will show any events on it.
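A deploy step that leaves the Service alone might look like the sketch below (the file name and Deployment name are placeholders):
# apply only the Deployment manifest; the Service object is never recreated
kubectl apply -f deployment.yaml
kubectl rollout status deployment/test-systest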
I'm guessing you just want a stable external entry point like 'afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com', not a truly static IP. If you do want a true static IP you won't get it like this, but you can now try an NLB.
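Since that ELB hostname stays stable as long as the Service isn't recreated, you can point Route 53 at it by name; a sketch of a plain CNAME record (the record name is a placeholder, and for a zone apex you would use Route 53's ALIAS record type instead):
app.example.com.  300  IN  CNAME  afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com.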
If you mean you want multiple Deployments (different microservices) to share a single IP, then you could install an ingress controller and expose it with an ELB. Then, when you deploy new apps, you create an Ingress resource for each to tell the controller to expose them externally. That way you can put all your apps behind the same external endpoint, routed under different paths or subdomains. The nginx ingress controller is a good option.
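A minimal sketch of such an Ingress, using the current networking.k8s.io/v1 API and assuming the controller's IngressClass is named nginx; the backend service names app1 and app2 are hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1      # hypothetical backend Service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2      # hypothetical backend Service
            port:
              number: 80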