I am setting up an entire stack on Google Cloud, and I have several components that need to talk to each other, so I came up with the following flow:
Ingress -> Apache Service -> Apache Deployment (2 instances) -> App Service -> App Deployment (2 instances)
The Ingress divides the requests nicely among my 2 Apache instances, but the Apache Deployment's pods don't divide them nicely among my 2 App pods.
The services (Apache and App) are in both cases a NodePort service.
What I am trying to achieve is that the services (Apache and App) load-balance the requests they receive among their linked deployments, but I don't know if a NodePort service can even do that, so I was wondering how I could achieve this.
App service yaml looks like this:
apiVersion: v1
kind: Service
metadata:
  name: preprocessor-service
  labels:
    app: preprocessor
spec:
  type: NodePort
  selector:
    app: preprocessor
  ports:
  - port: 80
    targetPort: 8081
If you are going through the clusterIP and kube-proxy is in the default iptables proxy mode, then the Service picks a backend pod at random. In the older userspace proxy mode (the default in early Kubernetes releases), backends were chosen round-robin. If you want to control the load-balancing behavior, you can use the ipvs proxy mode, which supports several schedulers (round robin, least connections, and so on).
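On clusters where kube-proxy reads a KubeProxyConfiguration (kubeadm-style clusters keep it in a ConfigMap; on a managed offering such as GKE you may not be able to change it at all), a rough sketch of switching to IPVS with a round-robin scheduler looks like this:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# ipvs mode needs the IPVS kernel modules available on every node
mode: "ipvs"
ipvs:
  # rr = round robin; other schedulers such as lc (least connection) are also available
  scheduler: "rr"

The kube-proxy pods have to be restarted before the new mode takes effect.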
When I say clusterIP I mean the IP address that is only routable inside the cluster, such as the ones below:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc NodePort 10.109.87.179 <none> 80:30723/TCP 5d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 69d
When you specify NodePort the port is also opened on all of your cluster nodes. In other words, every node in your cluster will listen on its external IP on that particular port; with the default externalTrafficPolicy: Cluster, any node will forward the traffic to a backing pod even if that pod runs on a different node (with externalTrafficPolicy: Local, only nodes that actually run a pod will answer). So you can potentially set up an external load balancer that points its backends at that specific NodePort on each node, and traffic will be forwarded according to a health check on the port.
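As an illustration, you can pin the NodePort (30080 here is just a placeholder in the default 30000-32767 range) so an external load balancer has a stable target on every node:

apiVersion: v1
kind: Service
metadata:
  name: preprocessor-service
  labels:
    app: preprocessor
spec:
  type: NodePort
  selector:
    app: preprocessor
  ports:
  - port: 80
    targetPort: 8081
    # fixed instead of letting Kubernetes pick one at random,
    # so the external load balancer can point at <node-ip>:30080 on each node
    nodePort: 30080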
I'm not sure about your case; is it possible that you are not going through the clusterIP?
Related
Can someone please explain how POD-to-POD communication works in AKS?
From the docs, I can see it uses the kube-proxy component to send the traffic to the desired POD.
But I have been told that I must use a ClusterIP service and bind all the relevant PODs together.
So what is the real flow? Or have I missed something? Below are a couple of questions to make this clearer.
Questions:
How can PODs inside one node talk to each other? What is the flow?
How can PODs on different nodes inside a cluster talk to each other? What is the flow?
If possible, it would be highly appreciated if you could describe the flows for #1 and #2 for both kubenet and CNI deployments.
Thanks a lot!
For pod-to-pod communication we use Services. So first we need to understand why we need a Service:
What a Service actually does for us is resolve the DNS name and give us the exact IP we need to connect to a specific pod. Since you want pod-to-pod communication, you need to create a ClusterIP Service.
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType. With a ClusterIP Service you can't access a pod from outside the cluster, which is why we use a ClusterIP Service when we only want pod-to-pod communication.
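A minimal sketch of such a Service (the app: my-test selector matches the pods from the StatefulSet example further down; the Service name is a placeholder):

apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
spec:
  # type: ClusterIP is the default and could be omitted
  type: ClusterIP
  selector:
    app: my-test
  ports:
  - port: 80
    targetPort: 80

Any pod in the cluster can then reach the backing pods at my-clusterip-service.default.svc on port 80.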
kube-proxy is the network proxy that runs on each node in your cluster.
It maintains network rules on the nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
Every Service is backed by a set of iptables rules, and kube-proxy manages those rules for every Service. So yes, kube-proxy is a vital piece of the network setup in a Kubernetes cluster.
How the Kubernetes network model works:
All Pods can communicate with all other Pods without using network address translation (NAT).
All Nodes can communicate with all Pods without NAT.
The IP that a Pod sees itself as is the same IP that others see it as.
With those points in place, the kinds of traffic to handle are:
Container-to-Container networking
Pod-to-Pod networking
Pod-to-Service networking
Internet-to-Service networking
kube-proxy handles the transmission of packets between pods, and also to and from the outside world. It acts as a network proxy and load balancer for pods running on the node by implementing load balancing with NAT in iptables.
The kube-proxy process stands between the Kubernetes network and the pods that are running on that particular node. It is responsible for ensuring that communication is maintained efficiently across all elements of the cluster. When a user creates a Kubernetes Service object, kube-proxy translates that object into meaningful rules in the local iptables rule set on the worker node. iptables is used to translate the virtual IP assigned to the Service object into all of the pod IPs mapped by the Service.
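One way to see which pod IPs those rules resolve a Service to is to look at the Service's Endpoints object (my-clusterip-service is the hypothetical Service from the sketch above; substitute your own Service name):

$ kubectl get endpoints my-clusterip-service

The ENDPOINTS column lists the pod IP:port pairs that kube-proxy's rules currently map the Service's cluster IP to.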
I hope that clears up your idea of kube-proxy.
Let's see an example of how it works.
Here I used a headless Service so that I can connect to a specific pod.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  clusterIP: None
  selector:
    app: my-test
  ports:
  - port: 80
    name: rest
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sts
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: my-test
  template:
    metadata:
      labels:
        app: my-test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
---
This will create 3 pods: my-sts-0, my-sts-1, my-sts-2. Now if we want to connect to the pod my-sts-0, we just use the DNS name my-sts-0.my-service.default.svc:80, and the Service will resolve that name to the exact pod IP of my-sts-0. So if you need to communicate from my-sts-1 to my-sts-0, you can just use this DNS name.
The template is my_pod_name.my_service_name.my_namespace.svc.cluster-domain.example, but you can skip the cluster-domain.example part; my_pod_name.my_service_name.my_namespace.svc works fine.
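To try this from inside the cluster, you can start a throwaway pod with curl and hit one of the StatefulSet pods by its DNS name (curlimages/curl is just a small public image that ships curl; any image with curl will do):

kubectl run dns-test --rm -it --restart=Never --image=curlimages/curl -- curl http://my-sts-0.my-service.default.svc:80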
I understand that we can expose the service as a LoadBalancer.
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
kubectl get services my-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service LoadBalancer 10.3.245.137 104.198.205.71 8080/TCP 54s
Namespace: default
Labels: app.kubernetes.io/name=load-balancer-example
Annotations: <none>
Selector: app.kubernetes.io/name=load-balancer-example
Type: LoadBalancer
IP: 10.3.245.137
LoadBalancer Ingress: 104.198.205.71
I have created a static IP.
Is it possible to replace the LoadBalancer Ingress with static IP?
tl;dr = yes, but trying to edit the IP in that Service resource won't do what you expect -- it's just reporting the current state of the world to you
Is it possible to replace the LoadBalancer Ingress with static IP?
First, the LoadBalancer is whatever your cloud provider created when Kubernetes asked it to create one; there are a lot of annotations (the ones I have in mind are for AWS, but there should be equivalents for your cloud provider, too) that influence its creation, and it appears EIPs for NLBs are one of them, but I doubt that does what you're asking.
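For the AWS case, a rough sketch of what those annotations look like (the annotation keys are the in-tree AWS cloud provider ones as far as I know; the EIP allocation IDs are placeholders, and attaching EIPs only works for NLBs):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # ask AWS for a Network Load Balancer instead of a classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # attach pre-allocated Elastic IPs, one allocation ID per subnet/AZ (placeholders)
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-0abc123,eipalloc-0def456"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: load-balancer-example
  ports:
  - port: 8080
    targetPort: 8080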
Second, type: LoadBalancer is merely convenience -- it's not required to expose your Service outside of the cluster. It's a replacement for creating a Service of type: NodePort, then creating an external load balancer resource, associating all the Nodes in your cluster with that load balancer, and pointing it at the NodePort on each Node to get traffic from the outside world into the cluster. If you already have a load balancer with a static IP, you can update its registration to point at the NodePort allocations of your existing my-service and you'll be back in business.
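To find the NodePort allocation to register with that external load balancer, something like this works for a single-port Service:

kubectl get service my-service -o jsonpath='{.spec.ports[0].nodePort}'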
I'm implementing a solution in Kubernetes for several clients, and I want to monitor my clusters with Prometheus. However, because this can scale quickly and I want to reduce costs, I will use Prometheus federation to scrape the different Kubernetes clusters, but I need to expose my Prometheus deployment.
I already have this working with a Service of type LoadBalancer exposing my Prometheus deployment, but this approach adds an extra expense to my infrastructure (a DigitalOcean LB).
Is it possible to do this with a Service of type NodePort, exposing a port on my cluster address, something like this:
XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090
where my master Prometheus can use this URL to scrape all the "slave" Prometheus instances?
I already tried, but I can't reach the cluster port; something is blocking it. I also deleted my firewall to make sure nothing interferes with this setup, but no luck.
This is my service:
Name: my-nodeport-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-nodeport-service","namespace":"default"},"spec":{"ports":[{"na...
Selector: app=nginx
Type: NodePort
IP: 10.245.162.125
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30800/TCP
Endpoints: 10.244.2.220:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Can anybody help me please?
---
You can then set up host XXXXXXXXXXXXXXXX.k8s.ondigitalocean.com:9090 to act as your load balancer with Nginx.
Try setting up an Nginx TCP load balancer.
Note: you will be using the Nginx stream module, and if you want to use open source Nginx rather than Nginx Plus, you might have to compile your own Nginx with the --with-stream option.
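To check whether your Nginx build already includes the stream module (most distribution packages do), inspect the compile flags, for example:

nginx -V 2>&1 | grep -o with-stream

If nothing is printed, the binary was built without stream support.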
Example config file:
events {
    worker_connections 1024;
}

stream {
    upstream stream_backend {
        server dhcp-180.example.com:446;
        server dhcp-185.example.com:446;
        server dhcp-186.example.com:446;
        server dhcp-187.example.com:446;
    }

    server {
        listen 446;
        proxy_pass stream_backend;
    }
}
After running Nginx, the test results should look like this:
Host lb.example.com acts as the load balancer with Nginx.
In this example Nginx is configured to use round-robin and, as you can see, every new connection ends up at a different host/container.
Note: the container hostname is the same as the node hostname; this is due to hostNetwork.
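Adapted to the Service described in the question, a sketch of the stream block might look like this (the node IPs are placeholders, 30800 is the NodePort from the kubectl describe output above, and 9090 is the port the federating master Prometheus would connect to):

stream {
    upstream prometheus_nodeport {
        # one entry per cluster node - placeholder IPs, all exposing NodePort 30800
        server 10.133.0.10:30800;
        server 10.133.0.11:30800;
    }

    server {
        # port the master Prometheus scrapes
        listen 9090;
        proxy_pass prometheus_nodeport;
    }
}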
There are some drawbacks to this solution, like:
defining hostNetwork reserves the host's port(s) for all the containers running in the pod
having one load balancer creates a single point of failure
every time a node is added to or removed from the cluster, the load balancer configuration has to be updated
This way, one can set up a Kubernetes cluster to route ingress and egress TCP connections from/to outside of the cluster.
Useful post: load-balancer-tcp.
NodePort documentation: nodePort.
Every time I deploy a new build in Kubernetes, I get a different EXTERNAL-IP, which in the case below is afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com:
$ kubectl get services -o wide -l appname=${APP_FULLNAME_SYSTEST},stage=${APP_SYSTEST_ENV}
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
test-systest-lb-https LoadBalancer 123.45.xxx.21 afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com 443:30316/TCP 9d appname=test-systest,stage=systest
How can I have a static external IP (ELB) so that I can link it to Route 53? Do I have to include something in my Kubernetes deployment YAML file?
Additional details: I am using the LoadBalancer below:
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 443
    targetPort: 8080
    protocol: TCP
  selector:
    appname: %APP_FULL_NAME%
    stage: %APP_ENV%
If you are just doing new builds of a single Deployment then you should check what your pipeline is doing to the Service. You want to do a kubectl apply and a rolling update on the Deployment (provided the strategy is set on the Deployment) without modifying the Service (so not a delete and a create). If you do kubectl get services you should see its age (your output shows 9d so that's all good) and kubectl describe service <service_name> will show any events on it.
I'm guessing you just want a stable external entry you can point at, like 'afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com', and not a truly static IP. If you do want a true static IP you won't get it like this, but you can now try an NLB.
If you mean you want multiple Deployments (different microservices) to share a single IP then you could install an ingress controller and expose that with an ELB. Then when you deploy new apps you use an Ingress resource for each to tell the controller to expose them externally. So you can then put all your apps on the same external IP but routed under different paths or subdomains. The nginx ingress controller is a good option.
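A minimal sketch of such an Ingress (host and resource names are placeholders; it assumes an nginx ingress controller is already installed and exposed through a single ELB):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-systest-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: systest.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # the existing Service, which can then be a plain ClusterIP
            name: test-systest-lb-https
            port:
              number: 443

A single Route 53 record can then point at the controller's load balancer, and each new app only needs its own Ingress rule.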