How to make the browser talk to a different Pod? - kubernetes

Since the browser (an external caller) talks to the backend service over a single connection, Kubernetes respects that connection, so the Service (kind: Service) keeps diverting traffic to the same Pod.
If the browser traffic always goes to the same Pod, it becomes difficult to maintain zero downtime during rolling updates.
Below is the spec we are using:
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: NodePort
  selector:
    app: my-nginx
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  - name: "443"
    port: 443
    targetPort: 443
During a rolling deployment (kubectl apply -f file-deployment.yml), how can zero downtime be maintained in such a scenario?
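One common way to keep a rolling update hitless is a RollingUpdate strategy combined with a readiness probe, so the Service only routes to Pods that are Ready and old Pods are removed only after their replacements come up. A minimal sketch, assuming the app: my-nginx label from the Service above and a hypothetical /healthz endpoint:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never remove a Pod before its replacement is Ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25      # hypothetical image/tag
        ports:
        - containerPort: 80
        readinessProbe:        # the Service only sends traffic to Ready Pods
          httpGet:
            path: /healthz     # hypothetical health endpoint
            port: 80
          periodSeconds: 5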

Related

How to rewrite subdomain to sub-route in DigitalOcean kubernetes?

I have deployed a nodejs/remix app to the Digitalocean Kubernetes service.
I want all my subdomains rewritten to sub-routes,
so xxx.example.com should be rewritten as example.com/xxx. Is it possible to do this with the Kubernetes load balancer?
My current config is
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-deployment
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 3000
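A Service of type LoadBalancer only forwards TCP traffic; it cannot inspect the Host header or rewrite paths, so this is normally handled one layer up by an Ingress. As a rough sketch (assuming the ingress-nginx controller and the my-service name above), a wildcard host rule can at least route every subdomain to the app; the actual subdomain-to-path rewrite would additionally need a controller-specific rewrite rule, which is not shown here:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: subdomain-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: "*.example.com"        # catches xxx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80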

Kubernetes manifest service for deployment

So the final manifest will be as follows:
apiVersion: v1
kind: Service
metadata:
  name: apiserver-service
  labels:
    app: apiserver
spec:
  selector:
    app: apiserver
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30005
  type: NodePort
This works when you need to define a specific targetPort.
A Service is an abstract way to expose your application running on a set of Pods. This manifest creates a Service; here targetPort: 8080 is the Pod's port. The manifest has two main parts: metadata, which gives the Service its name and a label, and spec (short for specification), which describes the Service itself. The spec contains the selector and the ports: port is the Service's own port, targetPort is the port to which the Service forwards requests, nodePort is how the outside world (from outside the cluster) can reach the Service, and type is the type of the Service. If type is NodePort, it basically means the Service is exposed on a port (the nodePort) that is reachable from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: apiserver-service
  labels:
    app: apiserver
spec:
  selector:
    app: apiserver
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30005
  type: NodePort
The first example in the Kubernetes Service documentation, Defining a Service, contains what you ask: a Service where port: and targetPort: are different.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
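For completeness, the Pod side has to match: the Service's targetPort: 9376 points at the containerPort the application actually listens on, and the Pod labels must satisfy the selector. A minimal sketch with a hypothetical image name:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp                   # must match the Service selector
    spec:
      containers:
      - name: my-app
        image: example/my-app:1.0    # hypothetical image
        ports:
        - containerPort: 9376        # what targetPort: 9376 refers to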

Load balancing to multiple containers of same app in a pod

I have a scenario where I need to have two instances of an app container run within the same Pod.
I have them set up to listen on different ports. Below is what the Deployment manifest looks like.
The Pod launches just fine with the expected number of containers.
I can even connect to both ports on the podIP from other pods.
kind: Deployment
metadata:
  labels:
    service: app1-service
  name: app1-dep
  namespace: exp
spec:
  template:
    spec:
      containers:
      - image: app1:1.20
        name: app1
        ports:
        - containerPort: 9000
          protocol: TCP
      - image: app1:1.20
        name: app1-s1
        ports:
        - containerPort: 9001
          protocol: TCP
I can even create two different Services, one for each port of the container, and that works great as well.
I can individually reach both Services and end up on the respective container within the Pod.
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: app1-s1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9001
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP
I want both instances of the container behind a single Service that round-robins between the two containers.
How can I achieve that? Is it possible within the realm of Services, or would I need to explore Ingress for something like this?
Kubernetes Services have three proxy modes: iptables (the default), userspace, and IPVS.
Userspace: the older mode; it distributes traffic round-robin, and that is the only algorithm it offers.
Iptables: the default; it selects one Pod at random and sticks with it.
IPVS: has multiple ways to distribute traffic, but first you have to install it on your nodes, for example on a CentOS node with this command:
yum install ipvsadm, and then make it available.
Like I said, a Kubernetes Service by default does no round-robin.
To activate IPVS you have to add parameters to kube-proxy:
--proxy-mode=ipvs
--ipvs-scheduler=rr (to select round robin)
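If kube-proxy is driven by a configuration file instead of raw flags (for example the kube-proxy ConfigMap that kubeadm creates), the equivalent settings look roughly like this sketch:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # switch from the default iptables mode
ipvs:
  scheduler: "rr"   # round robin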
One can expose multiple ports using a single Service. In a Kubernetes Service manifest, spec.ports[] is an array, so one can specify multiple ports in it. For example, see below:
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  - name: http-s1
    port: 81
    protocol: TCP
    targetPort: 9001
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP
Now the hostname is the same and only the port differs, and by default kube-proxy in userspace mode chooses a backend via a round-robin algorithm.
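For example, from another Pod in the exp namespace both containers are then reachable through the one Service name (the request paths here are purely illustrative):

curl http://app1/        # Service port 80 -> container listening on 9000
curl http://app1:81/     # Service port 81 -> container listening on 9001
# or fully qualified from any namespace:
curl http://app1.exp.svc.cluster.local:81/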
What I would do is separate the app into two different Deployments, with one container in each Deployment. I would set the same labels on both Deployments and attach them both to one single Service.
This way, you don't even have to run them on different ports.
Later on, if you want one of them to receive more traffic, just adjust the number of replicas of each Deployment.
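A sketch of that layout, reusing the image, namespace and service: app1-service label from the question (the extra variant label is just a hypothetical way to keep each Deployment managing only its own Pods, while the Service selects both):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-dep
  namespace: exp
spec:
  replicas: 1
  selector:
    matchLabels:
      service: app1-service
      variant: a
  template:
    metadata:
      labels:
        service: app1-service
        variant: a
    spec:
      containers:
      - name: app1
        image: app1:1.20
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-s1-dep
  namespace: exp
spec:
  replicas: 1
  selector:
    matchLabels:
      service: app1-service
      variant: b
  template:
    metadata:
      labels:
        service: app1-service
        variant: b
    spec:
      containers:
      - name: app1
        image: app1:1.20
        ports:
        - containerPort: 9000      # both copies can now listen on the same port
---
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  selector:
    service: app1-service          # matches Pods from both Deployments
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  type: ClusterIP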

Why don't my Kubernetes services publish on the port I specify?

I've been tinkering with Kubernetes on and off for the past few years and I am not sure if this has always been the case (maybe this behavior changed recently) but I cannot seem to get Services to publish on the ports I intend - they always publish on a high random port (>30000).
For instance, I'm going through this walkthrough on Ingress and I create the following Deployment and Service objects per the instructions:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: "gokul93/hello-world:latest"
        imagePullPolicy: Always
        name: hello-world-container
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 9376
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-world
  type: NodePort
According to this, I should have a Service that's listening on port 8080, but instead it's a high, random port:
~$ kubectl describe svc hello-world-svc
Name: hello-world-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=hello-world
Type: NodePort
IP: 10.109.24.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31669/TCP
Endpoints: 10.40.0.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I also verified that none of my nodes are listening on 8080, but they are listening on 31669.
This isn't super ideal - especially considering that the Ingress portion will need to know what servicePort is being used (the walkthrough references this at 8080).
By the way, when I create the Ingress controller, this behavior is the same - rather than listening on 80 and 443 like a good load balancer, it's listening on high random ports.
Am I missing something? Am I doing it all wrong?
Matt,
The reason a random port is being allocated is that you are creating a Service of type NodePort.
K8s documentation explains NodePort here
Based on your config, the service is exposed on port 9376 (and the backend port is 8080). So hello-world-svc should be available at 10.109.24.16:9376. Essentially this service can be reached in one of the following ways:
Service ip/port :- 10.109.24.16:9376
Node ip/port :- [your compute node ip]:31669 <-- this is created because your service is of type NodePort
You can also query the pod directly to test that the pod is in-fact exposing a service.
Pod ip/port: 10.40.0.4:8080
Since your eventual goal is to use an ingress controller for external reachability to your service, "type: ClusterIP" might suffice for your needs.
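A sketch of what that would look like, keeping the port numbers from the question:

apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  type: ClusterIP        # or omit type entirely; ClusterIP is the default
  selector:
    app: hello-world
  ports:
  - port: 9376           # the servicePort the Ingress backend references
    protocol: TCP
    targetPort: 8080     # the containerPort inside the Pod

The ingress controller is then the only thing exposed externally, and it reaches hello-world-svc on its cluster-internal port.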

Discovering Kubernetes Pod without specifying port number

I have a single Kubernetes Service called MyServices which holds four Deployments. Each Deployment is running as a single Pod and each Pod has its own port number.
As mentioned all the pods are running inside one kubernetes service.
I am able to call the services through the external IP Address of that kubernetes service and port number.
Example : 92.18.1.1:3011/MicroserviceA Or 92.18.1.1:3012/MicroserviceB
I am now trying to develop an orchestration layer that calls these services and gets a response from them. However, I am trying to figure out a way in which I do NOT need to specify every microservice port number; instead I can call them through their endpoint/ServiceName. Example: 192.168.1.1/MicroserviceA
How can I achieve the above?
From an architecture perspective, is it a good idea to expose all microservices through a single Kubernetes Service (like my current approach), or does each microservice need its own Service?
Below is the Kubernetes deployment file (I removed the manifests for microservices C and D since they are identical to A and B):
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: microservice
  ports:
  - name: microserviceA
    protocol: TCP
    port: 3011
    targetPort: 3011
  - name: microserviceB
    protocol: TCP
    port: 3012
    targetPort: 3012
  - name: microserviceC
    protocol: TCP
    port: 3013
    targetPort: 3013
  - name: microserviceD
    protocol: TCP
    port: 3014
    targetPort: 3014
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: microserviceAdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: microservice
    spec:
      containers:
      - image: dockerhub.com/myimage:v1
        name: microservice
        ports:
        - containerPort: 3011
      imagePullSecrets:
      - name: regcred
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: microserviceBdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: microservice
    spec:
      containers:
      - image: dockerhub.com/myimage:v1
        name: microservice
        ports:
        - containerPort: 3012
There is a way to discover all the ports of Kubernetes Services.
You could consider using kubectl get svc with a jsonpath query, as seen in "Source IP for Services with Type=NodePort":
NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services <yourService>)
As for "I am trying to figure out a way in which I do NOT need to specify every micro-service port number, instead I can call them through their endpoint/ServiceName":
Then you need to expose those services through one entry point, typically a reverse-proxy like NGiNX.
The idea is to expose said services using the default ports (80 or 443), and reverse-proxy them to the actual URL and port number.
Check "Service Discovery in a Microservices Architecture" for the general idea.
And "Service Discovery for NGINX Plus with etcd" for an implementation (using NGiNX plus, so could be non-free).
Or "Setting up Nginx Ingress on Kubernetes" for a more manual approach.