Container busy with request, then request should route to another container

I am very new to Kubernetes and Docker, but I have a task on Kubernetes and I am stuck on a use case: if a container is busy with requests, then incoming requests should be redirected to another container.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: twopoddeploy
  namespace: twopodns
spec:
  selector:
    matchLabels:
      app: twopod
  replicas: 1
  template:
    metadata:
      labels:
        app: twopod
    spec:
      containers:
      - name: secondcontainer
        image: "docker.io/tamilpugal/angmanualbuild:latest"
        env:
        - name: "PORT"
          value: "24244"
      - name: firstcontainer
        image: "docker.io/tamilpugal/angmanualbuild:latest"
        env:
        - name: "PORT"
          value: "24243"
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: twopodservice
spec:
  type: NodePort
  selector:
    app: twopod
  ports:
  - nodePort: 31024
    protocol: TCP
    port: 82
    targetPort: 24243
From deployment.yaml, I created a pod with two containers from the same image. The idea (our use case) is that if firstcontainer is unreachable or busy, secondcontainer should handle the incoming requests. So, only to check this use case, I deleted firstcontainer using docker container rm -f id_of_firstcontainer. Now the site is not reachable until Docker recreates firstcontainer. But I need Kubernetes to redirect requests to secondcontainer instead of waiting for firstcontainer.
Then I googled for a solution and found Ingress and liveness/readiness probes. But Ingress routes requests based on the path, not on container status, and liveness/readiness probes just recreate the container. Some answers suggest an nginx server, but no luck so far. That is why I am asking a new question.
So my question is: how do I configure the two containers to reduce the downtime?
What keywords should I google so I can work toward a solution myself?
Thanks,
Pugal.

A service can load-balance between multiple pods. You should delete the second copy of the container inside the deployment spec, but also change it to have replicas: 2. Now your deployment will launch two identical pods, but the service will match both of them, and requests will go to both.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: onepoddeploy
  namespace: twopodns
spec:
  selector: { ... }
  replicas: 2 # not 1
  template:
    metadata: { ... }
    spec:
      containers:
      - name: firstcontainer
        image: "docker.io/tamilpugal/angmanualbuild:latest"
        env:
        - name: "PORT"
          value: "24243"
      # no secondcontainer
This means if two pods aren't enough to handle your load, you can kubectl scale deployment -n twopodns onepoddeploy --replicas=3 to increase the replica count. If you can tell from CPU utilization or another metric when you're getting to "not enough", you can configure the horizontal pod autoscaler to adjust the replica count for you.
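As a minimal sketch of that autoscaler, targeting the deployment above (the 70% CPU target and the replica bounds are made-up values, and CPU-based scaling requires CPU resource requests to be set on the container):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: onepoddeploy-hpa # hypothetical name
  namespace: twopodns
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: onepoddeploy
  minReplicas: 2
  maxReplicas: 5 # assumed upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70 # assumed target utilization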
(You will usually want only one container per pod. There's no way to share traffic between containers in the way you're suggesting here, and it's helpful to be able to independently scale components. This is doubly true if you're looking at a stateful component like a database and a stateless component like an HTTP service. Typical uses for multiple containers are things like log forwarders and network proxies, that are somewhat secondary to the main operation of the pod, and can scale or be terminated along with the primary container.)
There are two important caveats to running multiple pod replicas behind a service. The service load balancer isn't especially clever, so if one of your replicas winds up working on intensive jobs and the other is more or less idle, they'll still each get about half the requests. Also, if your pods are configured for HTTP health checks (recommended), then when a pod is backed up to the point where it can't handle requests, it will also not be able to answer its health checks, and Kubernetes will kill it off.
You can help Kubernetes here by trying hard to answer all HTTP requests "promptly" (aiming for under 1000 ms always is probably a good target). This can mean returning a "not ready yet" response to a request that triggers a large amount of computation. This can also mean rearranging your main request handler so that an HTTP request thread isn't tied up waiting for some task to complete.
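For reference, a hedged sketch of wiring such an HTTP health check onto the container above (the /healthz path and the timing values are assumptions; your app must actually serve that endpoint):
containers:
- name: firstcontainer
  image: "docker.io/tamilpugal/angmanualbuild:latest"
  readinessProbe:
    httpGet:
      path: /healthz # assumed health endpoint
      port: 24243
    periodSeconds: 5
    timeoutSeconds: 1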


Redis master/slave replication on Kubernetes for ultra-low latency

A diagram would show this better than sentences, but to sum up:
I want to have a Redis master instance outside (or inside, this is not relevant here) my K8S cluster
I want to have a Redis slave instance per node replicating the master instance
I want the Redis slave pod to be unregistered from the master when a node is removed
I want a Redis slave pod to be added to the node and registered with the master when a node is added
I want all pods in one node to consume only the data of the local Redis slave (easy part I think)
Why do I want such an architecture?
I want to take advantage of Redis master/slave replication to avoid dealing with cache invalidation myself
I want to have ultra-low latency calls to Redis cache, so having one slave per node is the best I can get (calling on local host network)
Is it possible to automate such deployments, using Helm for instance? Are there documentation resources for building such an architecture with clean, dynamic master/slave binding/unbinding?
And most of all, is this architecture a good idea for what I want to do? Is there any alternative that could be as fast?
I remember we had a discussion on this topic previously here; no worries, adding more here.
Read more about the Redis Helm chart: https://github.com/bitnami/charts/tree/master/bitnami/redis#choose-between-redis-helm-chart-and-redis-cluster-helm-chart
You should also be asking the question of how your application will connect to the pod on the same node without using the Redis service.
For that, you can use environment variables and expose them to the application pod.
Something like:
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
It will give you the IP of the node on which the pod is running; you can then use that IP to connect to the DaemonSet (the Redis slave, if that is what you are running).
You can read more at: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
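For illustration, a minimal sketch of the Redis slave DaemonSet that the HOST_IP trick would connect to (the names, image tag, master address, and hostPort are all assumptions):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: redis-slave # hypothetical name
spec:
  selector:
    matchLabels:
      app: redis-slave
  template:
    metadata:
      labels:
        app: redis-slave
    spec:
      containers:
      - name: redis
        image: redis:6 # assumed image tag
        args: ["redis-server", "--replicaof", "redis-master.example.com", "6379"] # assumed master address
        ports:
        - containerPort: 6379
          hostPort: 6379 # so HOST_IP:6379 reaches the node-local slave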
Is it possible to automate such deployments, using Helm for instance?
Yes, you can write down your own Helm chart and deploy the generated YAML manifest.
And most of all, is this architecture a good idea for what I want to
do? Is there any alternative that could be as fast?
If you are asking whether it is a good idea: in my consideration, this could create a $$$ issue and higher cluster resource usage.
What if you are running 200 nodes? You would be running a Redis slave on each of them, which might consume resources on every node and add cost to your infra.
OR, if you are planning this for a specific deployment:
Your suggestion above is also good, but if you are planning to use Redis with only one specific deployment, you can also use the sidecar pattern and connect the multiple Redis instances together using configuration.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: web
spec:
  ports:
  - port: 80
    name: redis
    targetPort: 5000
  selector:
    app: web
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
          name: redis
          protocol: TCP
      - name: web-app
        image: web-app
        env:
        - name: "REDIS_HOST"
          value: "localhost"

Can I have a K8s pod per user/firm?

Is there a way we can have a K8s pod per user/per firm? I realise that per-user/per-firm grouping mixes business-level semantics with infrastructure, but say I had this need, for regulatory reasons etc., to keep things separate. Is there a way to create a pod on the fly when a user logs in for the first time, hold a reference to this pod, and route any further requests to the relevant pod, which will host a set of containers, each running an instance of one of the modules?
Is this even possible?
If possible, what are the identifiers that can be injected into the pod on the fly that I could use to identify that this is USER-A-POD vs USER_B_POD, or FIRM_A_POD vs FIRM_B_POD?
Effectively, I need to have a pod template that helps me create identical pods of 1 replica but the only way they differ is they are serving traffic related to one user/one firm only.
Generally, if you want to send traffic to a specific pod, say from a Kubernetes Service, you would use labels and selectors. For example, using the selector app: usera-app in the Service:
apiVersion: v1
kind: Service
metadata:
  name: usera-service
spec:
  selector:
    app: usera-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Then the Deployment for that user's pods would use the matching label app: usera-app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usera-deployment
spec:
  selector:
    matchLabels:
      app: usera-app
  replicas: 2
  template:
    metadata:
      labels:
        app: usera-app
    spec:
      containers:
      - name: myservice
        image: nginx
        ports:
        - containerPort: 80
More info is in the Kubernetes documentation on labels and selectors.
How you assign your pods and deployments is up to you and whatever configuration you may use. If you'd like to force-create some of the labels in deployments/pods, you can take a look at MutatingAdmissionWebhooks.
If you are looking at projects to facilitate all this you can take a look at:
Gatekeeper which is an implementation of the Open Policy Agent for Kubernetes admission. (Still in alpha as of this writing)
Other tools that can help you with attestation and admission mechanism (would have to be adapted for labels):
Kritis
Portieris
Yes, you can create multiple virtual clusters, one for each user, with namespaces.
https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
Namespaces are the way to divide a cluster between users.
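As a hedged sketch, a per-firm namespace might look like this (the name and label are hypothetical):
apiVersion: v1
kind: Namespace
metadata:
  name: firm-a # hypothetical per-firm namespace
  labels:
    firm: firm-a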

How to Route to specific pod through Kubernetes Service (like a Gateway API)

I am running Kubernetes on "Docker Desktop" in Windows.
I have a LoadBalancer Service for a deployment which has 3 replicas.
I would like to access a SPECIFIC pod through some means (such as via a URL path: <serviceIP>:8090/pod1).
Is there any way to achieve this use case?
deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service1
  labels:
    app: stream
spec:
  ports:
  - port: 8090
    targetPort: 8090
    name: port8090
  selector:
    app: stream
  # clusterIP: None
  type: LoadBalancer
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: stream-deployment
  labels:
    app: stream
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stream
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: stream
    spec:
      containers:
      - image: stream-server-mock:latest
        name: stream-server-mock
        imagePullPolicy: Never
        env:
        - name: STREAMER_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: STREAMER_ADDRESS
          value: stream-server-mock:8090
        ports:
        - containerPort: 8090
My end goal is to achieve horizontal auto-scaling of pods.
How the application is designed and works as of now (without Kubernetes):
There are 3 components: REST-Server, Stream-Server (3 instances, locally on different JVMs on different ports), and RabbitMQ.
1 - The client sends a request to the REST-Server for a stream URL.
2 - The REST-Server puts the request in the RabbitMQ queue.
3 - One of the Stream-Servers picks it up, populates its IP, and sends it back to the REST-Server through RabbitMQ.
4 - The client receives the IP and establishes a direct WS connection using that IP.
The problem I face is:
1 - When the client requests a stream IP, one of the pods (let's say POD1) picks it up and sends back its URL (which is the service URL, since it comes through the LoadBalancer Service).
2 - The next time the client tries to connect (WebSocket connection) using the service IP, it won't be the same pod that accepted the first request.
It should be the same pod that accepted the request, and it must be accessible by the client.
You can use StatefulSets if you are not required to use a Deployment.
With replicas: 3, you will have 3 pods named:
stream-deployment-0
stream-deployment-1
stream-deployment-2
You can access each pod as $(podname).$(service name).$(namespace).svc.cluster.local, provided the StatefulSet has a governing headless Service (its serviceName field).
For details, check the StatefulSet documentation.
You may also want to set up an Ingress to point to each pod from outside of the cluster.
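A minimal sketch under those assumptions, reusing the asker's labels and port (the headless Service name is made up):
apiVersion: v1
kind: Service
metadata:
  name: stream # hypothetical headless service
spec:
  clusterIP: None # headless: gives each pod a stable DNS record
  selector:
    app: stream
  ports:
  - port: 8090
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stream-deployment
spec:
  serviceName: stream # must reference the headless service above
  replicas: 3
  selector:
    matchLabels:
      app: stream
  template:
    metadata:
      labels:
        app: stream
    spec:
      containers:
      - name: stream-server-mock
        image: stream-server-mock:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8090
Each pod is then reachable inside the cluster at, for example, stream-deployment-0.stream.default.svc.cluster.local (assuming the default namespace).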
As mentioned by aerokite, you can use StatefulSets. However, if you don't want to modify your deployments, you can simply use Headless Services. As specified in the documentation:
For headless Services, a cluster IP is not allocated.
For headless Services that define selectors, the endpoints controller
creates Endpoints records in the API, and modifies the DNS
configuration to return records (addresses) that point directly to the
Pods backing the Service.
This means that whenever you query the DNS name for your Service (i.e. my-svc.my-namespace.svc.cluster-domain.example), what you get is a list of all the Pod IPs (unlike regular services where you get the cluster IP). You can then select your Pods using your own mechanisms.
Regarding your new question, if that is your only issue, you can use session affinity. If you set service.spec.sessionAffinity to ClientIP, then connections from a particular client will always go to the same Pod each time. You don't need other modifications like the headless Services mentioned above.
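A hedged sketch of that setting applied to the asker's Service (only the affinity fields are new; the timeout shown is the documented default):
apiVersion: v1
kind: Service
metadata:
  name: my-service1
spec:
  type: LoadBalancer
  selector:
    app: stream
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800 # default sticky window of 3 hours
  ports:
  - port: 8090
    targetPort: 8090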
IMO, the only way to achieve this will be:
Instead of using a deployment with 3 replicas, use 3 deployments with 1 replica each (or just create the pods directly): deployment1 -> pod1, deployment2 -> pod2, deployment3 -> pod3
Expose each deployment on a separate service: service1 -> deployment1, service2 -> deployment2, service3 -> deployment3
Create an ingress resource and route to each pod using the service for each deployment, as sketched below. For example:
ingress-url/service1
ingress-url/service2
ingress-url/service3
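A minimal sketch of that Ingress (the resource name and service port are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: per-pod-routing # hypothetical name
spec:
  rules:
  - http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 8090 # assumed service port
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 8090
      - path: /service3
        pathType: Prefix
        backend:
          service:
            name: service3
            port:
              number: 8090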

Why should I specify service before deployment in a single Kubernetes configuration file?

I'm trying to understand why the Kubernetes docs recommend specifying the service before the deployment in one configuration file:
The resources will be created in the order they appear in the file. Therefore, it’s best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment.
Does it mean spreading pods between Kubernetes cluster nodes?
I tested with the following configuration, where a deployment is located before a service, and the pods were distributed between nodes without any issues.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: incorrect-order
  namespace: test
spec:
  selector:
    matchLabels:
      app: incorrect-order
  replicas: 2
  template:
    metadata:
      labels:
        app: incorrect-order
    spec:
      containers:
      - name: incorrect-order
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: incorrect-order
  namespace: test
  labels:
    app: incorrect-order
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: incorrect-order
Another explanation is that some environment variables with the service URL will not be set for pods in this case. However, that also works fine when the configuration is inside one file, like the example above.
Could you please explain why it is better to specify the service before the deployment in the case of one configuration file? Or maybe it is an outdated recommendation.
If you use DNS for service discovery, the order of creation doesn't matter.
In the case of environment variables (the second way K8S offers service discovery), the order matters, because once those vars are passed to the starting pod, they cannot be modified later if the service definition changes.
So if your service is deployed before you start your pod, the service env vars are injected into the linked pod.
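For example, for a Service named incorrect-order on port 80 that already exists when the pod starts, the kubelet injects variables following the documented naming convention (service name uppercased, dashes become underscores), along these lines (the ClusterIP value is made up):
INCORRECT_ORDER_SERVICE_HOST=10.96.0.12 # example ClusterIP; the real value will differ
INCORRECT_ORDER_SERVICE_PORT=80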
If you create a Pod/Deployment resource with labels, this resource will be exposed through a service once the latter is created (with the proper selector to indicate what resource to expose).
You are correct in that it affects the spread among the worker nodes.
Deployments without a Service will simply be scheduled onto the nodes with the least CPU/memory allocation. For instance, a brand new and empty node will get all the new pods from a new deployment.
With a Deployment that also has a Service, the scheduler tries to spread the pods between nodes, disregarding the CPU/memory load (within limits), to help the Service survive better.
It puzzles me that a Deployment on its own doesn't cause an optimal spread, but it doesn't, not yet at least.
This is the answer from the official documentation:
The resources will be created in the order they appear in the file.
Therefore, it's best to specify the service first, since that will
ensure the scheduler can spread the pods associated with the service
as they are created by the controller(s), such as Deployment.
Kubernetes Documentation/Concepts/Cluster/Administration/Managing Resources

Is there a Kubernetes config param (service or rc or other) to delay before using a pod

We are running a workload against a cluster hosting 2 instances of a small (3-container) pod, accessing the pod using a service with a nodePort. If we stop a pod and the RC starts a new one, our constant (low-volume) workload has numerous failures (Rational Performance Tester, an HTTP test hitting the service on the master... but likely the same if it were hitting either minion; the master also has a minion). Likewise, if we just add a pod with kubectl scale, we also get errors. If we then take down this pod (the RC doesn't start a new one, because we had one more than needed due to the scale)... no errors.
It seems that the service starts sending work to the new pod once the kubelet has done its thing, even though the containers are not up. Thus, any time a pod is started, it starts receiving work a little too soon (after the kubelet did its work, but before all containers are ready). Is there a way to guarantee that the service will not route to this pod until all containers are up? Barring that, is there some way to say "wait n seconds" before sending to this pod? I may be wrong, but the behavior seems to suggest this scenario.
This is precisely what the readinessProbe option is for :)
It's documented in more detail in the pod lifecycle and container probes documentation, and is part of the container definition in a pod specification.
For example, you might use a pod specification like the one below to ensure that your nginx pod won't be marked as ready (and thus won't have traffic sent to it) until it responds to an HTTP request for /index.html:
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 10
          timeoutSeconds: 5