Is it possible to dynamically add a new pod to an existing deployment? - kubernetes

Just to be clear: I'm not asking about scaling up the number of replicas of a pod - I'm asking about adding a new pod which provides completely new functionality.
So I'm wondering: can I call the Kubernetes API to dynamically add a new pod to an existing deployment?

Deployments are meant to be a homogeneous set of replicas of the same pod template, each presumably providing the same functionality. Deployments keep the desired number of replicas running in the event of crashes and other failures, and facilitate rolling updates of the pods when you need to change configuration or the version of the container image, for example. If you want to run a pod that provides different functionality, do so via a different deployment.
Adding a different pod to an existing deployment is not a viable option. If you want to spin up pods in response to API requests to do some work, there are a handful of officially supported client libraries you can use in your API business logic: https://kubernetes.io/docs/reference/using-api/client-libraries/#officially-supported-kubernetes-client-libraries.
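For instance, here is a minimal sketch using the official Python client (pip install kubernetes) that creates a brand-new Deployment on demand; the deployment name, labels and image are hypothetical placeholders:

from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a pod
apps = client.AppsV1Api()

# A hypothetical single-replica Deployment providing the new functionality.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="new-feature"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "new-feature"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "new-feature"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="main", image="example/new-feature:1.0"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

Creating a separate Deployment (rather than mutating an existing one) keeps each set of replicas homogeneous, which is exactly what the Deployment abstraction expects.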

You can inject an additional (sidecar) container into a Pod's spec; I'm not sure whether that would meet your requirement.
You can refer to how Istio manually injects a sidecar proxy into an existing workload (see Istio's "Manual injection" documentation).
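As a rough sketch of what such injection amounts to (assuming the official Python client; the Deployment name and sidecar image are placeholders), you can patch the workload's pod template to add the extra container. Note that the Deployment will then re-create its pods to pick up the change:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Strategic merge patch merges containers by name, so a container with a
# new name is appended to the pod template rather than replacing the list.
sidecar_patch = {
    "spec": {"template": {"spec": {"containers": [{
        "name": "sidecar-proxy",        # hypothetical sidecar container
        "image": "example/proxy:1.0",   # hypothetical image
    }]}}}
}
apps.patch_namespaced_deployment("my-app", "default", sidecar_patch)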

Related

What happens when we scale a Kubernetes deployment and change one of the pod or container configurations?

When I scale an application by creating a Deployment (let's say I am running an nginx service on a 3-node cluster), nginx runs in containers across multiple pods.
If I change the nginx configuration in one of the pods, does it propagate to all the nodes and pods, since it is running in a cluster and scaled?
does it propagate to all the nodes and pods, since it is running in a cluster and scaled?
No. It propagates only when you change the Deployment's YAML; the Deployment then re-creates the pods one by one with the new configuration.
I would like to add a few more things to what was already said. First of all, you are not even supposed to make any changes to Pods that are managed, let's say, by a ReplicaSet, ReplicationController or Deployment. These are objects which provide an additional abstraction layer, and it is their responsibility to ensure that a given number of Pods of a certain kind are running in your Kubernetes cluster.
It doesn't matter how many nodes your cluster consists of, as the mentioned controllers span all nodes in the cluster.
Changes made to a single Pod will not only fail to propagate to other Pods, but may easily be lost if such a newly created Pod with changed configuration crashes.
Remember that one of the tasks of the Deployment is to make sure that a certain number of Pods of a given type (specified in the Pod template section of the Deployment) are always up and running. When your manually reconfigured Pod goes down, your Deployment (actually the ReplicaSet created by the Deployment) acts behind the scenes and recreates such a Pod. But how does it recreate it? Does it take into consideration the changes you introduced to the old Pod? Of course not; it will recreate it based on the template given in the Deployment.
If you want to make changes to your Pods one by one, Kubernetes allows you to do so via the so-called rolling update mechanism.
Here you can read about the old-fashioned approach using a ReplicationController, which is no longer used as it has been replaced by Deployments and ReplicaSets, but I think it's still worth reading just to grasp the concept.
Currently a Deployment is the way to go. You can read about updating a Deployment here. Note that the default update strategy is RollingUpdate, which ensures that changes are applied not to all Pods at once, but one by one.
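To make that concrete, here is a hedged sketch (official Python client; the Deployment name and image are placeholders) of the correct way to roll out a configuration change: patch the Deployment's pod template, never the individual pods, and let RollingUpdate replace the pods gradually:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Changing the pod template triggers a rolling update; the default
# RollingUpdate strategy replaces pods one by one, not all at once.
patch = {"spec": {"template": {"spec": {"containers": [{
    "name": "nginx",
    "image": "nginx:1.25",   # the new version/configuration to roll out
}]}}}}
apps.patch_namespaced_deployment("nginx-deployment", "default", patch)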

Can I change only one pod in Kubernetes?

I want to deploy only one pod in k8s.
For example, I deploy several pods in one pool with the same code, but I only want to change one pod to do some testing. Can it be done?
What you're describing in your question is actually closest to what we call a Canary Deployment.
In a nutshell, a Canary Deployment (also known as a Canary Release) is a technique that lets you reduce the risk of introducing a potentially broken new software version into production. It is achieved by rolling out the change to only a small subset of servers (in Kubernetes it may be just one pod) before deploying it to the entire infrastructure and making it available to everybody.
If, for example, you already have a working Deployment of, say, 3 replicas and you deploy one more pod using the new image version, only about 25% of the traffic (1 pod out of 4) will be routed to the new pod. Once you decide the test was successful, you can continue rolling the update out to the remaining pods.
Here you can find an article describing in detail how to perform this kind of deployment on Kubernetes.
It's actually a similar approach to the Blue-Green Deployment already mentioned by @Malathi and has a lot in common with it.
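A minimal sketch of such a canary with the official Python client: a second, 1-replica Deployment whose pods carry the same "app" label as the stable ones, so the existing Service sends a fraction of traffic (1 of 4 pods here) to the new version. The extra "track" label only distinguishes the canary; all names and images are hypothetical:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

labels = {"app": "myapp", "track": "canary"}
canary = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="myapp-canary"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                # New image version under test; the stable Deployment keeps the old one.
                client.V1Container(name="myapp", image="example/myapp:2.0"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=canary)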
Perhaps you meant Blue-Green Deployments.
The common release process involves adding new pods with the latest release and perhaps exposing a certain percentage of traffic to be routed to the new release's pods. If everything goes well, you can remove the old pods running the old release and replace them with new pods running the new release.
This article talks of blue-green deployments with Kubernetes.
It is also possible to use a service mesh like Istio with Kubernetes for advanced blue-green deployments, such as redirecting traffic to a new release based on header values or cookies.
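In its simplest form, the blue-green cut-over is a single Service selector change once the "green" Deployment is fully up; here is a hedged sketch with the official Python client (the Service name and the "color" label are hypothetical):

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Flip all traffic from the "blue" pods to the "green" pods at once.
core.patch_namespaced_service(
    "myapp", "default",
    {"spec": {"selector": {"app": "myapp", "color": "green"}}},
)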

How can I distribute loads to Kubernetes Pods?

I have work defined in a file/config with the following format,
config1,resource9
config3,resource21
config5,resource10
How can I spin up individual pods based on this configuration? If I add one more line to the configuration, Kubernetes needs to spin up one more pod and send that configuration line to the new pod.
How do I store the configuration in Kubernetes and spin up pods based on it?
Take a look at Kubernetes Operators. The pattern adds a Kubernetes management layer to an application. Basically you run a Kubernetes-native app (the operator) that connects to the Kubernetes API and takes care of the deployment management for you.
If you are familiar with helm, then a quick way to get started is with the helm example. This example will create a new Nginx deployment for each Custom Resource you create. The Custom Resource contains all the helm values nginx requires for a deployment.
As a first step you could customise the example so that all you need to do is manage the single Custom Resource to deploy or update the app.
If you want to take it further, you may run into some helm limitations pretty quickly; for advanced use cases you can use the Go operator-sdk directly.
There are a number of operator projects to browse on https://operatorhub.io/
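As a rough illustration of the reconciliation idea behind operators (not a full operator), here is a sketch using the official Python client. It assumes the work list lives in a ConfigMap and ensures one pod exists per line, passing the line to the pod in an environment variable; the ConfigMap name, data key, label and worker image are all hypothetical:

import hashlib
import time

from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster
core = client.CoreV1Api()
NAMESPACE = "default"

def desired_pods():
    cm = core.read_namespaced_config_map("work-items", NAMESPACE)
    for line in cm.data["work.csv"].splitlines():
        line = line.strip()
        if line:
            # Derive a stable pod name from the config line.
            yield "worker-" + hashlib.sha1(line.encode()).hexdigest()[:10], line

def reconcile():
    existing = {
        p.metadata.name
        for p in core.list_namespaced_pod(
            NAMESPACE, label_selector="app=config-worker"
        ).items
    }
    for name, line in desired_pods():
        if name in existing:
            continue  # pod for this config line already exists
        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name=name, labels={"app": "config-worker"}),
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="worker",
                    image="example/worker:1.0",
                    env=[client.V1EnvVar(name="WORK_ITEM", value=line)],
                )],
            ),
        )
        core.create_namespaced_pod(NAMESPACE, pod)

while True:
    reconcile()
    time.sleep(30)  # a real operator would watch the ConfigMap instead of polling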

Why don't Kubernetes deployments support services?

I'm new to K8s, so still trying to get my head around things. I've been looking at deployments and can appreciate how useful they will be. However, I don't understand why they don't support services (only replica sets and pods).
Why is this? Does this mean that services would typically be deployed outside of a deployment?
To answer your question: Kubernetes Deployments are used for managing stateless services running in the cluster, as opposed to StatefulSets, which are built for stateful application runtimes. With a Deployment you describe the update strategy and the desired state of the objects it has to create, through dedicated specification fields: the required number of Pod replicas, the Pod template describing the list of containers each Pod should run, and so on.
However, as @P Ekambaram already mentioned in his answer, Services represent the network-communication abstraction layer inside a Kubernetes cluster, and they declare a way to access Pods within the cluster via the corresponding Endpoints. Services are kept separate from the Deployment manifest because their mission is to dynamically provide specific network behavior for the selected Pods, via the appropriate Service type, without affecting or restarting those Pods whenever the network configuration changes.
Yes, services should be deployed as separate objects. Note that a Deployment is used to upgrade or roll back the image and works on top of a ReplicaSet.
Kubernetes Pods are mortal. They are born and when they die, they are not resurrected. ReplicaSets in particular create and destroy Pods dynamically (e.g. when scaling out or in). While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set?
Services come to the rescue.
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them. The set of Pods targeted by a Service is (usually) determined by a Label Selector.
Something I've just learnt that is somewhat related to my question: multiple K8s objects can be included in the same yaml file, separated by ---. Something like:
apiVersion: apps/v1
kind: Deployment
# other stuff here
---
apiVersion: v1
kind: Service
# other stuff here
I think the intent is to keep the objects decoupled and fine-grained.

Kubernetes changing rolling update logic

Currently a Kubernetes rolling update creates a new pod to replace a terminated pod and adds it to the service. During a rolling update there can therefore be two kinds of pods registered for a service (old ones and new ones). However, I need to enforce consistency. For example, when a rolling update request comes to Kubernetes, it should first create a new rc, but the pods created under that rc should not be added to the service. Once all replicas of that rc become available, all traffic coming to the service should be routed to that rc. Finally, the old rc is deleted. Can we currently do this using Kubernetes? If not, is there a way I can write an extension to Kubernetes to implement this functionality?
If the new pods have labels matching the service's label selector, they should be added to the service as soon as they come up.
If you want to experiment with different logic for a rolling update, you can write a client-side controller using the Kubernetes API client libraries, or create a server-side object by extending the API.
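For instance, a client-side controller could implement the all-or-nothing switch you describe roughly like this (a sketch with the official Python client; the Deployment/Service names and the "release" label are hypothetical): bring up the new Deployment with labels the Service does not yet select, wait until every replica is available, then flip the Service's selector:

import time

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()
NS = "default"

def wait_until_available(name):
    # Block until every desired replica of the Deployment reports available.
    while True:
        dep = apps.read_namespaced_deployment(name, NS)
        if (dep.status.available_replicas or 0) == dep.spec.replicas:
            return
        time.sleep(5)

wait_until_available("myapp-v2")   # new pods exist but receive no traffic yet
core.patch_namespaced_service(     # single cut-over: route all traffic to v2
    "myapp", NS, {"spec": {"selector": {"app": "myapp", "release": "v2"}}}
)
apps.delete_namespaced_deployment("myapp-v1", NS)  # retire the old release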