Kubernetes Operator vs Helm for a Pub-Sub model application

I have a Publisher/Subscriber (Pub-Sub model) application in C# and I want to host it on Kubernetes for high availability. Is it better to go with Helm, or should I use an Operator for my application?
What is best suited for Pub-Sub model applications?

If you have a (dockerized) application and you want to run it in Kubernetes, then it's enough to create a Kubernetes Deployment configuration.
So the simplest thing you can do is to create a file deployment.yaml with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: <your-docker-image>
And then deploy it in Kubernetes with the following command.
kubectl apply -f deployment.yaml
As for Helm and Operators, you generally use them for more complex deployments: to organize and template multiple Kubernetes configurations, to interact with your application, to perform backups, and to handle other operational tasks.

As already mentioned in the previous answer, a simple Deployment would be enough to launch your application in Kubernetes.
The idea of Helm is to have reusable YAML artifacts through templates. It lets you define Kubernetes YAML files with parameterized properties, with the values for those properties stored in a separate file. The most common use case for Helm is to produce customized YAML for the same application workload with different configurations, or to deploy it to different environments.
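As a rough sketch of the idea (the chart layout, value names, and image tag below are illustrative assumptions, not something from the question):
# templates/deployment.yaml -- the template refers to values by name
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-my-app
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-my-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-my-app
    spec:
      containers:
        - name: my-app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
# values.yaml -- the properties that get filled into the template above
replicaCount: 3
image:
  repository: <your-docker-image>
  tag: latest
You would then install it with something like helm install my-app ./my-chart, and override values per environment with --set or an extra -f values file.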
Kubernetes Operator on the other hand is an application-specific controller that extends the functionality of the Kubernetes API to create, configure, and manage instances of complex applications on behalf of a Kubernetes user. It builds upon the basic Kubernetes resource and controller concepts, but includes domain or application-specific knowledge to automate the entire life cycle of the software it manages.
So if your application has some special operational requirements, you may be more interested in creating a custom Operator.
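As a rough illustration of what an Operator builds on: it usually ships a CustomResourceDefinition like the one below (the group and kind here are made up for the example) plus a controller that watches those custom objects and creates the underlying Deployments, Services, etc. for you:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: pubsubclusters.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: PubSubCluster
    plural: pubsubclusters
    singular: pubsubcluster
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true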
To sum up, one could say that Helm is a sort of package manager for Kubernetes, whereas a Kubernetes Operator is a controller that manages the life cycle of a particular Kubernetes resource, application, or piece of software.
Here's a good article on how the two differ and what they have in common.

Related

Kubernetes - Reconfiguring a Service to point to a new Deployment (blue/green)

I'm following along with a video explaining blue/green Deployments in Kubernetes. They have a simple example with a Deployment named blue-nginx and another named green-nginx.
The blue Deployment is exposed via a Service named bgnginx. To transfer traffic from the blue deployment to the green deployment, the Service is deleted and the green deployment is exposed via a Service with the same name. This is done with the following one-liner:
kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx
Obviously, this works successfully. However, I'm wondering why they don't just use kubectl edit to change the labels in the Service instead of deleting and recreating it. If I edit bgnginx and set .metadata.labels.app & .spec.selector.app to green-nginx it achieves the same thing.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
Yes, you can use kubectl edit svc and change the labels and selector there.
It works; however, a declarative YAML file (or another option) is suggested because kubectl edit is an error-prone approach: you might face indentation issues.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
It's more about following best practices: you keep a declarative YAML file handy, under version control if you are managing one.
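If you do want a scriptable, non-interactive way to flip just the selector, something like the following sketch (using the names from the question) avoids the text editor, although the change is still not tracked in version control:
kubectl patch svc bgnginx -p '{"spec":{"selector":{"app":"green-nginx"}}}'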
The problem with kubectl edit is that it requires a human to operate a text editor. This is a little inefficient and things do occasionally go wrong.
I suspect the reason your writeup wants you to kubectl delete the Service first is that the kubectl expose command will fail if it already exists. But as @HarshManvar suggests in their answer, a better approach is to have an actual YAML file checked into source control:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app.kubernetes.io/name: myapp
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: blue
You should be able to kubectl apply -f service.yaml to deploy it into the cluster, or a tool can do that automatically.
The problem here is that you still have to edit the YAML file (or in principle you can do it with sed), and swapping the deployment would result in an extra commit. You can use a tool like Helm that supports an extra templating layer:
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: {{ .Values.color }}
In Helm I might set this up with three separate Helm releases: the "blue" and "green" copies of your application, plus a separate top-level release that just contained the Service.
helm install myapp-blue ./myapp
# do some isolated validation
helm upgrade myapp-router ./router --set color=blue
# do some more validation
helm uninstall myapp-green
You can do similar things with other templating tools like ytt, or overlay layers like Kustomize. The Service's selector doesn't have to match its own metadata, and you could create a Service that matches both copies of the application, perhaps for a canary pattern rather than a blue/green deployment.
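As a sketch of that last idea, a Service whose selector deliberately omits the example.com/deployment key matches both the blue and green Pods at once (the name and port here are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: myapp-all
spec:
  selector:
    app.kubernetes.io/name: myapp   # no color key, so Pods of both colors are selected
  ports:
    - port: 80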

How to edit/patch kubernetes deployment to add label using python

I am fairly new to Kubernetes. I have developed a web UI/API that automates model deployment using Azure Machine Learning Services to Azure Kubernetes Service (AKS). As a hardening measure, I am trying to set up managed identity for the deployed pods in AKS using this documentation. One of the steps is to edit the deployment to add the identity feature label at /spec/template/metadata/labels for the deployment (see the paragraph starting like Edit the deployment to add ... in this section).
I wish to automate this step using the Python Kubernetes client (https://github.com/kubernetes-client/python). Browsing the available API, I was wondering whether patch_namespaced_deployment would allow me to edit the deployment and add a label at /spec/template/metadata/labels. I was looking for some example code using the Python client for this; any help to achieve the above will be appreciated.
Have a look at this example:
https://github.com/kubernetes-client/python/blob/master/examples/deployment_crud.py#L62-L70
def update_deployment(api_instance, deployment):
    # Update container image
    deployment.spec.template.spec.containers[0].image = "nginx:1.16.0"
    # Update the deployment
    api_response = api_instance.patch_namespaced_deployment(
        name=DEPLOYMENT_NAME,
        namespace="default",
        body=deployment)
    print("Deployment updated. status='%s'" % str(api_response.status))
The labels are on the Deployment object, from the apps/v1 API:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        app: nginx
which means you need to update the following:
deployment.spec.template.metadata.labels.app = "nginx"
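Putting the two pieces together, here is a minimal sketch that adds a label at /spec/template/metadata/labels via patch_namespaced_deployment. The deployment name, namespace, and label key/value are assumptions; check the AKS managed identity documentation for the exact label it expects.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
apps_v1 = client.AppsV1Api()

# A partial object containing only the fields to merge into the existing Deployment.
patch = {
    "spec": {
        "template": {
            "metadata": {
                "labels": {"aadpodidbinding": "my-pod-identity"}  # assumed label key/value
            }
        }
    }
}

apps_v1.patch_namespaced_deployment(
    name="my-deployment",  # assumed deployment name
    namespace="default",   # assumed namespace
    body=patch,
)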

Kubernetes: having a different env for one of the replicas in a service

A use case where one of the services must be scaled to 10 pods. BUT one of the pods must have different env variables (it performs certain actions such as DB operations and trigger handling; I don't want 10 triggers handled instead of 1 DB change). For example, 9 pods have the env variable CHANGE=0 but one pod has CHANGE=1.
Also, I am resolving by service name, so changing the service name is not what I am looking for.
It sounds like you're trying to solve an issue with your app using Kubernetes.
The reason I say that is because the whole concept of "replicas" is to have identical instances. What you're actually saying is: "I have 10 identical pods but I want 1 of them to be different", and that's not how Kubernetes works.
So, you need to re-think the reason for which you need this environment variable to be different, what do you use it for. If you want to share the details maybe I can help you find an idiomatic way of doing this using Kubernetes.
The easiest way to do what you describe is to have two separate Services. One attaches to any "web" pod:
apiVersion: v1
kind: Service
metadata:
  name: myapp-web
spec:
  selector:
    app: myapp
    tier: web
The second attaches to only the master pod(s):
apiVersion: v1
kind: Service
metadata:
  name: myapp-master
spec:
  selector:
    app: myapp
    tier: web
    role: master
Then have two separate Deployments. One has the single master pod, with one replica; the other has the nine web pods. Your administrative requests go to myapp-master but general requests go to myapp-web.
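A sketch of what the master Deployment might look like; the labels, the CHANGE variable, and the replica split follow the question, everything else is an assumption. The web Deployment would look the same but with replicas: 9, no role: master label, and CHANGE=0.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      tier: web
      role: master
  template:
    metadata:
      labels:
        app: myapp
        tier: web
        role: master
    spec:
      containers:
        - name: myapp
          image: <your-image>
          env:
            - name: CHANGE
              value: "1"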
As @omricoco suggests, you can come up with a couple of ways to restructure this. A job queue like RabbitMQ will have the property that each job is done once (with retries if a job fails), so one setup is to run a queue like this, allow any server to accept administrative requests, but have their behavior just be to write a job into the queue. Then you can run a worker process (or several) to service these.

Kubernetes: set deployment number of replicas based on namespace

I've split our Kubernetes cluster into two different namespaces, staging and production, aiming to have production deployments with two replicas (for rolling deployments; autoscaling comes later) and staging with a single replica.
Other than having one deployment configuration per namespace, I was wondering whether or not we could set the default number of replicas per deployment, per namespace?
When creating the deployment config, if you don't specify the number of replicas, it will default to one. Is there a way of defaulting it to two on the production namespace?
If not, is there a recommended approach for this which will prevent the need to have a deployment config per namespace?
One way of doing this would be to scale the deployment up to two replicas, manually, in the production namespace, once it has been created for the first time, but I would prefer to skip any manual steps.
It is not possible to set a different number of replicas per namespace in one deployment.
But you can have two different deployment files, one per namespace, i.e. <your-app>-production.yaml and <your-app>-staging.yaml.
In these files you can set any custom values and settings that you need.
For example:
<your-app>-production.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-app>
  namespace: production # Here is the namespace
  ...
spec:
  replicas: 2 # Here is the count of replicas of your application
  template:
    spec:
      containers:
        - name: <your-app-pod-name>
          image: <your-app-image>
          ...
<your-app>-staging.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-app>
  namespace: staging # Here is the namespace
  ...
spec:
  replicas: 1 # Here is the count of replicas of your application
  template:
    spec:
      containers:
        - name: <your-app-pod-name>
          image: <your-app-image>
          ...
I don't think you can avoid having two deployments, but you can get rid of the duplicated code by using Helm templates (https://docs.helm.sh/chart_template_guide). Then you can define a single deployment YAML and substitute different values when you deploy, for example with an if statement or a per-environment values file, as in the sketch below.
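A sketch of the values-file variant of that (chart path, file names, and keys are assumptions): the deployment template contains replicas: {{ .Values.replicaCount }}, and each namespace gets its own values file passed at install time.
# values-staging.yaml
replicaCount: 1
# values-production.yaml
replicaCount: 2
helm install <your-app> ./<your-chart> --namespace staging -f values-staging.yaml
helm install <your-app> ./<your-chart> --namespace production -f values-production.yaml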
When creating the deployment config, if you don't specify the number of replicas, it will default to one. Is there a way of defaulting it to two on the production namespace?
Actually, there are two ways to do it, but both of them involve coding.
Admission Controllers:
This is the recommended way of assigning default values to fields.
When an object is created in Kubernetes, it passes through a chain of admission controllers, and one of them is the MutatingWebhook.
MutatingWebhook has been beta since v1.9 and is GA (admissionregistration.k8s.io/v1) since v1.16. This admission controller modifies (mutates) the object before it is actually created (or modified/deleted), for example assigning default values to some fields and similar tasks. You can set the minimum replica count here.
You have to implement an admission server to receive requests from Kubernetes and return the modified object in the response.
Here is a sample admission server implemented by OpenShift: kubernetes-namespace-reservation.
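For orientation, registering such a webhook with the cluster involves a MutatingWebhookConfiguration roughly like the one below; the names, Service, path, and CA bundle are placeholders, and the actual mutation logic (setting replicas to 2) lives in your admission server:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: default-replicas
webhooks:
  - name: default-replicas.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore
    clientConfig:
      service:
        name: default-replicas-webhook   # your admission server's Service
        namespace: webhooks
        path: /mutate
      caBundle: <base64-encoded-CA-certificate>
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["deployments"]
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: production   # recent Kubernetes versions add this label to namespaces automatically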
Deployment Controller:
This is comparatively easier, but it is kind of hacking the deployment procedure.
You can write a Deployment controller that watches for Deployments and, whenever one is created, performs some task. Here, you can update the Deployment with whatever minimum values you wish.
You can see the official Sample Pod Controller.
If both of these seem like a lot of work, it is better to just set the field explicitly each time for each deployment.

How to configure a Kubernetes Multi-Pod Deployment

I would like to deploy an application cluster by managing my deployment via k8s Deployment object. The documentation has me extremely confused. My basic layout has the following components that scale independently:
API server
UI server
Redis cache
Timer/Scheduled task server
Technically, all 4 above belong in separate pods that are scaled independently.
My questions are:
Do I need to create pod.yml files and then somehow reference them in deployment.yml file or can a deployment file also embed pod definitions?
K8s documentation seems to imply that the spec portion of a Deployment is equivalent to defining one pod. Is that correct? What if I want to declaratively describe multi-pod deployments? Do I need multiple deployment.yml files?
Pagid's answer has most of the basics. You should create 4 Deployments for your scenario. Each Deployment will create a ReplicaSet that schedules and supervises the collection of PODs for the Deployment.
Each Deployment will most likely also require a Service in front of it for access. I usually create a single yaml file that has a Deployment and the corresponding Service in it. Here is an example for an nginx.yaml that I use:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      name: nginx
      targetPort: 80
      nodePort: 32756
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxdeployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginxcontainer
          image: nginx:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
Here is some additional information for clarification:
A POD is not a scalable unit. A Deployment that schedules PODs is.
A Deployment is meant to represent a single group of PODs fulfilling a single purpose together.
You can have many Deployments work together in the virtual network of the cluster.
For accessing a Deployment that may consist of many PODs running on different nodes you have to create a Service.
Deployments are meant to contain stateless services. If you need to store state, you need to create a StatefulSet instead (e.g. for a database service).
If you check the Kubernetes API reference for the Deployment, you'll find that the spec->template field is of type PodTemplateSpec; together with the related comment (Template describes the pods that will be created.) it answers your questions. A longer description can of course be found in the Deployment user guide.
To answer your questions...
1) The Pods are managed by the Deployment and defining them separately doesn't make sense as they are created on demand by the Deployment. Keep in mind that there might be more replicas of the same pod type.
2) For each of the applications in your list, you'd have to define one Deployment - which also makes sense when it comes to different replica counts and application rollouts.
3) You haven't asked that, but it's related: along with separate Deployments, each of your applications will also need a dedicated Service so the others can access it.
Additional information:
API server: use a Deployment
UI server: use a Deployment
Redis cache: use a StatefulSet (see the sketch below)
Timer/Scheduled task server: maybe use a StatefulSet (if your service keeps some state)
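For the Redis cache, a minimal StatefulSet sketch (image, storage size, and names are assumptions; a matching headless Service called redis would also be needed):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis   # headless Service that gives the Pods stable network identities
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi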