Helm install in specific order for "deployment" [duplicate] - kubernetes

This question already has answers here:
Helm install in certain order
(2 answers)
Closed 3 years ago.
I am trying to create a Helm chart (x) with 5 deployments within chart (x) in a specific order:
Deployment 1 ( zk)
Deployment 2 (security)
Deployment 3 (localmaster)
Deployment 4 (nginx)
Deployment 5 (semoss)
Helm/Tiller version: "v2.12.3"
Kubectl version: Major:"1", Minor:"17"
Minikube version: v1.6.2
What I currently have:
RESOURCES:
==> v1/Deployment
NAME
Localmaster
Nginx
Security
Semoss
Zk
I can easily deploy chart (x) but once I run helm ls, my (x) chart is in a random order as you can see above. I only have one chart name (x) and within (x) I have:
Chart.yaml charts templates values.yaml
Templates and charts are directories and the rest are files.
Is there a specific way or trick to have my chart (x) deploy in the order I want? I’ve done some research and I am not sure whether helm spray is the right call, since I am trying to deploy one chart with several deployments rather than an umbrella chart with many sub-charts.
Let me know if you need more info.

Helm is a package manager: it lets you define applications as a set of components on your cluster, and provides mechanisms to manage those sets from start to end.
Helm itself does not create pods; it sends requests to the Kubernetes API, and Kubernetes then creates everything.
I have one idea for how this can be achieved using Helm.
Helm's order of deploying kinds is hardcoded here. However, if you want to control the deployment order of resources of the same kind, you can do it with annotations.
You could set a pre-install hook annotation together with a hook-weight, like in this example (lower hook-weight values have higher priority). A similar case can be found on Github.
It would look like the example below:
apiVersion: apps/v1  # extensions/v1beta1 Deployments are no longer served on Kubernetes 1.16+
kind: Deployment
metadata:
  annotations:
    helm.sh/hook: pre-install
    helm.sh/hook-weight: "10"
  labels:
    app.kubernetes.io/instance: test
...
You can check which deployment was created first using kubectl get events. However, creation of pods is still scheduled by Kubernetes.
To get exactly what you need, you can use initContainers with a hardcoded "sleep" command: the first deployment sleeps for 1s, the second for 5s, the third for 10s, and so on, depending on how long each deployment needs to create all of its pods.
You can check this article, but keep in mind that spec.containers and spec.initContainers are two different things.
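A minimal sketch of that workaround, using the question's zk and security deployments as the example (the image, labels, and sleep duration are assumptions):
# Sketch only: the "security" Deployment waits in an init container so the
# "zk" Deployment has time to come up first.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: security
spec:
  replicas: 1
  selector:
    matchLabels:
      app: security
  template:
    metadata:
      labels:
        app: security
    spec:
      initContainers:
        - name: wait-for-zk
          image: busybox                      # assumed image; anything with a shell works
          command: ["sh", "-c", "sleep 5"]    # tune the delay per deployment
      containers:
        - name: security
          image: security:latest              # placeholder image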

Related

Kubernetes - Reconfiguring a Service to point to a new Deployment (blue/green)

I'm following along with a video explaining blue/green Deployments in Kubernetes. They have a simple example with a Deployment named blue-nginx and another named green-nginx.
The blue Deployment is exposed via a Service named bgnginx. To transfer traffic from the blue deployment to the green deployment, the Service is deleted and the green deployment is exposed via a Service with the same name. This is done with the following one-liner:
kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx
Obviously, this works successfully. However, I'm wondering why they don't just use kubectl edit to change the labels in the Service instead of deleting and recreating it. If I edit bgnginx and set .metadata.labels.app & .spec.selector.app to green-nginx it achieves the same thing.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
Yes, you can follow the kubectl edit svc approach and edit the labels and selector there.
It works; however, a declarative YAML file (or another option) is suggested because kubectl edit is an error-prone approach: you might face indentation issues.
Is there a benefit to deleting and recreating the Service, or am I safe just editing it?
It's more about following best practices: with a declarative YAML file you have the definition handy and under version control, if you are managing one.
The problem with kubectl edit is that it requires a human to operate a text editor. This is a little inefficient and things do occasionally go wrong.
I suspect the reason your writeup wants you to kubectl delete the Service first is that the kubectl expose command will fail if the Service already exists. But as @HarshManvar suggests in their answer, a better approach is to have an actual YAML file checked into source control:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app.kubernetes.io/name: myapp
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: blue
You should be able to kubectl apply -f service.yaml to deploy it into the cluster, or a tool can do that automatically.
The problem here is that you still have to edit the YAML file (or, in principle, do it with sed), and swapping the deployment would result in an extra commit. You can use a tool like Helm that supports an extra templating layer:
spec:
  selector:
    app.kubernetes.io/name: myapp
    example.com/deployment: {{ .Values.color }}
In Helm I might set this up with three separate Helm releases: the "blue" and "green" copies of your application, plus a separate top-level release that just contained the Service.
helm install myapp-blue ./myapp
# do some isolated validation
helm upgrade myapp-router ./router --set color=blue
# do some more validation
helm uninstall myapp-green
You can do similar things with other templating tools like ytt or overlay layers like Kustomize. The Service's selectors: don't have to match its own metadata, and you could create a Service that matched both copies of the application, maybe for a canary pattern rather than a blue/green deployment.
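For illustration, a sketch of such a wider Service (the name and port are assumptions): by selecting only on the application name and omitting the example.com/deployment label, it matches the blue and green Pods alike.
apiVersion: v1
kind: Service
metadata:
  name: myapp-all
spec:
  selector:
    app.kubernetes.io/name: myapp   # no color label, so both copies match
  ports:
    - port: 80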

Helm --force option

I read in a book written by Helm creators the following fact about the --force option :
Sometimes, though, Helm users want to make sure that the pods are restarted. That’s
where the --force flag comes in. Instead of modifying the Deployment (or similar
object), it will delete and re-create it. This forces Kubernetes to delete the old pods
and create new ones.
What I understand from that is, if I install a chart and then I change the number of replicas (=number of pods) then I upgrade the chart, it should recreate all the pods. This is not what happens in my case and I wanted to understand what I am missing here.
Let's take a hypothetical minimal Deployment (many required details omitted):
spec:
  replicas: 3
  template:
    spec:
      containers:
        - image: abc:123
and you change this to only increase the replica count
spec:
  replicas: 5 # <-- this is the only change
  template:
    spec:
      containers:
        - image: abc:123
The Kubernetes Deployment controller looks at this change and says "I already have 3 Pods running abc:123; if I leave those alone, and start 2 more, then I will have 5, and the system will look like what the Deployment spec requests". So absent any change to the embedded Pod spec, the existing Pods will be left alone and the cluster will just scale up.
deployment-12345-aaaaa      deployment-12345-aaaaa
deployment-12345-bbbbb      deployment-12345-bbbbb
deployment-12345-ccccc ---> deployment-12345-ccccc
                            deployment-12345-ddddd
                            deployment-12345-eeeee
(replicas: 3)               (replicas: 5)
Usually this is fine, since you're running the same image version and the same code. If you do need to forcibly restart things, I'd suggest using kubectl rollout restart deployment/its-name rather than trying to convince Helm to do it.
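For example (the Deployment name is a placeholder):
# Restart the Pods managed by a Deployment without changing its spec
kubectl rollout restart deployment/its-name
# Wait until the replacement Pods are ready
kubectl rollout status deployment/its-name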

Deploying multiple version of a single application with Helm in the same namespace

I have a situation where I have an application for which I would like to run several sets of instances configured differently. From reading online, I gather people usually run several versions of the same application in their clusters.
Let me describe the use case at a high level. The application is a component that takes as configuration a dataset and a set of instructions stating how to process the dataset. The dataset is actually a datasource.
So in the same namespace, we would like, for instance, to process 2 datasets.
So it is like having two deployments for the same application. Each dataset has different requirements, hence we should be able to have deployment 1 scale to 10 instances and deployment 2 scale to 5 instances.
The thing is, it is the same application, and so far it is the same Helm chart and deployment definition.
The question is: what are the different options that exist to handle this at this time?
Examples, pointers, and articles are welcome.
So far I found the following article as the most promising:
https://itnext.io/support-multiple-versions-of-a-service-in-kubernetes-using-helm-ce26adcb516d
Another thing I thought about is duplicating the deployment chart into 2 sub-charts whose folder names differ.
Helm supports this pretty straightforwardly.
In Helm terminology, you would write a chart that describes how to install one copy of your application. This creates Kubernetes Deployments and other manifests; but it has templating that allows parts of the application to be filled in at deploy time. One copy of the installation is a release, but you can have multiple releases, in the same or different Kubernetes namespaces.
For example, say you have a YAML template for a Kubernetes deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-processor
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      containers:
        - env:
            - name: DATASET_NAME
              value: {{ .Values.dataset }}
          # and the other things that usually go into a container spec
When you go to deploy this, you can create a values file:
# a.yaml
replicas: 10
dataset: dataset-1
And you can deploy it:
# helm install <release name> <chart location> -f <additional values file>
helm install one . -f a.yaml
If you use kubectl get deployment, you will see one-processor, and if you look at it in detail, you will see it has 10 replicas and its environment variable is set to dataset-1.
You can create a second deployment with different settings in the same namespace:
# b.yaml
replicas: 5
dataset: dataset-2
helm install two . -f b.yaml
Or in a different namespace:
helm install three . -n other-namespace -f c.yaml
It's theoretically possible to have a chart that only installs other subcharts (an umbrella chart), but there are some practical issues with it, most notably that Helm will want to install only one copy of a given chart no matter where it appears in the chart hierarchy. There are other higher-level tools like Helmsman and Helmfile that would allow you to basically describe these multiple helm install commands in a single file.
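As a rough sketch of the Helmfile option (the file layout and chart path are assumptions), a single helmfile.yaml could describe both releases of the same chart:
# helmfile.yaml
releases:
  - name: one
    chart: ./myapp      # the chart shown above
    values:
      - a.yaml
  - name: two
    chart: ./myapp
    values:
      - b.yaml
Running helmfile apply (or helmfile sync) would then install or upgrade both releases in one step.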
You can "cascade" the values YAML files to achieve what you want. For example, you could define common.yaml to be all the common settings for your application. Then, each separate instance would be a second YAML file.
Here is an example. Let's say that the file common.yaml looks like this:
namespace: myapp-dev
pod-count: 1
use_ssl: true
image-name: debian:buster-slim
... more ...
Let's say you want two Deployments, one that scales to 5 replicas and one that scales to 10. You would create two more files:
# local5.yaml
pod-count: 5
and
# local10.yaml
pod-count: 10
Note that you do not have to repeat the settings in common.yaml. To deploy the five-replica version you do something like this:
$ helm install -f common.yaml -f local5.yaml five .
To deploy the 10-replica version:
$ helm install -f common.yaml -f local10.yaml ten .
The YAML files cascade with the later file overriding the earlier.
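For completeness, a sketch of how the chart template might consume these values (the template path is an assumption; note that keys containing a dash, such as pod-count, have to be read with the index function rather than dot notation):
# templates/deployment.yaml (fragment)
spec:
  replicas: {{ index .Values "pod-count" }}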

Helm chart deployment ordering

I created a new chart with 2 PodPresets and 2 Deployments. When I run helm install, the Deployment (Pod) objects are created first and then the PodPresets, hence the values from my PodPresets are not applied to the Pods. But when I manually create the PodPresets first and then the Deployments, the presets are applied properly. Is there a way I can specify in Helm which object should be created first?
Posting this as a Community Wiki for better visibility, as the answer was provided in the comments below another answer made by @Rastko.
PodPresets
A Pod Preset is an API resource for injecting additional runtime
requirements into a Pod at creation time. Using a Pod Preset allows
pod template authors to not have to explicitly provide all information
for every pod. This way, authors of pod templates consuming a specific
service do not need to know all the details about that service.
For more information, please check official docs.
Order of deploying objects in Helm
The order of deploying is hardcoded in Helm. The list can be found here.
In addition, if a resource is not in the list, it will be executed last.
Answer to the question from the comments:
To achieve an order different from the default one, you can create two Helm charts, where the one with the Deployments is executed afterwards and a pre-install hook makes sure the presets are already there.
A pre-install hook executes after templates are rendered, but before any resources are created in Kubernetes.
This workaround was mentioned in a Github thread. Example for a Service:
apiVersion: v1
kind: Service
metadata:
  name: foo
  annotations:
    "helm.sh/hook": "pre-install"
Additionally, you can define a weight for a hook, which helps build a deterministic execution order.
annotations:
  "helm.sh/hook-weight": "5"
For more details regarding this annotation, please check this Stack Overflow question.
Since you are using Helm charts and have full control of this part, why not make optional parts in your helm charts that you can activate with an external value?
This would be a lot more "Helm native" way:
{{- if eq .Values.prodSecret "enabled" }}
- name: prod_db_password
  valueFrom:
    secretKeyRef:
      name: prod-db-password   # Secret names may not contain underscores
      key: password
{{- end }}
Then you just need to add --set prodSecret=enabled when executing your Helm chart.
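For example, with a hypothetical release name:
helm install my-release . --set prodSecret=enabled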

Kubernetes set deploment number of replicas based on namespace

I've split our Kubernetes cluster into two different namespaces, staging and production, aiming to have production deployments run with two replicas (for rolling deployments; autoscaling comes later) and staging with a single replica.
Other than having one deployment configuration per namespace, I was wondering whether or not we could set the default number of replicas per deployment, per namespace?
When creating the deployment config, if you don't specify the number of replicas, it will default to one. Is there a way of defaulting it to two on the production namespace?
If not, is there a recommended approach for this which will prevent the need to have a deployment config per namespace?
One way of doing this would be to scale the deployment up to two replicas, manually, in the production namespace, once it has been created for the first time, but I would prefer to skip any manual steps.
It is not possible to set a different number of replicas per namespace in one Deployment.
But you can have 2 different deployment files, one per namespace, i.e. <your-app>-production.yaml and <your-app>-staging.yaml.
In these descriptions you can set any custom values and settings that you need.
For example:
<your-app>-production.yaml:
apiVersion: apps/v1  # Deployments live in the apps/v1 API group
kind: Deployment
metadata:
  name: <your-app>
  namespace: production # Here is the namespace
  ...
spec:
  replicas: 2 # Here is the count of replicas of your application
  template:
    spec:
      containers:
        - name: <your-app-pod-name>
          image: <your-app-image>
          ...
<your-app>-staging.yaml:
apiVersion: apps/v1  # Deployments live in the apps/v1 API group
kind: Deployment
metadata:
  name: <your-app>
  namespace: staging # Here is the namespace
  ...
spec:
  replicas: 1 # Here is the count of replicas of your application
  template:
    spec:
      containers:
        - name: <your-app-pod-name>
          image: <your-app-image>
          ...
I don't think you can avoid having two deployments, but you can get rid of the duplicated code by using Helm templates (https://docs.helm.sh/chart_template_guide). Then you can define a single deployment YAML and substitute different values when you deploy, using an if statement.
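A minimal sketch of that idea, assuming a hypothetical environment value passed at install time:
# templates/deployment.yaml (fragment)
spec:
  replicas: {{ if eq .Values.environment "production" }}2{{ else }}1{{ end }}
# installed once per namespace, e.g.:
#   helm install my-app . -n production --set environment=production
#   helm install my-app . -n staging --set environment=staging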
When creating the deployment config, if you don't specify the number of replicas, it will default to one. Is there a way of defaulting it to two on the production namespace?
Actually, there are two ways to do it, but both of them involve coding.
Admission Controllers:
This is the recommended way of assigning default values to fields.
When an object is created in Kubernetes, it passes through a chain of admission controllers, one of which is the MutatingAdmissionWebhook.
The MutatingAdmissionWebhook has been in beta since v1.9. This admission controller modifies (mutates) the object before it is actually created (or modified/deleted), for example by assigning default values to some fields and similar tasks. You can set the minimum replica count here.
You have to implement an admission server that receives requests from Kubernetes and returns the modified object in its response.
Here is a sample admission server implemented by Openshift: kubernetes-namespace-reservation.
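As a very rough sketch of the registration side (the names, namespace selector, and service reference are all assumptions; the actual mutation logic lives in the webhook server you would have to write):
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: default-replicas
webhooks:
  - name: default-replicas.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["deployments"]
    namespaceSelector:
      matchLabels:
        environment: production        # only mutate Deployments created in labelled namespaces
    clientConfig:
      service:
        name: replicas-defaulter       # your admission server
        namespace: webhooks
        path: /mutate
      # caBundle: <base64-encoded CA certificate for the server's TLS cert>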
Deployment Controller:
This is comparatively easier, but it is more of a hack on the deployment procedure.
You can write a Deployment controller that watches for Deployments and, whenever one is created, updates it with the minimum values you want.
You can see the official Sample Pod Controller.
If both of these seem like a lot of work, it is better to simply set the fields carefully for each deployment.