Helm Chart/StatefulSet: dynamically assign pods to specific nodes - kubernetes

So this is a little bit related to this other post, but I have a concrete example that I think justifies this use case.
I am installing a redis-cluster using the bitnami/redis-cluster Helm chart. It runs 6 instances, 3 masters + 3 slaves, deployed over 3 nodes. My goal is to spread these instances across the 3 nodes so that master 1 and slave 1 (for example) never end up on the same node; that way, if one node fails, its replica is still available on one of the other nodes.
What I am actually doing today is setting a custom scheduler for these pods and, at the end of the deployment, running a script that manually places each instance on the node I want.
Is there any way to do this simply using nodeSelector, affinity, taints... directly from the values file? I would like to stay on this chart and not create a YAML file for each Redis instance.
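For reference, a values-file-only approach might look like the sketch below. The podAntiAffinityPreset and affinity keys are assumptions about what the bitnami/redis-cluster chart exposes (this depends on the chart version), so treat it as a starting point rather than a confirmed answer:

# values.yaml (sketch; key names depend on the chart version)
podAntiAffinityPreset: soft     # "soft" = preferred anti-affinity between redis-cluster pods

# or spell the affinity out explicitly:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: redis-cluster
          topologyKey: kubernetes.io/hostname

Note that this only spreads the 6 pods evenly across the nodes; it cannot guarantee that a given master and its own replica land on different nodes, because Redis Cluster assigns the master/slave roles itself.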

Related

Assign Gitlab Runner daemon's pod and the jobs's pods to two separate node groups in Kubernetes when using Kubernetes executor

We're using the Gitlab Runner with the Kubernetes executor and we were thinking about something that I believe is currently not possible. We want to assign the Gitlab Runner daemon's pod to a specific node group's workers (instance type X) and the jobs' pods to a different node group's workers (type Y), as the jobs usually require more compute resources than the Gitlab Runner's pod.
The goal is to save costs: since the node running the Gitlab Runner main daemon is always up, we want it to run on a cheap instance, while the jobs, which need more compute capacity, can run on instances of a different type that are started by the Cluster Autoscaler and destroyed when no jobs are present.
I investigated this, and the available way to assign pods to specific nodes is to use a node selector or node affinity, but the rules included in these two configuration sections apply to all of the Gitlab Runner's pods, the main pod as well as the jobs' pods. The proposal is to make it possible to apply two separate configurations, one for the Gitlab Runner's pod and one for the jobs' pods.
The currently existing config consists of the node selector and node/pod affinity, but as mentioned these apply globally to all the pods and not to specific ones as we want in our case.
Gitlab Runner Kubernetes Executor Config: https://docs.gitlab.com/runner/executors/kubernetes.html
This problem is solved! After further investigation I found that the Gitlab Runner Helm chart provides two nodeSelector settings that do exactly what I was looking for: one for the main pod, which is the Gitlab Runner pod itself, and one for the Gitlab Runner's job pods. Below is a sample of the Helm chart values in which I note, beside each nodeSelector, its scope and the pods it affects.
Note that the top-level nodeSelector is the one that affects the main Gitlab Runner pod, and runners.kubernetes.node_selector is the one that affects the Gitlab Runner's job pods.
gitlabUrl: https://gitlab.com/
...
nodeSelector:                                 # scope: chart top level -> affects the main Gitlab Runner pod
  gitlab-runner-label-example: label-values-example-0
...
runnerRegistrationToken: ****
...
runners:
  config: |
    [[runners]]
      name = "gitlabRunnerExample"
      executor = "kubernetes"
      environment = ["FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=true"]
      [runners.kubernetes]
        ...
        [runners.kubernetes.node_selector]    # scope: executor config -> affects the jobs' pods
          "gitlab-runner-label-example" = "label-values-example-1"
      [runners.cache]
        ...
        [runners.cache.s3]
          ...
...
When using the Helm chart, there is an additional configuration section where you can specify extra executor settings;
among them are, notably, a node selector for the job pods and another setting for tolerations.
The combination of that and some namespace-level config should allow you to run the two kinds of pods on different node types.
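As a rough sketch of that part of the config (the label and taint values below are made-up examples, and the exact key names should be verified against the Kubernetes executor docs linked above), the job pods' node selector and tolerations sit side by side under [runners.kubernetes]:

[runners.kubernetes]
  ...
  [runners.kubernetes.node_selector]
    "node-pool" = "ci-jobs"              # hypothetical label on the job node group
  [runners.kubernetes.node_tolerations]
    "node-pool=ci-jobs" = "NoSchedule"   # tolerate the taint placed on that node group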

How to delete / scale down specific nodes in an eksctl Kubernetes cluster

I want to delete one specific node of the cluster.
Here is my problem: I have a cluster where normally only 2 nodes are running,
but sometimes I need more nodes for just a few minutes; afterwards, when scaling down, I want to drain and delete only those extra nodes from the cluster.
I scale up/down manually.
These are the steps I follow:
create the cluster with 2 nodes
scale the cluster up and add 2 more
afterwards, delete those 2 extra nodes (together with their pods) only
I tried it with the command
eksctl scale nodegroup --cluster=cluster-name --name=name --nodes=4 --nodes-min=1 --nodes-max=4
but it doesn't help: it deletes random nodes, and the manager crashes as well.
One option is to use a separate node group for the transient load: use taints/tolerations so that this load is scheduled on that node group, then drain/delete that particular node group when it is no longer needed.
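As a sketch of that approach (the node group name, taint key, and region below are made-up examples, and the exact ClusterConfig schema and flags should be checked against the eksctl docs):

# transient-nodegroup.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster-name
  region: eu-west-1              # assumption: set your region
managedNodeGroups:
  - name: transient              # hypothetical node group for the temporary load
    instanceType: m5.large
    desiredCapacity: 2
    taints:
      - key: transient
        value: "true"
        effect: NoSchedule

eksctl create nodegroup -f transient-nodegroup.yaml
# ...run the temporary workload (its pods need a matching toleration)...
eksctl delete nodegroup --cluster=cluster-name --name=transient

Deleting the node group this way drains only those nodes, leaving the original 2-node group untouched.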
Do you scale nodes up/down manually? If you are using something like the Cluster Autoscaler, there are annotations like "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" to protect pods from being evicted during scale-down.
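If you do move to the Cluster Autoscaler, that annotation is set on the pods you want to protect from scale-down, roughly like this:

# in the pod template metadata (sketch)
metadata:
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"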

Kubernetes - Are containers automatically allocated to nodes after adding a worker node?

I apologize for my poor English.
I created 1 master node and 1 worker node in the cluster, and deployed a container (replicas: 4).
Then kubectl get all shows the following (output trimmed):
NAME  NODE
pod/container1 k8s-worker-1.local
pod/container2 k8s-worker-1.local
pod/container3 k8s-worker-1.local
pod/container4 k8s-worker-1.local
Next, I added 1 worker node to this cluster, but all containers remain deployed on worker1.
Ideally, I want 2 containers to stop and start up on worker2, like below:
NAME  NODE
pod/container1 k8s-worker-1.local
pod/container2 k8s-worker-1.local
pod/container3 k8s-worker-2.local
pod/container4 k8s-worker-2.local
Do I need to run some commands after adding the additional node?
Scheduling only happens when a pod is started. After that, it won't be moved. There are tools out there for deleting (evicting) pods when nodes get too imbalanced, but if you're just starting out I wouldn't go that far for now. If you delete your 4 pods and recreate them (or let the Deployment system recreate them as is more common in a real situation) they should end up more balanced (though possibly not 2 and 2 since the system isn't exact and spreading out is only one of the factors used in scheduling).
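For example, assuming the pods are managed by a Deployment named container (a guess based on the pod names above), either of the following would let the scheduler reconsider both workers:

# delete the pods and let the Deployment recreate them
kubectl delete pod container1 container2 container3 container4

# or restart the whole rollout in one step (Deployment-managed pods only)
kubectl rollout restart deployment/container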

How to make a multi-regional Kafka/Zookeeper cluster using multiple Google Kubernetes Engine (GKE) clusters?

I have 3 GKE clusters sitting in 3 different regions on Google Cloud Platform.
I would like to create a Kafka cluster which has one Zookeeper and one Kafka node (broker) in every region (each GKE cluster).
This set-up is intended to survive regional failure (I know a whole GCP region going down is rare and highly unlikely).
I am trying this set-up using this Helm Chart provided by Incubator.
I tried this setup manually on 3 GCP VMs following this guide and I was able to do it without any issues.
However, setting up a Kafka cluster on Kubernetes seems complicated.
As we know, we have to provide the addresses of all the Zookeeper servers in each Zookeeper configuration file, like below:
...
# list of servers
server.1=0.0.0.0:2888:3888
server.2=<Ip of second server>:2888:3888
server.3=<ip of third server>:2888:3888
...
As I can see, the Helm chart's config-script.yaml file has a script which creates the Zookeeper configuration file for every deployment.
The part of the script which echoes the Zookeeper servers looks something like this:
...
for (( i=1; i<=$ZK_REPLICAS; i++ ))
do
  echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> $ZK_CONFIG_FILE
done
...
As of now, with one replica (replica here means Kubernetes Pod replicas), the configuration that this Helm chart creates contains the Zookeeper server below.
...
# "release-name" is the name of the Helm release
server.1=release-name-zookeeper-0.release-name-zookeeper-headless.default.svc.cluster.local:2888:3888
...
At this point I am stuck: what do I do so that all the Zookeeper servers (one per GKE cluster) get included in the configuration file?
How should I modify the script?
I see you are trying to create a 3-node Zookeeper cluster on top of 3 different GKE clusters.
This is not an easy task and I am sure there are multiple ways to achieve it,
but I will show you one way it can be done, which I believe should solve your problem.
The first thing you need to do is create a LoadBalancer Service for every Zookeeper instance.
After the LoadBalancers are created, note down the IP addresses that got assigned
(remember that by default these IP addresses are ephemeral, so you might want to change them to static ones later).
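A minimal sketch of such a per-pod LoadBalancer Service, assuming the pods follow the chart's release-name-zookeeper-N naming and the usual Zookeeper ports (adjust names and ports to your release):

apiVersion: v1
kind: Service
metadata:
  name: release-name-zookeeper-0-lb
spec:
  type: LoadBalancer
  selector:
    # targets exactly one StatefulSet pod
    statefulset.kubernetes.io/pod-name: release-name-zookeeper-0
  ports:
    - name: client
      port: 2181
    - name: server
      port: 2888
    - name: leader-election
      port: 3888

One such Service is created per Zookeeper pod, in each of the three clusters.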
The next thing to do is to create a private DNS zone
on GCP and create A records for every Zookeeper LoadBalancer endpoint, e.g.:
release-name-zookeeper-1.zookeeper.internal.
release-name-zookeeper-2.zookeeper.internal.
release-name-zookeeper-3.zookeeper.internal.
(in the GCP console this shows up as three A records in the private Cloud DNS zone; screenshot omitted)
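If you prefer the CLI, a sketch of those two steps with gcloud (the zone name and IP are placeholders, and the record-set syntax may differ between gcloud versions) could be:

# create the private zone, visible only to the given VPC network
gcloud dns managed-zones create zookeeper-internal \
  --dns-name=zookeeper.internal. \
  --visibility=private \
  --networks=default \
  --description="Private zone for Zookeeper"

# one A record per Zookeeper LoadBalancer IP
gcloud dns record-sets create release-name-zookeeper-1.zookeeper.internal. \
  --zone=zookeeper-internal --type=A --ttl=300 --rrdatas=<LB-IP-1>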
After it's done, just modify this line:
...
DOMAIN=`hostname -d`
...
to something like this:
...
DOMAIN={{ .Values.domain }}
...
and remember to set the domain variable in the values file to zookeeper.internal,
so in the end it should look like this:
DOMAIN=zookeeper.internal
and it should generate the following config:
...
server.1=release-name-zookeeper-1.zookeeper.internal:2888:3888
server.2=release-name-zookeeper-2.zookeeper.internal:2888:3888
server.3=release-name-zookeeper-3.zookeeper.internal:2888:3888
...
Let me know if this helps.

Specifying exact number of pods per node then performing image version upgrade

I have 3 nodes in a k8s cluster and I need exactly 2 pods to be scheduled on each node, so I would end up with 3 nodes running 2 pods each (6 replicas).
I found that k8s has a Pod Affinity/Anti-Affinity feature, and that seems to be the correct way of doing this.
My problem is: I want to run 2 pods per node, but I often use kubectl apply to upgrade my Docker image version, and in this case k8s should be able to schedule 2 new pods on each node before terminating the old ones. Will the newer pods still be scheduled if I use Pod Affinity/Anti-Affinity to allow only 2 pods per node?
How can I do this in my deployment configuration? I cannot get it to work.
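For reference, a soft (preferred) anti-affinity sketch that spreads replicas across nodes is shown below; the app: myapp labels, names, and image are made-up examples. Note that affinity rules do not express a hard cap of exactly 2 pods per node, and because this rule is only preferred, a rolling update can still surge new pods onto nodes that already run two old ones:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                     # hypothetical name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: myapp
                topologyKey: kubernetes.io/hostname
      containers:
        - name: myapp
          image: myrepo/myapp:1.0 # placeholder image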
I believe this is part of the kubelet's settings, so you would have to look into the kubelet's --max-pods flag, depending on what your cluster configuration is.
The following links could be useful:
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#kubelet
and
https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/