In GCP Kubernetes (GKE), how do I assign a stateless pod created by a deployment to a provisioned VM?

I have several operational deployments on minikube locally and am trying to deploy them on GCP with Kubernetes.
When I describe a pod created by a deployment (which created a replica set that spawned the pod):
kubectl get po redis-sentinel-2953931510-0ngjx -o yaml
It indicates that the pod landed on one of the Kubernetes VMs.
I'm having trouble with deployments that work separately failing due to a lack of resources (e.g. CPU), even though I provisioned a VM above the requirements. I suspect the cluster is placing the pods on its own nodes and running out of resources.
How should I proceed?
Do I introduce a VM to be orchestrated by Kubernetes?
Do I enlarge the Kubernetes nodes?
Or something else altogether?

It was a resource problem: the node pool size was inhibiting the deployments. I was mistaken in trying to provision Google Compute Engine instances and disks myself.
I ended up provisioning Kubernetes node pools with more CPU and disk space, which solved it. I also added elasticity by enabling autoscaling.
Here is the node pool documentation.
Here is a Terraform Kubernetes deployment.
Here is the machine type documentation.
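For example, a larger autoscaling node pool can be added with gcloud along these lines (a minimal sketch; the cluster, pool, zone, machine type and limits are illustrative):

# add a node pool with bigger machines and autoscaling enabled
gcloud container node-pools create high-cpu-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --disk-size 100 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5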

Related

How can I find out how a Kubernetes Cluster was provisioned?

I am trying to determine how a Kubernetes cluster was provisioned. (Either using minikube, kops, k3s, kind or kubeadm).
I have looked at the config files to establish this distinction but didn't find anything conclusive.
Is there some way one can identify what was used to provision a Kubernetes cluster?
Any help is appreciated, thanks.
Usually, but not always, you can view the cluster definitions in your ~/.kube/config; there is an entry per cluster, and the name or context usually hints at the tool that created it.
Again, it's not 100% reliable.
Another option is to check the pods and namespaces: if you see minikube-specific components it is almost certainly minikube, and similarly for k3s, Rancher, etc.
If you see a namespace matching *cattle*, it can be Rancher on top of k3s or RKE.
To summarize, there is no single answer for figuring out how your cluster was deployed, but you can find hints.
If you see the kubeadm-config ConfigMap in the kube-system namespace, then the cluster was provisioned using kubeadm.
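A few commands that surface these hints (a sketch, assuming a working kubeconfig):

# cluster and context names often reveal the tool (minikube, kind-*, k3d-*, ...)
kubectl config get-contexts
# look for tool-specific namespaces such as cattle-system (Rancher)
kubectl get namespaces
# present on kubeadm-provisioned clusters
kubectl get configmap kubeadm-config -n kube-system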

Kubernetes Statefulset problem with Cluster Autoscaler and Multi-AZ

I have an EKS cluster with the Cluster Autoscaler set up, spanning three availability zones. I have deployed a Redis Cluster using Helm and it works fine. Basically it is a StatefulSet of 6 replicas with dynamic PVCs.
Currently, my EKS cluster has two worker nodes, which I will name Worker-1A and Worker-1B, in AZ 1A and 1B respectively, and has no worker node in AZ 1C. I am doing some testing to make sure the Redis Cluster can always spin up and attach its volumes properly. All the Redis Cluster pods are created on Worker-1B.
In my testing, I kill all the pods in the Redis Cluster, and before it spins new pods up, I deploy some other deployments that use up all the resources on Worker-1A and Worker-1B. Now that the worker nodes have no resources for new pods, the Cluster Autoscaler creates a worker node in AZ 1C (to balance nodes across AZs).
Here the problem comes: when the Redis Cluster StatefulSet tries to recreate the pods, it cannot create them on Worker-1B because there are no resources, so it tries to create them on Worker-1C instead, and the pods hit the following error: node(s) had volume node affinity conflict.
I know this situation might be rare, but how do I fix this issue if it ever happens? I am hoping there is an automated way to solve this instead of fixing it manually.
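For context on the error itself, EBS-backed PersistentVolumes carry a node affinity that pins them to the availability zone they were created in, which you can inspect directly (a sketch; the PV name is hypothetical):

# shows a required term on topology.kubernetes.io/zone limiting the volume to its original AZ
kubectl get pv pvc-1234abcd -o jsonpath='{.spec.nodeAffinity}{"\n"}'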

How does kubernetes help in reducing the cost of hosting?

I am trying to understand this hosting and scaling stuff. Say I have a website with huge traffic on weekends, which would require at least 2 VPSes to handle the load.
We could do either of two things:
We could simply upgrade to a larger VPS plan and forget about it, which is an inefficient and also costlier option.
We could make 2 VPSes, set up a load balancer, and let it handle the traffic between the 2 VPSes, just like Kubernetes does.
So how is Kubernetes helpful then if we are still paying for the 2nd VPS?
Can Kubernetes spin up a full VPS before deploying new pods on it?
You can use the Cluster Autoscaler for your Kubernetes cluster, which will add or remove nodes on demand.
Kubernetes can run virtually anywhere - on bare metal as well as in a private or public cloud.
However, where you choose to run Kubernetes determines the scalability of your Kubernetes cluster.
Deploying Kubernetes on VPS servers requires more effort on your side, and the cluster is less scalable compared to managed Kubernetes services such as GKE, EKS and AKS.
In general, the Cluster Autoscaler is available primarily for managed Kubernetes services (see: Supported cloud providers).
Cluster Autoscaler:
Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:
there are pods that failed to run in the cluster due to insufficient resources.
there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.
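On a managed service such as GKE, enabling the Cluster Autoscaler is typically a single command (a sketch; the cluster, pool, zone and limits are illustrative):

gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --node-pool default-pool \
  --enable-autoscaling --min-nodes 1 --max-nodes 5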
For VPS, you can still use the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) to optimize the resource utilization of your application.
Horizontal Pod Autoscaler:
The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics).
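As a concrete illustration, a deployment can be autoscaled on CPU with a single command (the deployment name and thresholds are illustrative):

kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10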
Vertical Pod Autoscaler:
The Vertical Pod Autoscaler automatically adjusts the amount of CPU and memory requested by pods running in the Kubernetes cluster.

How can I make multiple deployments share the same Fargate instance in EKS?

I deployed an EKS Fargate cluster in AWS and created a Fargate profile for the default namespace without any labels. I found that whenever I deploy a new deployment with kubectl apply, a new Fargate node is created for that deployment. See the screenshot below.
How can I make the deployments share one Fargate instance?
And how can I rename the Fargate nodes?
The spirit of using Fargate is that you want a serverless experience where you don't have to think about nodes (they are displayed simply because Kubernetes can't operate without nodes). One of the design tenets of Fargate is that it runs 1 pod per node for increased security. You pay for the size of the pod you deploy, not for the node the service provisions to run that pod, even if the node is larger than the pod. See here for how pods are sized. What is the use case for which you may want/need to run multiple pods per Fargate node? And why do you prefer Fargate over EKS managed node groups (which support multiple pods per node)?
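To illustrate the pod-sizing point, Fargate derives the capacity it bills for from the resource requests in the pod spec, so that is where the cost is effectively set (a sketch; the names, image and values are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: web
        image: nginx
        resources:
          requests:
            cpu: 500m      # Fargate rounds the pod's total requests up to the nearest supported vCPU/memory combination
            memory: 1Gi
EOF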

How to deploy an etcd cluster on a Kubernetes cluster with a previous etcd service

I have been reading for several days about how to deploy a Kubernetes cluster from scratch. It's all ok until it comes to etcd.
I want to deploy the etcd nodes inside the Kubernetes cluster. It looks there are many options, like etcd-operator (https://github.com/coreos/etcd-operator).
But, to my knowledge, a StatefulSet or a ReplicaSet makes use of etcd.
So, what is the right way to deploy such a cluster?
My first thought: start with a single-member etcd, either as a pod or a local service on the master node, and, when the Kubernetes cluster is up, deploy the etcd StatefulSet and move/change/migrate the initial etcd to the new cluster.
The last part sounds weird to me: "and move/change/migrate the initial etcd to the new cluster."
Am I wrong with this approach?
I don't find useful information on this topic.
Kubernetes has 3 types of components: master components, node components and addons.
Master components
kube-apiserver
etcd
kube-scheduler
kube-controller-manager/cloud-controller-manager
Node components
kubelet
kube-proxy
Container Runtime
While implementing Kubernetes you have to implement etcd as part of it. If it is a multi-node architecture, you can run etcd on independent nodes or alongside the master nodes, as per your requirements. You can find more details here. If you are looking for a step-by-step guide and need a multi-node architecture, follow this document. If you need a single-node Kubernetes, go for minikube.
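If you do run etcd on dedicated nodes, kubeadm can be pointed at the external etcd cluster through its configuration file (a minimal sketch; the endpoints and certificate paths are illustrative):

cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - https://10.0.0.11:2379
      - https://10.0.0.12:2379
      - https://10.0.0.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF
kubeadm init --config kubeadm-config.yaml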