Deploy application to EKS Cluster - kubernetes

Suppose I have created an EKS cluster with eksctl or the AWS CLI, with the specified node group. When I then apply my Deployment YAML file, are my Pods distributed among that node group automatically?

Yes, your Pods will be scheduled onto any node in the cluster that has sufficient resources to support them.
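For example, a Deployment with no placement rules at all relies entirely on the default scheduler, which spreads the replicas across whichever nodes in the group have capacity. A minimal sketch (the name and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo              # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative image
          resources:
            requests:         # requests let the scheduler pick nodes with room
              cpu: 100m
              memory: 128Mi

Setting resource requests is what lets the scheduler skip nodes that are already full, so replicas land on whichever nodes in the group can accommodate them.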

Related

K8s anti-affinity for different clusters

I have a deployment file for a k8s cluster in the cloud with an anti-affinity rule that prevents multiple Pods of the same Deployment from landing on the same node. This works well, but not for my local k8s, which uses a single node. I can't seem to find a way to use the same deployment file for both the remote cluster and the local cluster.
I have tweaked the affinity and node selector rules to no avail.
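One common way to make a single manifest work in both environments is to soften the rule: the preferredDuringSchedulingIgnoredDuringExecution form expresses the anti-affinity as a preference, so the scheduler spreads Pods across nodes when several exist but still schedules everything on a single-node cluster. A sketch, with app: my-app standing in for whatever labels the real Deployment uses:

spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          # soft rule: spread replicas when possible, but never block scheduling
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-app          # illustrative label
                topologyKey: kubernetes.io/hostname

The hard form, requiredDuringSchedulingIgnoredDuringExecution, is what leaves the extra replicas Pending on a single-node cluster.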

Enable unsafe sysctls on a cluster managed by Amazon EKS

I'm attempting to follow instructions for resolving a data congestion issue by enabling two unsafe sysctls for certain Pods running in a Kubernetes cluster deployed by EKS. To do this, I must enable those parameters on the nodes running those Pods. The following command enables them on a per-node basis:
kubelet --allowed-unsafe-sysctls \
'net.unix.max_dgram_qlen,net.core.somaxconn'
However, the nodes in the cluster I am working with are deployed by EKS, and the EKS cluster was created through the Amazon dashboard (not a YAML config file, Terraform, etc.). I am not sure how to translate the step above so that all nodes in my cluster have those sysctls enabled.
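If recreating the node group with eksctl is an option, its kubeletExtraConfig setting writes fields straight into the kubelet config file on every node in the group, which avoids touching nodes one by one. A sketch, assuming a self-managed node group (cluster name, region, and sizes are illustrative):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster            # illustrative
  region: us-east-1           # illustrative
nodeGroups:
  - name: ng-unsafe-sysctls
    instanceType: m5.large
    desiredCapacity: 2
    kubeletExtraConfig:
      # corresponds to the --allowed-unsafe-sysctls kubelet flag
      allowedUnsafeSysctls:
        - "net.unix.max_dgram_qlen"
        - "net.core.somaxconn"

Allowing the sysctls on the kubelet is only half the job: each Pod that needs them must also request them through securityContext.sysctls in its spec.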

How can I make multiple Deployments share the same Fargate instance in EKS?

I deployed an EKS Fargate cluster in AWS and created a Fargate profile for the default namespace without any labels. I found that whenever I deploy a new Deployment with kubectl apply, a new Fargate node is created for that Deployment (see the screenshot below).
How can I make the Deployments share one Fargate instance?
And how can I rename the Fargate node?
The spirit of using Fargate is that you want a serverless experience where you don't have to think about nodes (they are displayed simply because K8s can't operate without nodes). One of the design tenets of Fargate is that it supports one Pod per node for increased security. You pay for the size of the Pod you deploy, not for the node the service provisions to run that Pod, even if the node is larger than the Pod. See here for how Pods are sized. What is the use case for which you may want or need to run multiple Pods per Fargate node? And why do you prefer Fargate over EKS managed node groups, which do support multiple Pods per node?

How to deploy an application to a different cluster within the same Google Cloud project?

I have one Google Cloud project but two different Kubernetes clusters, each with a single node.
I would like to deploy an application to a specific Kubernetes cluster, but the deployment defaults to the other cluster. How can I specify which Kubernetes cluster to deploy my app to?
See the cluster with which kubectl is currently communicating:
kubectl config current-context
Set the cluster with which you want kubectl to communicate:
kubectl config use-context my-cluster-name
See the official docs here for more details.
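If the target cluster is missing from your kubeconfig entirely, gcloud can add it, and kubectl config get-contexts shows everything available. The cluster name and zone below are illustrative:

# list all contexts kubectl knows about
kubectl config get-contexts

# fetch credentials for a GKE cluster and add/activate its context
gcloud container clusters get-credentials my-cluster --zone us-central1-a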

In GCP Kubernetes (GKE), how do I assign a stateless Pod created by a Deployment to a provisioned VM?

I have several operational Deployments running locally on minikube and am trying to deploy them on GCP with Kubernetes.
When I describe a Pod created by a Deployment (which created a ReplicaSet that spawned the Pod):
kubectl get po redis-sentinel-2953931510-0ngjx -o yaml
It indicates that it landed on one of the Kubernetes VMs.
I'm having trouble with deployments that work fine separately failing due to lack of resources, e.g. CPU, even though I provisioned a VM above the requirements. I suspect the cluster is placing the Pods on its own nodes and running out of resources.
How should I proceed?
Do I introduce a VM to be orchestrated by Kubernetes?
Do I enlarge the Kubernetes nodes?
Or something else altogether?
It was a resource problem, and the node pool size was inhibiting the deployments. I was mistaken in trying to provision Google Compute Engine instances and disks myself.
I ended up provisioning Kubernetes node pools with more CPU and disk space, which solved it. I also added elasticity by enabling autoscaling.
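For reference, a node pool like that can be created (or added alongside the default one) with a single gcloud command; the pool name, cluster name, zone, machine type, and limits below are illustrative:

gcloud container node-pools create bigger-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type e2-standard-4 \
  --disk-size 100 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5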
Here is the node pool documentation.
Here is a Terraform Kubernetes deployment example.
Here is the machine type documentation.