Enable unsafe sysctls on a cluster managed by Amazon EKS - kubernetes

I'm attempting to follow instructions for resolving a data congestion issue by enabling two unsafe sysctls for certain Pods running in a Kubernetes cluster whose nodes are managed by Amazon EKS. To do this, I must enable those parameters on the nodes running those Pods. The following command enables them on a per-node basis:
kubelet --allowed-unsafe-sysctls \
'net.unix.max_dgram_qlen,net.core.somaxconn'
However, the nodes in the cluster I am working with are deployed by EKS. The EKS cluster was created from the AWS console (not a YAML config file, Terraform, etc.). I am not sure how to translate the step above so that all nodes in my cluster have those sysctls allowed.
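One approach worth considering (a sketch, not something stated in the question): EKS does not expose kubelet flags in the console, but nodes built from the Amazon EKS optimized AMI run /etc/eks/bootstrap.sh at boot, and that script accepts --kubelet-extra-args. Placing something like the following in the node group's launch template user data should allow the two sysctls on every node created from it (the cluster name is a placeholder):
#!/bin/bash
# Hypothetical user data for a node based on the EKS optimized AMI.
# --kubelet-extra-args is passed through to kubelet, so the two unsafe
# sysctls from the question become allowed on this node.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--allowed-unsafe-sysctls=net.unix.max_dgram_qlen,net.core.somaxconn'
The Pods that need the sysctls still have to request them explicitly, roughly like this (values are examples only):
# Pod sketch requesting the now-allowed sysctls.
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
  securityContext:
    sysctls:
      - name: net.core.somaxconn
        value: "1024"
      - name: net.unix.max_dgram_qlen
        value: "512"
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]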

Related

Should Windows K8s nodes have aws-node & kube-proxy pods?

I have this mixed cluster which shows all the nodes as Ready (both Windows & Linux ones). However, only the Linux nodes have aws-node & kube-proxy pods. I RDPed into a Windows node and can see a kube-proxy service.
My question remains: do the Windows nodes need aws-node & kube-proxy pods in the kube-system namespace or do they work differently than Linux ones?
kube-proxy pods are part of the default installation of Kubernetes. They are created automatically and are needed on both Linux and Windows nodes.
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
[source]
The aws-node pod is part of the AWS CNI plugin for Kubernetes:
The Amazon VPC Container Network Interface (CNI) plugin for Kubernetes is deployed with each of your Amazon EC2 nodes in a Daemonset with the name aws-node. The plugin consists of two primary components:
[...]
[source]
It is currently only supported on Linux. Windows nodes use a different CNI plugin, vpc-shared-eni.
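To see this on a live cluster, a quick check (a sketch; the DaemonSet names are the EKS defaults and the node name is a placeholder) is to look at the kube-system DaemonSets and at which pods actually landed on a given Windows node:
# List the kube-system DaemonSets and their desired/current pod counts.
kubectl get daemonset -n kube-system -o wide
# aws-node is restricted to Linux via a kubernetes.io/os node selector/affinity,
# so a Windows node should show no aws-node pod here.
kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=my-windows-node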

Deploy application to EKS Cluster

After creating an EKS cluster with eksctl or the AWS CLI, with the specified node group: when I apply my Deployment YAML file, are my Pods distributed among the nodes of that node group automatically?
Yes, your Pods will be scheduled onto any node in the cluster that has sufficient resources to support them.
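As a quick illustration (name, image, and resource requests here are made up, not from the question), a plain Deployment with several replicas lets the scheduler spread the Pods across whichever nodes in the node group have room, and kubectl shows where they ended up:
# Minimal Deployment sketch; the scheduler picks the nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
# After kubectl apply -f deployment.yaml, check which nodes were chosen:
kubectl get pods -l app=demo -o wide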

How kube-apiserver knows where is kubelet service/process running in worker node?

I have bootstrapped (Kubernetes the Hard Way by kelseyhightower) a k8s cluster in VirtualBox with 2 masters, 2 workers, and 1 load balancer in front of the two masters' kube-apiservers. Note that the kubelet is not running on the masters, only on the worker nodes.
Now the cluster is up and running, but I am not able to understand how the kube-apiserver on a master connects to the kubelet to fetch a node's metric data, etc.
Could you please explain this in detail?
The Kubernetes API server is not aware of the kubelets, but the kubelets are aware of the Kubernetes API server. The kubelet registers the node and reports metrics to the API server, which persists them in the etcd key-value store. Kubelets use a kubeconfig file to communicate with the API server; this kubeconfig file contains the API server's endpoint, and the communication between the kubelet and the API server is secured with mutual TLS.
In Kubernetes the Hard Way, the control plane components (API server, scheduler, controller manager) run as systemd units, which is why there is no kubelet running on the control-plane nodes; if you run kubectl get nodes you will not see the master nodes listed, because there is no kubelet to register them.
A more standard way to deploy the control plane components is to run them under the kubelet (as static Pods) rather than as systemd units; this is how kubeadm deploys the Kubernetes control plane.
Official documentation on Master to Cluster communication.
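For reference, the kubelet's kubeconfig generated in Kubernetes the Hard Way looks roughly like the sketch below (the load balancer address, node name, and certificate paths are placeholders). The server field is how the kubelet finds the API server, and the client certificate/key pair is what makes the connection mutually authenticated:
# kubelet kubeconfig sketch; all values are placeholders.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/ca.pem
    server: https://<load-balancer-ip>:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: system:node:worker-0
  name: default
current-context: default
users:
- name: system:node:worker-0
  user:
    client-certificate: /var/lib/kubelet/worker-0.pem
    client-key: /var/lib/kubelet/worker-0-key.pem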

Istio deployed but doesn't show in the GKE UI

I have added Istio to an existing GKE cluster. This cluster was initially deployed from the GKE UI with Istio "disabled".
I have deployed Istio from the CLI using kubectl, and everything works fine (istio namespace, pods, services, etc.); I was later able to deploy an app with Istio sidecar pods. So I wonder why the GKE UI still reports that Istio is disabled on this cluster. This is confusing: Istio is in fact deployed in the cluster, but the UI reports the opposite.
Is that a GKE bug?
Deployed Istio using:
kubectl apply -f install/kubernetes/istio-auth.yaml
Deployment code can be seen here:
https://github.com/hassanhamade/istio/blob/master/deploy
From my point of view this doesn't look like a bug. I assume the status shows as disabled because you have deployed a custom version of Istio on your cluster; that flag indicates the status of the GKE-managed version.
If you want to update your cluster to use GKE managed version, you can do it as following:
With TLS enforced
gcloud beta container clusters update CLUSTER_NAME \
--update-addons=Istio=ENABLED --istio-config=auth=MTLS_STRICT
or
With mTLS in permissive mode
gcloud beta container clusters update CLUSTER_NAME \
--update-addons=Istio=ENABLED --istio-config=auth=MTLS_PERMISSIVE
Check this for more details.
Be careful: since you have already deployed Istio yourself, enabling the GKE-managed one may cause issues.
Istio will only show as enabled in the GKE cluster UI when using the Istio on GKE addon. If you manually install Istio OSS, the cluster UI will show "disabled".
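If you want to confirm what the console is reporting from the command line, one way (a sketch; it assumes the cluster API still exposes the Istio addon configuration, which only applies to the deprecated Istio on GKE addon) is:
# Show the Istio addon configuration recorded for the cluster.
# An empty or "disabled" result means the GKE-managed addon is off,
# regardless of any Istio installed manually with kubectl.
gcloud beta container clusters describe CLUSTER_NAME \
  --zone ZONE --format="value(addonsConfig.istioConfig)"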

In GCP Kubernetes (GKE) how do I assign a stateless pod created by a deployment to a provisioned vm

I have several operational deployments on minikube locally and am trying to deploy them on GCP with kubernetes.
When I describe a pod created by a deployment (which created a ReplicaSet that spawned the pod):
kubectl get po redis-sentinel-2953931510-0ngjx -o yaml
It indicates it landed on one of the kubernetes vms.
I'm having trouble with deployments that work fine separately but fail here due to lack of resources (e.g. CPU), even though I provisioned a VM above the requirements. I suspect the cluster is placing the pods on its own nodes and running out of resources.
How should I proceed?
Do I introduce a vm to be orchestrated by kubernetes?
Do I enlarge the kubernetes nodes?
Or something else all together?
It was a resource problem: the node pool size was inhibiting the deployments. I was mistaken in trying to provision Google Compute Engine instances and disks myself.
I ended up provisioning Kubernetes node pools with more CPU and disk space, which solved it. I also added elasticity by enabling autoscaling.
Here is the node pool documentation.
Here is a Terraform Kubernetes deployment.
Here is the machine type documentation.
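For illustration, a larger autoscaling node pool can be added with something along these lines (cluster name, pool name, zone, machine type, and size limits are placeholders, not values from the question):
# Add a node pool with more CPU/memory and autoscaling enabled.
gcloud container node-pools create bigger-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-4 \
  --disk-size 100 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5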