In ECS with Fargate, we can manage service isolation via security groups. However, that is no longer the case with EKS on Fargate.
Is there a way for pods on the same cluster to be isolated from each other, like with a NetworkPolicy? I know this is possible with Kubernetes, but it needs to be implemented by the network plugin. I tried to install the network providers listed here without success, as they require a DaemonSet (a limitation of EKS Fargate: it cannot run DaemonSets, privileged pods, or pods that use HostNetwork or HostPort).
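For example, the sort of isolation I'm after would normally be expressed with a default-deny NetworkPolicy like the sketch below (the namespace and policy name are illustrative); the missing piece on EKS Fargate is a policy engine to actually enforce it:

# Illustrative only: isolates all pods in a namespace from incoming traffic.
# A CNI/policy engine (e.g. Calico) is required for this to take effect.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace   # illustrative namespace
spec:
  podSelector: {}           # selects every pod in the namespace
  policyTypes:
  - Ingress
EOF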
This is something we are tracking in this roadmap item. There isn't a viable workaround for now. As you pointed out, when using EC2 we'd suggest using the Calico network policy engine, but with Fargate there is no DaemonSet support, so it can't be used.
Given that the security group associated with a pod is defined at the cluster level, one way to mitigate this would be to spread like pods across different clusters, where each cluster's pod security group is configured for that specific type of workload. However, this means more operational work and higher control plane costs.
Is it possible to create a Cloud Run on GKE (Anthos) Kubernetes cluster with preemptible nodes, and if so, can you also enable plugins such as gke-node-pool-shifter and gke-pvm-killer, or will that interfere with Cloud Run actions such as autoscaling pods?
https://hub.helm.sh/charts/rimusz/gke-node-pool-shifter
https://hub.helm.sh/charts/rimusz/gke-pvm-killer
Technically a Cloud Run on GKE cluster is still a GKE cluster at the end of the day, so it can have preemptible node pools.
However, some Knative Serving components, such as the activator and autoscaler, are in the hot path of serving requests. You need to make sure they don't end up in a preemptible pool. Similarly, the controller and webhook are somewhat central to the control plane lifecycle of Knative API objects, so you also need to make sure those pods end up in a non-preemptible node pool.
Secondly, Knative (for now) does not support node selectors or taints/tolerations: https://knative.tips/pod-config/node-affinity/. It simply doesn't give you a way to specify nodeSelector or other affinity fields in the pod template of a Knative Service object.
Therefore, you would have to find a way (such as implementing your own mutating admission webhook for Knative-created pods) to add such node selectors to the pods, which is quite tedious.
However, by combining node taints and pod tolerations, I think you can have the Knative system components end up in a non-preemptible pool, and everything else (i.e. the Knative-created pods) on the other, preemptible nodes.
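A rough sketch of that idea, assuming a dedicated non-preemptible pool for the Knative system components; the pool name, taint key, cluster name, and zone are placeholders, flag availability may vary with your gcloud version, and each knative-serving system deployment (activator, autoscaler, controller, webhook) would need the same patch:

# Create a non-preemptible pool and taint it so that only pods tolerating the
# taint can land there (names are illustrative).
gcloud container node-pools create knative-system-pool \
  --cluster=my-cluster \
  --zone=us-central1-b \
  --node-taints=knative-system=true:NoSchedule

# Patch a Knative Serving system deployment to tolerate the taint and pin it
# to the dedicated pool; repeat for the other system deployments.
kubectl -n knative-serving patch deployment activator -p '
spec:
  template:
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: knative-system-pool
      tolerations:
      - key: knative-system
        operator: Equal
        value: "true"
        effect: NoSchedule
'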
We have a deployment of Kubernetes in Google Cloud Platform. Recently we hit one of the well-known issues with kube-dns that occurs at a high volume of requests: https://github.com/kubernetes/kubernetes/issues/56903 (it is more related to SNAT/DNAT and conntrack, but the end result is that kube-dns goes out of service).
After a few days of digging into the topic, we found that Kubernetes already has a solution, which is currently in alpha (https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/).
The solution is to run a caching CoreDNS instance as a DaemonSet on each Kubernetes node; so far so good.
The problem is that after you create the DaemonSet, you have to tell the kubelet to use it via the --cluster-dns option, and we can't find any way to do that in the GKE environment. Google bootstraps the cluster with the "configure-sh" script in the instance metadata. There is an option to edit the instance template and "hardcode" the required values, but that is not really an option: if you upgrade the cluster or use horizontal autoscaling, all of the modified values will be lost.
The last idea was to use a custom startup script that pulls the configuration and updates the metadata server, but that is too complicated a task.
As of 2019/12/10, GKE supports this through the gcloud CLI in beta:
Kubernetes Engine
Promoted NodeLocalDNS Addon to beta. Use --addons=NodeLocalDNS with gcloud beta container clusters create. This addon can be enabled or disabled on existing clusters using --update-addons=NodeLocalDNS=ENABLED or --update-addons=NodeLocalDNS=DISABLED with gcloud container clusters update.
See https://cloud.google.com/sdk/docs/release-notes#27300_2019-12-10
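Based on that release note, enabling it looks roughly like this (the cluster name and zone are placeholders):

# New cluster with the addon (beta track at the time of the release note).
gcloud beta container clusters create my-cluster \
  --zone=us-central1-b \
  --addons=NodeLocalDNS

# Enable (or disable) the addon on an existing cluster.
gcloud container clusters update my-cluster \
  --zone=us-central1-b \
  --update-addons=NodeLocalDNS=ENABLED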
You can spin up another kube-dns deployment, e.g. in a different node pool, and thus have two nameservers in the pods' resolv.conf.
This would mitigate the evictions and other failures, and generally allow you to fully control the kube-dns service across the whole cluster.
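A minimal sketch of what the resulting pod DNS setup could look like, assuming the second kube-dns deployment is exposed through its own Service; the two service IPs and the search domains are placeholders:

# Illustrative: dnsPolicy "None" plus an explicit dnsConfig puts both kube-dns
# service IPs into the pod's resolv.conf.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 10.96.0.10   # placeholder: default kube-dns Service IP
    - 10.96.0.53   # placeholder: second kube-dns deployment's Service IP
    searches:
    - default.svc.cluster.local
    - svc.cluster.local
    - cluster.local
EOF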
In addition to what was mentioned in this answer: with beta support on GKE, the NodeLocal caches now listen on the kube-dns service IP, so there is no need for a kubelet flag change.
I am using Kubernetes to manage a Docker cluster. Right now I can set up pod autoscaling using the Horizontal Pod Autoscaler, and that is fine.
Now I think the next step is to autoscale nodes. With HPA, new pods are only scheduled onto already existing nodes, so if all available nodes are fully utilized and there are no resources left for any more pods, the next step is to automatically create a node and have it join the Kubernetes master.
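For reference, the pod-level autoscaling I already have in place is along these lines (the deployment name and thresholds are just examples):

# Scale the my-app deployment between 1 and 10 replicas, targeting 50% CPU.
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10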
I googled a lot, and there are very limited resources introducing this topic.
Can anyone please point me to any resource on how to implement this requirement?
Thanks
One way to do this on AWS, when setting up your own Kubernetes cluster, is to follow these steps:
Create an instance larger than t2.micro (this will be the master node).
Initialize the Kubernetes cluster using a tool like kubeadm. Once initialization completes you get a join command, which needs to be run on every node that should join the cluster. (Here is the link)
Now create an Auto Scaling group on AWS with a startup (boot) script containing that join command (a sketch is shown after these steps).
Now, whenever the utilization threshold specified in the Auto Scaling group is breached, scaling happens and the new node(s) automatically join the Kubernetes cluster. This allows Kubernetes to schedule pods on the newly joined nodes based on the HPA.
(I would suggest using Flannel as the pod network, as it automatically removes a node from the Kubernetes cluster when it is no longer available.)
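A sketch of what the Auto Scaling group's boot (user data) script might look like; the API server endpoint, token, and CA hash are placeholders that come from the join command printed by kubeadm init:

#!/bin/bash
# Placeholder join command: replace the endpoint, token, and hash with the
# values printed by `kubeadm init` on the master node.
kubeadm join 10.0.0.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash-from-kubeadm-init>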
Kubernetes Operations (kops) helps you create, destroy, upgrade, and maintain production-grade, highly available Kubernetes clusters from the command line.
Features:
Automates the provisioning of Kubernetes clusters in AWS and GCE
Deploys Highly Available (HA) Kubernetes Masters
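As a rough example of what that looks like with kops (the cluster name, state store bucket, and zones are placeholders):

# Cluster state is kept in an S3 bucket of your choosing.
export KOPS_STATE_STORE=s3://my-kops-state-store

# Create a cluster with three masters (one per zone, for HA) and three nodes,
# then apply the changes immediately.
kops create cluster \
  --name=mycluster.example.com \
  --master-zones=us-east-1a,us-east-1b,us-east-1c \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --node-count=3 \
  --yes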
Most managed Kubernetes service providers offer node autoscaling:
Elastic Kubernetes Service (EKS): configure the Cluster Autoscaler
Google Kubernetes Engine (GKE): cluster autoscaler
The autoscaling feature needs to be supported by the underlying cloud provider. Google Cloud supports autoscaling during cluster creation or update by passing the --enable-autoscaling, --min-nodes, and --max-nodes flags to the corresponding gcloud commands.
Examples:
gcloud container clusters create mytestcluster --zone=us-central1-b --enable-autoscaling --min-nodes=3 --max-nodes=10 --num-nodes=5
gcloud container clusters update mytestcluster --enable-autoscaling --min-nodes=1 --max-nodes=15
The link below may also be helpful:
https://medium.com/kubecost/understanding-kubernetes-cluster-autoscaling-675099a1db92
AWS EKS uses its own CNI plugin, and there are docs describing how to install Calico for managing policy. For a number of reasons, I'd like to have Calico manage networking as well.
Based on the installation instructions, I can't seem to find a way to make either option work:
etcd
This doesn't seem viable, as I can't find a way to access the EKS control plane's etcd endpoints. If I were to deploy my own etcd pods inside the cluster, I would need to use the AWS CNI plugin for them to get an IP address, so that doesn't work. I could bring my own etcd cluster outside of Kubernetes, but that seems a bit ridiculous.
Kubernetes API datastore
This option wants me to change settings on the controller, which I don't have access to in the AWS EKS managed control plane.
The short answer is that, as of this writing, neither EKS nor GKE gives you direct access to any of the control plane components: etcd, kube-apiserver, kube-controller-manager, coredns/kube-dns, kube-scheduler.
They do have some docs on how to install Calico on an EKS cluster, but if you want more control you'll have to set up your own standalone cluster.
They might allow access to the master components in the future, but the bottom line is that EKS is a 'managed' service where they are supposed to take care of all your control plane components.
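You can see this by listing what a managed EKS cluster actually exposes in kube-system; only the node-level components show up, none of the control plane pods (the exact output will vary by cluster):

# On EKS this typically lists only aws-node (the VPC CNI), kube-proxy, and
# coredns; kube-apiserver, etcd, kube-controller-manager, and kube-scheduler
# are run by AWS and are not visible or reachable from here.
kubectl get pods -n kube-system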
I'm currently learning about Kubernetes and still trying to figure it out. I get the general use of it, but I think there are still plenty of things I'm missing; here's one of them. If I want to run Kubernetes on a public cloud, like GCE or AWS, will Kubernetes spin up new VMs by itself in order to provide more compute for new pods that might be needed? Or will it only use a fixed set of VMs that were pre-configured as the compute pool? I heard Brendan say, in his talk at CoreOS Fest, that Kubernetes sees the VMs as a "sea of compute" and the user doesn't have to worry about which VM is running which pod. I'm interested to know where that pool of compute comes from: is it configured when setting up Kubernetes, or will it scale by itself and create new machines as needed?
I hope I managed to be coherent.
Thanks!
Kubernetes supports scaling, but not auto-scaling. The addition and removal of pods in a Kubernetes cluster is performed by replication controllers. The size of a replication controller can be changed by updating its replicas field. This can be done in a couple of ways:
Using kubectl, you can use the scale command.
Using the Kubernetes API, you can update your config with a new value in the replicas field.
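For example, both approaches look roughly like this for a replication controller named my-rc (the name and replica count are illustrative):

# Option 1: the kubectl scale command.
kubectl scale replicationcontroller my-rc --replicas=5

# Option 2: update the replicas field via the API (shown here with kubectl
# patch for brevity; the same change can be made with a direct API call).
kubectl patch replicationcontroller my-rc -p '{"spec":{"replicas":5}}'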
Kubernetes has been designed for auto-scaling to be handled by an external auto-scaler. This is discussed in responsibilities of the replication controller in the Kubernetes docs.