Google Cloud Kubernetes with 3 Micro Instances (Free tier) - Not usable AT ALL?

I'm using Google Cloud with the free tier and the free credits, and I wanted to try out Kubernetes.
I created a node pool of 3 f1-micro instances (only f1-micro instances are eligible for the free tier). It seems each one should end up with about 240 MB of memory.
However, when I tried to create a simple deployment with a pod requesting 100Mi of memory, I got Insufficient memory errors.
Does that mean Google Cloud Kubernetes isn't really usable with the free tier, and hence isn't free at all?
Or am I missing something here?

For each of the nodes, run kubectl describe node <node-name>, which will show a lot of details about the node; look for the Allocatable and Allocated resources sections. You will likely notice that almost all the memory is already in use, and that an f1-micro is limited to an average of 0.2 CPU, which is exceeded just by the k8s system pods.
You can try editing the deployments of system pods such as CoreDNS and reducing the pods' requests.
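As a rough sketch of what that edit could look like (the request values here are purely illustrative, not recommendations):

kubectl -n kube-system edit deployment coredns
# in the container spec, lower the requests, e.g.:
#   resources:
#     requests:
#       cpu: 50m
#       memory: 50Mi

Keep in mind that on GKE the managed system pods may be reconciled back to their original values by the addon manager, so this is at best a stopgap on free-tier nodes.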

Related

What are the Kubernetes memory commitment limits on worker nodes?

I am running a single-node K8s cluster and my machine has 250GB of memory. I am trying to launch multiple pods, each needing at least 50GB of memory, so I set each pod's memory request to 50GB.
I can launch the first four pods and they start Running immediately; however, the fifth pod is Pending. kubectl describe pods shows: 0/1 nodes available; 1 Insufficient memory.
At first this does not make sense - I have 250GB of memory, so I should be able to launch five pods each requesting 50GB of memory, shouldn't I?
On second thought, this might be too tight for K8s to fulfil: if it schedules the fifth pod, the worker node has no memory left for anything else (kubelet, kube-proxy, the OS, etc.). If that is the reason for the denial, then K8s must be keeping some amount of node resources (CPU, memory, storage, network) always free or reserved. What is this amount? Is it a static value or N% of total node resource capacity? Can we change these values when we launch the cluster?
Where can I find more details on this topic? I tried googling, but nothing came up that specifically addresses these questions. Can you help?
At first this does not make sense - I have 250GB of memory, so I should be able to launch five pods each requesting 50GB of memory, shouldn't I?
This depends on how much memory is "allocatable" on your node. Some memory may be reserved, e.g. for the OS or other system tasks.
First list your nodes:
kubectl get nodes
This gives you a list of your nodes; now describe one of them:
kubectl describe node <node-name>
And now, you should see how much "allocatable" memory the node has.
Example output:
...
Allocatable:
cpu: 4
memory: 2036732Ki
pods: 110
...
Set custom reserved resources
K8s must be keeping some amount of node resources (CPU, memory, storage, network) always free or reserved. What is this amount? Is it a static value or N% of total node resource capacity? Can we change these values when we launch the cluster?
Yes, this can be configured. See Reserve Compute Resources for System Daemons.
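Roughly, Allocatable = node capacity - kube-reserved - system-reserved - eviction threshold. As a sketch of how those reservations can be set via kubelet flags (the values below are illustrative, not recommendations):

--kube-reserved=cpu=100m,memory=256Mi,ephemeral-storage=1Gi
--system-reserved=cpu=100m,memory=256Mi
--eviction-hard=memory.available<100Mi

On managed offerings (GKE, EKS, etc.) these values are set by the provider, which is why the allocatable numbers you see may differ from the raw machine size.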

What's the maximum number of Kubernetes namespaces?

Is there a maximum number of namespaces supported by a Kubernetes cluster? My team is designing a system to run user workloads via K8s and we are considering using one namespace per user to offer logical segmentation in the cluster, but we don't want to hit a ceiling with the number of users who can use our service.
We are using Amazon's EKS managed Kubernetes service and Kubernetes v1.11.
This is quite difficult to answer, as it depends on a lot of factors. The Kubernetes scalability thresholds document (based on a k8s 1.7 cluster) lists the number of namespaces (ns) as 10,000, with a few assumptions.
There are no limits from the code point of view, because a namespace is just a Go type that gets instantiated as a variable.
In addition to the link that @SureshVishnoi posted, the limits will depend on your setup, but some of the factors that can influence how your namespaces (and resources in the cluster) scale are:
Physical or VM hardware size where your masters are running
Unfortunately, EKS doesn't give you control over that (it's a managed service, after all)
The number of nodes your cluster is handling.
The number of pods in each namespace
The number of overall K8s resources (deployments, secrets, service accounts, etc)
The hardware size of your etcd database.
Storage: how many resources can you persist.
Raw performance: how much memory and CPU you have.
The network connectivity between your master components and etcd store if they are on different nodes.
If they are on the same nodes then you are bound by the server's memory, CPU and storage.
There is no limit on the number of namespaces; you can create as many as you want. A namespace itself doesn't really consume cluster resources such as CPU or memory.
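If you go with one namespace per user, it can help to bound each user's footprint with a ResourceQuota, so the practical ceiling is driven by total pods and objects rather than the namespace count itself. A minimal sketch (the namespace name and limits are hypothetical):

apiVersion: v1
kind: Namespace
metadata:
  name: user-12345          # hypothetical per-user namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: user-quota
  namespace: user-12345
spec:
  hard:
    pods: "20"              # illustrative per-user caps
    requests.cpu: "4"
    requests.memory: 8Gi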

Kubernetes Cluster with different CPU configuration

I have created a K8s cluster of 10 machines with different CPU and memory configurations (4 cores / 32 GB, 4 cores / 8 GB). When I deploy an application on the cluster, it creates the pods in a seemingly random manner; it does not place the pods on the basis of memory or load.
How does the Kubernetes master distribute the Pods in the cluster? I am not finding any significant answers. How can I configure the cluster to make the best use of resources?
Kubernetes uses a scheduler to decide which pod is started on which node. One improvement is to tell the scheduler what your pods need as minimum and maximum resources.
Resources are memory (measured in bytes), CPU (measured in CPU units) and, since 1.11, ephemeral storage for things like emptyDir volumes. When you provide this information for your deployments, Kubernetes can make better decisions about where to run them.
Without this information, an nginx pod is scheduled the same way as a heavy Java application.
Requests and limits are described here. Setting both is a good idea, to make scheduling easier and to avoid pods running amok and using all of a node's resources.
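For example, a container spec with requests and limits could look like the sketch below (the image and values are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # illustrative image
        resources:
          requests:              # what the scheduler uses for placement
            cpu: 250m
            memory: 128Mi
          limits:                # hard ceiling enforced on the node
            cpu: 500m
            memory: 256Mi

The scheduler only places a pod on a node whose remaining allocatable resources cover the pod's requests.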
If this is not enough, there is also the possibility of adding a custom scheduler, which is explained in this documentation.

GCP: Kubernetes engine allocatable resources

According to the documentation, Kubernetes reserves a significant amount of resources on the nodes in the cluster in order to run itself. Are the numbers in the documentation correct or is Google trying to sell me bigger nodes?
Aside: Taking kube-system pods and other reserved resources into account, am I right in saying it's better resource-wise to rent one machine equipped with 15GB of RAM instead of two with 7.5GB of RAM each?
Yes, Kubernetes reserves a significant amount of resources on the nodes, so it is better to take that into account before choosing machine sizes.
You can also deploy custom machine types in GCP. For pricing, you can use this calculator by Google.
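As a rough worked example for the aside, assuming the memory reservation formula currently described in the GKE docs (about 25% of the first 4GB, 20% of the next 4GB, 10% of the next 8GB, plus roughly 100MB for the eviction threshold; check the docs for the exact current values):

One 15GB node:   reserved ≈ 0.25*4 + 0.20*4 + 0.10*7 + 0.1 ≈ 2.6GB  → ≈ 12.4GB allocatable
Two 7.5GB nodes: reserved ≈ (0.25*4 + 0.20*3.5 + 0.1) * 2  ≈ 3.6GB  → ≈ 11.4GB allocatable in total

Under those assumptions the single larger node leaves roughly 1GB more allocatable memory, and it runs only one copy of the per-node system pods. The trade-off is less redundancy: with one node, any node failure takes everything down.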

Kubernetes cluster seems to be unstable

Recently we've experienced issues with both our Non-Production and Production clusters, where the nodes ran into 'System OOM encountered' events.
The nodes within the Non-Production cluster don't seem to be sharing the pods evenly; it looks like one node is running all the pods and taking the whole load.
Also, the Pods are stuck in this status: 'Waiting: ContainerCreating'.
Any help/guidance with the above issues would be greatly appreciated. We are building more and more services in this cluster and want to make sure there are no instability or environment issues, and that proper checks/configuration are in place before we go live.
"I would recommend you manage container compute resources properly within your Kubernetes cluster. When creating a Pod, you can optionally specify how much CPU and memory (RAM) each Container needs to avoid OOM situations.
When Containers have resource requests specified, the scheduler can make better decisions about which nodes to place Pods on. And when Containers have their limits specified, contention for resources on a node can be handled in a specified manner. CPU specifications are in units of cores, and memory is specified in units of bytes.
An event is produced each time the scheduler fails to schedule a pod; use the command below to see these events:
$ kubectl describe pod <pod-name> | grep Events
Also, read the official Kubernetes guide on “Configure Out Of Resource Handling”. Always make sure to:
reserve 10-20% of memory capacity for system daemons like kubelet and OS kernel
identify pods which can be evicted at 90-95% memory utilization to reduce thrashing and incidence of system OOM.
To facilitate this kind of scenario, the kubelet would be launched with options like below:
--eviction-hard=memory.available<xMi
--system-reserved=memory=yGi
Replacing x and y with actual memory values.
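These settings can also be expressed in a kubelet configuration file instead of command-line flags; a minimal sketch with illustrative values:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "500Mi"   # illustrative eviction threshold
systemReserved:
  memory: "1Gi"               # illustrative reservation for OS daemons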
Having Heapster container monitoring in place should be helpful for visualization (note that Heapster has since been deprecated in favor of metrics-server and other monitoring solutions).
Further reading: Kubernetes and Docker Administration.
Unable to mount volumes for pod
"xxx-3615518044-6l1cf_xxx-qa(8a5d9893-230b-11e8-a943-000d3a35d8f4)":
timeout expired waiting for volumes to attach/mount for pod
"xxx-service-3615518044-6l1cf"/"xxx-qa"
That indicates your pod is having trouble mounting the volume specified in your configuration. This is often a permissions issue. If you post your config files (for example as a gist) with private info removed, we could probably be more helpful.
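A few commands that often help narrow down mount problems (the pod and namespace names are placeholders):

kubectl describe pod <pod-name> -n <namespace>      # check the Events section for attach/mount errors
kubectl get pvc -n <namespace>                      # confirm the claimed volumes are actually Bound
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp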