Why are Kubernetes system requirements so high? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I tried different implementations of Kubernetes and found that a master node requires approximately 2 GB of RAM and 2 CPU cores, and a worker node about 700 MB and 1 core. No individual k8s component seems particularly heavily loaded, yet the cluster still requires a lot of resources.
What is the bottleneck, and is it configurable?

Have you tried Lightweight Kubernetes (K3s)?
A K3s cluster in a single-node configuration can run with 1 CPU and 512 MB of RAM.
Hardware requirements scale with the size of your deployments. Minimum recommendations are outlined here.
RAM: 512 MB minimum
CPU: 1 core minimum
If K3s can do it, the resource requirements are clearly configurable down to lower values.
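A minimal sketch of trying that out, assuming a single-node install from the official K3s install script; the --disable flags drop optional bundled components (the Traefik ingress and the service load balancer) to keep the footprint small and should be checked against the K3s version you install:

# Single-node K3s server with optional bundled components disabled
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik --disable servicelb" sh -

# Verify the node is Ready and see what the server process actually consumes
sudo k3s kubectl get nodes
systemctl status k3s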

Related

Cache mechanism in Kubernetes [closed]

Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed last month.
I've got the following setup:
Proxmox 7.2
CEPH 16.2.9
K3S v1.23.15+k3s1
CEPH CSI v3.7.2
Ceph is used as RBD storage for both the QEMU images and the K8s PVCs. When I run a disk benchmark in QEMU, I get the following results:
Name             Read (MB/s)   Write (MB/s)
SEQ1M Q8 T1      16122.25      5478.27
SEQ1M Q1 T1      3180.51       2082.51
RND4K Q32T16     633.94        615.96
  IOPS           154771.09     150380.37
  latency (us)   3305.38       3401.61
RND4K Q1 T1      103.38        98.75
  IOPS           25238.15      24109.38
  latency (us)   39.06         40.30
But when I run the same benchmark in K8s, the results are worse:
Name             Read (MB/s)   Write (MB/s)
SEQ1M Q8 T1      810.36        861.11
SEQ1M Q1 T1      600.29        310.13
RND4K Q32T16     230.73        177.05
  IOPS           56331.27      43224.29
  latency (us)   9077.98       11831.65
RND4K Q1 T1      19.94         5.90
  IOPS           4868.23       1440.42
  latency (us)   204.76        692.60
I'm using a writeback cache for QEMU. If I disable the cache, the results look like the K8s ones. Is there a similar writeback mechanism in K8s or Ceph CSI?
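For reference, librbd has a client-side writeback cache ("rbd cache" in the Ceph client configuration), and ceph-csi can be routed through librbd by choosing the rbd-nbd mounter instead of the default kernel client. The StorageClass below is only a trimmed sketch with placeholder values; parameter names should be checked against the ceph-csi v3.7.x documentation, and the required secret parameters are omitted:

# Hypothetical StorageClass using the rbd-nbd mounter so volumes go through
# librbd (which honours "rbd cache" writeback settings) instead of krbd.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-nbd            # illustrative name
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>       # placeholder: your Ceph cluster ID
  pool: <rbd-pool>              # placeholder: your RBD pool
  mounter: rbd-nbd              # map images via rbd-nbd/librbd instead of krbd
  # csi.storage.k8s.io/*-secret-name/-namespace parameters omitted for brevity
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF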

Join a node to a Kubernetes cluster with only part of its resources (CPU/memory) [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 months ago.
The community reviewed whether to reopen this question 9 months ago and left it closed:
Original close reason(s) were not resolved
Say I've got a physical machine with a 32-core CPU and 128 Gi of memory.
Is it possible to join it to an existing Kubernetes cluster with only half of its resources, e.g. 16 CPU cores and 64 Gi of memory?
I'm expecting a kubeadm command along the lines of:
kubeadm join 10.74.144.255:16443 \
  --token xxx \
  --cpu 16 \
  --mem 64g \
  ...
The reason is that this physical machine is shared with another team, and I don't want to affect their services on it.
No, you can't hand only part of a physical machine's resources to Kubernetes. What you can do instead is install virtualization software, create VMs with the required CPU/memory on the physical machine, and then join those VMs to the Kubernetes cluster.
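A related knob, though not a hard cap: the kubelet's systemReserved setting subtracts CPU/memory from the node's allocatable, so the scheduler only places roughly the remaining amount of pods on it; it does not stop non-Kubernetes processes from using more. A sketch, assuming a kubeadm-joined node whose kubelet config lives at /var/lib/kubelet/config.yaml and does not already set these fields:

# Reserve half the machine so Kubernetes schedules against ~16 cores / 64Gi only
cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
systemReserved:
  cpu: "16"
  memory: "64Gi"
EOF
sudo systemctl restart kubelet

# Confirm the reduced allocatable resources
kubectl describe node <node-name> | grep -A 5 Allocatable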

Kubernetes Storage [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
We have an Azure Kubernetes Service cluster with 2 nodes and a 30 GB OS disk per node.
We are deploying multiple applications on the cluster, with 2 replicas (2 pods) per application. Now we are facing a disk space issue: the 30 GB OS disk is full, and we need to grow it from 30 GB to 60 GB.
I tried to increase the disk size to 60 GB, but that destroys and recreates the whole cluster, so we lose all the deployments and have to deploy all the applications again.
How can I overcome the disk space issue?
There is no way around this, really.
Recreate the cluster with something like a 100 GB OS disk (and maybe use ephemeral OS disks to cut costs). Alternatively, create a new system node pool with a larger OS disk, migrate workloads to it, and decommission the old one.
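A sketch of the node-pool route with the Azure CLI; the resource group, cluster, and pool names are placeholders, and ephemeral OS disks require a VM size whose cache/temp storage can hold the chosen disk size:

# Add a new system node pool with a bigger (ephemeral) OS disk
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name systempool2 \
  --node-count 2 \
  --node-osdisk-size 100 \
  --node-osdisk-type Ephemeral \
  --mode System

# Drain the old nodes so workloads reschedule onto the new pool, then remove the old pool
kubectl drain <old-node-name> --ignore-daemonsets --delete-emptydir-data
az aks nodepool delete \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name <old-pool-name>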

Trying to collect a heap dump from a Kubernetes pod [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
In one of our environments, a few Kubernetes pods are restarting very frequently, and we are trying to find the reason by collecting heap and thread dumps.
Any idea how to collect those when the pods fail so often?
You can try mounting a host volume into the pod and then configuring your app to write its dumps to the path mapped to the host volume, or use any other way to save the heap dump in a persistent place.
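A sketch of that idea for a JVM app (an assumption, since heap and thread dumps are mentioned): mount a hostPath volume at a dump directory and have the JVM write a heap dump on OutOfMemoryError before the container restarts, so the file survives on the node. Names, image, and paths are illustrative:

# Hypothetical pod with a hostPath dump directory and JVM dump-on-OOM flags
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp-debug            # illustrative name
spec:
  containers:
  - name: myapp
    image: myapp:latest        # placeholder image
    env:
    - name: JAVA_TOOL_OPTIONS  # standard hook for extra JVM options
      value: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps"
    volumeMounts:
    - name: dumps
      mountPath: /dumps
  volumes:
  - name: dumps
    hostPath:
      path: /var/tmp/heapdumps # dumps survive pod restarts on this node
      type: DirectoryOrCreate
EOF

# While the pod is still up, a thread dump can be taken (assumes a JDK in the
# image and the JVM running as PID 1):
kubectl exec myapp-debug -c myapp -- jstack 1 > threads.txt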

MongoDB running as a service, in a container, or in a virtual machine [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Which solution is generally best for small to mid-size MongoDB instances?
1) Running the DB as a service, e.g. mLab or Atlas
2) Running it in a Docker container on AWS, Google Cloud, or Azure
3) Running the DB in a virtual machine, Linux or Windows
The DB size is approximately:
15 GB on disk
1M documents
10K writes and 1M reads per day
We have been running the DB in a virtual machine on our own hardware for some time. Now we would like to move to a cloud-based solution and stop worrying about hardware failure.
Running any kind of database in a container is not a great idea.
I would run it as a normal service in a VM.