Cache mechanisme in kubernetes [closed] - kubernetes

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed last month.
I've got the following setup:

- Proxmox 7.2
- Ceph 16.2.9
- K3s v1.23.15+k3s1
- Ceph CSI v3.7.2

Ceph is used as RBD storage for the QEMU images and for K8s PVCs. When I run a disk benchmark inside QEMU I get the following results:
| Test | Read (MB/s) | Write (MB/s) |
|---|---|---|
| SEQ1M Q8 T1 | 16122.25 | 5478.27 |
| SEQ1M Q1 T1 | 3180.51 | 2082.51 |
| RND4K Q32T16 | 633.94 | 615.96 |
| RND4K Q32T16 (IOPS) | 154771.09 | 150380.37 |
| RND4K Q32T16 (latency, µs) | 3305.38 | 3401.61 |
| RND4K Q1 T1 | 103.38 | 98.75 |
| RND4K Q1 T1 (IOPS) | 25238.15 | 24109.38 |
| RND4K Q1 T1 (latency, µs) | 39.06 | 40.30 |
But when I run the same benchmark in K8s, the results are worse:
| Test | Read (MB/s) | Write (MB/s) |
|---|---|---|
| SEQ1M Q8 T1 | 810.36 | 861.11 |
| SEQ1M Q1 T1 | 600.29 | 310.13 |
| RND4K Q32T16 | 230.73 | 177.05 |
| RND4K Q32T16 (IOPS) | 56331.27 | 43224.29 |
| RND4K Q32T16 (latency, µs) | 9077.98 | 11831.65 |
| RND4K Q1 T1 | 19.94 | 5.90 |
| RND4K Q1 T1 (IOPS) | 4868.23 | 1440.42 |
| RND4K Q1 T1 (latency, µs) | 204.76 | 692.60 |
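As a sanity check, the 4K throughput and IOPS rows above are mutually consistent (taking MB/s as 10^6 bytes/s and a 4 KiB block size), so the slowdown is real rather than a measurement artifact. A quick check in Python:

```python
# 4K random results from the K8s table: (MB/s, reported IOPS)
results = {
    "RND4K Q32T16 read":  (230.73, 56331.27),
    "RND4K Q32T16 write": (177.05, 43224.29),
    "RND4K Q1 T1 read":   (19.94, 4868.23),
    "RND4K Q1 T1 write":  (5.90, 1440.42),
}

for name, (mbps, iops) in results.items():
    # IOPS derived from throughput: MB/s * 10^6 bytes / 4096-byte blocks
    derived = mbps * 1_000_000 / 4096
    print(f"{name}: reported {iops:.0f} IOPS, derived {derived:.0f}")
```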
I'm using the writeback cache for QEMU; if I disable that cache, the QEMU results look like the K8s ones. Is there a similar writeback mechanism in K8s or Ceph CSI?
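One candidate to check (an assumption on my part, not something benchmarked on this exact setup): ceph-csi's default `krbd` mounter is a kernel client that bypasses librbd, so the librbd writeback cache (`rbd cache = true`) never applies to those volumes. Switching the StorageClass to the `rbd-nbd` mounter routes I/O through librbd, which does honour the `rbd_cache` options from `ceph.conf`. A minimal sketch, with placeholder IDs and secret names:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-nbd
provisioner: rbd.csi.ceph.com
parameters:
  # placeholders: take these from your ceph-csi ConfigMap / secrets
  clusterID: <cluster-id>
  pool: <rbd-pool>
  imageFeatures: layering
  # krbd (the default) has no librbd cache; rbd-nbd maps images
  # through librbd, so rbd_cache settings take effect
  mounter: rbd-nbd
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Note the usual caveat: like QEMU's writeback mode, a client-side writeback cache trades durability on crash for latency.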

Related

Highly available Cloud SQL Postgres: Any downtime when adding a read replica? [closed]

Closed 3 months ago.
I have highly available CloudSQL Postgres instances.
If I add read replicas to each one will it cause any downtime, will it require a restart?
It seems highly likely there is no restart at all, but I could not find anything clear in the GCP Cloud SQL documentation.
According to the Cloud SQL MySQL docs, a restart is required if binary logging is not enabled.
Creating Cloud SQL replicas for Postgres and SQL Server doesn't have this caveat.
Read replicas do NOT require any restart or downtime.
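For reference, adding a Postgres read replica is a single command against the running primary (the instance names below are placeholders):

```shell
# create a read replica of an existing Cloud SQL Postgres primary;
# the primary keeps serving traffic while the replica is seeded
gcloud sql instances create my-replica \
    --master-instance-name=my-primary
```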

How to add nodes if "kubectl get nodes" shows an empty list? [closed]

Closed 4 months ago.
This post was edited and submitted for review 4 months ago and failed to reopen: the original close reason(s) were not resolved.
I am trying to run some installation instructions for a software development environment built on top of K3S.
I am getting the error "no nodes available to schedule pods", which, when Googled, takes me to the question no nodes available to schedule pods - Running Kubernetes Locally with No VM.
The answer to that question tells me to run kubectl get nodes.
And when I do that, it shows me, perhaps not surprisingly, that I don't have any nodes running.
Without having to learn how Kubernetes actually works, how can I start some nodes and get past this error?
This is a local environment running on a single VM (just like the linked question).
It depends on how your K8s was installed. Kubernetes is a complex system that needs multiple correctly configured nodes in order to function.
If no nodes are found for scheduling, my first thought would be that you only have a single node, it's a master node (which runs the control-plane services but not workloads), and you have not attached any worker nodes. You would need to add another node to the cluster, running as a worker, for it to schedule workloads.
If you want to get up and running without understanding it all, there are distributions such as minikube or k3s, which set everything up out of the box and are designed to run on a single machine.
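Since the linked setup is already K3s-based, a single-node K3s install gives you one node that runs both the control plane and workloads (this is the standard K3s installer one-liner; that re-running it is safe on your VM is an assumption about your environment):

```shell
# install a single-node K3s server; by default it also schedules workloads
curl -sfL https://get.k3s.io | sh -

# the node should appear and become Ready within a minute or so
sudo k3s kubectl get nodes
```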

Join a node to kubernetes cluster with only part of its resource(cpu/memory) [closed]

Closed 9 months ago.
The community reviewed whether to reopen this question 9 months ago and left it closed: the original close reason(s) were not resolved.
Say I've got a physical machine with a 32-core CPU and 128Gi of memory.
Is it possible to join it to an existing Kubernetes cluster with only half of its resources, e.g. 16 CPU cores and 64Gi of memory?
I'm essentially expecting a kubeadm command like:

```shell
kubeadm join 10.74.144.255:16443 \
    --token xxx \
    --cpu 16 \
    --mem 64g \
    ...
```

The reason is that this physical machine is shared with another team and I don't want to affect their services running on it.
No, you can't expose only part of a physical machine's resources to Kubernetes this way. What you can do instead is install virtualization software, create a VM with the required CPU/memory on the physical machine, and then join that VM to the Kubernetes cluster.
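As a sketch of that approach (assuming a Linux host with Multipass installed; the VM name, token, and hash are placeholders, and kubeadm/kubelet still have to be installed inside the VM first):

```shell
# create a VM capped at 16 CPUs / 64G RAM on the shared host
multipass launch --name k8s-worker --cpus 16 --memory 64G --disk 100G

# inside the VM (with kubeadm installed), join it to the cluster as usual
multipass exec k8s-worker -- sudo kubeadm join 10.74.144.255:16443 \
    --token xxx \
    --discovery-token-ca-cert-hash sha256:<hash>
```

The other team's services stay outside the VM, so the node can never consume more than the 16 cores / 64G you gave it.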

Kubernetes Storage [closed]

Closed 2 years ago.
We have an Azure Kubernetes cluster with 2 nodes and a 30GB OS disk.
We are deploying multiple applications on the cluster, with 2 replicas (2 pods) per application. Now we are facing an issue with disk space: the 30GB OS disk is full and we have to grow it from 30GB to 60GB.
I tried to increase the disk size to 60GB, but that destroys the whole cluster and recreates it, so we lose all the deployments and have to deploy all the applications again.
How can I overcome the disk-space issue?
There is no way around this, really.
Recreate the cluster with something like a 100GB OS disk (maybe use ephemeral OS disks to cut costs). Alternatively, create a new system node pool, migrate your resources to it, and decommission the old one.
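The node-pool route might look roughly like this with the Azure CLI (resource-group, cluster, and pool names are placeholders):

```shell
# add a new system node pool with a bigger OS disk
az aks nodepool add \
    --resource-group my-rg \
    --cluster-name my-aks \
    --name bigdisk \
    --mode System \
    --node-count 2 \
    --node-osdisk-size 100

# once workloads have rescheduled onto the new pool, remove the old one
az aks nodepool delete \
    --resource-group my-rg \
    --cluster-name my-aks \
    --name nodepool1
```

Because pods are recreated on the new pool rather than the whole cluster being rebuilt, the deployments themselves survive.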

Why Kubernetes system requirements are so high? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I tried different implementations of Kubernetes and realized that a master node requires approximately 2GB of RAM with 2 CPU cores, and a worker node 700MB with 1 core. No single component of k8s seems particularly heavy, yet the total resource requirement is still large.
What is the bottleneck, and is it configurable?
Have you tried Lightweight Kubernetes (K3s)?
A K3s cluster in a single-node configuration can run with 1 CPU and 512MB of RAM.
Hardware requirements scale with the size of your deployments. The minimum recommendations are outlined here:
- RAM: 512MB minimum
- CPU: 1 minimum
If K3s can manage with that, the resource requirements are evidently reducible below the usual values.
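To see where the resources actually go on a running cluster (assuming the metrics-server is available, which K3s ships by default), you can inspect the control-plane components directly:

```shell
# overall usage per node
kubectl top nodes

# per-component usage of the control plane
kubectl top pods -n kube-system
```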