In one of our environments, a few Kubernetes pods are restarting very frequently, and we are trying to find the reason by collecting heap and thread dumps.
Any idea how to collect those if the pods are failing this often?
You can try mounting a host volume into the pod and then configuring your app to write the dump to the path that is mapped to the host volume, or use any other way to save the heap dump to a persistent place.
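For example, here is a minimal sketch of that approach for a JVM-based app; the image name, the /dumps mount path, and the /var/dumps host path are placeholders, and a PersistentVolumeClaim could stand in for the hostPath if you don't want to depend on the node's filesystem:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myregistry/myapp:latest     # placeholder image
      env:
        # Ask the JVM to write a heap dump to the mounted path when it runs out of memory
        - name: JAVA_TOOL_OPTIONS
          value: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps"
      volumeMounts:
        - name: dumps
          mountPath: /dumps
  volumes:
    - name: dumps
      hostPath:
        path: /var/dumps                 # survives container restarts on that node
        type: DirectoryOrCreate
EOF

The dump files then remain on the node's filesystem even after the pod restarts, so you can collect them afterwards.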
Say I've got a physical machine with 32 CPU cores and 128 GiB of memory.
Is it possible to join it to an existing Kubernetes cluster with only half of its resources, say 16 CPU cores and 64 GiB of memory?
I am actually expecting a kubeadm command like:
kubeadm join 10.74.144.255:16443 \
--token xxx \
--cpu 16 \
--mem 64g \
...
The reason is that this physical machine is shared with another team, and I don't want to affect their services running on it.
No, you can't partition the resources of a physical machine that way. What you can do instead is install virtualization software and create VMs with the required CPU/memory on the physical machine, then join those VMs to the Kubernetes cluster.
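A rough sketch of that, using libvirt's virt-install as one example hypervisor tool (the VM name, disk size, and ISO path are placeholders, and the join flags mirror the ones from the question):

# Carve a 16-vCPU / 64 GiB VM out of the physical machine
virt-install \
  --name k8s-worker-1 \
  --vcpus 16 \
  --memory 65536 \
  --disk size=100 \
  --cdrom /path/to/distro.iso \
  --os-variant generic

# Inside the VM: install a container runtime plus kubelet/kubeadm, then join as usual
kubeadm join 10.74.144.255:16443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

This way the other team keeps the remaining 16 cores and 64 GiB on the host, and Kubernetes only ever sees the VM.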
We have an Azure Kubernetes cluster with 2 nodes and a 30 GB OS disk.
We are deploying multiple applications on the cluster, with 2 replicas (2 pods) per application. Now we are facing a disk space issue: the 30 GB OS disk is full, and we need to grow it to 60 GB.
I tried to increase the disk size to 60 GB, but that destroys the whole cluster and recreates it, so we lose all the deployments and have to deploy every application again.
How can I overcome this disk space issue?
There is no way to resize the OS disk of an existing node pool in place.
Recreate the cluster with something like a 100 GB OS disk (maybe use ephemeral OS disks to cut costs). Alternatively, create a new system node pool with a larger OS disk, migrate the workloads to it, and decommission the old one.
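If you go the second route, it can be done without recreating the cluster; a sketch with placeholder names (resource group myRG, cluster myAKS, old pool nodepool1):

# Add a new system node pool with a bigger OS disk
az aks nodepool add \
  --resource-group myRG \
  --cluster-name myAKS \
  --name systempool2 \
  --mode System \
  --node-count 2 \
  --node-osdisk-size 100

# Move workloads off the old pool, then remove it
kubectl cordon -l agentpool=nodepool1
kubectl drain -l agentpool=nodepool1 --ignore-daemonsets --delete-emptydir-data
az aks nodepool delete --resource-group myRG --cluster-name myAKS --name nodepool1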
While installing the Nginx controller, I am getting the error below during Kubernetes initialisation.
I am using Postgres for the DB.
"error execution phase wait-control-plane: error couldn't initialize a Kubernetes cluster"
Log: Job k8s-ctrl-init failed to complete
Sorted. It was a Kubernetes problem; some files didn't initialize correctly, so I just deleted and re-installed the Nginx controller.
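For anyone else hitting the wait-control-plane failure: the usual recovery is to check the kubelet logs, then reset and re-run the init. A sketch (the init flags are whatever you used originally):

# See why the control plane containers never came up
journalctl -u kubelet -f

# Tear down the half-initialized control plane and retry
sudo kubeadm reset -f
sudo kubeadm init --pod-network-cidr 10.244.0.0/16   # plus your original flags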
I tried different implementations of Kubernetes and found that a master node requires approximately 2 GB of RAM and 2 CPU cores, while a worker node needs about 700 MB and 1 core. No individual k8s component seems to be heavily loaded, yet the whole thing still requires a lot of resources.
What is the bottleneck, and is it configurable?
Have you tried Lightweight Kubernetes (K3s)?
A K3s cluster in a single-node configuration can run with 1 CPU and 512 MB of RAM.
Hardware requirements scale with the size of your deployments; the minimum recommendations are outlined here:
RAM: 512 MB minimum
CPU: 1 minimum
If K3s can do it, then the resource usage is evidently configurable down to lower values.
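For reference, a single-node K3s install is a one-liner (taken from the K3s quick start; the server IP and token for extra nodes are placeholders):

# Single node acting as both control plane and worker
curl -sfL https://get.k3s.io | sh -
sudo k3s kubectl get nodes

# Optionally join an additional agent node later
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -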
Which solution is generally the best for small to mid-size MongoDB instances?
1) Running the DB as a service, e.g. mLab or Atlas
2) Running it in a Docker container on AWS, Google Cloud, or Azure
3) Running the DB in a virtual machine, Linux or Windows
The DB size is approximately:
15 GB on disk
1M documents
10K writes and 1M reads per day
We have been running the DB in a virtual machine on our own hardware for some time. Now we would like to move to a cloud-based solution and stop worrying about hardware failure.
Running any kind of database in a container is not a great idea.
I would run it as a normal service in a VM.