Error couldn't initialize a Kubernetes cluster [closed] - postgresql

While installing the NGINX controller, I am getting the error below during Kubernetes initialisation.
I am using Postgres for the DB.
"error execution phase wait-control-plane:error couldn't initialize a Kubernetes cluster"
Log: Job k8s-ctrl-init failed to complete

Sorted. It was a k8s problem: some files didn't initialise correctly, so I just deleted and re-installed the NGINX controller.
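For reference, a rough sketch of the delete/re-install step, assuming the controller was installed from the community ingress-nginx Helm chart (the release and namespace names below are placeholders, not taken from the question):
# remove the broken release, then install it again from the chart repo
helm uninstall ingress-nginx --namespace ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace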

Related

Highly available Cloud SQL Postgres: Any downtime when adding a read replica? [closed]

I have highly available Cloud SQL Postgres instances.
If I add read replicas to each one, will it cause any downtime? Will it require a restart?
It is highly likely there is no restart at all, but I could not find anything clear in the GCP Cloud SQL documentation.
According to the Cloud SQL MySQL docs, a restart is required if binary logging is not enabled.
Creating Cloud SQL read replicas for Postgres and SQL Server does not have this caveat: read replicas do NOT require any restart or downtime.
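For illustration, adding a Postgres read replica is a single instance-create call; a minimal sketch with the gcloud CLI (instance names and region are placeholders):
gcloud sql instances create prod-pg-replica-1 \
    --master-instance-name=prod-pg \
    --region=us-central1
The primary keeps serving traffic while the replica is provisioned.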

Join a node to kubernetes cluster with only part of its resource(cpu/memory) [closed]

Say I've got a physical machine with a 32-core CPU and 128Gi of memory.
Is it possible to join it to an existing Kubernetes cluster with only half of its resources, e.g. 16 CPU cores and 64Gi of memory?
I am essentially expecting a kubeadm command like:
kubeadm join 10.74.144.255:16443 \
    --token xxx \
    --cpu 16 \
    --mem 64g \
    ...
The reason is that this physical machine is shared with another team, and I don't want to affect their services running on it.
No, you can't hand Kubernetes only part of a physical machine's resources this way. What you can do instead is install virtualization software, create VMs sized with the required CPU/memory on the physical machine, and then join those VMs to the Kubernetes cluster, as sketched below.
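A rough sketch of that approach, assuming KVM/libvirt as the hypervisor (any virtualization stack works; the VM name, ISO path and disk size are placeholders):
# create a VM capped at 16 vCPUs / 64Gi of RAM on the shared machine
virt-install \
    --name k8s-worker-1 \
    --vcpus 16 \
    --memory 65536 \
    --disk size=100 \
    --cdrom /var/lib/libvirt/images/ubuntu-22.04-live-server-amd64.iso \
    --os-variant ubuntu22.04

# then, inside the VM (after installing a container runtime, kubelet and kubeadm):
kubeadm join 10.74.144.255:16443 --token xxx --discovery-token-ca-cert-hash sha256:...
The VM boundary is what enforces the 16-core/64Gi cap, so the other team's workloads on the host are not affected.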

Kubernetes Storage [closed]

We have an Azure Kubernetes cluster with 2 nodes and a 30 GB OS disk.
We are deploying multiple applications on the cluster, with 2 replicas (2 pods) per application. Now we are facing a disk space issue: the 30 GB OS disk is full, and we need to grow it from 30 GB to 60 GB.
I tried to increase the disk size to 60 GB, but that destroys and recreates the whole cluster, so we lose all the deployments and have to deploy all the applications again.
How can I overcome the disk space issue?
There is no real way around this on the existing node pool.
Recreate the cluster with something like a 100 GB OS disk (and consider ephemeral OS disks to cut costs). Alternatively, create a new system node pool with a larger OS disk, migrate the workloads to it, and decommission the old one, for example:
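A hedged sketch of the new-node-pool route with the Azure CLI (resource group, cluster and pool names are placeholders):
# add a second system node pool with a bigger OS disk
az aks nodepool add \
    --resource-group my-rg \
    --cluster-name my-aks \
    --name systempool2 \
    --mode System \
    --node-count 2 \
    --node-osdisk-size 100

# drain the old nodes so pods reschedule onto the new pool, e.g.
# kubectl drain <old-node> --ignore-daemonsets --delete-emptydir-data

# then remove the old pool
az aks nodepool delete \
    --resource-group my-rg \
    --cluster-name my-aks \
    --name nodepool1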

Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition [closed]

I am testing a pre-upgrade hook that just runs a bash script which prints a string and sleeps for 10 minutes. When I run helm upgrade, it runs for a while and then exits with the error in the title. I checked the job with kubectl and it was still running. Any idea how to get rid of the error?
Thanks
The script in the container that the job runs:
#!/bin/bash
echo "Sleeping for testing..."
sleep 600
Pass --timeout to your helm command (it applies to install, upgrade and rollback) to set the timeout you need; the default is 5m0s, which is shorter than your 10-minute sleep.
$ helm install <name> <chart> --timeout 10m30s
--timeout: A value in seconds to wait for Kubernetes commands to complete. This defaults to 5m0s (5 minutes).
Helm documentation: https://helm.sh/docs/intro/using_helm/#helpful-options-for-installupgraderollback
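For the upgrade in the question itself, the same flag would look something like this (release and chart names are placeholders; 15m simply gives the 10-minute hook some headroom):
$ helm upgrade my-release ./my-chart --timeout 15m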

Trying to collect heap dump from kubernetes POD [closed]

In one of our environments, a few Kubernetes pods are restarting very frequently, and we are trying to find the reason by collecting heap and thread dumps.
Any idea how to collect those if the pods are failing so often?
You can try mounting a host volume into the pod and then configuring your app to write its dumps to the path that is mapped to the host volume, or use any other way to save the heap dump in a persistent place.
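Assuming the app is a JVM (heap and thread dumps suggest so), a rough sketch of that idea; the deployment/pod names and the /dumps path are placeholders, and /dumps is assumed to be backed by a hostPath or PVC volume in the pod spec:
# make the JVM write a heap dump on OOM into the persistent /dumps directory
kubectl set env deployment/my-app \
    JAVA_TOOL_OPTIONS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps"

# while a pod is still alive, dumps can also be taken on demand
# (assumes the JVM runs as PID 1 in the container)
kubectl exec my-app-pod -- jmap -dump:live,format=b,file=/dumps/heap.hprof 1
kubectl exec my-app-pod -- jstack 1 > thread-dump.txt

# copy a dump out; the volume keeps it around across container restarts
kubectl cp my-app-pod:/dumps/heap.hprof ./heap.hprof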