ExitCode:137 when executing a postStart command in Kubernetes - postgresql

I'm trying to create a pod with Postgres. After initializing, the pod has to execute the following command:
"lifecycle": {
  "postStart": {
    "exec": {
      "command": [
        "export", "PGPASSWORD=password;", "psql", "-h", "myhost", "-U", "root", "-d", "AppPostgresDB", "<", "/db-backup/backup.sql"
      ]
    }
  }
},
Without this command the pod works perfectly.
I get the following status:
NAME READY STATUS RESTARTS AGE
postgres-import 0/1 ExitCode:137 0 15s
I get these events:
Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} created Created with docker id cfa5f8177beb
Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id 15ad0166af04
Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} started Started with docker id cfa5f8177beb
Mon, 16 Nov 2015 16:13:00 +0100 Mon, 16 Nov 2015 16:13:00 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id cfa5f8177beb
Mon, 16 Nov 2015 16:13:00 +0100 Mon, 16 Nov 2015 16:13:00 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} created Created with docker id d910391582e9
Mon, 16 Nov 2015 16:13:01 +0100 Mon, 16 Nov 2015 16:13:01 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} started Started with docker id d910391582e9
Mon, 16 Nov 2015 16:13:11 +0100 Mon, 16 Nov 2015 16:13:11 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id d910391582e9
What can I do to solve this issue?
Thanks

The exec handler runs the command array directly, not through a shell, so shell builtins like export and the < redirection operator are never interpreted. Try passing the whole thing to a shell instead:
"command": [
  "/bin/bash", "-c", "export PGPASSWORD=password; psql -h myhost -U root -d AppPostgresDB < /db-backup/backup.sql"
]
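Why the original array fails: the exec hook runs its first element as an executable directly, with no shell in between. export is a shell builtin, not a binary, and < is shell redirection syntax, so the hook errors out and the kubelet kills the container (137 = 128 + SIGKILL). A minimal sketch of the difference, with the psql invocation replaced by echo so it runs without a database (the host and paths are just the ones from the question):

```shell
# Handing the whole pipeline to a shell restores builtins, variable
# assignment, and redirection; without `sh -c`, "export" and "<" would
# just be literal argv entries:
/bin/sh -c 'export PGPASSWORD=password; echo "would run: psql -h myhost -U root -d AppPostgresDB < /db-backup/backup.sql"'
```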

Related

How do I change Kubernetes DiskPressure status from true to false?

After creating a simple nginx deployment, my pod status shows as "PENDING". When I run kubectl get pods command, I get the following:
NAME READY STATUS RESTARTS AGE
nginx-deployment-6b474476c4-dq26w 0/1 Pending 0 50m
nginx-deployment-6b474476c4-wjblx 0/1 Pending 0 50m
If I check on my node health, I get:
Taints: node.kubernetes.io/disk-pressure:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: kubernetes-master
AcquireTime: <unset>
RenewTime: Wed, 05 Aug 2020 12:43:57 +0530
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Wed, 05 Aug 2020 09:12:31 +0530 Wed, 05 Aug 2020 09:12:31 +0530 CalicoIsUp Calico is running on this node
MemoryPressure False Wed, 05 Aug 2020 12:43:36 +0530 Tue, 04 Aug 2020 23:01:43 +0530 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure True Wed, 05 Aug 2020 12:43:36 +0530 Tue, 04 Aug 2020 23:02:06 +0530 KubeletHasDiskPressure kubelet has disk pressure
PIDPressure False Wed, 05 Aug 2020 12:43:36 +0530 Tue, 04 Aug 2020 23:01:43 +0530 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 05 Aug 2020 12:43:36 +0530 Tue, 04 Aug 2020 23:02:06 +0530 KubeletReady kubelet is posting ready status. AppArmor enabled
You can remove the taint for disk pressure using the command below, but ideally you should investigate why the kubelet is reporting disk pressure. The node may be out of disk space.
kubectl taint nodes <nodename> node.kubernetes.io/disk-pressure-
This will get you out of pending state of the nginx pods.
@manjeet,
What's the output of df -kh on the node?
Find the disk/partition/PV that is under pressure and increase it. Then restart the kubelet. Then remove the taint. Things should work.
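Before removing the taint, it is worth confirming the disk usage the kubelet is reacting to. A small sketch (the /var/lib/kubelet path is the usual default and an assumption about this node's layout; the kubelet reports DiskPressure when the node filesystem runs low on free space):

```shell
# Show usage of the kubelet's data directory, falling back to the
# root filesystem if that path does not exist on this machine:
df -h /var/lib/kubelet 2>/dev/null || df -h /
```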

StatefulSet breaking Kafka on worker reboot (unordered start)

In a worker-node reboot scenario (Kubernetes 1.14.3), does the order in which StatefulSet pods start matter? I have a Confluent Kafka (5.5.1) situation where member 1 starts well before 0 and a bit ahead of 2, and as a result I see a lot of crashes on 0. Is there some mechanism here that breaks things? Startup is ordinal and deletion is reversed, but what happens when the order is broken?
Started: Sun, 02 Aug 2020 00:52:54 +0100 kafka-0
Started: Sun, 02 Aug 2020 00:50:25 +0100 kafka-1
Started: Sun, 02 Aug 2020 00:50:26 +0100 kafka-2
Started: Sun, 02 Aug 2020 00:28:53 +0100 zk-0
Started: Sun, 02 Aug 2020 00:50:29 +0100 zk-1
Started: Sun, 02 Aug 2020 00:50:19 +0100 zk-2

How do I find out what image is running in a Kubernetes VM on GCE?

I've created a Kubernetes cluster in Google Compute Engine using cluster/kube-up.sh. How can I find out what Linux image GCE used to create the virtual machines? I've logged into some nodes using SSH and the usual commands (uname -a etc) don't tell me.
The default config file at kubernetes/cluster/gce/config-default.sh doesn't seem to offer any clues.
It uses something called the Google Container-VM Image. Check out the blog post announcing it here:
https://cloudplatform.googleblog.com/2016/09/introducing-Google-Container-VM-Image.html
There are two simple ways to look at it:
In the Kubernetes GUI-based dashboard, click on the nodes.
From the command line of the Kubernetes master node, use kubectl describe pods/{pod-name}
(Make sure to select the correct namespace, if you are using any.)
Here is a sample output; please look at the "Image" field of the output:
kubectl describe pods/fedoraapache
Name: fedoraapache
Namespace: default
Image(s): fedora/apache
Node: 127.0.0.1/127.0.0.1
Labels: name=fedoraapache
Status: Running
Reason:
Message:
IP: 172.17.0.2
Replication Controllers: <none>
Containers:
fedoraapache:
Image: fedora/apache
State: Running
Started: Thu, 06 Aug 2015 03:38:37 -0400
Ready: True
Restart Count: 0
Conditions:
Type Status
Ready True
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 06 Aug 2015 03:38:35 -0400 Thu, 06 Aug 2015 03:38:35 -0400 1 {scheduler } scheduled Successfully assigned fedoraapache to 127.0.0.1
Thu, 06 Aug 2015 03:38:35 -0400 Thu, 06 Aug 2015 03:38:35 -0400 1 {kubelet 127.0.0.1} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
Thu, 06 Aug 2015 03:38:36 -0400 Thu, 06 Aug 2015 03:38:36 -0400 1 {kubelet 127.0.0.1} implicitly required container POD created Created with docker id 98aeb13c657b
Thu, 06 Aug 2015 03:38:36 -0400 Thu, 06 Aug 2015 03:38:36 -0400 1 {kubelet 127.0.0.1} implicitly required container POD started Started with docker id 98aeb13c657b
Thu, 06 Aug 2015 03:38:37 -0400 Thu, 06 Aug 2015 03:38:37 -0400 1 {kubelet 127.0.0.1} spec.containers{fedoraapache} created Created with docker id debe7fe1ff4f
Thu, 06 Aug 2015 03:38:37 -0400 Thu, 06 Aug 2015 03:38:37 -0400 1 {kubelet 127.0.0.1} spec.containers{fedoraapache} started Started with docker id debe7fe1ff4f
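If all you have is a saved describe dump rather than a live cluster, the image can also be pulled out of the text. A small sketch, under the assumption that the dump keeps the "Image(s):" line format shown above (the inlined sample is an abbreviated copy of that output):

```shell
# Extract the image name from saved `kubectl describe pod` output:
cat <<'EOF' > /tmp/describe.txt
Name: fedoraapache
Namespace: default
Image(s): fedora/apache
Node: 127.0.0.1/127.0.0.1
EOF
awk '/^Image\(s\):/ {print $2}' /tmp/describe.txt
# → fedora/apache
```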

Kubernetes pod on Google Container Engine continually restarts, is never ready

I'm trying to get a ghost blog deployed on GKE, working off of the persistent disks with WordPress tutorial. I have a working container that runs fine manually on a GKE node:
docker run -d --name my-ghost-blog -p 2368:2368 -d us.gcr.io/my_project_id/my-ghost-blog
I can also correctly create a pod using the following method from another tutorial:
kubectl run ghost --image=us.gcr.io/my_project_id/my-ghost-blog --port=2368
When I do that I can curl the blog on the internal IP from within the cluster, and get the following output from kubectl get pod:
Name: ghosty-nqgt0
Namespace: default
Image(s): us.gcr.io/my_project_id/my-ghost-blog
Node: very-long-node-name/10.240.51.18
Labels: run=ghost
Status: Running
Reason:
Message:
IP: 10.216.0.9
Replication Controllers: ghost (1/1 replicas created)
Containers:
ghosty:
Image: us.gcr.io/my_project_id/my-ghost-blog
Limits:
cpu: 100m
State: Running
Started: Fri, 04 Sep 2015 12:18:44 -0400
Ready: True
Restart Count: 0
Conditions:
Type Status
Ready True
Events:
...
The problem arises when I instead try to create the pod from a yaml file, per the Wordpress tutorial. Here's the yaml:
metadata:
  name: ghost
  labels:
    name: ghost
spec:
  containers:
  - image: us.gcr.io/my_project_id/my-ghost-blog
    name: ghost
    env:
    - name: NODE_ENV
      value: production
    - name: VIRTUAL_HOST
      value: myghostblog.com
    ports:
    - containerPort: 2368
When I run kubectl create -f ghost.yaml, the pod is created, but is never ready:
> kubectl get pod ghost
NAME READY STATUS RESTARTS AGE
ghost 0/1 Running 11 3m
The pod continuously restarts, as confirmed by the output of kubectl describe pod ghost:
Name: ghost
Namespace: default
Image(s): us.gcr.io/my_project_id/my-ghost-blog
Node: very-long-node-name/10.240.51.18
Labels: name=ghost
Status: Running
Reason:
Message:
IP: 10.216.0.12
Replication Controllers: <none>
Containers:
ghost:
Image: us.gcr.io/my_project_id/my-ghost-blog
Limits:
cpu: 100m
State: Running
Started: Fri, 04 Sep 2015 14:08:20 -0400
Ready: False
Restart Count: 10
Conditions:
Type Status
Ready False
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Fri, 04 Sep 2015 14:03:20 -0400 Fri, 04 Sep 2015 14:03:20 -0400 1 {scheduler } scheduled Successfully assigned ghost to very-long-node-name
Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD created Created with docker id dbbc27b4d280
Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD started Started with docker id dbbc27b4d280
Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} created Created with docker id ceb14ba72929
Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id ceb14ba72929
Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id 0b8957fe9b61
Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} created Created with docker id 0b8957fe9b61
Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} created Created with docker id edaf0df38c01
Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id edaf0df38c01
Fri, 04 Sep 2015 14:03:50 -0400 Fri, 04 Sep 2015 14:03:50 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id d33f5e5a9637
...
This cycle of created/started goes on forever if I don't kill the pod. The only difference from the successful pod is the lack of a replication controller. I don't expect this to be the problem, because the tutorial mentions nothing about RCs.
Why is this happening? How can I create a successful pod from config file? And where would I find more verbose logs about what is going on?
If the same Docker image works via kubectl run but not in a pod created from the file, then something is wrong with the pod spec. Compare the full output of the pod created from the spec with the one created by the rc to see what differs, by running kubectl get pods <name> -o yaml for both. Shot in the dark: is it possible the env vars specified in the pod spec are causing it to crash on startup?
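The comparison suggested above can be done mechanically with diff. A sketch using two stub files in place of real kubectl get pod <name> -o yaml dumps (their contents are hypothetical fragments, chosen so the env difference stands out):

```shell
# Stand-ins for the dumped specs of the two pods:
cat <<'EOF' > /tmp/pod-from-run.yaml
spec:
  containers:
  - image: us.gcr.io/my_project_id/my-ghost-blog
EOF
cat <<'EOF' > /tmp/pod-from-file.yaml
spec:
  containers:
  - image: us.gcr.io/my_project_id/my-ghost-blog
    env:
    - name: NODE_ENV
      value: production
EOF
# diff exits non-zero when the files differ, hence `|| true`:
diff /tmp/pod-from-run.yaml /tmp/pod-from-file.yaml || true
```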
Maybe you could use a different restartPolicy in the yaml file?
What you have, I believe, is equivalent to
restartPolicy: Never
and no replication controller. You may try adding this line to the yaml and setting it to Always (and this will provide you with RC-like behavior), or to OnFailure.
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pod-states.md#restartpolicy
Container logs may also be useful; check them with kubectl logs.
Usage:
kubectl logs [-p] POD [-c CONTAINER]
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_logs.html

kubernetes pod status always "pending"

I am following the Fedora getting started guide (https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/fedora/fedora_ansible_config.md) and trying to run the pod fedoraapache. But kubectl always shows fedoraapache as pending:
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
fedoraapache fedoraapache fedora/apache 192.168.226.144/192.168.226.144 name=fedoraapache Pending
Since it is pending, I cannot run kubectl log pod fedoraapache. So I instead run kubectl describe pod fedoraapache, which shows the following errors:
Fri, 20 Mar 2015 22:00:05 +0800 Fri, 20 Mar 2015 22:00:05 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id d4877bdffd4f2a13a17d4cc93c27c1c93d5494807b39ee8a823f5d9350e404d4
Fri, 20 Mar 2015 22:00:05 +0800 Fri, 20 Mar 2015 22:00:05 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container d4877bdffd4f2a13a17d4cc93c27c1c93d5494807b39ee8a823f5d9350e404d4: (exit status 1)
Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747
Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747 with error: API error (500): Cannot start container 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747: (exit status 1)
Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747: (exit status 1)
Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e: (exit status 1)
Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e
Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e with error: API error (500): Cannot start container 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e: (exit status 1)
Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614 with error: API error (500): Cannot start container 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614: (exit status 1)
Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614
Fri, 20 Mar 2015 21:42:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 109 {kubelet 192.168.226.144} implicitly required container POD pulled Successfully pulled image "kubernetes/pause:latest"
Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614: (exit status 1)
There are several reasons a container can fail to start:
The container command itself fails and exits -> check your Docker image and startup script to make sure they work. Use sudo docker ps -a to find the offending container and sudo docker logs <container> to check for failures inside the container.
A dependency is not there: that happens, for example, when one tries to mount a volume that is not present, such as Secrets that have not been created yet. --> make sure the dependent volumes are created.
The kubelet is unable to start the container we use for holding the network namespace. Some things to try:
Can you manually pull and run gcr.io/google_containers/pause:0.8.0? (This is the image used for the network-namespace container at head right now.)
As mentioned already, /var/log/kubelet.log should have more detail, but the log location is distro-dependent, so check https://github.com/GoogleCloudPlatform/kubernetes/wiki/Debugging-FAQ#checking-logs.
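When the event log repeats like the one in the question, condensing it to the distinct underlying errors first makes the checklist above quicker to work through. A hedged sketch over a saved event dump (the sample lines are abbreviated copies of the events above):

```shell
# Collapse repeated failedSync events down to the distinct errors:
cat <<'EOF' > /tmp/events.txt
failedSync Error syncing pod, skipping: API error (500): Cannot start container d4877bdf: (exit status 1)
failedSync Error syncing pod, skipping: API error (500): Cannot start container 1c32b4c6: (exit status 1)
EOF
grep -o 'exit status [0-9]*' /tmp/events.txt | sort -u
# → exit status 1
```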
Step one is to describe the pod and see the problems:
$ kubectl describe pod <pod name>
Or, if you are running pods on the master node, you can allow the master to schedule pods by removing its taint:
$ kubectl taint nodes --all node-role.kubernetes.io/master-
Check if the kubelet is running on your machine. I came across this problem once and discovered that the kubelet was not running, which explained why the pod status was stuck at "Pending". The kubelet runs as a systemd service in my environment, so if that is also the case for you, the following command will help you check its status:
systemctl status kubelet