Kubectl logs always empty - kubernetes

I'm running K3s on multiple Raspberry Pis, which works fine except for showing logs.
kubectl logs <pod-name> is always empty.
For testing, I'm running busybox:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example
    image: busybox
    args: [/bin/sh, -c, 'while true; do echo $(date); sleep 1; done']
The pod is running, but still no logs.
I suspect log2ram, which I installed to avoid wearing out my SD cards in the long run.
However, I can't figure out why this happens or how to fix it.

Just found out that log2ram was full.
Clearing the /var/log folder on the node hosting the pod resolved the problem.
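For anyone hitting the same symptom, a quick way to confirm that a full log2ram mount is the culprit is to check how much space is left under /var/log on the node running the pod. A rough sketch, assuming a default log2ram install that mounts /var/log as tmpfs and ships a log2ram systemd service:
# on the node hosting the pod
df -h /var/log                     # a 100% full tmpfs here points at log2ram
sudo du -sh /var/log/*             # find what is eating the space
sudo journalctl --vacuum-size=50M  # shrink journald logs if they are the offender
sudo systemctl restart log2ram     # re-sync the mount after freeing space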

Related

Kubernetes Pod with Sleep command takes time to get deleted

Currently it takes quite a long time for the pod to terminate after a kubectl delete command. I have the feeling that it could be because of the sleep command.
How can I make the container stop faster?
What best practices should I use here?
apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    spec:
      containers:
      - image: alpine
        ..
        command:
        - /bin/sh
        - -c
        - |
          trap : TERM INT
          while true; do
            # some code to check something
            sleep 10
          done
Is my approach with "trap : TERM INT" correct? At the moment I don't see any positive effect...
When I terminate the pod it takes several seconds for the command to come back.
kubectl delete pod my-pod
Adding terminationGracePeriodSeconds to your spec will do:
...
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 10 # <-- default is 30; lowering it shortens how long Kubernetes waits after SIGTERM before force-killing (SIGKILL) the container.
      containers:
      - image: alpine
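Regarding the trap itself: with trap : TERM INT the shell does not exit on the signal, and it is blocked in the foreground sleep 10 anyway, so the trap only runs after that sleep returns and the loop continues until the grace period expires. A common pattern is to exit in the trap and run sleep in the background so the shell can react to the signal immediately. A minimal sketch, with the check logic left as a placeholder:
command:
- /bin/sh
- -c
- |
  # Exit as soon as TERM or INT arrives instead of looping forever.
  trap 'exit 0' TERM INT
  while true; do
    # some code to check something
    sleep 10 &   # run sleep in the background...
    wait $!      # ...so the shell is free to handle the signal while waiting
  done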

kubernetes/minikube / Warning Back-off restarting failed container

I am using kubernetes/minikube and I've tried to install my application docker image by using a POD configuration yaml file as follows:
apiVersion: v1
kind: Pod
metadata:
  name: employee-service-pod
spec:
  containers:
  - name: employee-service-cont
    image: doviche/employee-service:latest
    imagePullPolicy: IfNotPresent
    command: [ "echo", "SUCCESS" ]
  restartPolicy: Always
  imagePullSecrets:
  - name: employee-service-secret
status: {}
So far so good, but the problem starts when I try to create my pod by executing:
kubectl create -f employee-service-pod.yaml -n dev-samples
When I check my pod, it shows the following error:
Back-off restarting failed container
Soon after I ran:
kubectl describe pod employee-service-pod -n dev-samples
which shows the output in the image attached to this post.
Honestly, I have not been able to identify what's causing the warning, which is why I've decided to share it here in case a sharper eye spots the problem.
I appreciate any help, as I have been stuck on this for a long time.
I am using minikube v1.11.0 on Linuxmint 19.1.
Many thanks in advance guys.
Your application completed its work successfully (echo SUCCESS). Since you have set the pod's restartPolicy to Always, Kubernetes tries to restart it again and again, and the pod ends up in the CrashLoopBackOff state.
Change the restartPolicy to OnFailure; this should mark the pod status as Completed once the process/command in the container ends.
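For clarity, a sketch of the corrected manifest based on the YAML in the question; only the restartPolicy changes:
apiVersion: v1
kind: Pod
metadata:
  name: employee-service-pod
spec:
  containers:
  - name: employee-service-cont
    image: doviche/employee-service:latest
    imagePullPolicy: IfNotPresent
    command: [ "echo", "SUCCESS" ]
  restartPolicy: OnFailure # was Always; with OnFailure a successfully exited container leaves the pod as Completed instead of being restarted
  imagePullSecrets:
  - name: employee-service-secret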
Thanks,

container does not start with status CrashLoopBackOff

I am trying to run a simple ubuntu container in a Kubernetes cluster. It keeps failing with CrashLoopBackOff status, and I am not even able to see any logs to find the reason for it.
My yaml file looks like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu
  labels:
    app: jubuntu
spec:
  selector:
    matchLabels:
      app: jubuntu
  template:
    metadata:
      labels:
        app: jubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
That's because you're using a Deployment, which assumes you have a long-running task. In your case, the container starts and immediately exits since there's nothing to be done there. In other words, this Deployment doesn't make a lot of sense. You could add the following under the containers: field to see it running (still useless, but at least you don't see it crashing anymore):
command:
- sh
- '-c'
- "while true; do echo working ; sleep 5; done;"
See also this troubleshooting guide.
For your convenience, if you don't want to do it via editing a YAML manifest, you can also use this command:
$ kubectl run ubuntu --image=ubuntu -- sh -c "while true; do echo working; sleep 5; done"
And if you're super curious and want to check if it's the same, then you can append the following to the run command: --dry-run --output=yaml (after --image, before -- sh).
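More generally, when a pod is stuck in CrashLoopBackOff and kubectl logs appears empty, these commands usually reveal what happened (the pod name below is a placeholder):
kubectl describe pod <pod-name>      # check Events and the container's Last State / Exit Code
kubectl logs <pod-name> --previous   # logs from the previous, crashed container instance
kubectl get pod <pod-name> -o yaml   # full status, including termination reason and exit code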

Not able to see Pod when I create a Job

When I try to create a Deployment of type Job, it's not pulling any image.
Below is the .yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: copyartifacts
spec:
  backoffLimit: 1
  template:
    metadata:
      name: copyartifacts
    spec:
      restartPolicy: "Never"
      volumes:
      - name: sharedvolume
        persistentVolumeClaim:
          claimName: shared-pvc
      - name: dockersocket
        hostPath:
          path: /var/run/docker.sock
      containers:
      - name: copyartifacts
        image: alpine:3.7
        imagePullPolicy: Always
        command: ["sh", "-c", "ls -l /shared; rm -rf /shared/*; ls -l /shared; while [ ! -d /shared/artifacts ]; do echo Waiting for artifacts to be copied; sleep 2; done; sleep 10; ls -l /shared/artifacts; "]
        volumeMounts:
        - mountPath: /shared
          name: sharedvolume
Can you please guide me here?
Regards,
Vikas
There could be two possible reasons for not seeing the pod:
1. The pod hasn't been created yet.
2. The pod has completed its task and terminated before you noticed.
1. Pod hasn't been created:
If the pod hasn't been created yet, you have to find out why the Job failed to create it. You can view the Job's events to see if there are any failure events. Use the following command to describe the Job:
kubectl describe job <job-name> -n <namespace>
Then check the Events: field. There might be some events showing the pod creation failure with the respective reason.
2. Pod has completed and terminated:
Jobs are used to perform a one-time task rather than to serve an application that has to maintain a desired state. When the task is complete, the pod goes to the Completed state and terminates (but is not deleted). If your Job performs a task that does not take much time, the pod may finish before you notice it.
As the pod has terminated, kubectl get pods may not list it on older clients. However, you can still see the pod using kubectl get pods -a, as it hasn't been deleted (newer kubectl versions show completed pods by default, and the -a/--show-all flag has been removed).
You can also describe the Job and check for a completion or success event.
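Either way, a quick sketch of how to inspect the pods that belong to this particular Job; the Job controller labels them with job-name=<job name> by default:
kubectl describe job copyartifacts -n <namespace>                  # Events plus the Pods Statuses summary
kubectl get pods -n <namespace> --selector=job-name=copyartifacts  # pods created by the Job, including completed ones
kubectl logs -n <namespace> --selector=job-name=copyartifacts      # output of the container command (the ls/echo lines)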
If you created the K8s cluster with kind, all the cluster nodes run as Docker containers. If you reboot your computer or VM, the cluster (pod) IP addresses may change, which breaks communication between the cluster nodes. In that case, check the cluster manager logs; they contain the error message. The Job gets created, but the pod does not.
Try re-creating the cluster, or fix the node configuration with the new IP addresses.
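If that is the case, re-creating the kind cluster is usually the quickest fix. A minimal sketch (the cluster name is an example; kind's default name is "kind"):
kind get clusters                 # list existing kind clusters
kind delete cluster --name kind   # remove the broken cluster
kind create cluster --name kind   # create a fresh one, then re-apply the Job manifest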

Restart a Successful/Failed pod manually

Running Kubernetes v1.2.2 on CoreOS on VMware:
I have a pod with the restart policy set to Never. Is it possible to manually start the same pod back up?
In my use case we will have a postgres instance in this pod. If it were to crash, I would like to leave the pod in a failed state until we can take a closer look to see why it failed, and then start it manually, rather than have it restart automatically with a restartPolicy of Always.
Looking through kubectl, it doesn't seem like there is a manual start option. I could delete and recreate the pod, but I think this would remove the data from my container. Maybe I should be mounting a local volume on my host so that I don't need to worry about losing data?
This is my sample pod yaml. I don't seem to be able to restart the 'health' pod.
apiVersion: v1
kind: Pod
metadata:
  name: health
  labels:
    environment: dev
    app: health
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Never
One simple method that might address your needs is to give each pod a unique instance label (and a correspondingly unique name), maybe with a simple counter. If each pod is named and labelled differently, you can start as many as you like and keep around as many failed instances as you like.
e.g. first pod
apiVersion: v1
kind: Pod
metadata:
  name: health-0
  labels:
    environment: dev
    app: health
    instance: "0"
spec:
  containers: ...
second pod
apiVersion: v1
kind: Pod
metadata:
  name: health-1
  labels:
    environment: dev
    app: health
    instance: "1"
spec:
  containers: ...
Based on your question and comments, it sounds like you want to restart a failed container and retain its state and data. However, application containers and pods are considered to be relatively ephemeral (rather than durable) entities: when a container crashes, its files are lost and the kubelet restarts it with a clean state.
To retain your data and logs, use persistent volume types in your deployment. This lets you preserve data across container restarts.
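A minimal sketch of what that could look like for the pod above. The PVC name, storage size, and mount path are made-up examples; for a real Postgres instance you would mount its data directory (e.g. /var/lib/postgresql/data) instead:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: health-data # example name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: health
  labels:
    environment: dev
    app: health
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data # example path; anything written here survives container restarts and pod re-creation
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: health-data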