problems with the "alpine" image for my initContainers - kubernetes

People,
I am trying to create a simple file /tmp/tarte.test with initContainers. I have a constraint: I must use the alpine image for the container. Please let me know what is wrong with this simple yaml file.
apiVersion: v1
kind: Pod
metadata:
  name: initonpod
  namespace: prod
  labels:
    app: myapp
spec:
  containers:
  - name: mycont-nginx
    image: alpine
  initContainers:
  - name: myinit-cont
    image: alpine
    imagePullPolicy: IfNotPresent
    command:
    - touch
    - "/tmp/tarte.test"
    - sleep 200
The describe of the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9s default-scheduler Successfully assigned prod/initonpod to k8s-node-1
Normal Pulled 8s kubelet, k8s-node-1 Container image "alpine" already present on machine
Normal Created 8s kubelet, k8s-node-1 Created container
Normal Started 7s kubelet, k8s-node-1 Started container
Normal Pulling 4s (x2 over 7s) kubelet, k8s-node-1 pulling image "alpine"
Normal Pulled 1s (x2 over 6s) kubelet, k8s-node-1 Successfully pulled image "alpine"
Normal Created 1s (x2 over 5s) kubelet, k8s-node-1 Created container
Normal Started 1s (x2 over 5s) kubelet, k8s-node-1 Started container
Warning BackOff 0s kubelet, k8s-node-1 Back-off restarting failed container
And if I change the alpine image to an nginx image... it works fine.

You are getting Back-off restarting failed container because of your container spec:
spec:
  containers:
  - name: mycont-nginx
    image: alpine
This alpine container doesn't run forever. In Kubernetes, a container is expected to keep running (with the default restartPolicy: Always, a container that exits is restarted and eventually backs off). That's why you are getting the error. When you use the nginx image, it runs forever. So, to use the alpine image, change the spec as below:
apiVersion: v1
kind: Pod
metadata:
  name: busypod
  labels:
    app: busypod
spec:
  containers:
  - name: busybox
    image: alpine
    command:
    - "sh"
    - "-c"
    - >
      while true; do
        sleep 3600;
      done
  initContainers:
  - name: myinit-cont
    image: alpine
    imagePullPolicy: IfNotPresent
    command:
    - touch
    - "/tmp/tarte.test"
    - sleep 200
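If the pod does not actually need a long-running main container, another option is to relax the restart policy so that exiting is not treated as a failure. A minimal sketch, assuming a one-shot workload is acceptable (the pod name here is made up for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: oneshotpod        # hypothetical name
spec:
  restartPolicy: Never    # let the pod complete instead of being restarted
  containers:
  - name: mycont
    image: alpine
    command: ["touch", "/tmp/tarte.test"]   # exits with code 0, so the pod ends up Succeeded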

Why is Kubernetes pod failing to start?

I was trying to test a scenario where a pod mounts a volume and writes a file to it. The yaml below works fine when I exclude command and args; however, with command and args it fails with "CrashLoopBackOff".
The describe command is not providing much information for the failure. What's wrong here?
Note: I was running this yaml on katacoda.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: voltest
  name: voltest
spec:
  replicas: 1
  selector:
    matchLabels:
      run: voltest
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: voltest
    spec:
      containers:
      - image: nginx
        name: voltest
        volumeMounts:
        - mountPath: /var/local/aaa
          name: mydir
        command: ["/bin/sh"]
        args: ["-c", "echo 'test complete' > /var/local/aaa/testOut.txt"]
      volumes:
      - name: mydir
        hostPath:
          path: /var/local/aaa
          type: DirectoryOrCreate
Describe command output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49s default-scheduler Successfully assigned default/voltest-78678dd56c-h5frs to controlplane
Normal Pulling 19s (x3 over 48s) kubelet, controlplane Pulling image "nginx"
Normal Pulled 17s (x3 over 39s) kubelet, controlplane Successfully pulled image "nginx"
Normal Created 17s (x3 over 39s) kubelet, controlplane Created container voltest
Normal Started 17s (x3 over 39s) kubelet, controlplane Started container voltest
Warning BackOff 5s (x4 over 35s) kubelet, controlplane Back-off restarting failed container
You've configured your pod to run a single shell command:
command: ["/bin/sh"]
args: ["-c", "echo 'test complete' > /var/testOut.txt"]
This means that the pod starts up, runs echo 'test complete' > /var/testOut.txt, and then immediately exits. From the perspective of Kubernetes, this is a crash.
You've replaced the default behavior of the nginx image ("run nginx") with a shell command.
If you want the pod to continue running, you'll need to arrange for it to run some sort of long-running command. A simple solution would be something like:
command: ["/bin/sh"]
args: ["-c", "echo 'test complete' > /var/testOut.txt; sleep 3600"]
This will cause the pod to sleep for an hour before exiting, giving you time to inspect the results of your shell command.
Note that your shell command isn't testing anything useful; you've mounted your mydir volume on /var/local/aaa, but your shell command is writing to /var/testOut.txt, so it's not making any use of the volume.
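Putting the two points together, a sketch of the container spec that writes into the mounted directory and then stays alive (this simply combines the mount path from the question with the sleep workaround above):
      containers:
      - image: nginx
        name: voltest
        volumeMounts:
        - mountPath: /var/local/aaa
          name: mydir
        command: ["/bin/sh"]
        args: ["-c", "echo 'test complete' > /var/local/aaa/testOut.txt; sleep 3600"]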

Could the status be "Running" if I run "kubectl get pod freebox"?

Apply the following YAML file to a Kubernetes cluster:
apiVersion: v1
kind: Pod
metadata:
  name: freebox
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
Could the status be "Running" if I run kubectl get pod freebox? Why?
If formatting errors are ignored, no, the pod won't be in Running status:
controlplane $ kubectl get pods freebox
NAME READY STATUS RESTARTS AGE
freebox 0/1 CrashLoopBackOff 3 81s
Because if you look at the Dockerfile of busybox, the CMD is "sh", which completes immediately, so the pod gets restarted (because the default restart policy is Always):
https://hub.docker.com/layers/busybox/library/busybox/latest/images/sha256-bc02457f8f5a4a3cd931028ec76c7468cfa8b44d7d89c4a91df1fd82285da681?context=explore
ADD file ... in /    708.51 KB
CMD ["sh"]
See the describe of the pod as follows:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned default/freebox to node01
Normal Pulled 7s (x2 over 8s) kubelet, node01 Container image "busybox:latest" already present on machine
Normal Created 6s (x2 over 7s) kubelet, node01 Created container busybox
Normal Started 6s (x2 over 7s) kubelet, node01 Started container busybox
Warning BackOff 5s (x2 over 6s) kubelet, node01 Back-off restarting failed container
The busybox image needs to be given a command to keep running. Add a command to the .spec.containers section under the busybox container:
apiVersion: v1
kind: Pod
metadata:
  name: freebox
spec:
  containers:
  - name: busybox
    command:
    - sleep
    - "4800"
    image: busybox:latest
    imagePullPolicy: IfNotPresent
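As a quick check, a sketch (the filename is made up for the manifest above):
kubectl apply -f freebox.yaml   # save the manifest above as freebox.yaml
kubectl get pod freebox         # should report STATUS Running while the sleep is in progress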

Trying to create a Kubernetes deployment but it shows 0 pods available

I'm new to k8s, so some of my terminology might be off. But basically, I'm trying to deploy a simple web api: one load balancer in front of n pods (where right now, n=1).
However, when I try to visit the load balancer's IP address it doesn't show my web application. When I run kubectl get deployments, I get this:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
tl-api 1 1 1 0 4m
Here's my YAML file. Let me know if anything looks off--I'm very new to this!
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tl-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: tl-api
    spec:
      containers:
      - name: tl-api
        image: tlk8s.azurecr.io/devicecloudwebapi:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: acr-auth
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: tl-api
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: tl-api
Edit 2: When I try using ACS (which supports Windows), I get this:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned tl-api-3466491809-vd5kg to dc9ebacs9000
Normal SuccessfulMountVolume 11m kubelet, dc9ebacs9000 MountVolume.SetUp succeeded for volume "default-token-v3wz9"
Normal Pulling 4m (x6 over 10m) kubelet, dc9ebacs9000 pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
Warning FailedSync 1s (x50 over 10m) kubelet, dc9ebacs9000 Error syncing pod
Normal BackOff 1s (x44 over 10m) kubelet, dc9ebacs9000 Back-off pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
I then try examining the failed pod:
PS C:\users\<me>\source\repos\DeviceCloud\DeviceCloud\1- Presentation\DeviceCloud.Web.API> kubectl logs tl-api-3466491809-vd5kg
Error from server (BadRequest): container "tl-api" in pod "tl-api-3466491809-vd5kg" is waiting to start: trying and failing to pull image
When I run docker images I see the following:
REPOSITORY TAG IMAGE ID CREATED SIZE
devicecloudwebapi latest ee3d9c3e231d 24 hours ago 7.85GB
tlk8s.azurecr.io/devicecloudwebapi v1 ee3d9c3e231d 24 hours ago 7.85GB
devicecloudwebapi dev bb33ab221910 25 hours ago 7.76GB
Your problem is that the container image tlk8s.azurecr.io/devicecloudwebapi:v1 is in a private container registry. See the events at the bottom of the following command:
$ kubectl describe po -l=app=tl-api
The official Kubernetes docs describe how to resolve this issue; see Pull an Image from a Private Registry. Essentially:
1. Create a secret with kubectl create secret docker-registry
2. Reference it in your deployment, under the spec.imagePullSecrets key
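A sketch of those two steps, reusing the acr-auth secret name that already appears in the deployment and the registry from the question; the credential values are placeholders you have to fill in:
kubectl create secret docker-registry acr-auth \
  --docker-server=tlk8s.azurecr.io \
  --docker-username=<registry-username> \
  --docker-password=<registry-password> \
  --docker-email=<any-email>
and in the Deployment's pod spec:
    spec:
      containers:
      - name: tl-api
        image: tlk8s.azurecr.io/devicecloudwebapi:v1
      imagePullSecrets:
      - name: acr-auth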

How to overwrite file at a deployment time in kubernetes?

I'm trying to deploy a Diffusion image in Kubernetes and I need to overwrite one of the Diffusion configuration files at deployment time.
Specifically, it is the SystemAuthentication.store file holding default credentials in /opt/Diffusion6.0.3_01/etc/. I'm storing the new file in a secret and mounting it into /etc/test/, as can be seen in the deployment file below.
template:
  metadata:
    labels:
      run: diffusion
  spec:
    serviceAccountName: diffusion-role
    volumes:
    - name: diffusion-secrets
      secret:
        secretName: diffusion-license
    - name: ssl-cert
      secret:
        secretName: ssl-certificate
    - name: system-authentication
      secret:
        secretName: system-authentication-store
    containers:
    - image: pushtechnology/diffusion:6.0.3
      imagePullPolicy: IfNotPresent
      name: diffusion
      ports:
      - containerPort: 8080
        protocol: TCP
      - containerPort: 8443
        protocol: TCP
      volumeMounts:
      - name: diffusion-secrets
        mountPath: /etc/diffusion-secrets
        readOnly: true
      - name: ssl-cert
        mountPath: /etc/test/
        readOnly: true
      - name: system-authentication
        mountPath: /etc/test/
      command: [ "/bin/sh", "-c", "cp etc/test/SystemAuthentication.store /opt/DIffusion6.0.3_01" ]
When I deploy this, the pods fail with:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned diffusion-db6d6df7b-f5tp4 to timmy.pushtechnology.com
Normal SuccessfulMountVolume 2m kubelet, timmy.pushtechnology.com MountVolume.SetUp succeeded for volume "diffusion-role-token-n59ds"
Normal SuccessfulMountVolume 2m kubelet, timmy.pushtechnology.com MountVolume.SetUp succeeded for volume "ssl-cert"
Normal SuccessfulMountVolume 2m kubelet, timmy.pushtechnology.com MountVolume.SetUp succeeded for volume "system-authentication"
Normal SuccessfulMountVolume 2m kubelet, timmy.pushtechnology.com MountVolume.SetUp succeeded for volume "diffusion-secrets"
Normal Killing 1m (x2 over 1m) kubelet, timmy.pushtechnology.com Killing container with id docker://diffusion:FailedPostStartHook
Warning BackOff 1m (x2 over 1m) kubelet, timmy.pushtechnology.com Back-off restarting failed container
Normal Pulled 1m (x3 over 2m) kubelet, timmy.pushtechnology.com Container image "pushtechnology/diffusion:6.0.3" already present on machine
Normal Created 1m (x3 over 1m) kubelet, timmy.pushtechnology.com Created container
Normal Started 1m (x3 over 1m) kubelet, timmy.pushtechnology.com Started container
Warning FailedPostStartHook 1m (x3 over 1m) kubelet, timmy.pushtechnology.com
Warning FailedSync 1m (x5 over 1m) kubelet, timmy.pushtechnology.com Error syncing pod
I have also tried the workaround described here: https://github.com/kubernetes/kubernetes/issues/19764#issuecomment-269879587 with the same results.
You overwrote the container command with cp etc/test/SystemAuthentication.store /opt/DIffusion6.0.3_01, which is a command that exits as soon as it is done. Kubernetes treats this as a failure.
You need to replace it with something like cp etc/test/SystemAuthentication.store /opt/DIffusion6.0.3_01 && /path/to/original/binary, where the last command is the one the image would normally run if command were not overridden. This depends on your image.
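In pod-spec terms, that suggestion would look roughly like the following sketch (the paths are written in absolute form, and /path/to/original/binary remains a placeholder for whatever entrypoint the pushtechnology/diffusion image actually uses):
      command: [ "/bin/sh", "-c", "cp /etc/test/SystemAuthentication.store /opt/Diffusion6.0.3_01/etc/ && exec /path/to/original/binary" ]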
I think @svenwtl's answer might be correct, but the Dockerfile of the image I'm using has some complicated constructs that I had no idea how to reproduce in the deployment file.
The fix that worked for me (after a long trial-and-error loop) was to use a container lifecycle hook:
volumeMounts:
- name: diffusion-secrets
  mountPath: /etc/diffusion-secrets
  readOnly: true
- name: ssl-cert
  mountPath: /etc/test/
  readOnly: true
- name: system-authentication
  mountPath: /etc/test1/
lifecycle:
  postStart:
    exec:
      command: [ "/bin/sh", "-c", "cp -f /etc/test1/SystemAuthentication.store /opt/Diffusion6.0.3_01/etc/" ]
I also mounted SystemAuthentication.store in a different folder, /etc/test1/, but I don't think this was part of the fix.
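To confirm that the hook actually copied the file, a sketch of the check (substitute the real pod name from the first command):
kubectl get pods -l run=diffusion
kubectl exec <diffusion-pod-name> -- ls -l /opt/Diffusion6.0.3_01/etc/SystemAuthentication.store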

Pulling private image from docker hub using minikube

I'm using minikube on macOS 10.12 and trying to use a private image hosted at Docker Hub. I know that minikube launches a VM that, as far as I know, will be the only node of my local Kubernetes cluster and will host all my pods.
I read that I could use the VM's docker runtime by running eval $(minikube docker-env). So I used those variables to switch from my local docker runtime to the VM's. Running docker images, I could see that the change had taken effect.
My next step was to log in to Docker Hub using docker login and then pull my image manually, which finished without error. After that I thought the image would be ready to be used by any pod in the cluster, but I keep getting ImagePullBackOff. I also tried to ssh into the VM via minikube ssh and the result is the same: the image is there to be used, but for some reason I don't understand, it refuses to use it.
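For reference, the sequence described above was roughly the following (a sketch of the commands, not a verbatim transcript):
eval $(minikube docker-env)     # point the local docker CLI at the minikube VM's docker daemon
docker login                    # authenticate against Docker Hub
docker pull godraude/nginx      # pull the private image into the VM's image cache
docker images | grep godraude   # confirm the image is now present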
In case it helps, this is my deployment description file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: godraude/nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
And this is the output of kubectl describe pod <podname>:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned web-deployment-2451628605-vtbl8 to minikube
1m 23s 4 {kubelet minikube} spec.containers{nginx} Normal Pulling pulling image "godraude/nginx"
1m 20s 4 {kubelet minikube} spec.containers{nginx} Warning Failed Failed to pull image "godraude/nginx": Error: image godraude/nginx not found
1m 20s 4 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx" with ErrImagePull: "Error: image godraude/nginx not found"
1m 4s 5 {kubelet minikube} spec.containers{nginx} Normal BackOff Back-off pulling image "godraude/nginx"
1m 4s 5 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx" with ImagePullBackOff: "Back-off pulling image \"godraude/nginx\""
I think what you need is to create a secret, which will tell Kubernetes where it can pull your private image from and with which credentials:
kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
Use the command below to list your secrets:
kubectl get secret
NAME TYPE DATA AGE
my-secret kubernetes.io/dockercfg 1 100d
Now, in the deployment definition, you need to specify which secret to use:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: godraude/nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
      imagePullSecrets:
      - name: my-secret
The problem was the image pull policy. It was set to Always, so Docker was trying to pull the image even though it was already present. Setting imagePullPolicy: Never solved the issue.
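A sketch of the corresponding change in the container spec (imagePullPolicy: IfNotPresent would equally avoid the forced pull, while still pulling if the image were missing):
    spec:
      containers:
      - name: nginx
        image: godraude/nginx
        imagePullPolicy: Never   # use the image already loaded into the minikube VM
        ports:
        - containerPort: 80
        - containerPort: 443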