Getting "CrashLoopBackOff" as status of deployed pod - kubernetes

How can I debug why its status is CrashLoopBackOff?
I am not using minikube; I am working on an AWS Kubernetes instance.
I followed this tutorial:
https://github.com/mkjelland/spring-boot-postgres-on-k8s-sample
When I do
kubectl create -f specs/spring-boot-app.yml
and check the status with
kubectl get pods
it gives:
spring-boot-postgres-sample-67f9cbc8c-qnkzg 0/1 CrashLoopBackOff 14 50m
The command
kubectl describe pods spring-boot-postgres-sample-67f9cbc8c-qnkzg
gives:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m18s (x350 over 78m) kubelet, ip-172-31-11-87 Back-off restarting failed container
The command kubectl get pods --all-namespaces gives:
NAMESPACE NAME READY STATUS RESTARTS AGE
default constraintpod 1/1 Running 1 88d
default postgres-78f78bfbfc-72bgf 1/1 Running 0 109m
default rcsise-krbxg 1/1 Running 1 87d
default spring-boot-postgres-sample-667f87cf4c-858rx 0/1 CrashLoopBackOff 4 110s
default twocontainers 2/2 Running 479 89d
kube-system coredns-86c58d9df4-kr4zj 1/1 Running 1 89d
kube-system coredns-86c58d9df4-qqq2p 1/1 Running 1 89d
kube-system etcd-ip-172-31-6-149 1/1 Running 8 89d
kube-system kube-apiserver-ip-172-31-6-149 1/1 Running 1 89d
kube-system kube-controller-manager-ip-172-31-6-149 1/1 Running 1 89d
kube-system kube-flannel-ds-amd64-4h4x7 1/1 Running 1 89d
kube-system kube-flannel-ds-amd64-fcvf2 1/1 Running 1 89d
kube-system kube-proxy-5sgjb 1/1 Running 1 89d
kube-system kube-proxy-hd7tr 1/1 Running 1 89d
kube-system kube-scheduler-ip-172-31-6-149 1/1 Running 1 89d
The command kubectl logs spring-boot-postgres-sample-667f87cf4c-858rx
doesn't print anything.

Why don't you:
run a dummy container (override the entrypoint with an endless sleep command),
kubectl exec -it <pod-name> -- bash into it,
and run the program directly so you can look at the logs there.
It's an easier form of debugging on K8s.
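A minimal sketch of that approach (the jar path below is just a placeholder for wherever the application lives in the image):
# Keep the container alive instead of letting it crash
kubectl patch deployment spring-boot-postgres-sample --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/command","value":["/bin/bash","-ce","tail -f /dev/null"]}]'
# Open a shell in the (now running) pod
kubectl exec -it <pod-name> -- /bin/bash
# Start the app by hand and watch its output
java -jar <path-to-app-jar>
You can also look at the output of the crashed attempts directly with kubectl logs <pod-name> --previous.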

First of all, I fixed my postgres deployment. It was failing with "pod has unbound PersistentVolumeClaims", which I resolved with the help of this post:
pod has unbound PersistentVolumeClaims
So now my postgres deployment is running.
kubectl logs spring-boot-postgres-sample-67f9cbc8c-qnkzg doesn't print anything, which suggests there is something wrong in the config file.
kubectl describe pod spring-boot-postgres-sample-67f9cbc8c-qnkzg states that the container terminated and the reason is Completed,
so I kept the container running indefinitely
by adding
# Just sleep forever
command: [ "sleep" ]
args: [ "infinity" ]
Now my deployment is running too.
I then exposed my service with
kubectl expose deployment spring-boot-postgres-sample --type=LoadBalancer --port=8080
but could not get an External-IP, so I ran
kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
which gave me "172.31.71.218" as the external IP.
But now the problem is that curl http://172.31.71.218:8080/ times out.
Did I do anything wrong?
Here is my deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spring-boot-postgres-sample
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      name: spring-boot-postgres-sample
      labels:
        app: spring-boot-postgres-sample
    spec:
      containers:
      - name: spring-boot-postgres-sample
        command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
        env:
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_user
        - name: POSTGRES_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_password
        - name: POSTGRES_HOST
          valueFrom:
            configMapKeyRef:
              name: hostname-config
              key: postgres_host
        image: <mydockerHUbaccount>/spring-boot-postgres-on-k8s:v1
Here is my postgres.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: default
data:
  postgres_user: postgresuser
  postgres_password: password
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: postgres-pv-claim
      containers:
      - image: postgres
        name: postgres
        env:
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_user
        - name: POSTGRES_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_password
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    app: postgres
Here is how I created the hostname-config map:
kubectl create configmap hostname-config --from-literal=postgres_host=$(kubectl get svc postgres -o jsonpath="{.spec.clusterIP}")

I was able to reproduce the scenario. There seems to be a connectivity issue between the app and the Postgres DB, so the app fails to initialize. Please find the logs below; they might help you.
$ kubectl get po
NAME READY STATUS RESTARTS AGE
spring-boot-postgres-sample-5d7c85d98b-qwvjr 0/1 CrashLoopBackOff 19 1h
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaAutoConfiguration.class]: Invocation of init method failed; nested exception is org.hibernate.service.spi.ServiceException: Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
2019-05-23 10:53:01.889 ERROR 1 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
org.postgresql.util.PSQLException: Connection to :5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:262) ~[postgresql-9.4.1212.jre7.jar!/:9.4.1212.jre7]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51) ~[postgresql-9.4.1212.jre7.jar!/:9.4.1212.jre7]
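Note that the hostname in that exception is empty ("Connection to :5432 refused"), which suggests the hostname-config ConfigMap ended up with an empty postgres_host (for example because the postgres Service did not exist yet when the ConfigMap was created). A sketch of how to check and fix that, using the Service DNS name instead of a captured cluster IP:
# See what value the app is actually given
kubectl get configmap hostname-config -o yaml
# Recreate it with the Service DNS name
kubectl delete configmap hostname-config
kubectl create configmap hostname-config --from-literal=postgres_host=postgres.default.svc.cluster.local
# Restart the app pods so they pick up the new value
kubectl delete pod -l app=spring-boot-postgres-sample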

Related

minikube service URL gives ECONNREFUSED on mac os Monterey

I have a spring-boot postgres setup that I am trying to containerize and deploy in minikube. My pods and services show that they are up.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
server-deployment-5bc57dcd4f-zrwzs 1/1 Running 0 14m
postgres-7f887f4d7d-5b8v5 1/1 Running 0 25m
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
server-service NodePort 10.100.21.232 <none> 8080:31457/TCP 15m
postgres ClusterIP 10.97.19.125 <none> 5432/TCP 26m
$ minikube service list
|-------------|------------------|--------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|------------------|--------------|-----------------------------|
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| custom | server-service | http/8080 | http://192.168.59.106:31457 |
| custom | postgres | No node port |
|-------------|------------------|--------------|-----------------------------|
But when I try to hit any of my endpoints using Postman, I get:
Could not send request. Error: connect ECONNREFUSED 192.168.59.106:31457
I don't know where I am going wrong. I tried deploying the individual containers directly in Docker (I had to modify some of the application.properties to get the REST server talking to the DB container) and that works without a problem, so my server-side code should not be the issue.
Here is the yml for the rest-server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  namespace: custom
spec:
  selector:
    matchLabels:
      app: server-deployment
  template:
    metadata:
      name: server-deployment
      labels:
        app: server-deployment
    spec:
      containers:
      - name: server-deployment
        image: aruns1494/rest-server-k8s:latest
        env:
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_user
        - name: POSTGRES_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_password
        - name: POSTGRES_SERVICE
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_service
---
apiVersion: v1
kind: Service
metadata:
  name: server-service
  namespace: custom
spec:
  selector:
    name: server-deployment
  ports:
  - name: http
    port: 8080
  type: NodePort
I have not changed Spring Boot's default port, so I expect it to work on 8080. I tried connecting to that URL through Chrome and Firefox and got the same error message; I expected to at least see the default error page when hitting the / endpoint.
I did look up several online articles but none of them seem to help. I am also attaching my kube-system pods in case that helps:
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-x6mv6 1/1 Running 0 39m
etcd-minikube 1/1 Running 0 40m
kube-apiserver-minikube 1/1 Running 0 40m
kube-controller-manager-minikube 1/1 Running 0 40m
kube-proxy-dnr8p 1/1 Running 0 39m
kube-scheduler-minikube 1/1 Running 0 40m
storage-provisioner 1/1 Running 1 (39m ago) 40m
My suggestion is to check that the provided Deployment and Service have the same labels and selectors, because right now the pod label in the Deployment is app: server-deployment, but the selector in the Service is name: server-deployment.
If we want to keep the name: server-deployment selector in the Service, then we need to update the Deployment as shown below (the matchLabels and labels fields):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  namespace: custom
spec:
  selector:
    matchLabels:
      name: server-deployment
  template:
    metadata:
      name: server-deployment
      labels:
        name: server-deployment
    spec:
      containers:
      ...
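Alternatively, keep the existing app: server-deployment pod label and point the Service at it instead; a sketch of the equivalent Service change:
apiVersion: v1
kind: Service
metadata:
  name: server-service
  namespace: custom
spec:
  selector:
    app: server-deployment   # match the label the pods already carry
  ports:
  - name: http
    port: 8080
  type: NodePort
Either way, kubectl get endpoints server-service -n custom should then list the pod IP; if it stays empty, the selector still doesn't match.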
Possibly the macOS firewall is blocking the connection. Could you try navigating to System Preferences > Security & Privacy and check whether the port is blocked in the General tab? You can also disable the firewall entirely in the Firewall tab.
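Before changing anything, it may also help to confirm whether the NodePort is reachable from the host at all (a sketch; the IP and port are taken from the minikube service list output above):
# Ask minikube for the current URL of the service
minikube service server-service -n custom --url
# Probe the NodePort directly
nc -vz 192.168.59.106 31457
curl -v http://192.168.59.106:31457/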

How to get logs of deployment from Kubernetes?

I am creating an InfluxDB deployment in a Kubernetes cluster (v1.15.2); this is my YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
And this is the deployment status:
$ kubectl get deployment -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 1/1 1 1 163d
kubernetes-dashboard 1/1 1 1 164d
monitoring-grafana 0/1 0 0 12m
monitoring-influxdb 0/1 0 0 11m
Now I've been waiting 30 minutes and there is still no pod available. How do I check the deployment logs from the command line? I cannot access the Kubernetes dashboard right now. I am looking for a command to get the pod logs, but there is no pod available. I already tried adding a label to the node:
kubectl label nodes azshara-k8s03 k8s-app=influxdb
This is my deployment's describe output:
$ kubectl describe deployments monitoring-influxdb -n kube-system
Name: monitoring-influxdb
Namespace: kube-system
CreationTimestamp: Wed, 04 Mar 2020 11:15:52 +0800
Labels: k8s-app=influxdb
task=monitoring
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"monitoring-influxdb","namespace":"kube-system"...
Selector: k8s-app=influxdb,task=monitoring
Replicas: 1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: k8s-app=influxdb
task=monitoring
Containers:
influxdb:
Image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/data from influxdb-storage (rw)
Volumes:
influxdb-storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
OldReplicaSets: <none>
NewReplicaSet: <none>
Events: <none>
This is another way to get logs:
$ kubectl -n kube-system logs -f deployment/monitoring-influxdb
error: timed out waiting for the condition
There is no output for this command:
kubectl logs --selector k8s-app=influxdb
Here are all my pods in the kube-system namespace:
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-569fd64d84-5q5pj 1/1 Running 0 46h
kubernetes-dashboard-6466b68b-z6z78 1/1 Running 0 11h
traefik-ingress-controller-hx4xd 1/1 Running 0 11h
kubectl logs deployment/<name-of-deployment> # logs of deployment
kubectl logs -f deployment/<name-of-deployment> # follow logs
You can try kubectl describe deploy monitoring-influxdb to get a high-level view of the deployment; maybe there is some useful information there.
For more detailed logs, first get the pods: kubectl get po
Then, request the pod logs: kubectl logs <pod-name>
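In this particular case there is no pod at all (the Deployment reports 0 total replicas), so there are no pod logs to fetch yet; looking at the ReplicaSet and the namespace events usually reveals why nothing was created. A sketch:
# ReplicaSets owned by the deployment, if any were created
kubectl get rs -n kube-system | grep monitoring-influxdb
# Their events often state the reason (quota, admission, scheduling, ...)
kubectl describe rs -n kube-system -l k8s-app=influxdb
# Namespace events are another place the reason shows up
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp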
Adding references to two great tools that might help you view cluster logs:
If you wish to view logs from your terminal without using a "heavy" 3rd-party logging solution, I would consider using K9s, which is a great CLI tool that helps you get control over your cluster.
If you are not bound to the CLI and still want to run locally, I would recommend Lens.

Error: failed to prepare subPath for volumeMount "postgres-storage" of container "postgres"

I am trying to use persistent volume claims and am facing this issue.
This is my postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: database-persistent-volume-claim
      containers:
      - name: postgres
        image: postgres
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-storage
          subPath: postgres
When I debug the pod using describe:
kubectl describe pod postgres-deployment-8576df7bfc-8mp5t
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m4s default-scheduler Successfully assigned default/postgres-deployment-8576df7bfc-8mp5t to docker-desktop
Normal Pulled 67s (x8 over 2m58s) kubelet, docker-desktop Successfully pulled image "postgres"
Warning Failed 67s (x8 over 2m58s) kubelet, docker-desktop Error: failed to prepare subPath for volumeMount "postgres-storage" of container "postgres"
Normal Pulling 53s (x9 over 3m3s) kubelet, docker-desktop Pulling image "postgres"
My pod shows this error:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-8576df7bfc-8mp5t 0/1 CreateContainerConfigError 0 5m5
I am not sure where the problem in the config is, but I am sure it is related to volumes, because this problem appeared after I added the volumes.
Remove the subPath. Can you try the below YAML?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: database-persistent-volume-claim
      containers:
      - name: postgres
        image: postgres
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-storage
I just deployed it and it works:
master $ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
postgres-deployment 1/1 1 1 4m13s
master $ kubectl get po
NAME READY STATUS RESTARTS AGE
postgres-deployment-6b66bdd748-5q76h 1/1 Running 0 4m13s
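If a dedicated subdirectory inside the volume is still wanted (for example to keep initdb away from a lost+found directory at the volume root), an alternative to subPath is pointing PGDATA below the mount; a sketch of just the relevant container fields:
containers:
- name: postgres
  image: postgres
  env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata   # initdb writes into this subdirectory
  ports:
  - containerPort: 5432
  volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: postgres-storage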

Can't ping postgres pod from another pod in kubernetes

I created a dummy pod to test the DB connection using the following YAML:
pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: marks-dummy-pod
spec:
  containers:
  - name: marks-dummy-pod
    image: djtijare/ubuntuping:v1
    command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5 ; done"]
  restartPolicy: Never
Dockerfile used:
FROM ubuntu
RUN apt-get update && apt-get install -y iputils-ping
CMD bash
I created the service as
postgresservice.yaml
kind: Service
apiVersion: v1
metadata:
  name: postgressvc
spec:
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
And the Endpoints object for that service as
kind: Endpoints
apiVersion: v1
metadata:
  name: postgressvc
subsets:
- addresses:
  - ip: 172.31.6.149
  ports:
  - port: 5432
Then I ran ping 172.31.6.149 inside the pod (kubectl exec -it marks-dummy-pod bash), but it is not working (ping localhost works).
Output of kubectl get pods,svc,ep -o wide:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/marks-dummy-pod 1/1 Running 0 43m 192.168.1.63 ip-172-31-11-87 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/postgressvc ClusterIP 10.107.58.81 <none> 5432/TCP 33m <none>
NAME ENDPOINTS AGE
endpoints/postgressvc 172.31.6.149:5432 32m
Output after following the answer by P Ekambaram:
kubectl get pods,svc,ep -o wide gives:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/postgres-855696996d-w6h6c 1/1 Running 0 44s 192.168.1.66 ip-172-31-11-87 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/postgres NodePort 10.110.203.204 <none> 5432:31076/TCP 44s app=postgres
NAME ENDPOINTS AGE
endpoints/postgres 192.168.1.66:5432 44s
So the problem was with my DNS pod in the kube-system namespace.
I just created a new Kubernetes setup and made sure that DNS was working.
For the new setup, refer to my answer to another question:
How to start kubelet service??
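A quick way to confirm that cluster DNS is healthy is to resolve a built-in Service from a throwaway pod (a sketch using the stock busybox image):
# Should print the cluster IP of the kubernetes Service when CoreDNS is working
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default
# And check that the DNS pods themselves are up
kubectl get pods -n kube-system -l k8s-app=kube-dns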
Is the postgres pod missing?
Did you create the Endpoints object or was it auto-generated?
Please share the pod definition YAML.
You shouldn't be creating the Endpoints object manually; that is wrong. Follow the deployment below for Postgres.
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: example
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11
        imagePullPolicy: Always
        ports:
        - containerPort: 5432
        envFrom:
        - configMapRef:
            name: postgres-config
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-data
      volumes:
      - name: postgres-data
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
  - port: 5432
  selector:
    app: postgres
Undeploy the postgres service and endpoint and deploy the above YAML.
It should work.
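Note that a ClusterIP is virtual and generally does not answer ICMP ping, so testing the actual port is more meaningful than pinging. A sketch, assuming the dummy pod and the Service above (the Postgres pod name is a placeholder):
# From the Postgres pod itself: is the server accepting connections via the Service name?
kubectl exec -it <postgres-pod-name> -- pg_isready -h postgres -p 5432
# From the dummy pod: does the Service name resolve at all?
kubectl exec -it marks-dummy-pod -- getent hosts postgres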
Why is the NODE IP prefixed with ip-?
You should create a deployment for your database and then make a service that targets this deployment, and then connect through this service. Why ping by IP?

How to schedule a cronjob which executes a kubectl command?

How to schedule a cronjob which executes a kubectl command?
I would like to run the following kubectl command every 5 minutes:
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
For this, I have created a cronjob as below:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
          restartPolicy: OnFailure
But it fails to start the container, showing the message:
Back-off restarting failed container
and terminating with error code 127:
State: Terminated
Reason: Error
Exit Code: 127
From what I checked, error code 127 means the command doesn't exist. How could I run the kubectl command as a cron job then? Am I missing something?
Note: I had posted a similar question (Scheduled restart of Kubernetes pod without downtime), but that was more about having the main deployment itself run as a cronjob; here I'm trying to run a kubectl command (which does the restart) using a CronJob, so I thought it would be better to post separately.
kubectl describe cronjob hello -n jp-test:
Name: hello
Namespace: jp-test
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"batch/v1beta1","kind":"CronJob","metadata":{"annotations":{},"name":"hello","namespace":"jp-test"},"spec":{"jobTemplate":{"spec":{"templ...
Schedule: */5 * * * *
Concurrency Policy: Allow
Suspend: False
Starting Deadline Seconds: <unset>
Selector: <unset>
Parallelism: <unset>
Completions: <unset>
Pod Template:
Labels: <none>
Containers:
hello:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
Environment: <none>
Mounts: <none>
Volumes: <none>
Last Schedule Time: Wed, 27 Feb 2019 14:10:00 +0100
Active Jobs: hello-1551273000
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 6m cronjob-controller Created job hello-1551272700
Normal SuccessfulCreate 1m cronjob-controller Created job hello-1551273000
Normal SawCompletedJob 16s cronjob-controller Saw completed job: hello-1551272700
kubectl describe job hello -v=5 -n jp-test
Name: hello-1551276000
Namespace: jp-test
Selector: controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
Labels: controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276000
Annotations: <none>
Controlled By: CronJob/hello
Parallelism: 1
Completions: 1
Start Time: Wed, 27 Feb 2019 15:00:02 +0100
Pods Statuses: 0 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276000
Containers:
hello:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 7m job-controller Created pod: hello-1551276000-lz4dp
Normal SuccessfulDelete 1m job-controller Deleted pod: hello-1551276000-lz4dp
Warning BackoffLimitExceeded 1m (x2 over 1m) job-controller Job has reached the specified backoff limit
Name: hello-1551276300
Namespace: jp-test
Selector: controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
Labels: controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276300
Annotations: <none>
Controlled By: CronJob/hello
Parallelism: 1
Completions: 1
Start Time: Wed, 27 Feb 2019 15:05:02 +0100
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276300
Containers:
hello:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m job-controller Created pod: hello-1551276300-8d5df
Long story short, BusyBox doesn't have kubectl installed.
You can check that yourself using kubectl run -i --tty busybox --image=busybox -- sh, which will run a BusyBox pod as an interactive shell.
I would recommend using bitnami/kubectl:latest instead.
Also keep in mind that you will need to set up proper RBAC, as otherwise you will get Error from server (Forbidden): services is forbidden.
You could use something like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: jp-test
  name: jp-runner
rules:
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  verbs:
  - 'patch'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jp-runner
  namespace: jp-test
subjects:
- kind: ServiceAccount
  name: sa-jp-runner
  namespace: jp-test
roleRef:
  kind: Role
  name: jp-runner
  apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-jp-runner
  namespace: jp-test
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-jp-runner
          containers:
          - name: hello
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
          restartPolicy: OnFailure
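To verify the setup without waiting for the schedule, a Job can be created from the CronJob on demand (a sketch):
# Trigger one run immediately and watch its pod
kubectl create job hello-manual --from=cronjob/hello -n jp-test
kubectl get pods -n jp-test -w
# Check the output of that run
kubectl logs job/hello-manual -n jp-test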
You need to make the CronJob's container download the cluster configuration so that you can run kubectl commands against it. Here is an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: drupal-cron
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: drupal-cron
            image: juampynr/digital-ocean-cronjob:latest
            env:
            - name: DIGITALOCEAN_ACCESS_TOKEN
              valueFrom:
                secretKeyRef:
                  name: api
                  key: key
            command: ["/bin/bash","-c"]
            args:
            - doctl kubernetes cluster kubeconfig save drupster;
              POD_NAME=$(kubectl get pods -l tier=frontend -o=jsonpath='{.items[0].metadata.name}');
              kubectl exec $POD_NAME -c drupal -- vendor/bin/drush core:cron;
          restartPolicy: OnFailure
I posted an answer describing how I did this in a different thread: https://stackoverflow.com/a/62321138/1120652
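Whichever image you use, the runs can be checked afterwards with the usual objects (a sketch; the job name is generated from the CronJob name plus a timestamp):
kubectl get cronjob drupal-cron
kubectl get jobs --watch
kubectl logs -l job-name=<job-name>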