How to schedule a cronjob which executes a kubectl command?
I would like to run the following kubectl command every 5 minutes:
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
For this, I have created a cronjob as below:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
          restartPolicy: OnFailure
But it is failing to start the container, showing the message:
Back-off restarting failed container
And with the error code 127:
State: Terminated
Reason: Error
Exit Code: 127
From what I checked, exit code 127 means the command was not found. How can I then run the kubectl command as a cron job? Am I missing something?
Note: I had posted a similar question (Scheduled restart of Kubernetes pod without downtime), but that was more about having the main deployment itself run as a cronjob; here I'm trying to run a kubectl command (which does the restart) using a CronJob, so I thought it would be better to post it separately.
kubectl describe cronjob hello -n jp-test:
Name: hello
Namespace: jp-test
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"batch/v1beta1","kind":"CronJob","metadata":{"annotations":{},"name":"hello","namespace":"jp-test"},"spec":{"jobTemplate":{"spec":{"templ...
Schedule: */5 * * * *
Concurrency Policy: Allow
Suspend: False
Starting Deadline Seconds: <unset>
Selector: <unset>
Parallelism: <unset>
Completions: <unset>
Pod Template:
Labels: <none>
Containers:
hello:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
Environment: <none>
Mounts: <none>
Volumes: <none>
Last Schedule Time: Wed, 27 Feb 2019 14:10:00 +0100
Active Jobs: hello-1551273000
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 6m cronjob-controller Created job hello-1551272700
Normal SuccessfulCreate 1m cronjob-controller Created job hello-1551273000
Normal SawCompletedJob 16s cronjob-controller Saw completed job: hello-1551272700
kubectl describe job hello -v=5 -n jp-test
Name: hello-1551276000
Namespace: jp-test
Selector: controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
Labels: controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276000
Annotations: <none>
Controlled By: CronJob/hello
Parallelism: 1
Completions: 1
Start Time: Wed, 27 Feb 2019 15:00:02 +0100
Pods Statuses: 0 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276000
Containers:
hello:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 7m job-controller Created pod: hello-1551276000-lz4dp
Normal SuccessfulDelete 1m job-controller Deleted pod: hello-1551276000-lz4dp
Warning BackoffLimitExceeded 1m (x2 over 1m) job-controller Job has reached the specified backoff limit
Name: hello-1551276300
Namespace: jp-test
Selector: controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
Labels: controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276300
Annotations: <none>
Controlled By: CronJob/hello
Parallelism: 1
Completions: 1
Start Time: Wed, 27 Feb 2019 15:05:02 +0100
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276300
Containers:
hello:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m job-controller Created pod: hello-1551276300-8d5df
Long story short, BusyBox doesn't have kubectl installed.
You can check this yourself with kubectl run -i --tty busybox --image=busybox -- sh, which will run a BusyBox pod as an interactive shell.
I would recommend using the bitnami/kubectl:latest image instead.
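To confirm that image actually ships kubectl before wiring it into the CronJob, a quick throwaway check could look like this (the pod name kubectl-test is just a placeholder):
kubectl run kubectl-test --rm -i --tty --restart=Never --image=bitnami/kubectl:latest --command -- kubectl version --client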
Also keep in mind that you will need to set up proper RBAC; otherwise you will get Error from server (Forbidden): services is forbidden.
You could use something like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: jp-test
  name: jp-runner
rules:
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  verbs:
  - 'patch'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jp-runner
  namespace: jp-test
subjects:
- kind: ServiceAccount
  name: sa-jp-runner
  namespace: jp-test
roleRef:
  kind: Role
  name: jp-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-jp-runner
  namespace: jp-test
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-jp-runner
          containers:
          - name: hello
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
          restartPolicy: OnFailure
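Assuming the manifests above are saved to a single file (cronjob-rbac.yaml below is just a placeholder name), you can apply them and then watch the resulting Jobs and their logs to confirm the patch runs:
kubectl apply -f cronjob-rbac.yaml -n jp-test
kubectl get cronjob,jobs -n jp-test
kubectl logs -n jp-test job/<job-name>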
You need to make the CronJob's container download the cluster configuration so that you can run kubectl commands against the cluster. Here is an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: drupal-cron
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: drupal-cron
            image: juampynr/digital-ocean-cronjob:latest
            env:
            - name: DIGITALOCEAN_ACCESS_TOKEN
              valueFrom:
                secretKeyRef:
                  name: api
                  key: key
            command: ["/bin/bash","-c"]
            args:
            - doctl kubernetes cluster kubeconfig save drupster;
              POD_NAME=$(kubectl get pods -l tier=frontend -o=jsonpath='{.items[0].metadata.name}');
              kubectl exec $POD_NAME -c drupal -- vendor/bin/drush core:cron;
          restartPolicy: OnFailure
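Note that this example reads the DigitalOcean access token from a Secret named api with key key (names taken from the manifest above). Assuming you create that Secret yourself, it could look like:
kubectl create secret generic api --from-literal=key=<your-digitalocean-api-token>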
I posted an answer describing how I did this in a different thread: https://stackoverflow.com/a/62321138/1120652
Related
I was hoping to get a little help; my Google-fu didn't get me much closer. I'm trying to install the metrics server on my Fedora CoreOS Kubernetes 4-node cluster like so:
kubectl apply -f deploy/kubernetes/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
The service never seems to become available:
kubectl describe apiservice v1beta1.metrics.k8s.io
Name: v1beta1.metrics.k8s.io
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apiregistration.k8s.io/v1beta1","kind":"APIService","metadata":{"annotations":{},"name":"v1beta1.metrics.k8s.io"},"spec":{"...
API Version: apiregistration.k8s.io/v1
Kind: APIService
Metadata:
Creation Timestamp: 2020-03-04T16:53:33Z
Resource Version: 1611816
Self Link: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io
UID: 65d9a56a-c548-4d7e-a647-8ce7a865a266
Spec:
Group: metrics.k8s.io
Group Priority Minimum: 100
Insecure Skip TLS Verify: true
Service:
Name: metrics-server
Namespace: kube-system
Port: 443
Version: v1beta1
Version Priority: 100
Status:
Conditions:
Last Transition Time: 2020-03-04T16:53:33Z
Message: failing or missing response from https://10.3.230.59:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.3.230.59:443/apis/metrics.k8s.io/v1beta1: 403
Reason: FailedDiscoveryCheck
Status: False
Type: Available
Events: <none>
Here is what I have found while diagnosing and googling around:
kubectl get deploy,svc -n kube-system |egrep metrics-server
deployment.apps/metrics-server 1/1 1 1 8m7s
service/metrics-server ClusterIP 10.3.230.59 <none> 443/TCP 8m7s
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
Error from server (ServiceUnavailable): the server is currently unable to handle the request
kubectl get all --all-namespaces | grep -i metrics-server
kube-system pod/metrics-server-75b5d446cd-zj4jm 1/1 Running 0 9m11s
kube-system service/metrics-server ClusterIP 10.3.230.59 <none> 443/TCP 9m11s
kube-system deployment.apps/metrics-server 1/1 1 1 9m11s
kube-system replicaset.apps/metrics-server-75b5d446cd 1 1 1 9m11s
kubectl logs -f metrics-server-75b5d446cd-zj4jm -n kube-system
I0304 16:53:36.475657 1 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0304 16:53:38.229267 1 authentication.go:296] Cluster doesn't provide requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
I0304 16:53:38.267760 1 secure_serving.go:116] Serving securely on [::]:4443
kubectl get -n kube-system deployment metrics-server -o yaml | grep -i args -A 10
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"k8s-app":"metrics-server"},"name":"metrics-server","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"k8s-app":"metrics-server"}},"template":{"metadata":{"labels":{"k8s-app":"metrics-server"},"name":"metrics-server"},"spec":{"containers":[{"args":["--cert-dir=/tmp","--secure-port=4443","--kubelet-insecure-tls","--kubelet-preferred-address-types=InternalIP"],"image":"k8s.gcr.io/metrics-server-amd64:v0.3.6","imagePullPolicy":"IfNotPresent","name":"metrics-server","ports":[{"containerPort":4443,"name":"main-port","protocol":"TCP"}],"securityContext":{"readOnlyRootFilesystem":true,"runAsNonRoot":true,"runAsUser":1000},"volumeMounts":[{"mountPath":"/tmp","name":"tmp-dir"}]}],"nodeSelector":{"beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64"},"serviceAccountName":"metrics-server","volumes":[{"emptyDir":{},"name":"tmp-dir"}]}}}}
creationTimestamp: "2020-03-04T16:53:33Z"
generation: 1
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
resourceVersion: "1611810"
selfLink: /apis/apps/v1/namespaces/kube-system/deployments/metrics-server
uid: 006e758e-bd33-47d7-8378-d3a8081ee8a8
spec:
--
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
image: k8s.gcr.io/metrics-server-amd64:v0.3.6
imagePullPolicy: IfNotPresent
name: metrics-server
ports:
- containerPort: 4443
name: main-port
Finally, here is my deployment config:
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
I'm at a loss as to what it could be. I just want the metrics service to start and a basic kubectl top node to display some info; all I get is
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
I have searched the internet and tried adding the args: and command: lines, but no luck:
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
Can anyone shed light on how to fix this? Thanks
Pastebin log file
Log File
I've reproduced your issue. I used Calico as the CNI.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
fedora-master Ready master 6m27s v1.17.3
fedora-worker-1 Ready <none> 4m48s v1.17.3
fedora-worker-2 Ready <none> 4m46s v1.17.3
fedora-master:~/metrics-server$ kubectl describe apiservice v1beta1.metrics.k8s.io
Status:
Conditions:
Last Transition Time: 2020-03-12T16:04:59Z
Message: failing or missing response from https://10.99.122.196:443/apis/metrics.k8s.io/v
1beta1: Get https://10.99.122.196:443/apis/metrics.k8s.io/v1beta1: net/http: request canceled while waiting
for connection (Client.Timeout exceeded while awaiting headers)
fedora-master:~/metrics-server$ kubectl top pod
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
When you have only one node in the cluster, the default settings in the metrics-server repo work correctly. The issue occurs when there are more nodes; I used 1 master and 2 workers to reproduce it. Below is an example deployment that works correctly (it has all the required args). First, remove your current metrics-server YAMLs (kubectl delete -f deploy/kubernetes/) and then execute:
$ git clone https://github.com/kubernetes-sigs/metrics-server
$ cd metrics-server/deploy/kubernetes/
$ vi metrics-server-deployment.yaml
Paste below YAML:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      hostNetwork: true
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
        - /metrics-server
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        - --cert-dir=/tmp
        - --secure-port=4443
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
save and quit using :wq
$ cd ~/metrics-server
$ kubectl apply -f deploy/kubernetes/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
Wait a while for metrics-server to gather a few metrics from nodes.
$ kubectl describe apiservice v1beta1.metrics.k8s.io
Name: v1beta1.metrics.k8s.io
Namespace:
...
Metadata:
Creation Timestamp: 2020-03-12T16:57:58Z
...
Spec:
Group: metrics.k8s.io
Group Priority Minimum: 100
Insecure Skip TLS Verify: true
Service:
Name: metrics-server
Namespace: kube-system
Port: 443
Version: v1beta1
Version Priority: 100
Status:
Conditions:
Last Transition Time: 2020-03-12T16:58:01Z
Message: all checks passed
Reason: Passed
Status: True
Type: Available
Events: <none>
After a few minutes you can use top:
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
fedora-master 188m 9% 1315Mi 17%
fedora-worker-1 109m 5% 982Mi 13%
fedora-worker-2 84m 4% 969Mi 13%
If you still encounter issues, add - --v=6 to the deployment and provide the logs from the metrics-server pod:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  args:
  - /metrics-server
  - --v=6
  - --kubelet-preferred-address-types=InternalIP
  - --kubelet-insecure-tls
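You can then pull the verbose logs straight from the Deployment, or by the k8s-app: metrics-server label used in the manifests above:
kubectl -n kube-system logs deploy/metrics-server
kubectl -n kube-system logs -l k8s-app=metrics-server --tail=100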
You need to carefully check the logs of the calico-node pods. In my case I had some other network interfaces, and Calico's autodetection mechanism was picking the wrong interface (IP address). Consult this documentation: https://projectcalico.docs.tigera.io/reference/node/configuration.
What I did in my case was simply:
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=cidr=172.16.8.0/24
Here cidr is my "working" network. After this, all calico-node pods restarted and suddenly everything was fine.
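If you suspect the same problem, a quick way to see which address each calico-node picked is to look at the pods and grep their logs for the autodetection message (the k8s-app=calico-node label is the one used by the standard Calico manifests):
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
kubectl -n kube-system logs -l k8s-app=calico-node | grep -i autodetect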
I am trying to use persistent volume claims and I am facing this issue.
This is my postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: database-persistent-volume-claim
      containers:
      - name: postgres
        image: postgres
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-storage
          subPath: postgres
When I debug the pod using describe:
kubectl describe pod postgres-deployment-8576df7bfc-8mp5t
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m4s default-scheduler Successfully assigned default/postgres-deployment-8576df7bfc-8mp5t to docker-desktop
Normal Pulled 67s (x8 over 2m58s) kubelet, docker-desktop Successfully pulled image "postgres"
Warning Failed 67s (x8 over 2m58s) kubelet, docker-desktop Error: failed to prepare subPath for volumeMount "postgres-storage" of container "postgres"
Normal Pulling 53s (x9 over 3m3s) kubelet, docker-desktop Pulling image "postgres"
My pod is showing this error:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-deployment-8576df7bfc-8mp5t 0/1 CreateContainerConfigError 0 5m5
I am not sure where the problem is in the config, but I am sure it is related to volumes, because the problem appeared after I added the volumes.
Remove the subPath. Can you try the YAML below?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: database-persistent-volume-claim
      containers:
      - name: postgres
        image: postgres
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-storage
I just deployed it and it works:
master $ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
postgres-deployment 1/1 1 1 4m13s
master $ kubectl get po
NAME READY STATUS RESTARTS AGE
postgres-deployment-6b66bdd748-5q76h 1/1 Running 0 4m13s
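If you do want to keep the subPath, it is also worth confirming that the claim referenced in your manifest is actually bound and checking the pod events for the exact subPath error (the claim name and label below are taken from your YAML):
kubectl get pvc database-persistent-volume-claim
kubectl describe pod -l component=postgres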
We have a Kubernetes cluster with the Istio proxy running.
At first I created a cronjob that reads from the database and updates a value if found. It worked fine.
Then it turned out we already had a service that does the database update, so I changed the database code into a service call.
conn := dial.Service("service:3550", grpc.WithInsecure())
client := protobuf.NewServiceClient(conn)
client.Update(ctx)
But Istio rejects the calls with an RBAC error. It just rejects them and doesn't say why.
Is it possible to add a role to a cronjob? How can we do that?
The mTLS meshpolicy is PERMISSIVE.
The Kubernetes version is 1.17 and the Istio version is 1.3.
API Version: authentication.istio.io/v1alpha1
Kind: MeshPolicy
Metadata:
Creation Timestamp: 2019-12-05T16:06:08Z
Generation: 1
Resource Version: 6578
Self Link: /apis/authentication.istio.io/v1alpha1/meshpolicies/default
UID: 25f36b0f-1779-11ea-be8c-42010a84006d
Spec:
Peers:
Mtls:
Mode: PERMISSIVE
The cronjob (describe output):
Name: cronjob
Namespace: serve
Labels: <none>
Annotations: <none>
Schedule: */30 * * * *
Concurrency Policy: Allow
Suspend: False
Successful Job History Limit: 1
Failed Job History Limit: 3
Pod Template:
Labels: <none>
Containers:
service:
Image: service:latest
Port: <none>
Host Port: <none>
Environment:
JOB_NAME: (v1:metadata.name)
Mounts: <none>
Volumes: <none>
Last Schedule Time: Tue, 17 Dec 2019 09:00:00 +0100
Active Jobs: <none>
Events:
Edit: I have turned off RBAC for my namespace in ClusterRBACConfig and now it works. So my conclusion is that cronjobs are affected by roles, and it should be possible to add a role and call other services.
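For reference, with Istio 1.3's v1alpha1 RBAC API (replaced by AuthorizationPolicy in later releases), such a role would look roughly like the sketch below; the resource names and the cronjob-sa ServiceAccount are placeholders, and the CronJob's pod template would have to run under that ServiceAccount:
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: service-caller
  namespace: serve
spec:
  rules:
  - services: ["service.serve.svc.cluster.local"]
    methods: ["*"]
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: service-caller-binding
  namespace: serve
spec:
  subjects:
  - user: "cluster.local/ns/serve/sa/cronjob-sa"
  roleRef:
    kind: ServiceRole
    name: service-caller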
The cronjob needs proper permissions in order to run if RBAC is enabled.
One of the solutions in this case would be to add a ServiceAccount to the cronjob configuration file that has enough privileges to execute what it needs to.
Since you already have existing services in the namespace, you can check whether there is an existing ServiceAccount for the specific namespace by using:
$ kubectl get serviceaccounts -n serve
If there is an existing ServiceAccount, you can add it to your CronJob manifest YAML file.
Like in this example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: adwords-api-scale-up-cron-job
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 100
      template:
        spec:
          serviceAccountName: scheduled-autoscaler-service-account
          containers:
          - name: adwords-api-scale-up-container
            image: bitnami/kubectl:1.15-debian-9
            command:
            - bash
            args:
            - "-xc"
            - |
              kubectl scale --replicas=2 --v=7 deployment/adwords-api-deployment
            volumeMounts:
            - name: kubectl-config
              mountPath: /.kube/
              readOnly: true
          volumes:
          - name: kubectl-config
            hostPath:
              path: $HOME/.kube # Replace $HOME with an evident path location
          restartPolicy: OnFailure
Then, under Pod Template, the Service Account should be visible:
$ kubectl describe cronjob adwords-api-scale-up-cron-job
Name: adwords-api-scale-up-cron-job
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"batch/v1beta1","kind":"CronJob","metadata":{"annotations":{},"name":"adwords-api-scale-up-cron-job","namespace":"default"},...
Schedule: */2 * * * *
Concurrency Policy: Allow
Suspend: False
Successful Job History Limit: 3
Failed Job History Limit: 1
Starting Deadline Seconds: <unset>
Selector: <unset>
Parallelism: <unset>
Completions: <unset>
Active Deadline Seconds: 100s
Pod Template:
Labels: <none>
Service Account: scheduled-autoscaler-service-account
Containers:
adwords-api-scale-up-container:
Image: bitnami/kubectl:1.15-debian-9
Port: <none>
Host Port: <none>
Command:
bash
Args:
-xc
kubectl scale --replicas=2 --v=7 deployment/adwords-api-deployment
Environment: <none>
Mounts:
/.kube/ from kubectl-config (ro)
Volumes:
kubectl-config:
Type: HostPath (bare host directory volume)
Path: $HOME/.kube
HostPathType:
Last Schedule Time: <unset>
Active Jobs: <none>
Events: <none>
In case of a custom RBAC configuration, I suggest referring to the Kubernetes documentation.
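As a rough sketch (not part of the original example), if the Job authenticates with the pod's ServiceAccount rather than the mounted kubeconfig, a namespaced Role that is just enough for the kubectl scale call above might look like this, bound to the scheduled-autoscaler-service-account used in the CronJob; verify the exact resources and verbs against your cluster version:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-scaler
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "deployments/scale"]
  verbs: ["get", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-scaler-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: scheduled-autoscaler-service-account
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-scaler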
Hope this helps.
I am deploying a web agent via web-agent-deployment.yaml, so I ran the command below:
root@ip-10-11.x.x:~/ignite# kubectl create -f web-agent-deployment.yaml
deployment web-agent created
But still no web-agent pod spins up at all. Please check the command output below:
root#ip-10-10-11.x.x:~/ignite# kubectl get pods -n ignite
NAME READY STATUS RESTARTS AGE
ignite-cluster-6qhmf 1/1 Running 0 2h
ignite-cluster-lpgrt 1/1 Running 0 2h
As per the official doc, it should come up like below:
$ kubectl get pods -n ignite
NAME READY STATUS RESTARTS AGE
web-agent-5596bd78c-h4272 1/1 Running 0 1h
This is my web-agent file:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: web-agent
  namespace: ignite
spec:
  selector:
    matchLabels:
      app: web-agent
  replicas: 1
  template:
    metadata:
      labels:
        app: web-agent
    spec:
      serviceAccountName: ignite-cluster
      containers:
      - name: web-agent
        image: apacheignite/web-agent
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
        env:
        - name: DRIVER_FOLDER
          value: "./jdbc-drivers"
        - name: NODE_URI
          value: ""https://10.11.X.Y:8080"" #my One of worker Node IP
        - name: SERVER_URI
          value: "http://frontend.web-console.svc.cluster.local"
        - name: TOKENS
          value: ""
        - name: NODE_LOGIN
          value: web-agent
        - name: NODE_PASSWORD
          value: password
How do I debug why its status is CrashLoopBackOff?
I am not using minikube; I am working on an AWS Kubernetes instance.
I followed this tutorial.
https://github.com/mkjelland/spring-boot-postgres-on-k8s-sample
When I do
kubectl create -f specs/spring-boot-app.yml
and check status by
kubectl get pods
it gives
spring-boot-postgres-sample-67f9cbc8c-qnkzg 0/1 CrashLoopBackOff 14 50m
The command below
kubectl describe pods spring-boot-postgres-sample-67f9cbc8c-qnkzg
gives
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m18s (x350 over 78m) kubelet, ip-172-31-11-87 Back-off restarting failed container
Command kubectl get pods --all-namespaces gives
NAMESPACE NAME READY STATUS RESTARTS AGE
default constraintpod 1/1 Running 1 88d
default postgres-78f78bfbfc-72bgf 1/1 Running 0 109m
default rcsise-krbxg 1/1 Running 1 87d
default spring-boot-postgres-sample-667f87cf4c-858rx 0/1 CrashLoopBackOff 4 110s
default twocontainers 2/2 Running 479 89d
kube-system coredns-86c58d9df4-kr4zj 1/1 Running 1 89d
kube-system coredns-86c58d9df4-qqq2p 1/1 Running 1 89d
kube-system etcd-ip-172-31-6-149 1/1 Running 8 89d
kube-system kube-apiserver-ip-172-31-6-149 1/1 Running 1 89d
kube-system kube-controller-manager-ip-172-31-6-149 1/1 Running 1 89d
kube-system kube-flannel-ds-amd64-4h4x7 1/1 Running 1 89d
kube-system kube-flannel-ds-amd64-fcvf2 1/1 Running 1 89d
kube-system kube-proxy-5sgjb 1/1 Running 1 89d
kube-system kube-proxy-hd7tr 1/1 Running 1 89d
kube-system kube-scheduler-ip-172-31-6-149 1/1 Running 1 89d
The command kubectl logs spring-boot-postgres-sample-667f87cf4c-858rx doesn't print anything.
Why don't you:
run a dummy container (run an endless sleep command),
kubectl exec -it <pod-name> -- bash into it,
and run the program directly so you can look at its output.
It's an easier form of debugging on K8s.
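A minimal version of that approach, assuming the application image from your deployment has bash available, could be:
# start a throwaway pod from the same image that just sleeps
kubectl run debug-shell --image=<mydockerHUbaccount>/spring-boot-postgres-on-k8s:v1 --restart=Never --command -- sleep infinity
# open a shell inside it and start the app by hand to see its output
kubectl exec -it debug-shell -- bash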
First of all, I fixed my Postgres deployment; there was a "pod has unbound PersistentVolumeClaims" error, and I fixed it with the help of this post:
pod has unbound PersistentVolumeClaims
So now my postgres deployment is running.
kubectl logs spring-boot-postgres-sample-67f9cbc8c-qnkzg doesn't print anything, which means there is something wrong in the config file.
kubectl describe pod spring-boot-postgres-sample-67f9cbc8c-qnkzg stated that the container was terminated and the reason was Completed.
I fixed it by making the container sleep forever, by adding:
# Just sleep forever
command: [ "sleep" ]
args: [ "infinity" ]
So now my deployment is running.
But then I exposed my service with:
kubectl expose deployment spring-boot-postgres-sample --type=LoadBalancer --port=8080
but I couldn't get an External-IP, so I did:
kubectl patch svc <svc-name> -n <namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
So I got my external IP as "172.31.71.218".
But now the problem is that curl http://172.31.71.218:8080/ times out.
Did I do anything wrong?
Here is my deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spring-boot-postgres-sample
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      name: spring-boot-postgres-sample
      labels:
        app: spring-boot-postgres-sample
    spec:
      containers:
      - name: spring-boot-postgres-sample
        command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
        env:
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_user
        - name: POSTGRES_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_password
        - name: POSTGRES_HOST
          valueFrom:
            configMapKeyRef:
              name: hostname-config
              key: postgres_host
        image: <mydockerHUbaccount>/spring-boot-postgres-on-k8s:v1
Here is my postgres.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: default
data:
  postgres_user: postgresuser
  postgres_password: password
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: postgres-pv-claim
      containers:
      - image: postgres
        name: postgres
        env:
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_user
        - name: POSTGRES_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres_password
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    app: postgres
Here is how I got the hostname-config map:
kubectl create configmap hostname-config --from-literal=postgres_host=$(kubectl get svc postgres -o jsonpath="{.spec.clusterIP}")
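To double-check what ended up in that ConfigMap (and therefore which POSTGRES_HOST the app will see), you can read it back and compare it with the service:
kubectl get configmap hostname-config -o jsonpath='{.data.postgres_host}'
kubectl get svc postgres -o wide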
I was able to reproduce the scenario. It seems there is a connectivity issue between the app and the Postgres DB, so the app fails to start. Please find the logs below; they might help you.
$ kubectl get po
NAME READY STATUS RESTARTS AGE
spring-boot-postgres-sample-5d7c85d98b-qwvjr 0/1 CrashLoopBackOff 19 1h
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaAutoConfiguration.class]: Invocation of init method failed; nested exception is org.hibernate.service.spi.ServiceException: Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
2019-05-23 10:53:01.889 ERROR 1 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
org.postgresql.util.PSQLException: Connection to :5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:262) ~[postgresql-9.4.1212.jre7.jar!/:9.4.1212.jre7]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51) ~[postgresql-9.4.1212.jre7.jar!/:9.4.1212.jre7]