Using the rollout restart command in a CronJob, in GKE (Kubernetes)

I want to periodically restart a deployment using a Kubernetes CronJob.
Please check what the problem with the YAML file is.
When I execute the command from my local command line, the deployment restarts normally, but the restart does not work from the CronJob.
e.g. $ kubectl rollout restart deployment my-ingress -n my-app
My CronJob YAML file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: deployment-restart
  namespace: my-app
spec:
  schedule: '0 8 */60 * *'
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: deployment-restart
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              command:
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment/my-ingress -n my-app'
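The likely culprit is the last list item: each YAML list entry under command: becomes exactly one argv element, so the whole string 'deployment/my-ingress -n my-app' is passed to kubectl as a single argument, and kubectl looks for a deployment literally named "my-ingress -n my-app". A quick local demonstration, no cluster needed:

```shell
# Each YAML list item under `command:` becomes exactly one argv entry.
# One quoted string -> one argument (what the CronJob above passes):
printf '[%s]\n' 'deployment/my-ingress -n my-app'
# Three separate items -> three arguments (what kubectl expects):
printf '[%s]\n' 'deployment/my-ingress' '-n' 'my-app'
```

Splitting the flags into separate list items, or invoking kubectl through a shell, avoids this.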

As David suggested, run kubectl through a shell so the arguments are parsed correctly:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-jp-runner
          containers:
            - name: hello
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - kubectl rollout restart deployment my-ingress -n my-app
          restartPolicy: OnFailure
I would also suggest checking the Role and ServiceAccount permissions.
Example for reference:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: kubectl-cron
rules:
  - apiGroups:
      - extensions
      - apps
    resources:
      - deployments
    verbs:
      - 'get'
      - 'patch'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubectl-cron
  namespace: default
subjects:
  - kind: ServiceAccount
    name: sa-kubectl-cron
    namespace: default
roleRef:
  kind: Role
  name: kubectl-cron
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-kubectl-cron
  namespace: default
---
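To verify that the ServiceAccount actually has the permissions before wiring it into the CronJob, kubectl's impersonation support is handy (a suggestion requiring cluster access; names taken from the example above):

```shell
# Check what the ServiceAccount is allowed to do; expect "yes" for verbs
# granted by the Role above.
kubectl auth can-i patch deployments -n default \
  --as=system:serviceaccount:default:sa-kubectl-cron
```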

Related

CronJob Pod with RBAC Role via Serviceaccount Keeps Throwing Forbidden Error

I want to run patching of StatefulSets for a specific use case from a Pod via a CronJob. To do so, I created the following plan with a custom ServiceAccount, Role and RoleBinding to permit the Pod access to the apps API group with the patch verb, but I keep running into the following error:
Error from server (Forbidden): statefulsets.apps "test-statefulset" is forbidden: User "system:serviceaccount:test-namespace:test-serviceaccount" cannot get resource "statefulsets" in API group "apps" in the namespace "test-namespace"
my k8s plan:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    env: test
  name: test-serviceaccount
  namespace: test-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    env: test
  name: test-role
  namespace: test-namespace
rules:
  - apiGroups:
      - apps/v1
    resourceNames:
      - test-statefulset
    resources:
      - statefulsets
    verbs:
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    env: test
  name: test-binding
  namespace: test-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-role
subjects:
  - kind: ServiceAccount
    name: test-serviceaccount
    namespace: test-namespace
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    env: test
  name: test-job
  namespace: test-namespace
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 3
  jobTemplate:
    metadata:
      labels:
        env: test
    spec:
      activeDeadlineSeconds: 900
      backoffLimit: 1
      parallelism: 1
      template:
        metadata:
          labels:
            env: test
        spec:
          containers:
            - args:
                - kubectl -n test-namespace patch statefulset test-statefulset -p '{"spec":{"replicas":0}}'
                - kubectl -n test-namespace patch statefulset test-statefulset -p '{"spec":{"replicas":1}}'
              command:
                - /bin/sh
                - -c
              image: bitnami/kubectl
          restartPolicy: Never
          serviceAccountName: test-serviceaccount
  schedule: '*/5 * * * *'
  startingDeadlineSeconds: 300
  successfulJobsHistoryLimit: 3
  suspend: false
So far, to debug:
I checked whether the Pod and ServiceAccount association worked as expected, and it did: the name of the secret mounted on the Pod that the CronJob starts is correct.
I also used a simpler Role where apiGroups was "" (i.e., the core API group) and tried to get pods from that Pod: same error.
role description:
Name:         test-role
Labels:       env=test
Annotations:  <none>
PolicyRule:
  Resources             Non-Resource URLs  Resource Names      Verbs
  ---------             -----------------  --------------      -----
  statefulsets.apps/v1  []                 [test-statefulset]  [patch]
rolebinding description:
Name:         test-binding
Labels:       env=test
Annotations:  <none>
Role:
  Kind:  Role
  Name:  test-role
Subjects:
  Kind            Name                 Namespace
  ----            ----                 ---------
  ServiceAccount  test-serviceaccount  test-namespace
StatefulSets need two verbs for kubectl patch to work: get and patch. patch alone won't work, because kubectl first fetches the current object before patching it.
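A corrected Role sketch with get added alongside patch, and with apiGroups set to the group name (apps) rather than the group/version string (apps/v1), which RBAC does not match; the rest of the plan is unchanged:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    env: test
  name: test-role
  namespace: test-namespace
rules:
  - apiGroups:
      - apps            # group name only, not apps/v1
    resourceNames:
      - test-statefulset
    resources:
      - statefulsets
    verbs:
      - get             # kubectl patch reads the object first
      - patch
```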

Is there a kubernetes role definition to allow the command `kubectl rollout restart deploy <deployment>`?

I want a deployment in kubernetes to have the permission to restart itself, from within the cluster.
I know I can create a serviceaccount and bind it to the pod, but I'm missing the name of the most specific permission (i.e. not just allowing '*') to allow for the command
kubectl rollout restart deploy <deployment>
here's what I have, and ??? is what I'm missing
apiVersion: v1
kind: ServiceAccount
metadata:
  name: restart-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: restarter
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["list", "???"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: restart-sa
    namespace: default
roleRef:
  kind: Role
  name: restarter
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - image: nginx
      name: nginx
  serviceAccountName: restart-sa
I believe the following is the minimum permissions required to restart a deployment:
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments"]
    resourceNames: [$DEPLOYMENT]
    verbs: ["get", "patch"]
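For context on why get plus patch is sufficient: kubectl rollout restart works by patching the pod template with a kubectl.kubernetes.io/restartedAt annotation, which triggers an ordinary rolling update. The patch it sends looks roughly like this (timestamp illustrative):

```yaml
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2021-01-01T00:00:00Z"
```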
If you want a deployment to be able to restart itself from within the cluster, you need to grant the permission through RBAC authorization.
In your YAML you are missing some specific permissions under the Role rules; add the verbs in the format below:
verbs: ["get", "patch"]
Also, run the workload as a Deployment rather than a bare Pod.
Make sure that you add serviceAccountName: restart-sa in the Deployment's pod template spec, as a sibling of containers, as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
      serviceAccountName: restart-sa
Then you can restart the deployment using the below command:
$ kubectl rollout restart deployment [deployment_name]

Time-based scaling with Kubernetes CronJob: How to avoid deployments overriding minReplicas

I have a HorizontalPodAutoscalar to scale my pods based on CPU. The minReplicas here is set to 5:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  minReplicas: 5
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
I've then added Cron jobs to scale up/down my horizontal pod autoscaler based on time of day:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: cron-runner
rules:
  - apiGroups: ["autoscaling"]
    resources: ["horizontalpodautoscalers"]
    verbs: ["patch", "get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cron-runner
  namespace: production
subjects:
  - kind: ServiceAccount
    name: sa-cron-runner
    namespace: production
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: production
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-up-job
  namespace: production
spec:
  schedule: "56 11 * * 1-6"
  successfulJobsHistoryLimit: 0  # Remove after successful completion
  failedJobsHistoryLimit: 1      # Retain failed so that we see it
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
            - name: django-scale-up-job
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":8}}'
          restartPolicy: OnFailure
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-down-job
  namespace: production
spec:
  schedule: "30 20 * * 1-6"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 0  # Remove after successful completion
  failedJobsHistoryLimit: 1      # Retain failed so that we see it
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
            - name: django-scale-down-job
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":5}}'
          restartPolicy: OnFailure
This works really well, except that now when I deploy, it overwrites this minReplicas value with the minReplicas from the HorizontalPodAutoscaler spec (in my case, 5).
I'm deploying my HPA using kubectl apply -f ~/autoscale.yaml
Is there a way of handling this situation? Do I need to create some kind of shared logic so that my deployment scripts can work out what the minReplicas value should be? Or is there a simpler way of handling this?
I think you could also consider the following two options:
Use Helm to manage the life-cycle of your application with the lookup function:
The main idea behind this solution is to query the state of a specific cluster resource (here the HPA) before trying to create/recreate it with the helm install/upgrade commands.
Helm.sh: Docs: Chart template guide: Functions and pipelines: Using the lookup function
In other words, check the current minReplicas value each time before you upgrade your application stack.
Manage the HPA resource separately from the application manifest files:
Here you can hand this task over to a dedicated HPA operator, which can coexist with your CronJobs that adjust minReplicas according to a specific schedule:
Banzaicloud.com: Blog: K8S HPA Operator
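The Helm lookup idea can be sketched like this (a hypothetical template snippet; the chart layout and default value of 5 are assumptions based on the HPA manifest above):

```yaml
# templates/hpa.yaml (sketch): keep the live minReplicas if the HPA already exists
{{- $existing := lookup "autoscaling/v2beta2" "HorizontalPodAutoscaler" "production" "myapp-web" }}
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  minReplicas: {{ if $existing }}{{ $existing.spec.minReplicas }}{{ else }}5{{ end }}
  maxReplicas: 10
```

Note that lookup returns an empty map during helm template or --dry-run, so the else branch also covers offline rendering.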

How to delete Kubernetes job automatically after job completion

I am running a kubernetes job on GKE and want to delete the job automatically after the job is completed.
Here is my configuration file for the job.
I set ttlSecondsAfterFinished: 0 but the job was not deleted automatically.
Am I missing something?
cluster / node version: 1.12.8-gke.10
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  # automatically clean up finished job
  ttlSecondsAfterFinished: 0
  template:
    metadata:
      name: myjob
    spec:
      containers:
        - name: myjob
          image: gcr.io/GCP_PROJECT/myimage:COMMIT_SHA
          command: ["bash"]
          args: ["deploy.sh"]
      # Do not restart containers after they exit
      restartPolicy: Never
It looks like this feature (the TTL-after-finished controller) is still not available on GKE, because it is an alpha feature there:
https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/
https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters#about_feature_stages
To ensure stability and production quality, normal GKE clusters only enable features that are beta or higher. Alpha features are not enabled on normal clusters because they are not production-ready or upgradeable.
It depends on how you created the job.
If you are using a CronJob, you can set spec.successfulJobsHistoryLimit and spec.failedJobsHistoryLimit to 0. That tells Kubernetes not to store any previously finished jobs.
If you are creating Jobs from YAML directly, you have to delete them manually. However, you can also set up a CronJob to execute the following command every few minutes:
kubectl delete job $(kubectl get job -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
It will delete all jobs with status succeeded.
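An alternative not in the original answer: on clusters that support the status.successful field selector for Jobs, the filtering can be done server-side instead of with client-side jsonpath:

```shell
# Delete all Jobs in the current namespace that completed successfully
kubectl delete jobs --field-selector status.successful=1
```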
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jp-runner
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jp-runner
subjects:
  - kind: ServiceAccount
    name: sa-jp-runner
roleRef:
  kind: Role
  name: jp-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-jp-runner
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: clean-jobs
spec:
  concurrencyPolicy: Forbid
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-jp-runner
          containers:
            - name: clean-jobs
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - kubectl delete jobs $(kubectl get jobs -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}')
          restartPolicy: Never
      backoffLimit: 0

Kubernetes puzzle: Populate environment variable from file (mounted volume)

I have a Pod or Job yaml spec file (I can edit it) and I want to launch it from my local machine (e.g. using kubectl create -f my_spec.yaml)
The spec declares a volume mount. There would be a file in that volume that I want to use as value for an environment variable.
I want the volume file's contents to end up in the environment variable (without me jumping through hoops by somehow "downloading" the file to my local machine and inserting it into the spec).
P.S. It's obvious how to do that if you have control over the command of the container. But in case of launching arbitrary image, I have no control over the command attribute as I do not know it.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: puzzle
spec:
  template:
    spec:
      containers:
        - name: main
          image: arbitrary-image
          env:
            - name: my_var
              valueFrom: <Contents of /mnt/my_var_value.txt>
          volumeMounts:
            - name: my-vol
              mountPath: /mnt
      volumes:
        - name: my-vol
          persistentVolumeClaim:
            claimName: my-pvc
You can create a Deployment running kubectl in an endless loop that constantly polls the volume and updates a ConfigMap from it. After that you can mount the created ConfigMap into your Pod. It's a little bit hacky, but it works and updates the ConfigMap automatically. The only requirement is that the PV must be ReadWriteMany or ReadOnlyMany (in which case you can mount it read-only in all Pods).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cm-creator
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: cm-creator
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "update", "get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-creator
  namespace: default
subjects:
  - kind: User
    name: system:serviceaccount:default:cm-creator
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cm-creator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cm-creator
  namespace: default
  labels:
    app: cm-creator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cm-creator
  template:
    metadata:
      labels:
        app: cm-creator
    spec:
      serviceAccountName: cm-creator
      containers:
        - name: cm-creator
          image: bitnami/kubectl
          command:
            - /bin/bash
            - -c
          args:
            - while true; do
              kubectl create cm myconfig --from-file=my_var=/mnt/my_var_value.txt --dry-run -o yaml | kubectl apply -f -;
              sleep 60;
              done
          volumeMounts:
            - name: my-vol
              mountPath: /mnt
              readOnly: true
      volumes:
        - name: my-vol
          persistentVolumeClaim:
            claimName: my-pvc
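The target Pod can then consume the ConfigMap key as an environment variable, a sketch assuming the myconfig name and my_var key used by the Deployment above:

```yaml
env:
  - name: my_var
    valueFrom:
      configMapKeyRef:
        name: myconfig
        key: my_var
```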