no kind "TZCronJob" is registered for version "cronjobber.hidde.co/v1alpha1" - kubernetes

Background
I am using TZCronJob to run cronjobs with timezones in Kubernetes. A sample cronjob.yaml might look like the following (as per the cronjobber docs). Note the timezone specified, the schedule, and kind=TZCronJob:
apiVersion: cronjobber.hidde.co/v1alpha1
kind: TZCronJob
metadata:
  name: hello
spec:
  schedule: "05 09 * * *"
  timezone: "Europe/Amsterdam"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo "Hello, World!"
          restartPolicy: OnFailure
Normally, with any old cronjob in Kubernetes, you can run kubectl create job test-job --from=cronjob/name_of_my_cronjob, as per the kubectl create cronjob docs.
Error
However, when I try to run it with kubectl create job test-job --from=tzcronjob/name_of_my_cronjob (switching the from command to --from=tzcronjob/) I get:
error: from must be an existing cronjob: no kind "TZCronJob" is registered for version "cronjobber.hidde.co/v1alpha1" in scheme "k8s.io/kubernetes/pkg/kubectl/scheme/scheme.go:28"
When I try to take a peek at https://kubernetes.io/kubernetes/pkg/kubectl/scheme/scheme.go:28 I get 404, not found.
This almost worked, but to no avail:
kubectl create job test-job-name-v1 --image=tzcronjob/name_of_image
How can I create a new one-off job from my chart definition?

In Helm there are mechanisms called Hooks.
Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release’s life cycle. For example, you can use hooks to:
Load a ConfigMap or Secret during install before any other charts are loaded.
Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
Run a Job before deleting a release to gracefully take a service out of rotation before removing it.
Hooks work like regular templates, but they have special annotations that cause Helm to utilize them differently. In this section, we cover the basic usage pattern for hooks.
Hooks are declared as an annotation in the metadata section of a manifest:
apiVersion: ...
kind: ....
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
If the resource is a Job kind, Tiller will wait until the job successfully runs to completion. And if the job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.
HOW TO WRITE HOOKS:
Hooks are just Kubernetes manifest files with special annotations in the metadata section. Because they are template files, you can use all of the normal template features, including reading .Values, .Release, and .Template.
For example, this template, stored in templates/post-install-job.yaml, declares a job to be run on post-install:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
What makes this template a hook is the annotation:
annotations:
  "helm.sh/hook": post-install

Have you registered the custom resource TZCronJob? You can use kubectl get crd or kubectl api-versions to check.
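For example, both checks from a shell (the grep patterns simply filter for the cronjobber resources):
# Look for the TZCronJob CustomResourceDefinition
kubectl get crd | grep tzcronjob

# Check that the API group/version from the error message is served
kubectl api-versions | grep cronjobber.hidde.co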

Kubernetes natively supports CronJobs; you don't need a custom resource definition or other third-party objects. Just update the YAML as below (note that the native batch/v1beta1 CronJob has no timezone field; see further down for timezone support) and it should work:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "05 09 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo "Hello, World!"
          restartPolicy: OnFailure
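With the native CronJob above, the one-off Job creation from the question works as expected:
# Create a one-off Job from the native CronJob named "hello"
kubectl create job test-job --from=cronjob/hello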
If you want a timezone-aware cronjob, then follow the steps below to install cronjobber:
# Install CustomResourceDefinition
$ kubectl apply -f https://raw.githubusercontent.com/hiddeco/cronjobber/master/deploy/crd.yaml
# Setup service account and RBAC
$ kubectl apply -f https://raw.githubusercontent.com/hiddeco/cronjobber/master/deploy/rbac.yaml
# Deploy Cronjobber (using the timezone db from the node)
$ kubectl apply -f https://raw.githubusercontent.com/hiddeco/cronjobber/master/deploy/deploy.yaml
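Once the CRD and controller are installed, the TZCronJob manifest from the question should apply cleanly. A quick check, assuming the CRD's plural name is tzcronjobs:
# Confirm the API group from the error message is now registered
kubectl api-versions | grep cronjobber.hidde.co

# Apply the TZCronJob from the question and list it
kubectl apply -f cronjob.yaml
kubectl get tzcronjobs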

Related

How to tell Helm not to run a job

I am trying to make a Kubernetes deployment script using helm.
I created the following 2 jobs (skipped the container template since I guess it does not matter):
templates/jobs/migrate.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migrate
  namespace: {{ .Release.Namespace }}
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  ...
templates/jobs/seed.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-seed
  namespace: {{ .Release.Namespace }}
spec:
  ...
The first job updates the database structure.
The second job resets the database contents and fills it with example data.
Since I did not add a post-install hook to the seed job, I was expecting that job not to run automatically but only when I manually ask it to run.
But it not only ran automatically, it tried to run before migrate.
How can I define a job that I have to manually trigger for it to run?
In vanilla kubernetes jobs run only when I explicitly execute their files using
kubectl apply -f job/seed-database.yaml
How can I do the same using helm?
Replying to your last comment, and thanks to @HubertNNN for his idea:
Can I run a suspended job multiple times? From documentation it seems like a one-time job that cannot be rerun like normal jobs.
It's a normal Job; you just edit the YAML file, setting .spec.suspend: true and its startTime:
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  suspend: true
  parallelism: 1
  completions: 5
  template:
    spec:
      ...
If all Jobs were created in the suspended state and placed in a pending queue, I can achieve priority-based Job scheduling by resuming Jobs in the right order.
More information is here
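To trigger the suspended Job manually later, flip the suspend flag back to false (Job suspension needs a reasonably recent Kubernetes version; myjob is the name from the manifest above):
# Resume the suspended Job; the Job controller then creates its pods
kubectl patch job myjob --type=merge -p '{"spec":{"suspend":false}}'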

How do I make Helm chart Hooks post-install work if other charts are in running state

I have a couple of helm charts in a myapp/templates/ directory, and they deploy as expected with helm install myapp.
These two templates are for example:
database.yaml
cronjob.yaml
I'd like for the cronjob.yaml to only run after the database.yaml is in a running state. I currently have an issue where database.yaml fairly regularly fails in a way we half expect (it's not ideal, but it is what it is).
I've found hooks, but I think I'm either using them incorrectly, or they don't determine whether the pod is in Running, Pending, some state of crashed, etc...
I haven't made any changes to database.yaml in order to use hooks, but in cronjob.yaml, which I only want to run if database.yaml is in a running state, I added the annotations as follows:
cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: database
  annotations:
    "helm.sh/hook": "post-install"
  labels:
    app: database
    service: database
spec:
  schedule: "* * * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: customtask
            image: "{{ .Values.myimage }}"
            command:
            - /bin/sh
            - -c
            - supercooltask.sh
          restartPolicy: Never
How can I change this hook configuration to allow cronjob.yaml to only run if database.yaml deploys and runs successfully?
Use init containers in the Pod spec of the CronJob to check that the DB is up and running.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#podspec-v1-core
Example:
spec:
  template:
    spec:
      initContainers:
      ..
      containers:
      ..
      restartPolicy: OnFailure
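A minimal sketch of such an init container, slotted into the CronJob's jobTemplate.spec.template.spec as in the question's manifest, and assuming the database is reachable through a Service named database on port 5432 (both names are assumptions, not from the original answer):
      initContainers:
      - name: wait-for-db
        image: busybox
        # Poll the (assumed) database Service until it accepts TCP connections
        command:
        - sh
        - -c
        - until nc -z database 5432; do echo "waiting for database"; sleep 2; done
      containers:
      - name: customtask
        image: "{{ .Values.myimage }}"
        command:
        - /bin/sh
        - -c
        - supercooltask.sh
      restartPolicy: Never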

Helm not deleting all the related resources of a chart

I had a helm release whose deployment was not successful. I tried uninstalling it so that I could create a fresh one.
The weird thing I found is that there were some resources created partially (a couple of Jobs) because of the failed deployment. Uninstalling the failed deployment using helm does not remove those partially created resources, which could cause issues when I try to install the release again with some changes.
My question is: is there a way I can ask helm to delete all the related resources of a release completely?
Since there are no details on the partially created resources, one scenario could be that helm uninstall/delete does not delete the PVCs in the namespace. We resolved this by creating a separate namespace to deploy the application into; when the helm release is uninstalled/deleted, we delete the namespace as well. For a fresh deployment, create the namespace again and do the helm installation into that namespace for a clean install. Alternatively, you can change the reclaimPolicy to "Delete" while creating the storageClass (by default the reclaimPolicy is Retain), as mentioned in the post below.
PVC issue on helm: https://github.com/goharbor/harbor-helm/issues/268#issuecomment-505822451
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exist
  clusterNamespace: rook-ceph-system
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Delete
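And for the separate-namespace approach described above, the flow is roughly as follows (release, chart, and namespace names are placeholders; Helm 3 syntax):
# Tear down the failed release and anything left behind in its namespace
helm uninstall myrelease -n myapp
kubectl delete namespace myapp

# Recreate the namespace and install the release fresh
kubectl create namespace myapp
helm install myrelease ./mychart -n myapp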
As you said in the comment, the partially created object is a Job. In Helm there is a concept named hooks, which also runs a Job in different situations like pre-install, post-install, etc. I think you used one of these.
The YAML of an example is given below; if you set "helm.sh/hook-delete-policy": hook-failed instead of hook-succeeded, then the Job will be deleted if the hook fails. For more, please see the official doc on Helm hooks.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name | quote }}
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: {{ .Release.Name | quote }}
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
    spec:
      restartPolicy: Never
      containers:
      - name: pre-install-job
        image: "ubuntu"
        #command: ["/bin/sleep","{{ default "10" .Values.hook.job.sleepyTime }}"]
        args:
        - /bin/bash
        - -c
        - echo "pre-install hook"

How to run a post-init-container-like job in k8s

I am deploying a k8s service using helm. Whenever I scale or update, I want to run a job. I am looking for something like a post-init-container which can run every time and get terminated when completed.
How can we achieve this on a k8s cluster? I am considering a sidecar, but wanted to know if k8s can support this case as a platform.
Thanks.
You can use a helm post-install hook: https://helm.sh/docs/topics/charts_hooks/
Just add the "helm.sh/hook": post-install annotation.
For example:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
Why not just use a Job? Isn't that exactly what you're looking for? Why even complicate it with a post-commit hook? Does it have to be after the container is established? A job runs once and then is terminated.

Kubernetes "kubectl apply" does not update existing deployments

I have a .NET-core web application. This is deployed to an Azure Container Registry. I deploy this to my Azure Kubernetes Service using
kubectl apply -f testdeployment.yaml
with the yaml-file below
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: mycontainerregistry.azurecr.io/myweb:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-registry-key
This works splendidly, but when I change some code, push the new image to the container registry, and run
kubectl apply -f testdeployment
again, the AKS/website does not get updated, until I remove the deployment with
kubectl delete deployment myweb
What should I do to make it overwrite whatever is deployed? I would like to add something in my yaml-file. (I'm trying to use this for continuous delivery in Azure DevOps.)
I believe what you are looking for is imagePullPolicy. The default is IfNotPresent, which means that the latest version will not be pulled.
https://kubernetes.io/docs/concepts/containers/images/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: mycontainerregistry.azurecr.io/myweb
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-registry-key
To ensure that the pod is recreated, rather run:
kubectl delete -f testdeployment && kubectl apply -f testdeployment
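You can then watch the replacement Pods come up, for example with:
# Follow the rollout of the re-created Deployment
kubectl rollout status deployment/myweb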
kubectl does not see any changes in your deployment yaml file, so it will not make any changes. That's one of the problems with using the latest tag.
Tag your image with some incremental version or build number and replace latest with that tag in your CI pipeline (for example with envsubst or similar). This way kubectl knows the image has changed, and you also know what version of the image is running. The latest tag could be any image version.
Simplified example for Azure DevOps:
# <snippet>
image: mycontainerregistry.azurecr.io/myweb:${TAG}
# </snippet>
Pipeline YAML:
stages:
  - stage: Build
    jobs:
      - job: Build
        variables:
          - name: TAG
            value: $(Build.BuildId)
        steps:
          - script: |
              envsubst '${TAG}' < deployment-template.yaml > deployment.yaml
            displayName: Replace Environment Variables
Alternatively you could also use another tool like Replace Tokens (different syntax: #{TAG}#).
First, delete the existing deployment by running the command below against the deployment file (from its relative path):
kubectl delete -f .\deployment-file-name.yaml
Earlier I used to get
deployment.apps/deployment-file-name unchanged
meaning the deployment config remained cached.
It happens when you're fixing errors/typos in the deployment YAML and the config stays cached once the error is cleared.
Only a kubectl delete -f .\deployment-file-name.yaml could remove the cache.
Later you can do the deployment by
kubectl apply -f .\deployment-file-name.yaml
A sample YAML file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-file-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: /platformservice:latest