Manually triggering a Kubernetes CronJob fails

My k8s version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.7-eks-d88609", GitCommit:"d886092805d5cc3a47ed5cf0c43de38ce442dfcb", GitTreeState:"clean", BuildDate:"2021-07-31T00:29:12Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
My cronjob file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app: git-updater
  name: git-updater
  namespace: my-ns
spec:
  concurrencyPolicy: Replace
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: git-updater
          name: git-updater
        spec:
          containers:
          - args:
            - sh
            - -c
            - apk add --no-cache git openssh-client && cd /repos && ls -d */*.git/
              | while read dir; do cd /repos/$dir; git gc; git fetch --prune; done
            image: alpine
            name: updater
            volumeMounts:
            - mountPath: test
              name: test
            - mountPath: test
              name: test
          restartPolicy: Never
          volumes:
          - persistentVolumeClaim:
              claimName: pvc
            name: repos
  schedule: '@daily'
  successfulJobsHistoryLimit: 4
...
When I create the CronJob from the file, all goes well:
kubectl -n my-ns create -f git-updater.yaml
cronjob.batch/git-updater created
But I would like to trigger it manually just for testing purposes, so I do:
kubectl -n my-ns create job --from=cronjob/git-updater test-job
error: unknown object type *v1beta1.CronJob
Which is strange because I have just been able to create it from file.
In another similar post it was suggested to switch from apiVersion batch/v1beta1 to batch/v1... but when I do so, I get:
kubectl -n my-ns create -f git-updater.yaml
error: unable to recognize "git-updater.yaml": no matches for kind "CronJob" in version "batch/v1"
I am a little stuck here, how can I test my newly and successfully created CronJob?

That behavior is caused by your v1.20 cluster version.
The short answer is that you should upgrade the cluster to 1.21, where CronJob is served from the stable batch/v1 API.
Check the question no matches for kind "CronJob" in version "batch/v1", especially this comment:
One thing is the api-version, another one is in which version the
resource you are creating is available. By version 1.19.x you do have
batch/v1 as an available api-resource, but you don't have the resource
CronJob under it.
You can confirm which API group/version serves CronJob on your own cluster, as shown below.
See also: Kubernetes Create Job from Cronjob not working
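For example, you can check which API group versions the server actually serves and where CronJob lives (standard kubectl commands; the output layout varies slightly between kubectl versions):

kubectl api-versions | grep batch           # lists the served batch versions, e.g. batch/v1, batch/v1beta1
kubectl api-resources --api-group=batch     # shows which version serves CronJob vs Job
kubectl explain cronjob | head -n 3         # prints the VERSION kubectl resolves CronJob to

On a 1.20 server, CronJob should only appear under batch/v1beta1, while Job is already served from batch/v1.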
I have a 1.20.9 GKE cluster and face the same issue as you:
$kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.9-gke.1001", GitCommit:"1fe18c314ed577f6047d2712a9d1c8e498e22381", GitTreeState:"clean", BuildDate:"2021-08-23T23:06:28Z", GoVersion:"go1.15.13b5", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
$kubectl -n my-ns create -f git-updater.yaml
cronjob.batch/git-updater created
$kubectl -n my-ns create job --from=cronjob/git-updater test-job
error: unknown object type *v1beta1.CronJob
At the same time, your CronJob YAML works perfectly with apiVersion: batch/v1 on a 1.21 GKE cluster:
$kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3-gke.2001", GitCommit:"77cdf52c2a4e6c1bbd8ae4cd75c864e9abf4c79e", GitTreeState:"clean", BuildDate:"2021-08-20T18:38:46Z", GoVersion:"go1.16.6b7", Compiler:"gc", Platform:"linux/amd64"}
$kubectl -n my-ns create -f cronjob.yaml
cronjob.batch/git-updater created
$ kubectl -n my-ns create job --from=cronjob/git-updater test-job
job.batch/test-job created
$kubectl describe job test-job -n my-ns
Name: test-job
Namespace: my-ns
Selector: controller-uid=ef8ab972-cff1-4889-ae11-60c67a374660
Labels: app=git-updater
controller-uid=ef8ab972-cff1-4889-ae11-60c67a374660
job-name=test-job
Annotations: cronjob.kubernetes.io/instantiate: manual
Parallelism: 1
Completions: 1
Start Time: Tue, 02 Nov 2021 17:54:38 +0000
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 16m job-controller Created pod: test-job-vcxx9
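Once you are done testing, the manually triggered Job can be cleaned up again (names match the example above):

kubectl -n my-ns delete job test-job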

Related

JOB_COMPLETION_INDEX environment variable is empty when submitting an Indexed Job to kubernetes

I've made my cluster using minikube.
As far as I know, the Indexed Job feature was added in Kubernetes version 1.21,
but when I create my Job, it looks like there is no $JOB_COMPLETION_INDEX environment variable.
Here is my YAML:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  parallelism: 2
  completions: 2
  completionMode: Indexed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: 'my-container'
        image: 'busybox'
        command: [
          "sh",
          "-c",
          "echo My Index is $JOB_COMPLETION_INDEX && sleep 3600",
        ]
job.batch/my-job created
But as I mentioned before, it looks like the Job is NOT an Indexed Job.
Below are the logs of my pods (controlled by my-job):
$ kubectl logs pod/my-job-wxhh6
My Index is
$ kubectl logs pod/my-job-nbxkr
My Index is
It seems that the $JOB_COMPLETION_INDEX environment variable is empty.
I'll skip the details to keep it brief, but when I accessed the container directly, there was also no $JOB_COMPLETION_INDEX.
Below is the result of the kubectl describe job.batch/my-job command:
Name: my-job
Namespace: default
Selector: controller-uid=6e8bda0c-f1ee-47b9-95ec-87419b3dfaaf
Labels: controller-uid=6e8bda0c-f1ee-47b9-95ec-87419b3dfaaf
job-name=my-job
Annotations: <none>
Parallelism: 2
Completions: 2
Start Time: Sat, 28 Aug 2021 03:56:46 +0900
Pods Statuses: 2 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: controller-uid=6e8bda0c-f1ee-47b9-95ec-87419b3dfaaf
job-name=my-job
Containers:
my-container:
Image: busybox
Port: <none>
Host Port: <none>
Command:
sh
-c
echo My Index is $JOB_COMPLETION_INDEX && sleep 3600
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 100s job-controller Created pod: my-job-wxhh6
Normal SuccessfulCreate 100s job-controller Created pod: my-job-nbxkr
There is no annotation. According to the documentation, the batch.kubernetes.io/job-completion-index annotation should be there.
My version is at least Kubernetes 1.21, where the Indexed Job feature was introduced.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
$ minikube kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
What should I do?
As of now (2021.08.29), IndexedJob is an alpha feature, so I started minikube with the feature-gates flag:
minikube start --feature-gates=IndexedJob=true
and it works well.
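For reference, once the feature gate is active the controller also records the index in the pod annotation batch.kubernetes.io/job-completion-index, and the Kubernetes docs show how to expose it explicitly through the downward API. A minimal sketch of that container snippet, reusing the names from the Job above:

      containers:
      - name: 'my-container'
        image: 'busybox'
        command: ["sh", "-c", "echo My Index is $JOB_COMPLETION_INDEX && sleep 3600"]
        env:
        # sketch: surface the completion index from the pod annotation (per the Indexed Job docs)
        - name: JOB_COMPLETION_INDEX
          valueFrom:
            fieldRef:
              fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']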
The documentation is incorrect; you need to be on the rapid release channel, at least version 1.22.

EKS cannot create persistent volume

I am deploying Prometheus, which needs a persistent volume (I have also tried with other StatefulSets), but the persistent volume is not created and the PersistentVolumeClaim shows the following error after kubectl describe -n {namespace} {pvc-name}:
Type: Warning
Reason: ProvisioningFailed
From: persistentvolume-controller
Message: (combined from similar events): Failed to provision volume with StorageClass "gp2": error querying for all zones: error listing AWS instances: "UnauthorizedOperation: You are not authorized to perform this operation.\n\tstatus code: 403, request id: d502ce90-8af0-4292-b872-ca04900d41dc"
kubectl get sc
NAME PROVISIONER AGE
gp2 (default) kubernetes.io/aws-ebs 7d17h
kubectl describe sc gp2
Name: gp2
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
K8s versions (AWS EKS):
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:41:55Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-eks-b7174d", GitCommit:"b7174db5ee0e30c94a0b9899c20ac980c0850fc8", GitTreeState:"clean", BuildDate:"2019-10-18T17:56:01Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
helm version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}
I solved this problem by adding AmazonEKSClusterPolicy and AmazonEKSServicePolicy to the EKS cluster role.
Check your IAM policies, and ensure that you are using the correct access keys.
Also check that the IAM role has permission to provision dynamic storage.
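A hedged sketch of attaching the managed policies mentioned above with the AWS CLI; the role name my-eks-cluster-role is a placeholder for whatever IAM role your EKS control plane actually uses:

# role name below is a placeholder; substitute the role attached to your EKS cluster
aws iam attach-role-policy --role-name my-eks-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name my-eks-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy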

GitLab Kubernetes integration error; configuration of Helm Tiller already exists

After connecting my GitLab repo to my self-hosted Kubernetes cluster via Operations > Kubernetes, I want to install Helm Tiller via the GUI, but I get:
Something went wrong while installing Helm Tiller
Kubernetes error: configmaps "values-content-configuration-helm" already exists
There are no pods running on the cluster and kubectl version returns:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Update: the output of kubectl get cm --all-namespaces:
NAMESPACE NAME DATA AGE
gitlab-managed-apps values-content-configuration-helm 3 7d
...
deleting this namespace solves the issue!
Find the gitlab-managed-apps namespace with kubectl get cm --all-namespaces:
NAMESPACE NAME DATA AGE
gitlab-managed-apps values-content-configuration-helm 3 7d
...
deleting this namespace solves the issue:
kubectl delete namespace gitlab-managed-apps
Thanks to Lev Kuznetsov.
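If you would rather not drop the whole namespace, deleting only the conflicting ConfigMap from the error message may be enough (untested alternative):

kubectl -n gitlab-managed-apps delete configmap values-content-configuration-helm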

Creating a Kubernetes job results in 'batch/, Kind=Job matches multiple kinds'

I recently upgraded from Kubernetes 1.2.0 to Kubernetes 1.3.0, and now I get the following error when I try to start a job:
$ kubectl create -f pijob.yaml
unable to recognize "pijob.yaml": batch/, Kind=Job matches multiple kinds [batch/v1, Kind=Job batch/v2alpha1, Kind=Job]
where pijob.yaml is the job definition from the tutorial:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
The error is confusing because it suggests that apiVersion: batch/v1, Kind: Job should be valid. If I try apiVersion: batch/v2alpha1, Kind: Job, I also get an error:
$ kubectl create -f pijob.yaml
error validating "pijob.yaml": error validating data: couldn't find type: v2alpha1.Job
What am I doing wrong?
Have you tried with apiVersion: extensions/v1beta1?
Check your Kubernetes server and client versions and make them the same:
[root@allinone dan]# kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
[root@allinone dan]# kubectl create -f ./job.yaml
error: unable to recognize "./job.yaml": batch/, Kind=Job matches multiple kinds [batch/v1, Kind=Job batch/v2alpha1, Kind=Job]
[root@allinone dan]# wget https://storage.googleapis.com/kubernetes-release/release/v1.5.1/bin/linux/amd64/kubectl
[root@allinone dan]# chmod +x kubectl
[root@allinone dan]# mv kubectl /usr/local/bin/kubectl
[root@allinone dan]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
[root@allinone dan]# kubectl create -f ./job.yaml
job "pi" created
I had the same error, so I followed the method below:
[root@host141 tensorflow]# wget https://storage.googleapis.com/kubernetes-release/release/v1.5.1/bin/linux/amd64/kubectl ./
[root@host141 tensorflow]# cp /usr/bin/kubectl /usr/bin/kubectl.bak
[root@host141 tensorflow]# cp kubectl /usr/bin/kubectl
[root@host141 tensorflow]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Then I created the job, and the error was gone.
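A quick way to compare client and server and to see which Job versions the API server actually serves (standard kubectl commands; output differs per cluster):

kubectl version                      # compare Client Version vs Server Version
kubectl api-versions | grep batch    # shows whether batch/v1 and/or batch/v2alpha1 are served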
I had the same error message; it turned out that I was not logged in...

nodeSelector not working 1.2.0

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
  nodeSelector:
    docker: twelve
On the kube master:
kubectl get nodes -l docker=twelve
NAME LABELS STATUS AGE
10.10.1.4 docker=twelve,kubernetes.io/hostname=10.10.1.4 Ready 115d
10.10.1.5 docker=twelve,kubernetes.io/hostname=10.10.1.5 Ready 115d
From the event log:
4m 17s 20 busybox Pod FailedScheduling {scheduler } Failed for reason MatchNodeSelector and possibly others
If I remove the nodeSelector, it deploys without issue.
I am trying to handle Docker 1.9.1 and Docker 1.12.1 for various teams, and this is preventing it.
This is a kube cluster on CentOS 7.2.1511 servers.
kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
Restarting the kube-scheduler fixed the issue.
I guess when you add a new label to a node, you need to restart the scheduler.
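For reference, this is roughly how the node labels would be applied and verified, and how the scheduler can be restarted; the kube-scheduler systemd unit name is an assumption and depends on how the cluster was installed:

kubectl label nodes 10.10.1.4 docker=twelve
kubectl label nodes 10.10.1.5 docker=twelve
kubectl get nodes -l docker=twelve    # should list both nodes, as in the question
systemctl restart kube-scheduler      # assumption: scheduler runs as a systemd unit with this name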