I created a Kubernetes Job and the Job object was created, but it fails when deployed to the Kubernetes cluster. When I try to redeploy it using Helm, the Job is not redeployed (the old Job is not deleted and a new one created, the way a microservice Deployment rolls over).
How can I get the Job to redeploy without manually deleting it in Kubernetes? Can I tell it to recreate the container?
job.yaml contains:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-init-job"
  namespace: {{ .Release.Namespace }}
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: disabled
        "helm.sh/hook-delete-policy": before-hook-creation
        "helm.sh/hook": pre-install,pre-upgrade,pre-delete
        "helm.sh/hook-weight": "-5"
    spec:
      serviceAccountName: {{ .Release.Name }}-init-service-account
      containers:
      - name: app-installer
        image: some image
        command:
        - /bin/bash
        - -c
        - echo Hello executing k8s init-container
        securityContext:
          readOnlyRootFilesystem: true
      restartPolicy: OnFailure
The Job is not getting redeployed:
kubectl get jobs -n namespace
NAME            COMPLETIONS   DURATION   AGE
test-init-job   0/1           13h        13h
kubectl describe job test-init-job -n test

Name:           test-init-job
Namespace:      test
Selector:       controller-uid=86370470-0964-42d5-a9c1-00c8a462239f
Labels:         app.kubernetes.io/managed-by=Helm
Annotations:    meta.helm.sh/release-name: test
                meta.helm.sh/release-namespace: test
Parallelism:    1
Completions:    1
Start Time:     Fri, 14 Oct 2022 18:03:31 +0530
Pods Statuses:  0 Running / 0 Succeeded / 1 Failed
Pod Template:
  Labels:           controller-uid=86370470-0964-42d5-a9c1-00c8a462239f
                    job-name=test-init-job
  Annotations:      helm.sh/hook: pre-install,pre-upgrade
                    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
                    helm.sh/hook-weight: -5
                    linkerd.io/inject: disabled
  Service Account:  test-init-service-account
  Containers:
   test-app-installer:
    Image:      artifactory/test-init-container:1.0.0
    Port:       <none>
    Host Port:  <none>
    Environment:
      test.baseUrl:         baser
      test.authType:        basic
      test.basic.username:  jack
      test.basic.password:  password
    Mounts:
      /etc/test/config from test-installer-config-vol (ro)
  Volumes:
   test-installer-config-vol:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      test-installer-config-files
    Optional:  false
Events:         <none>
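For comparison, here is a minimal sketch (an assumption about the intended behaviour, not a tested fix) of the same Job with the Helm hook annotations on the Job's own metadata. Helm only treats a resource as a hook, and therefore deletes and recreates it under the before-hook-creation policy, when the helm.sh/hook annotations are on the resource itself rather than on the pod template:

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-init-job"
  namespace: {{ .Release.Namespace }}
  annotations:
    # Hook annotations belong here, on the Job's metadata
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
    "helm.sh/hook-weight": "-5"
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: disabled   # pod-level annotations stay on the template
    spec:
      serviceAccountName: {{ .Release.Name }}-init-service-account
      containers:
      - name: app-installer
        image: some image             # placeholder image from the question
        command: ["/bin/bash", "-c", "echo Hello executing k8s init-container"]
      restartPolicy: OnFailure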
When I run
docker pull ghcr.io/.../test-service
everything works just fine. However, when I try to use the image in a Deployment and apply it to a Minikube instance, I get:
  Warning  Failed   14s (x4 over 93s)  kubelet  Error: ErrImagePull
  Normal   BackOff  1s (x6 over 92s)   kubelet  Back-off pulling image "ghcr.io/.../test-service:latest"
  Warning  Failed   1s (x6 over 92s)   kubelet  Error: ImagePullBackOff
How do I configure Minikube to use my GitHub PAT (Personal Access Token)?
My deployment looks like this...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
  namespace: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
      version: v1
  template:
    metadata:
      labels:
        app: test-app
        version: v1
    spec:
      serviceAccountName: test-app
      containers:
      - image: ghcr.io/.../test-service:latest
        imagePullPolicy: Always
        name: test-app
        ports:
        - containerPort: 8000
That is because you did not create a Docker registry secret. To create one, you can follow https://dev.to/asizikov/using-github-container-registry-with-kubernetes-38fb
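As a rough sketch of that approach (the secret name ghcr-secret is a placeholder, not from the question), you create a docker-registry secret from your GitHub username and PAT:

# Create the pull secret in the namespace the Deployment runs in
kubectl create secret docker-registry ghcr-secret \
  --namespace foo \
  --docker-server=ghcr.io \
  --docker-username=<github-username> \
  --docker-password=<github-pat>

and then reference it from the Deployment's pod spec:

      imagePullSecrets:
      - name: ghcr-secret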
Another, simpler way is to enable the registry-creds addon in Minikube:
$ minikube addons configure registry-creds
# setup ghcr.io with your github username and PAT (Personal Access Token) token
$ minikube addons enable registry-creds
Then you can reference the GitHub Container Registry credentials as dpr-secret, as the Deployment example below shows:
# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: b2b-integration-api
  labels:
    app: api-app
  namespace: default
spec:
  template:
    metadata:
      name: b2b-integration-api-pods
      labels:
        app: api-app
        tier: api-layer
    spec:
      containers:
      - name: b2b-integration-api
        image: ghcr.io/<your github user or org>/<image>:<version>
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: dpr-secret
  replicas: 2
  selector:
    matchLabels:
      tier: api-layer
I'm practicing with Kubernetes taints. I have tainted my node and then created a Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
      tolerations:
      - key: "test"
        operator: "Equal"
        value: "blue"
        effect: "NoSchedule"
kubectl describe nodes knode2:

Name:               knode2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=knode2
                    kubernetes.io/os=linux
                    testing=test
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: **********
                    projectcalico.org/IPv4IPIPTunnelAddr: ********
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 27 Oct 2020 17:23:47 +0200
Taints:             test=blue:NoSchedule
But when I deploy this YAML file, the pods do not go only to that tainted node. Why is that?
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes; a toleration only allows a pod to land on a tainted node, it does not attract the pod to it. That is exactly the opposite of what you intend to do.
To constrain a Pod so that it can only run on particular node(s), or to prefer particular nodes, use nodeSelector or node affinity.
nodeSelector example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
Node affinity is conceptually similar to nodeSelector -- it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.
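For illustration, a minimal node-affinity sketch (assuming you want to target the testing=test label that is visible on knode2 above; note the toleration is still needed so the pod is allowed onto the tainted node):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: testing
            operator: In
            values:
            - test
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "test"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"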
I see a failed Job that created no pods, and there is no information in the events. Since there are no pods, I cannot check the logs.
Here is the description of the Job that failed:
kubectl describe job time-limited-rbac-1604010900 -n add-ons

Name:                     time-limited-rbac-1604010900
Namespace:                add-ons
Selector:                 controller-uid=0816b9b3-814c-4802-83cf-5d5f3456701d
Labels:                   controller-uid=0816b9b3-814c-4802-83cf-5d5f3456701d
                          job-name=time-limited-rbac-1604010900
Annotations:              <none>
Controlled By:            CronJob/time-limited-rbac
Parallelism:              1
Completions:              <unset>
Start Time:               Thu, 29 Oct 2020 15:35:08 -0700
Active Deadline Seconds:  280s
Pods Statuses:            0 Running / 0 Succeeded / 1 Failed
Pod Template:
  Labels:           controller-uid=0816b9b3-814c-4802-83cf-5d5f3456701d
                    job-name=time-limited-rbac-1604010900
  Service Account:  time-limited-rbac
  Containers:
   time-limited-rbac:
    Image:      bitnami/kubectl:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
    Args:
      /var/tmp/time-limited-rbac.sh
    Environment:  <none>
    Mounts:
      /var/tmp/ from script (rw)
  Volumes:
   script:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      time-limited-rbac-script
    Optional:  false
Events:                   <none>
Here is the CronJob definition:
apiVersion: v1
items:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    annotations:
      meta.helm.sh/release-name: time-limited-rbac
      meta.helm.sh/release-namespace: add-ons
    labels:
      app.kubernetes.io/name: time-limited-rbac
    name: time-limited-rbac
  spec:
    concurrencyPolicy: Replace
    failedJobsHistoryLimit: 1
    jobTemplate:
      metadata:
        creationTimestamp: null
      spec:
        activeDeadlineSeconds: 280
        backoffLimit: 3
        parallelism: 1
        template:
          metadata:
            creationTimestamp: null
          spec:
            containers:
            - args:
              - /var/tmp/time-limited-rbac.sh
              command:
              - /bin/bash
              image: bitnami/kubectl:latest
              imagePullPolicy: Always
              name: time-limited-rbac
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
              - mountPath: /var/tmp/
                name: script
            dnsPolicy: ClusterFirst
            restartPolicy: Never
            schedulerName: default-scheduler
            securityContext: {}
            serviceAccount: time-limited-rbac
            serviceAccountName: time-limited-rbac
            terminationGracePeriodSeconds: 0
            volumes:
            - configMap:
                defaultMode: 356
                name: time-limited-rbac-script
              name: script
    schedule: '*/5 * * * *'
    successfulJobsHistoryLimit: 3
    suspend: false
Is there any way to tune this CronJob to avoid such scenarios? We hit this issue at least once or twice every day.
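The root cause is not visible from the events above, but as a rough sketch of the knobs usually adjusted in this kind of situation (the field names are real CronJob/Job fields; the values are only illustrative assumptions, not a verified fix):

spec:
  schedule: '*/5 * * * *'
  startingDeadlineSeconds: 120     # how late a scheduled run may still start before it counts as missed
  failedJobsHistoryLimit: 3        # keep more failed Jobs around so they can be inspected
  jobTemplate:
    spec:
      activeDeadlineSeconds: 280   # consider raising this if the deadline fires before a pod can run
      backoffLimit: 3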
We have Istio installed without the sidecar enabled globally, and I want to enable it for a specific service in a new namespace.
I've added the following to my deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gow
  labels:
    app: gow
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: gow
        tier: service
      annotations:
        sidecar.istio.io/inject: "true"
While running
kubectl get namespace -L istio-injection
I don't see anything enabled; everything is empty.
How can I verify that the sidecar is created? I don't see anything new.
You can use istioctl kube-inject to do that:
kubectl create namespace asdd
istioctl kube-inject -f nginx.yaml | kubectl apply -f -
nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: asdd
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "True"
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Result:
nginx-deployment-55b6fb474b-77788 2/2 Running 0 5m36s
nginx-deployment-55b6fb474b-jrkqk 2/2 Running 0 5m36s
Let me know if you have any more questions.
You can describe your pod to see the list of containers; one of them should be the sidecar container. Look for one called istio-proxy.
kubectl describe pod <pod-name>
It should look something like this:
$ kubectl describe pod demo-red-pod-8b5df99cc-pgnl7

SNIPPET from the output:

Name:               demo-red-pod-8b5df99cc-pgnl7
Namespace:          default
.....
Labels:             app=demo-red
                    pod-template-hash=8b5df99cc
                    version=version-red
Annotations:        sidecar.istio.io/status={"version":"3c0b8d11844e85232bc77ad85365487638ee3134c91edda28def191c086dc23e","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status:             Running
IP:                 10.32.0.6
Controlled By:      ReplicaSet/demo-red-pod-8b5df99cc
Init Containers:
  istio-init:
    Container ID:  docker://bef731eae1eb3b6c9d926cacb497bb39a7d9796db49cd14a63014fc1a177d95b
    Image:         docker.io/istio/proxy_init:1.0.2
    Image ID:      docker-pullable://docker.io/istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
    .....
    State:         Terminated
      Reason:      Completed
    .....
    Ready:         True
Containers:
  demo-red:
    Container ID:  docker://8cd9957955ff7e534376eb6f28b56462099af6dfb8b9bc37aaf06e516175495e
    Image:         chugtum/blue-green-image:v3
    Image ID:      docker-pullable://docker.io/chugtum/blue-green-image@sha256:274756dbc215a6b2bd089c10de24fcece296f4c940067ac1a9b4aea67cf815db
    State:         Running
      Started:     Sun, 09 Dec 2018 18:12:31 -0800
    Ready:         True
  istio-proxy:
    Container ID:  docker://ca5d690be8cd6557419cc19ec4e76163c14aed2336eaad7ebf17dd46ca188b4a
    Image:         docker.io/istio/proxyv2:1.0.2
    Image ID:      docker-pullable://docker.io/istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
    Args:
      proxy
      sidecar
    .....
    State:         Running
      Started:     Sun, 09 Dec 2018 18:12:31 -0800
    Ready:         True
.....
You need the admission webhook for automatic sidecar injection to be in place. You can check its namespace selector with:
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep "namespaceSelector:" -A5
There can be many reasons for sidecar injection failures, as described in the Istio sidecar-injection documentation, which includes a table of the final injection status for different combinations of namespace label, pod annotation, and webhook policy. The takeaway from that table is that for automatic injection it is mandatory to label the namespace with istio-injection=enabled:
kubectl label namespace default istio-injection=enabled --overwrite
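Note that the injector only acts on pods at creation time, so after labeling the namespace the existing pods must be recreated for the sidecar to appear. One way to do that (using the deployment and namespace names from the earlier example, adjust to your own) is:

kubectl rollout restart deployment nginx-deployment -n asdd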
We have a Kubernetes cluster with the Istio proxy running.
At first I created a CronJob that reads from the database and updates a value if found. It worked fine.
Then it turned out we already had a service that does the database update, so I changed the database code into a service call:
conn := dial.Service("service:3550", grpc.WithInsecure())
client := protobuf.NewServiceClient(conn)
client.Update(ctx)
But Istio rejects the calls with an RBAC error. It just rejects them and doesn't say why.
Is it possible to add a role to a CronJob? How can we do that?
The mTLS MeshPolicy is PERMISSIVE.
The Kubernetes version is 1.17 and the Istio version is 1.3.
API Version:  authentication.istio.io/v1alpha1
Kind:         MeshPolicy
Metadata:
  Creation Timestamp:  2019-12-05T16:06:08Z
  Generation:          1
  Resource Version:    6578
  Self Link:           /apis/authentication.istio.io/v1alpha1/meshpolicies/default
  UID:                 25f36b0f-1779-11ea-be8c-42010a84006d
Spec:
  Peers:
    Mtls:
      Mode:  PERMISSIVE
The CronJob description:
Name:                          cronjob
Namespace:                     serve
Labels:                        <none>
Annotations:                   <none>
Schedule:                      */30 * * * *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  1
Failed Job History Limit:      3
Pod Template:
  Labels:  <none>
  Containers:
   service:
    Image:      service:latest
    Port:       <none>
    Host Port:  <none>
    Environment:
      JOB_NAME:  (v1:metadata.name)
    Mounts:      <none>
  Volumes:       <none>
Last Schedule Time:  Tue, 17 Dec 2019 09:00:00 +0100
Active Jobs:         <none>
Events:
Edit: I have turned off RBAC for my namespace in ClusterRbacConfig and now it works. So my conclusion is that CronJobs are affected by Istio's authorization rules, and it should be possible to add a role and call other services.
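For Istio 1.3 that would mean granting the CronJob's ServiceAccount access through the rbac.istio.io API. A rough sketch (the target service host, namespace, and ServiceAccount name below are assumptions based on the question, not a tested configuration):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: call-update-service
  namespace: serve
spec:
  rules:
  # Allow calls to the gRPC service the CronJob dials on port 3550
  - services: ["service.serve.svc.cluster.local"]
    methods: ["*"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bind-cronjob-to-update-service
  namespace: serve
spec:
  subjects:
  # The identity the CronJob's pods run as (hypothetical ServiceAccount name)
  - user: "cluster.local/ns/serve/sa/cronjob-service-account"
  roleRef:
    kind: ServiceRole
    name: "call-update-service"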
The CronJob needs proper permissions in order to run if RBAC is enabled.
One of the solutions in this case is to add a ServiceAccount with enough privileges to the CronJob configuration file.
Since you already have existing services in the namespace, you can check whether there is an existing ServiceAccount in that namespace by using:
$ kubectl get serviceaccounts -n serve
If there is an existing ServiceAccount, you can add it to your CronJob manifest, as in this example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: adwords-api-scale-up-cron-job
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 100
      template:
        spec:
          serviceAccountName: scheduled-autoscaler-service-account
          containers:
          - name: adwords-api-scale-up-container
            image: bitnami/kubectl:1.15-debian-9
            command:
            - bash
            args:
            - "-xc"
            - |
              kubectl scale --replicas=2 --v=7 deployment/adwords-api-deployment
            volumeMounts:
            - name: kubectl-config
              mountPath: /.kube/
              readOnly: true
          volumes:
          - name: kubectl-config
            hostPath:
              path: $HOME/.kube  # Replace $HOME with the actual home directory path
          restartPolicy: OnFailure
Then, under Pod Template in the describe output, the Service Account should be visible:
$ kubectl describe cronjob adwords-api-scale-up-cron-job
Name:                          adwords-api-scale-up-cron-job
Namespace:                     default
Labels:                        <none>
Annotations:                   kubectl.kubernetes.io/last-applied-configuration:
                                 {"apiVersion":"batch/v1beta1","kind":"CronJob","metadata":{"annotations":{},"name":"adwords-api-scale-up-cron-job","namespace":"default"},...
Schedule:                      */2 * * * *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Active Deadline Seconds:       100s
Pod Template:
  Labels:           <none>
  Service Account:  scheduled-autoscaler-service-account
  Containers:
   adwords-api-scale-up-container:
    Image:      bitnami/kubectl:1.15-debian-9
    Port:       <none>
    Host Port:  <none>
    Command:
      bash
    Args:
      -xc
      kubectl scale --replicas=2 --v=7 deployment/adwords-api-deployment
    Environment:  <none>
    Mounts:
      /.kube/ from kubectl-config (ro)
  Volumes:
   kubectl-config:
    Type:          HostPath (bare host directory volume)
    Path:          $HOME/.kube
    HostPathType:
Last Schedule Time:            <unset>
Active Jobs:                   <none>
Events:                        <none>
In case of a custom RBAC configuration, I suggest referring to the Kubernetes RBAC documentation; a rough sketch is below.
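A minimal Role/RoleBinding sketch (an illustration based on this example, not from the question) that would let scheduled-autoscaler-service-account scale that one Deployment in its namespace, instead of mounting a kubeconfig from the host:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-scaler
  namespace: default
rules:
# Read the Deployment and update its scale subresource
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get"]
- apiGroups: ["apps"]
  resources: ["deployments/scale"]
  verbs: ["get", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-scaler-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: scheduled-autoscaler-service-account
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-scaler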
Hope this helps.