Hey there, I was trying to deploy my first simple web app (no database) on minikube, but the pod keeps failing with an ImagePullBackOff error.
Yes, I have checked the image name and tag several times.
Here are the logs and yml files.
Namespace: default
Priority: 0
Service Account: default
Labels: app=nodeapp1
pod-template-hash=589c6bd468
Annotations: <none>
Status: Pending
Controlled By: ReplicaSet/nodeapp1-deployment-589c6bd468
Containers:
nodeserver:
Container ID:
Image: ayushftw/nodeapp1:latest
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k6mkb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-k6mkb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapOptional: <nil>
DownwardAPI: true
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m3s default-scheduler Successfully assigned default/nodeapp1-deployment-589c6bd468-5lg2n to minikube
Normal Pulling 2m2s kubelet Pulling image "ayushftw/nodeapp1:latest"
Warning Failed 3s kubelet Failed to pull image "ayushftw/nodeapp1:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 3s kubelet Error: ErrImagePull
Normal BackOff 2s kubelet Back-off pulling image "ayushftw/nodeapp1:latest"
Warning Failed 2s kubelet Error: ImagePullBackOff
deployment.yml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp1-deployment
  labels:
    app: nodeapp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodeapp1
  template:
    metadata:
      labels:
        app: nodeapp1
    spec:
      containers:
      - name: nodeserver
        image: ayushftw/nodeapp1:latest
        ports:
        - containerPort: 3000
service.yml file
apiVersion: v1
kind: Service
metadata:
  name: nodeapp1-service
spec:
  selector:
    app: nodeapp1
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 31011
Please help if anybody knows anything about this.
I think your internet connection is slow. The timeout to pull an image is 120 seconds, so the kubelet could not pull the image in under 120 seconds.
First, pull the image via Docker:
docker image pull ayushftw/nodeapp1:latest
Then load the downloaded image into minikube:
minikube image load ayushftw/nodeapp1:latest
And then everything should work, because the kubelet will now use the image that is stored locally.
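One thing to double-check after loading: because the image uses the :latest tag, Kubernetes defaults imagePullPolicy to Always, so the kubelet may still try to reach the registry instead of using the loaded image. A minimal sketch of the container section with the pull policy relaxed (the rest of the deployment stays as above):
containers:
- name: nodeserver
  image: ayushftw/nodeapp1:latest
  imagePullPolicy: IfNotPresent   # prefer the image already loaded into minikube
  ports:
  - containerPort: 3000
You can also confirm the image made it into the cluster with minikube image ls (on recent minikube versions) before re-applying the deployment.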
It seems to be an issue with reaching the container registry that hosts your images. Can you try to pull the image manually from the node?
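For example, a quick sanity check from inside the minikube node (assuming minikube's default Docker runtime):
minikube ssh
docker pull ayushftw/nodeapp1:latest
exit
If the pull hangs or times out here as well, the problem is network/registry access from the node rather than anything in the manifests.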
Related
I am trying to create a Kubernetes pod with the following config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
However, I get an ImagePullBackOff, and when I use kubectl describe pod, here's what's shown:
Name: mongodb-deployment-8f6675bc5-jzmvw
Namespace: default
Priority: 0
Node: minikube/192.168.64.2
Start Time: Thu, 10 Dec 2020 16:30:21 +0800
Labels: app=mongodb
pod-template-hash=8f6675bc5
Annotations: <none>
Status: Pending
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: ReplicaSet/mongodb-deployment-8f6675bc5
Containers:
mongodb:
Container ID:
Image: mongo
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment:
MONGO_INITDB_ROOT_USERNAME: <set to the key 'mongo-root-username' in secret 'mongodb-secret'> Optional: false
MONGO_INITDB_ROOT_PASSWORD: <set to the key 'mongo-root-password' in secret 'mongodb-secret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-w5ltt (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-w5ltt:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-w5ltt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 115m default-scheduler Successfully assigned default/mongodb-deployment-8f6675bc5-jzmvw to minikube
Normal Pulling 115m kubelet Pulling image "mongo"
Warning Failed 114m kubelet Failed to pull image "mongo": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 114m kubelet Error: ErrImagePull
Normal BackOff 114m kubelet Back-off pulling image "mongo"
Warning Failed 114m kubelet Error: ImagePullBackOff
I don't think it's a problem with the image/image name. Is there something wrong with my config file?
Any advice would be greatly appreciated!
There are a few possible reasons for an ImagePullBackOff error in Kubernetes; to determine the cause you should run kubectl describe pod. In your case it shows:
Failed to pull image "mongo": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
This error means that the image cannot be pulled from your container registry.
You can try to log in to Docker using the docker login command.
If that fails, you can try modifying your /etc/resolv.conf to add nameserver 8.8.8.8 and then restarting Docker with sudo systemctl restart docker.
Another possible cause is a proxy or VPN connection. You can refer to the answers in this question.
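Roughly, the steps described above as shell commands (8.8.8.8 is just the public Google DNS used as an example; adjust paths and the service manager for your distribution):
docker login
echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
sudo systemctl restart docker
docker pull mongo   # verify the registry is reachable again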
When a pod gets stuck in a Waiting state, what can I do to find out why it's Waiting?
For instance, I have a deployment to AKS which uses ACI.
When I deploy the yaml file, a number of the pods will be stuck in a Waiting state. Running kubectl describe pod selenium121157nodechrome-7bf598579f-kqfqs returns;
State: Waiting
Reason: Waiting
Ready: False
Restart Count: 0
kubectl logs selenium121157nodechrome-7bf598579f-kqfqs returns nothing.
How can I find out what is the pod Waiting for?
Here's the yaml deployment;
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aci-helloworld2
spec:
  replicas: 20
  selector:
    matchLabels:
      app: aci-helloworld2
  template:
    metadata:
      labels:
        app: aci-helloworld2
    spec:
      containers:
      - name: aci-helloworld
        image: microsoft/aci-helloworld
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: linux
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule
Here's the output from a describe pod that's been Waiting for 5 minutes;
matt@Azure:~/2020$ kubectl describe pod aci-helloworld2-86b8d7866d-b9hgc
Name: aci-helloworld2-86b8d7866d-b9hgc
Namespace: default
Priority: 0
Node: virtual-node-aci-linux/
Labels: app=aci-helloworld2
pod-template-hash=86b8d7866d
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/aci-helloworld2-86b8d7866d
Containers:
aci-helloworld:
Container ID: aci://95919def19c28c2a51a806928030d84df4bc6b60656d026d19d0fd5e26e3cd86
Image: microsoft/aci-helloworld
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: Waiting
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hqrj8 (ro)
Volumes:
default-token-hqrj8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hqrj8
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/os=linux
kubernetes.io/role=agent
type=virtual-kubelet
Tolerations: azure.com/aci:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
virtual-kubelet.io/provider
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/aci-helloworld2-86b8d7866d-b9hgc to virtual-node-aci-linux
Based on the official documentation, if your pod is in a Waiting state it means that it was scheduled on a node but can't run on that machine, with failure to pull the image pointed out as the most common cause. You can try to run the image manually with docker pull and docker run to rule out issues with the image itself.
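For example, a quick manual check on any machine with Docker installed, using the image from your deployment:
docker pull microsoft/aci-helloworld
docker run --rm -p 8080:80 microsoft/aci-helloworld
If that pulls and serves fine, the image itself can be ruled out.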
The information from kubectl describe <pod-name> should give you some hints, especially the events section at the bottom. Here's an example of how it can look:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/testpod to cafe
Normal BackOff 50s (x6 over 2m16s) kubelet, cafe Back-off pulling image "busybox"
Normal Pulling 37s (x4 over 2m17s) kubelet, cafe Pulling image "busybox"
It could also be an issue with your nodeSelector and tolerations, but again, that would show up in the events once you describe the pod.
Let me know if this helps and what the output from describe pod looks like.
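If describing each of the 20 replicas gets tedious, you can also pull the events for a single pod directly, for example (pod name taken from your output above):
kubectl get events --field-selector involvedObject.name=aci-helloworld2-86b8d7866d-b9hgc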
I have just moved my first cluster from minikube up to AWS EKS. All went pretty smoothly so far, except I'm running into some DNS issues I think, but only on one of the cluster nodes.
I have two nodes in the cluster running v1.14, and 4 pods of one type and 4 of another. 3 of each work, but 1 of each - both on the same node - starts and then errors (CrashLoopBackOff), with the script inside the container erroring because it can't resolve the hostname for the database. Deleting the errored pod, or even all pods, results in one pod on the same node failing every time.
The database is in its own pod and has a service assigned, none of the other pods of the same type have problems resolving the name or connecting. The database pod is on the same node as the pods that can't resolve the hostname. I'm not sure how to migrate the pod to a different node, but that might be worth trying to see if the problem follows.
No errors in the coredns pods. I'm not sure where to start looking to discover the issue from here, and any help or suggestions would be appreciated.
Providing the configs below. As mentioned, they all work on Minikube, and also they work on one node.
kubectl get pods - note the ages: all pod1 replicas were deleted at the same time and recreated themselves; 3 work fine, the 4th does not.
NAME READY STATUS RESTARTS AGE
pod1-85f7968f7-2cjwt 1/1 Running 0 34h
pod1-85f7968f7-cbqn6 1/1 Running 0 34h
pod1-85f7968f7-k9xv2 0/1 CrashLoopBackOff 399 34h
pod1-85f7968f7-qwcrz 1/1 Running 0 34h
postgresql-865db94687-cpptb 1/1 Running 0 3d14h
rabbitmq-667cfc4cc-t92pl 1/1 Running 0 34h
pod2-94b9bc6b6-6bzf7 1/1 Running 0 34h
pod2-94b9bc6b6-6nvkr 1/1 Running 0 34h
pod2-94b9bc6b6-jcjtb 0/1 CrashLoopBackOff 140 11h
pod2-94b9bc6b6-t4gfq 1/1 Running 0 34h
postgresql service
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  ports:
  - port: 5432
  selector:
    app: postgresql
pod1 deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod1
spec:
  replicas: 4
  selector:
    matchLabels:
      app: pod1
  template:
    metadata:
      labels:
        app: pod1
    spec:
      containers:
      - name: pod1
        image: us.gcr.io/gcp-project-8888888/pod1:latest
        env:
        - name: rabbitmquser
          valueFrom:
            secretKeyRef:
              name: rabbitmq-secrets
              key: rmquser
        volumeMounts:
        - mountPath: /data/files
          name: datafiles
      volumes:
      - name: datafiles
        persistentVolumeClaim:
          claimName: datafiles-pv-claim
      imagePullSecrets:
      - name: container-readonly
pod2 deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod2
spec:
  replicas: 4
  selector:
    matchLabels:
      app: pod2
  template:
    metadata:
      labels:
        app: pod2
    spec:
      containers:
      - name: pod2
        image: us.gcr.io/gcp-project-8888888/pod2:latest
        env:
        - name: rabbitmquser
          valueFrom:
            secretKeyRef:
              name: rabbitmq-secrets
              key: rmquser
        volumeMounts:
        - mountPath: /data/files
          name: datafiles
      volumes:
      - name: datafiles
        persistentVolumeClaim:
          claimName: datafiles-pv-claim
      imagePullSecrets:
      - name: container-readonly
CoreDNS config map to forward DNS to an external service if a name doesn't resolve internally. This is the only place I can think of that could be causing the issue - but as said, it works for one node.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . 8.8.8.8
        cache 30
        loop
        reload
        loadbalance
    }
Errored pod output. It is the same for both pods, as the error occurs in library code common to both. As mentioned, this does not occur for all pods, so the issue likely doesn't lie with the code.
Error connecting to database (psycopg2.OperationalError) could not translate host name "postgresql" to address: Try again
Errored Pod1 description:
Name: xyz-94b9bc6b6-jcjtb
Namespace: default
Priority: 0
Node: ip-192-168-87-230.us-east-2.compute.internal/192.168.87.230
Start Time: Tue, 15 Oct 2019 19:43:11 +1030
Labels: app=pod1
pod-template-hash=94b9bc6b6
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.70.63
Controlled By: ReplicaSet/xyz-94b9bc6b6
Containers:
pod1:
Container ID: docker://f7dc735111bd94b7c7b698e69ad302ca19ece6c72b654057627626620b67d6de
Image: us.gcr.io/xyz/xyz:latest
Image ID: docker-pullable://us.gcr.io/xyz/xyz@sha256:20110cf126b35773ef3a8656512c023b1e8fe5c81dd88f19a64c5bfbde89f07e
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 16 Oct 2019 07:21:40 +1030
Finished: Wed, 16 Oct 2019 07:21:46 +1030
Ready: False
Restart Count: 139
Environment:
xyz: <set to the key 'xyz' in secret 'xyz-secrets'> Optional: false
Mounts:
/data/xyz from xyz (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-m72kz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
xyz:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: xyz-pv-claim
ReadOnly: false
default-token-m72kz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-m72kz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 2m22s (x3143 over 11h) kubelet, ip-192-168-87-230.us-east-2.compute.internal Back-off restarting failed container
Errored Pod 2 description:
Name: xyz-85f7968f7-k9xv2
Namespace: default
Priority: 0
Node: ip-192-168-87-230.us-east-2.compute.internal/192.168.87.230
Start Time: Mon, 14 Oct 2019 21:19:42 +1030
Labels: app=pod2
pod-template-hash=85f7968f7
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.84.69
Controlled By: ReplicaSet/pod2-85f7968f7
Containers:
pod2:
Container ID: docker://f7c7379f92f57ea7d381ae189b964527e02218dc64337177d6d7cd6b70990143
Image: us.gcr.io/xyz-217300/xyz:latest
Image ID: docker-pullable://us.gcr.io/xyz-217300/xyz@sha256:b9cecdbc90c5c5f7ff6170ee1eccac83163ac670d9df5febd573c2d84a4d628d
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 16 Oct 2019 07:23:35 +1030
Finished: Wed, 16 Oct 2019 07:23:41 +1030
Ready: False
Restart Count: 398
Environment:
xyz: <set to the key 'xyz' in secret 'xyz-secrets'> Optional: false
Mounts:
/data/xyz from xyz (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-m72kz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
xyz:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: xyz-pv-claim
ReadOnly: false
default-token-m72kz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-m72kz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m28s (x9208 over 34h) kubelet, ip-192-168-87-230.us-east-2.compute.internal Back-off restarting failed container
At the suggestion of a k8s community member, I applied the following change to my coredns configuration to be more in line with the best practice:
Line: proxy . 8.8.8.8 changed to forward . /etc/resolv.conf 8.8.8.8
I then deleted the pods, and after they were recreated by k8s, the issue did not appear again.
EDIT:
Turns out, that was not the issue at all as shortly afterwards the issue re-occurred and persisted. In the end, it was this: https://github.com/aws/amazon-vpc-cni-k8s/issues/641
Rolled back to 1.5.3 as recommended by Amazon, restarted the cluster, and the issue was resolved.
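For anyone hitting the same thing: assuming the default EKS setup where the VPC CNI runs as the aws-node DaemonSet in kube-system, you can check which plugin version you are on before rolling back:
kubectl -n kube-system describe daemonset aws-node | grep -i image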
I am trying to deploy the back-end component of my application for testing REST APIs. I have dockerized the components and created an image in minikube. I have created a yaml file for deploying and creating the services. Now when I try to deploy it through sudo kubectl create -f frontend-deployment.yaml, it deploys without any error, but when I check the status of the deployments this is what is shown:
NAME READY UP-TO-DATE AVAILABLE AGE
back 0/3 3 0 2m57s
Interestingly the service corresponding to this deployment is available.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
back ClusterIP 10.98.73.249 <none> 8080/TCP 3m9s
I also tried to create the deployment by running the deployment statements individually, like sudo kubectl run back --image=back --port=8080 --image-pull-policy Never, but the result was the same.
Here is what my deployment.yaml file looks like:
kind: Service
apiVersion: v1
metadata:
  name: back
spec:
  selector:
    app: back
  ports:
  - protocol: TCP
    port: 8080
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back
spec:
  selector:
    matchLabels:
      app: back
  replicas: 3
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: back
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
How can I get this deployment up and running? This causes an internal server error on the front-end side of my application.
Description of pod back
Name: back-7fd9995747-nlqhq
Namespace: default
Priority: 0
Node: minikube/10.0.2.15
Start Time: Mon, 15 Jul 2019 12:49:52 +0200
Labels: pod-template-hash=7fd9995747
run=back
Annotations: <none>
Status: Running
IP: 172.17.0.7
Controlled By: ReplicaSet/back-7fd9995747
Containers:
back:
Container ID: docker://8a46e16c52be24b12831bb38d2088b8059947d099299d15755d77094b9cb5a8b
Image: back:latest
Image ID: docker://sha256:69218763696932578e199b9ab5fc2c3e9087f9482ac7e767db2f5939be98a534
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 15 Jul 2019 12:49:54 +0200
Finished: Mon, 15 Jul 2019 12:49:54 +0200
Ready: False
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c247f (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-c247f:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c247f
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6s default-scheduler Successfully assigned default/back-7fd9995747-nlqhq to minikube
Normal Pulled 4s (x2 over 5s) kubelet, minikube Container image "back:latest" already present on machine
Normal Created 4s (x2 over 5s) kubelet, minikube Created container back
Normal Started 4s (x2 over 5s) kubelet, minikube Started container back
Warning BackOff 2s (x2 over 3s) kubelet, minikube Back-off restarting failed container
As you can see, zero of the three Pods have a Ready status:
NAME READY AVAILABLE
back 0/3 0
To find out what is going on you should check the underlying Pods:
$ kubectl get pods -l app=back
and then look at the Events in their description:
$ kubectl describe pod back-...
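Since the events show the container starting and then immediately backing off, the container itself is most likely exiting with an error. The logs of the last terminated attempt are usually the quickest way to see why, e.g. (pod name taken from your describe output):
$ kubectl logs back-7fd9995747-nlqhq --previous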
I have a Kubernetes cluster in which I've created a deployment to run a pod. Unfortunately after running it the pod does not want to self-terminate, instead it enters a continuous state of restart/CrashLoopBackOff cycle.
The command (on the entry point) runs correctly when first deployed, and I want it to run only one time.
I am programmatically deploying the docker image with the entrypoint configured, using the Python K8s API. Here is my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kio
  namespace: kmlflow
  labels:
    app: kio
    name: kio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kio
      name: kio
  template:
    metadata:
      labels:
        app: kio
        name: kio
    spec:
      containers:
      - name: kio-ingester
        image: MY_IMAGE
        command: ["/opt/bin/kio"]
        args: ["some", "args"]
        imagePullPolicy: Always
        restart: Never
        backofflimit: 0
Thanks for any help
Output from kubectl describe pod is:
Name: ingest-160-779874b676-8pgv5
Namespace: kmlflow
Priority: 0
PriorityClassName: <none>
Node: 02-w540-02.glebe.kinetica.com/172.30.255.205
Start Time: Thu, 11 Oct 2018 13:31:20 -0400
Labels: app=kio
name=kio
pod-template-hash=3354306232
Annotations: <none>
Status: Running
IP: 10.244.0.228
Controlled By: ReplicaSet/ingest-160-779874b676
Containers:
kio-ingester:
Container ID: docker://b67a682d04e69c2dc5c1be7e02bf2e4cf7a12a7557dfbe642dfb531ca4b03f07
Image: kinetica/kinetica-intel
Image ID: docker-pullable://docker.io/kinetica/kinetica-intel@sha256:eefbb6595eb71822300ef97d5cbcdac7ec58f2041f8190d3a2ba9cffd6a0d87c
Port: <none>
Host Port: <none>
Command:
/opt/gpudb/bin/kio
Args:
--source
kinetica://172.30.50.161:9191::dataset_iris
--destination
kinetica://172.30.50.161:9191::iris5000
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 11 Oct 2018 13:33:27 -0400
Finished: Thu, 11 Oct 2018 13:33:32 -0400
Ready: False
Restart Count: 4
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-69wkn (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-69wkn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-69wkn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m39s default-scheduler Successfully assigned kmlflow/ingest-160-779874b676-8pgv5 to 02-w540-02.glebe.kinetica.com
Normal Created 89s (x4 over 2m28s) kubelet, 02-w540-02.glebe.kinetica.com Created container
Normal Started 89s (x4 over 2m28s) kubelet, 02-w540-02.glebe.kinetica.com Started container
Warning BackOff 44s (x7 over 2m15s) kubelet, 02-w540-02.glebe.kinetica.com Back-off restarting failed container
Normal Pulling 33s (x5 over 2m28s) kubelet, 02-w540-02.glebe.kinetica.com pulling image "kinetica/kinetica-intel"
Normal Pulled 33s (x5 over 2m28s) kubelet, 02-w540-02.glebe.kinetica.com Successfully pulled image "kinetica/kinetica-intel"
There is no output from kubectl logs <crashing-pod> because a successful run of the kio command with the injected parameters does not print anything to standard output.
If you'd like to run your task one time and finish after a successful completion, you should consider using Kubernetes Jobs or CronJobs.
Something like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
  labels:
    app: kio
    name: kio
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: kio
        name: kio
    spec:
      restartPolicy: Never
      containers:
      - name: kio-ingester
        image: MY_IMAGE
        command: ["/opt/bin/kio"]
        args: ["some", "args"]
        imagePullPolicy: Always
To delete the jobs automatically, if you have Kubernetes 1.12 or later you can use ttlSecondsAfterFinished. Unfortunately, if you are using Kubernetes 1.11 or earlier you will have to delete them manually, or you can set up a CronJob to do it.
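For example, a sketch of the spec section above with automatic cleanup enabled (the 100-second TTL is just an example value):
spec:
  ttlSecondsAfterFinished: 100   # delete the Job and its Pods 100s after it finishes
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: kio-ingester
        image: MY_IMAGE
        command: ["/opt/bin/kio"]
        args: ["some", "args"]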