What is the Kubernetes equivalent of docker inspect?

When a Docker container is running, it is sometimes helpful to look at its runtime configuration.
What is the equivalent command for Kubernetes?
I searched Stack Overflow for this and came up with some similar questions (see https://stackoverflow.com/search?q=What+is+the+kubernetes+equivalent), but not this one:
What's the kubectl equivalent of docker exec bash in Kubernetes?
Docker volume and kubernetes volume

Kubernetes is a container orchestrator, so you won't find container-level commands in kubectl.
You can check the container logs:
kubectl logs pod-name
Mon Jan 1 00:00:00 UTC 2001 INFO 0
Mon Jan 1 00:00:01 UTC 2001 INFO 1
Mon Jan 1 00:00:02 UTC 2001 INFO 2
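A few standard kubectl logs flags are worth knowing here: -f streams the log, --previous shows the log of the prior (crashed) instance of the container, and -c selects a container in a multi-container pod:
kubectl logs -f pod-name
kubectl logs pod-name --previous
kubectl logs pod-name -c container-name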
You can describe a pod to see pod details, as well as possible image pull errors:
kubectl describe pod nginx-deployment-1006230814-6winp
Name: nginx-deployment-1006230814-6winp
Namespace: default
Node: kubernetes-node-wul5/10.240.0.9
Start Time: Thu, 24 Mar 2016 01:39:49 +0000
Labels: app=nginx,pod-template-hash=1006230814
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nginx-deployment-1956810328","uid":"14e607e7-8ba1-11e7-b5cb-fa16" ...
Status: Running
IP: 10.244.0.6
Controllers: ReplicaSet/nginx-deployment-1006230814
Containers:
nginx:
Container ID: docker://90315cc9f513c724e9957a4788d3e625a078de84750f244a40f97ae355eb1149
Image: nginx
Image ID: docker://6f62f48c4e55d700cf3eb1b5e33fa051802986b77b874cc351cce539e5163707
Port: 80/TCP
QoS Tier:
cpu: Guaranteed
memory: Guaranteed
Limits:
cpu: 500m
memory: 128Mi
Requests:
memory: 128Mi
cpu: 500m
State: Running
Started: Thu, 24 Mar 2016 01:39:51 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5kdvl (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-4bcbi:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4bcbi
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
54s 54s 1 {default-scheduler } Normal Scheduled Successfully assigned nginx-deployment-1006230814-6winp to kubernetes-node-wul5
54s 54s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Pulling pulling image "nginx"
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Pulled Successfully pulled image "nginx"
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Created Created container with docker id 90315cc9f513
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Started Started container with docker id 90315cc9f513
If you need to see details about a container itself, use the Docker client (or the client of whatever container runtime you are running) for that purpose.
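That said, the closest native analogue to docker inspect is dumping the pod's full API object, which includes the complete spec and current status:
kubectl get pod nginx-deployment-1006230814-6winp -o yaml
kubectl get pod nginx-deployment-1006230814-6winp -o json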

Related

Kubernetes CronJob not exiting

I am running a CronJob in Kubernetes. The CronJob started but never exited; the pod status stays Running.
Below are the logs:
kubectl get pods
cronjob-1623253800-xnwwx 1/1 Running 0 13h
When I describe the job, I notice the following:
kubectl describe job cronjob-1623300120
Name: cronjob-1623300120
Namespace: cronjob
Selector: xxxxx
Labels: xxxxx
Annotations: <none>
Controlled By: CronJob/cronjob
Parallelism: 1
Completions: 1
Start Time: Thu, 9 Jun 2021 10:12:03 +0530
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=cronjob
controller-xxxx
job-name=cronjob-1623300120
Containers:
plannercronjob:
Image: xxxxxxxxxxxxx
Port: <none>
Host Port: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 13h job-controller Created pod: cronjob-1623300120
I noticed Pods Statuses: 1 Running / 0 Succeeded / 0 Failed. Does this mean that the job is marked Succeeded when the code returns zero, and Failed otherwise? Is that correct?
When I exec into the pod:
kubectl exec --stdin --tty cronjob-1623253800-xnwwx -n cronjob -- /bin/bash
root@cronjob-1623253800-xnwwx:/# ps ax | grep python
1 ? Ssl 0:01 python -m sfit.src.app
18 pts/0 S+ 0:00 grep python
I found that the Python process is still running. Is this a deadlock in my code, or something else?
pod describe
Name: cronjob-1623302220-xnwwx
Namespace: default
Priority: 0
Node: aks-agentpool-xxxxvmss000000/10.240.0.4
Start Time: Thu, 9 Jun 2021 10:47:02 +0530
Labels: app=cronjob
controller-uid=xxxxxx
job-name=cronjob-1623302220
Annotations: <none>
Status: Running
IP: 10.244.1.30
IPs:
IP: 10.244.1.30
Controlled By: Job/cronjob-1623302220
Containers:
plannercronjob:
Container ID: docker://xxxxxxxxxxxxxxxx
Image: xxxxxxxxxxx
Image ID: docker-xxxx
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 9 Jun 2021 10:47:06 +0530
Ready: True
Restart Count: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-97xzv (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-97xzv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-97xzv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13h default-scheduler Successfully assigned cronjob/cronjob-1623302220-xnwwx to aks-agentpool-xxx-vmss000000
Normal Pulling 13h kubelet, aks-agentpool-xxx-vmss000000 Pulling image "xxxx.azurecr.io/xxx:1.1.1"
Normal Pulled 13h kubelet, aks-agentpool-xxx-vmss000000 Successfully pulled image "xxx.azurecr.io/xx:1.1.1"
Normal Created 13h kubelet, aks-agentpool-xxx-vmss000000 Created container cronjob
Normal Started 13h kubelet, aks-agentpool-xxx-vmss000000 Started container cronjob
@KrishnaChaurasia: I ran the Docker image on my own machine. There is an error in my Python code, and locally the container exits with that error. But in Kubernetes it does not exit and does not stop:
docker run xxxxx/cronjob:1
File "/usr/local/lib/python3.8/site-packages/azure/core/pipeline/transport/_requests_basic.py", line 261, in send
raise error
azure.core.exceptions.ServiceRequestError: <urllib3.connection.HTTPSConnection object at 0x7f113f6480a0>: Failed to establish a new connection: [Errno -2] Name or service not known
echo $?
1
If you are seeing that your pod is always running and never completes, try adding startingDeadlineSeconds.
https://medium.com/@hengfeng/what-does-kubernetes-cronjobs-startingdeadlineseconds-exactly-mean-cc2117f9795f
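For reference, a minimal sketch of where these fields sit on a CronJob, reusing the (redacted) names from the question; note that startingDeadlineSeconds only bounds how late a scheduled run may start, so if the real problem is a run that hangs forever, activeDeadlineSeconds on the job spec is the field that kills it after a time limit:
apiVersion: batch/v1          # batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: cronjob
spec:
  schedule: "*/10 * * * *"          # hypothetical schedule
  startingDeadlineSeconds: 60       # skip a run that cannot start within 60s
  jobTemplate:
    spec:
      activeDeadlineSeconds: 3600   # kill the job if it runs longer than an hour
      backoffLimit: 2
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: plannercronjob
            image: xxxxxxxxxxxxx    # redacted in the question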

Pod with Debian image is getting created but container is continuously crashing

Below is my Pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: pod-debian-container
spec:
  containers:
  - name: pi
    image: debian
    command: ["/bin/echo"]
    args: ["Hello, World."]
And below is the output of "describe" command for this Pod:
C:\Users\so.user\Desktop>kubectl describe pod/pod-debian-container
Name: pod-debian-container
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 15 Feb 2021 21:47:43 +0530
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.0.21
IPs:
IP: 10.244.0.21
Containers:
pi:
Container ID: cri-o://f9081af183308f01bf1de6108b2c988e6bcd11ab2daedf983e99e1f4d862981c
Image: debian
Image ID: docker.io/library/debian#sha256:102ab2db1ad671545c0ace25463c4e3c45f9b15e319d3a00a1b2b085293c27fb
Port: <none>
Host Port: <none>
Command:
/bin/echo
Args:
Hello, World.
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 15 Feb 2021 21:56:49 +0530
Finished: Mon, 15 Feb 2021 21:56:49 +0530
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sxlc9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-sxlc9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sxlc9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/pod-debian-container to minikube
Normal Pulled 15m kubelet Successfully pulled image "debian" in 11.1633901s
Normal Pulled 15m kubelet Successfully pulled image "debian" in 11.4271866s
Normal Pulled 14m kubelet Successfully pulled image "debian" in 11.0252907s
Normal Pulled 14m kubelet Successfully pulled image "debian" in 11.1897469s
Normal Started 14m (x4 over 15m) kubelet Started container pi
Normal Pulling 13m (x5 over 15m) kubelet Pulling image "debian"
Normal Created 13m (x5 over 15m) kubelet Created container pi
Normal Pulled 13m kubelet Successfully pulled image "debian" in 9.1170801s
Warning BackOff 5m25s (x31 over 15m) kubelet Back-off restarting failed container
Warning Failed 10s kubelet Error: ErrImagePull
And below is another output:
C:\Users\so.user\Desktop>kubectl get pod,job,deploy,rs
NAME READY STATUS RESTARTS AGE
pod/pod-debian-container 0/1 CrashLoopBackOff 6 15m
Below are my questions:
I can see that the pod is running but the container inside it is crashing. I can't understand why, because I can see that the Debian image was pulled successfully.
As you can see in the kubectl get pod,job,deploy,rs output, RESTARTS is equal to 6; is it the pod that has restarted 6 times, or the container?
Why did 6 restarts happen when I didn't specify anything about restarts in my spec?
This looks like a liveness problem related to the CrashLoopBackOff. Have you considered taking a look at this blog post? It explains very well how to debug the problem.
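A likely cause, consistent with the describe output above (Last State: Terminated, Reason: Completed, Exit Code: 0): the manifest never sets restartPolicy, which defaults to Always for a Pod, so the kubelet restarts the container every time /bin/echo finishes, backing off longer each time, which is exactly what CrashLoopBackOff means. A sketch of the same manifest adjusted for a run-once container:
apiVersion: v1
kind: Pod
metadata:
  name: pod-debian-container
spec:
  restartPolicy: Never   # or OnFailure; the default Always restarts even after exit code 0
  containers:
  - name: pi
    image: debian
    command: ["/bin/echo"]
    args: ["Hello, World."]
This also answers the RESTARTS question: that column counts restarts of the container within the same pod, not recreations of the pod.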

Kafka Pod doesn't start on GKE

I followed this tutorial, and when I tried to run it on GKE I was not able to start the Kafka pod.
It returns CrashLoopBackOff all the time, and I don't know how to show the pod's error logs.
Here is the result when I hit kubectl describe pod my-pod-xxx:
Name: kafka-broker1-54cb95fb44-hlj5b
Namespace: default
Node: gke-xxx-default-pool-f9e313ed-zgcx/10.146.0.4
Start Time: Thu, 25 Oct 2018 11:40:21 +0900
Labels: app=kafka
id=1
pod-template-hash=1076519600
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container kafka
Status: Running
IP: 10.48.8.10
Controlled By: ReplicaSet/kafka-broker1-54cb95fb44
Containers:
kafka:
Container ID: docker://88ee6a1df4157732fc32b7bd8a81e329dbdxxxx9cbe614689e775d183dbcd61
Image: wurstmeister/kafka
Image ID: docker-pullable://wurstmeister/kafka#sha256:4f600a95fa1288f7b1xxxxxa32ca00b4fb13b83b31533fa6b40499bd9bdf192f
Port: 9092/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Thu, 25 Oct 2018 14:35:32 +0900
Finished: Thu, 25 Oct 2018 14:35:51 +0900
Ready: False
Restart Count: 37
Requests:
cpu: 100m
Environment:
KAFKA_ADVERTISED_PORT: 9092
KAFKA_ADVERTISED_HOST_NAME: 35.194.100.32
KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
KAFKA_BROKER_ID: 1
KAFKA_CREATE_TOPICS: topic1:3:3
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-w6s7n (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-w6s7n:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-w6s7n
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 5m (x716 over 2h) kubelet, gke-xxx-default-pool-f9e313ed-zgcx Back-off restarting failed container
Normal Pulling 36s (x38 over 2h) kubelet, gke-xxxdefault-pool-f9e313ed-zgcx pulling image "wurstmeister/kafka"
I noticed that the first run goes well, but after that the node changes status to NotReady and the Kafka pod enters the CrashLoopBackOff state.
Here is the log before it goes down:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m default-scheduler Successfully assigned kafka-broker1-54cb95fb44-wwf2h to gke-xxx-default-pool-f9e313ed-8mr6
Normal SuccessfulMountVolume 5m kubelet, gke-xxx-default-pool-f9e313ed-8mr6 MountVolume.SetUp succeeded for volume "default-token-w6s7n"
Normal Pulling 5m kubelet, gke-xxx-default-pool-f9e313ed-8mr6 pulling image "wurstmeister/kafka"
Normal Pulled 5m kubelet, gke-xxx-default-pool-f9e313ed-8mr6 Successfully pulled image "wurstmeister/kafka"
Normal Created 5m kubelet, gke-xxx-default-pool-f9e313ed-8mr6 Created container
Normal Started 5m kubelet, gke-xxx-default-pool-f9e313ed-8mr6 Started container
Normal NodeControllerEviction 38s node-controller Marking for deletion Pod kafka-broker1-54cb95fb44-wwf2h from Node gke-dev-centurion-default-pool-f9e313ed-8mr6
Could anyone tell me what's wrong with my pod, and how I can catch the error behind the pod failure?
I just figured out that my cluster's nodes did not have enough resources.
After creating a new cluster with more memory, it works.
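That squares with the describe output: Exit Code: 137 is 128 + 9, i.e. the container was killed by SIGKILL, which on a memory-starved node is typically the OOM killer. As for showing the error logs of a crash-looping pod, the standard commands (nothing tutorial-specific) are:
kubectl logs kafka-broker1-54cb95fb44-hlj5b --previous
kubectl get events --sort-by=.metadata.creationTimestamp
The --previous flag prints the logs of the last terminated instance of the container, which is what you need while the current one is stuck in CrashLoopBackOff.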

Kubernetes init containers run every hour

I have recently set up redis via https://github.com/tarosky/k8s-redis-ha, this repo includes an init container, and I have included an extra init container in order to get passwords etc set up.
I am seeing some strange (and seemingly undocumented) behavior whereby the init containers run as expected before the redis container starts, but then re-run roughly every hour afterwards. I have tested this with a busybox init container (which does nothing) on Deployments and StatefulSets and see the same behavior, so it is not specific to this redis pod.
I have tested this on bare metal with k8s 1.6 and 1.8 with the same results; however, when applying init containers on GKE (k8s 1.7) this behavior does not happen. I can't see any flags for GKE's kubelet that would dictate this behavior.
See below for kubectl describe pod output showing that the init containers re-run even though the main pod has not exited or crashed.
Name: redis-sentinel-1
Namespace: (redacted)
Node: (redacted)/(redacted)
Start Time: Mon, 12 Mar 2018 06:20:55 +0000
Labels: app=redis-sentinel
controller-revision-hash=redis-sentinel-7cc557cf7c
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"(redacted)","name":"redis-sentinel","uid":"759a3a3b-25bd-11e8-a8ce-0242ac110...
security.alpha.kubernetes.io/unsafe-sysctls=net.core.somaxconn=1024
Status: Running
IP: (redacted)
Controllers: StatefulSet/redis-sentinel
Init Containers:
redis-ha-server:
Container ID: docker://557d777a7c660b062662426ebe9bbf6f9725fb9d88f89615a8881346587c1835
Image: tarosky/k8s-redis-ha:sentinel-3.0.1
Image ID: docker-pullable://tarosky/k8s-redis-ha#sha256:98e09ef5fbea5bfd2eb1858775c967fa86a92df48e2ec5d0b405f7ca3f5ada1c
Port:
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 13 Mar 2018 03:01:12 +0000
Finished: Tue, 13 Mar 2018 03:01:12 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/opt from opt (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hkj6d (ro)
-redis-init:
Container ID: docker://18c4e353233a6827999ae4a16adf1f408754a21d80a8e3374750fdf9b54f9b1a
Image: gcr.io/(redacted)/redis-init
Image ID: docker-pullable://gcr.io/(redacted)/redis-init#sha256:42042093d58aa597cce4397148a2f1c7967db689256ed4cc8d9f42b34d53aca2
Port:
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 13 Mar 2018 03:01:25 +0000
Finished: Tue, 13 Mar 2018 03:01:25 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/opt from opt (rw)
/secrets/redis-password from redis-password (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hkj6d (ro)
Containers:
redis-sentinel:
Container ID: docker://a54048cbb7ec535c841022c543a0d566c9327f37ede3a6232516721f0e37404d
Image: redis:3.2
Image ID: docker-pullable://redis#sha256:474fb41b08bcebc933c6337a7db1dc7131380ee29b7a1b64a7ab71dad03ad718
Port: 26379/TCP
Command:
/opt/bin/k8s-redis-ha-sentinel
Args:
/opt/sentinel.conf
State: Running
Started: Mon, 12 Mar 2018 06:21:02 +0000
Ready: True
Restart Count: 0
Readiness: exec [redis-cli -p 26379 info server] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVICE: redis-server
SERVICE_PORT: redis-server
Mounts:
/opt from opt (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hkj6d (ro)
redis-sword:
Container ID: docker://50279448bbbf175b6f56f96dab59061c4652c2117452ed15b3a5380681c7176f
Image: tarosky/k8s-redis-ha:sword-3.0.1
Image ID: docker-pullable://tarosky/k8s-redis-ha#sha256:2315c7a47d9e47043d030da270c9a1252c2cfe29c6e381c8f50ca41d3065db6d
Port:
State: Running
Started: Mon, 12 Mar 2018 06:21:03 +0000
Ready: True
Restart Count: 0
Environment:
SERVICE: redis-server
SERVICE_PORT: redis-server
SENTINEL: redis-sentinel
SENTINEL_PORT: redis-sentinel
Mounts:
/opt from opt (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hkj6d (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
opt:
Type: HostPath (bare host directory volume)
Path: /store/redis-sentinel/opt
redis-password:
Type: Secret (a volume populated by a Secret)
SecretName: redis-password
Optional: false
default-token-hkj6d:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hkj6d
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
20h 30m 21 kubelet, 10.1.3.102 spec.initContainers{redis-ha-server} Normal Pulling pulling image "tarosky/k8s-redis-ha:sentinel-3.0.1"
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-ha-server} Normal Started Started container
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-ha-server} Normal Created Created container
20h 30m 21 kubelet, 10.1.3.102 spec.initContainers{redis-ha-server} Normal Pulled Successfully pulled image "tarosky/k8s-redis-ha:sentinel-3.0.1"
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-init} Normal Pulling pulling image "gcr.io/(redacted)/redis-init"
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-init} Normal Pulled Successfully pulled image "gcr.io/(redacted)/redis-init"
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-init} Normal Created Created container
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-init} Normal Started Started container
Note the Containers in the pod, which started at Mon, 12 Mar 2018 06:21:02 +0000 (with 0 restarts), and the Init Containers, which started from Tue, 13 Mar 2018 03:01:12 +0000. These seem to re-run roughly every hour.
Is our bare-metal cluster misconfigured for init containers somewhere? Can anyone shed any light on this strange behavior?
If you are pruning away exited containers, then that container pruning/removal is the likely cause. In my testing, exited init containers that are removed from the Docker Engine (hourly or otherwise), for example with docker system prune -f, cause Kubernetes to re-launch the init containers. Is this the issue in your case, if it is still persisting?
Also, see https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/ for the kubelet garbage collection documentation, which supports this kind of cleanup (rather than you needing to implement it yourself).
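For reference, these are the kubelet flags that governed container garbage collection in that era (values here are illustrative, not recommendations; newer releases move them into the kubelet config file). Letting the kubelet do the pruning means exited init containers are not deleted out from under it:
--minimum-container-ttl-duration=1h        # keep exited containers at least this long
--maximum-dead-containers-per-container=2  # dead instances to retain per container
--maximum-dead-containers=100              # global cap on dead containers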

Mounting config maps as volumes in pods, but the mounts are empty

I'm trying to mount a ConfigMap as a directory with files in a Pod. The directory shows up, but there are no files in it. Here's the spec.yaml I'm applying with kubectl apply -f spec.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map-1
data:
  key-1: Wow I am such a config
---
apiVersion: v1
kind: Pod
metadata:
  name: file-printer
spec:
  volumes:
  - name: config-volume
    configMap:
      name: config-map-1
  containers:
  - name: file-printer
    image: gcr.io/google_containers/busybox
    volumeMounts:
    - name: config-volume
      mountPath: /config-map-file
    command: ["ls", "/", "/config-map-file"]
  restartPolicy: Never
I am running ls / /config-map-file so I can see that the directory is being created, but it is not populated:
bash-3.2$ kubectl logs file-printer
/:
bin
config-map-file
dev
[...snip...]
usr
var
/config-map-file:
bash-3.2$
I would expect /config-map-file to have a file in it called key-1, but it's empty :(
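As a sanity check (not a fix), it's worth confirming the ConfigMap object itself contains the key, since a ConfigMap volume creates one file per key under data:
kubectl get configmap config-map-1 -o yaml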
Here's the output of kubectl describe pod file-printer:
bash-3.2$ kubectl describe pod file-printer
Name: file-printer
Namespace: default
Node: 127.0.0.1/127.0.0.1
Start Time: Mon, 04 Apr 2016 20:28:33 -0500
Labels: <none>
Status: Succeeded
IP: 172.17.0.2
Controllers: <none>
Containers:
file-printer:
Container ID: docker://5d08d31a8b06665ce50ba1147df8473a5997406813edff1fa2126cb48464a979
Image: gcr.io/google_containers/busybox
Image ID: docker://sha256:e7d168d7db455c45f4d0315d89dbd18806df4784f803c3cc99f8a2e250585b5b
Port:
Command:
ls
/
/config-map-file
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 04 Apr 2016 20:28:36 -0500
Finished: Mon, 04 Apr 2016 20:28:36 -0500
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: config-map-1
default-token-e5g2d:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-e5g2d
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3s 3s 1 {default-scheduler } Normal Scheduled Successfully assigned file-printer to 127.0.0.1
2s 2s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "addNDotsOption: ResolvConfPath \"/mnt/sda1/var/lib/docker/containers/9472967c2cb9f471067e87560f061647afb9b353c6548f3b64cc06c034e5fe1f/resolv.conf\" does not exist"
1s 1s 1 {kubelet 127.0.0.1} spec.containers{file-printer} Normal Pulling pulling image "gcr.io/google_containers/busybox"
0s 0s 1 {kubelet 127.0.0.1} spec.containers{file-printer} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
0s 0s 1 {kubelet 127.0.0.1} spec.containers{file-printer} Normal Created Created container with docker id 5d08d31a8b06
0s 0s 1 {kubelet 127.0.0.1} spec.containers{file-printer} Normal Started Started container with docker id 5d08d31a8b06
0s 0s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "file-printer" with RunContainerError: "failed to apply oom-score-adj to container \"exceeded maxTries, some processes might not have desired OOM score\"- /k8s_file-printer.903c39c6_file-printer_default_b64f103c-facd-11e5-b027-7eac684371ee_aa7b4c91"
What am I missing? Client & Server are both running 1.2.0.