Cannot pull any image in minikube - kubernetes

I'm on macOS and I'm using minikube with the hyperkit driver: minikube start --driver=hyperkit
and everything seems OK.
With minikube status:
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
with minikube version:
minikube version: v1.24.0
with kubectl version:
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
and with kubectl get no:
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 13m v1.22.3
My problem is that when I deploy anything, it won't pull any image.
For instance:
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
then kubectl get pods:
NAME READY STATUS RESTARTS AGE
hello-minikube-6ddfcc9757-nfc64 0/1 ImagePullBackOff 0 13m
Then I tried to figure out what the problem is:
k describe pod/hello-minikube-6ddfcc9757-nfc64
here is the result:
Name: hello-minikube-6ddfcc9757-nfc64
Namespace: default
Priority: 0
Node: minikube/192.168.64.8
Start Time: Sun, 16 Jan 2022 10:49:27 +0330
Labels: app=hello-minikube
pod-template-hash=6ddfcc9757
Annotations: <none>
Status: Pending
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/hello-minikube-6ddfcc9757
Containers:
echoserver:
Container ID:
Image: k8s.gcr.io/echoserver:1.4
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5qql (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-k5qql:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned default/hello-minikube-6ddfcc9757-nfc64 to minikube
Normal Pulling 16m (x4 over 18m) kubelet Pulling image "k8s.gcr.io/echoserver:1.4"
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "k8s.gcr.io/echoserver:1.4": rpc error: code = Unknown desc = Error response from daemon: Get "https://k8s.gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 16m (x4 over 18m) kubelet Error: ErrImagePull
Warning Failed 15m (x6 over 18m) kubelet Error: ImagePullBackOff
Normal BackOff 3m34s (x59 over 18m) kubelet Back-off pulling image "k8s.gcr.io/echoserver:1.4"
Then I tried to get some logs:
k logs pod/hello-minikube-6ddfcc9757-nfc64 and k logs deploy/hello-minikube
Both return the same result:
Error from server (BadRequest): container "echoserver" in pod "hello-minikube-6ddfcc9757-nfc64" is waiting to start: trying and failing to pull image
This deployment was an example from the minikube documentation,
but I have no idea why it doesn't pull any image.

I had exactly the same problem.
I found out that my internet connection was slow;
the timeout to pull an image is 120 seconds, so the image could not be pulled in under 120 seconds.
First use minikube to pull the image you need,
for example:
minikube image load k8s.gcr.io/echoserver:1.4
and then everything will work, because the cluster will now use the image that is stored locally.
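A minimal sequence for this, assuming minikube v1.24+ (where the image subcommands are available):

```shell
# Load the image into the minikube VM ahead of time
minikube image load k8s.gcr.io/echoserver:1.4

# Verify it is now present inside the cluster
minikube image ls | grep echoserver

# Recreate the deployment; the kubelet will find the image locally
kubectl delete deployment hello-minikube
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
```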

According to this article:
The status ImagePullBackOff means that a Pod couldn’t start, because Kubernetes couldn’t pull a container image. The ‘BackOff’ part means that Kubernetes will keep trying to pull the image, with an increasing delay (‘back-off’).
Here is also a handbook about pushing images into a minikube cluster.
This handbook describes your issue:
Unable to pull images..Client.Timeout exceeded while awaiting headers
Unable to pull images, which may be OK:
failed to pull image "k8s.gcr.io/kube-apiserver:v1.13.3": output: Error response from daemon:
Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection
(Client.Timeout exceeded while awaiting headers)
This error indicates that the container runtime running within the VM does not have access to the internet.
See possible workarounds.
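One of those workarounds, if the host sits behind a proxy, is to pass proxy settings through to the VM's Docker daemon when starting minikube (the proxy address below is a placeholder):

```shell
minikube start --driver=hyperkit \
  --docker-env HTTP_PROXY=http://proxy.example.com:3128 \
  --docker-env HTTPS_PROXY=http://proxy.example.com:3128 \
  --docker-env NO_PROXY=localhost,127.0.0.1,192.168.64.0/24
```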

I encountered a similar issue; it was fixed by using echoserver:1.10 instead of echoserver:1.4.
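If the deployment already exists, switching the tag in place should also work (the container name echoserver is taken from the describe output above):

```shell
kubectl set image deployment/hello-minikube echoserver=k8s.gcr.io/echoserver:1.10
kubectl rollout status deployment/hello-minikube
```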

Related

HashiCorp Vault Enable REST Call

I am following the HashiCorp tutorial and it all looks fine until I try to launch the "webapp" pod - a simple pod whose only function is to demonstrate that it can start and mount a secret volume.
The error (permission denied on a REST call) is shown at the bottom of this command output:
kubectl describe pod webapp
Name: webapp
Namespace: default
Priority: 0
Service Account: webapp-sa
Node: docker-desktop/192.168.65.4
Start Time: Tue, 14 Feb 2023 09:32:07 -0500
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
webapp:
Container ID:
Image: jweissig/app:0.0.1
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt/secrets-store from secrets-store-inline (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5b76r (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
secrets-store-inline:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: secrets-store.csi.k8s.io
FSType:
ReadOnly: true
VolumeAttributes: secretProviderClass=vault-database
kube-api-access-5b76r:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 42m default-scheduler Successfully assigned default/webapp to docker-desktop
Warning FailedMount 20m (x8 over 40m) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[secrets-store-inline kube-api-access-5b76r]: timed out waiting for the condition
Warning FailedMount 12m (x23 over 42m) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/webapp, err: rpc error: code = Unknown desc = error making mount request: couldn't read secret "db-password": Error making API request.
URL: GET http://vault.default:8200/v1/secret/data/db-pass
Code: 403. Errors:
* 1 error occurred:
* permission denied
Warning FailedMount 2m19s (x4 over 38m) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[kube-api-access-5b76r secrets-store-inline]: timed out waiting for the condition
So it seems that this REST call fails: GET http://vault.default:8200/v1/secret/data/db-pass. Indeed, it fails from curl as well:
curl -vik -H "X-Vault-Token: root" http://localhost:8200/v1/secret/data/db-pass
* Trying 127.0.0.1:8200...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8200 failed: Connection refused
* Failed to connect to localhost port 8200: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8200: Connection refused
At this point I am a bit lost. I am not sure that the REST call is configured correctly, i.e. in such a way that Vault will accept it; but I am also not sure how to configure it differently.
The Vault logs show the information below, so it seems that the port and token I use are correct:
2023-02-14 09:07:14 You may need to set the following environment variables:
2023-02-14 09:07:14 $ export VAULT_ADDR='http://[::]:8200'
2023-02-14 09:07:14 The root token is displayed below
2023-02-14 09:07:14 Root Token: root
Vault seems to be running fine in Kubernetes:
kubectl get pods
NAME READY STATUS RESTARTS AGE
vault-0 1/1 Running 1 (22m ago) 32m
vault-agent-injector-77fd4cb69f-mf66p 1/1 Running 1 (22m ago) 32m
If I try to show the Vault status:
vault status
Error checking seal status: Get "http://[::]:8200/v1/sys/seal-status": dial tcp [::]:8200: connect: connection refused
I don't think the Vault is sealed, but if I try to unseal it:
vault operator unseal
Unseal Key (will be hidden):
Error unsealing: Put "http://[::]:8200/v1/sys/unseal": dial tcp [::]:8200: connect: connection refused
Any ideas?
As far as the tutorial is concerned, it works. I'm not sure what I was doing wrong, but I ran it all again and it worked. If I had to guess, I would suspect that some of the YAML involved in configuring the pods got malformed (since whitespace is significant).
The vault status command works, but only from a terminal running inside the Vault pod. The Kubernetes-in-Docker-on-DockerDesktop cluster does not expose any ports for these pods, so even though I have vault-cli installed on my PC, I cannot use vault status from outside the pods.
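For anyone hitting the same wall: kubectl port-forward can bridge the host to the pod's port without the cluster exposing it (the pod name vault-0 is taken from the output above; the root token comes from the dev-mode logs):

```shell
kubectl port-forward vault-0 8200:8200 &
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN=root
vault status
```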

kubernetes pod (mssql-tools) failing with CrashLoopBackOff error and restarting

I'm using Rancher Desktop for Kubernetes in WSL 2 on Windows 11.
I'm trying to create a pod using this simple YAML:
apiVersion: v1
kind: Pod
metadata:
  name: mssql-tools
  labels:
    name: mssql-tools
spec:
  containers:
  - name: mssql-tools
    image: mcr.microsoft.com/mssql-tools:latest
But it continuously gives a CrashLoopBackOff error.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mssql-tools 0/1 CrashLoopBackOff 11 (8s ago) 14m
And here is the result of kubectl describe pod mssql-tools:
$ kubectl describe pod mssql-tools
Name: mssql-tools
Namespace: default
Priority: 0
Service Account: default
Node: desktop-2ohsprk/172.22.97.204
Start Time: Mon, 26 Dec 2022 04:34:19 +0500
Labels: name=mssql-tools
Annotations: <none>
Status: Running
IP: 10.42.0.57
IPs:
IP: 10.42.0.57
Containers:
mssql-tools:
Container ID: docker://76343010f4344a5d26fb35f3b0278271d3336e8e10d695cc22e78520262f34bf
Image: mcr.microsoft.com/mssql-tools:latest
Image ID: docker-pullable://mcr.microsoft.com/mssql-tools#sha256:62556500522072535cb3df2bb5965333dded9be47000473e9e0f84118e248642
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 26 Dec 2022 04:46:20 +0500
Finished: Mon, 26 Dec 2022 04:46:20 +0500
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 26 Dec 2022 04:45:51 +0500
Finished: Mon, 26 Dec 2022 04:45:51 +0500
Ready: False
Restart Count: 9
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wkqlg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-wkqlg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned default/mssql-tools to desktop-2ohsprk
Normal Pulled 12m kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 1.459473213s
Normal Pulled 12m kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 823.403008ms
Normal Pulled 11m kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 835.697509ms
Normal Pulled 11m kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 873.802598ms
Normal Created 11m (x4 over 12m) kubelet Created container mssql-tools
Normal Started 11m (x4 over 12m) kubelet Started container mssql-tools
Normal Pulling 10m (x5 over 12m) kubelet Pulling image "mcr.microsoft.com/mssql-tools:latest"
Normal Pulled 10m kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 740.64559ms
Warning BackOff 6m56s (x25 over 11m) kubelet Back-off restarting failed container
Normal SandboxChanged 50s kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 48s kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 951.332457ms
Normal Pulled 32s kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 828.839917ms
Normal Pulling 4s (x3 over 49s) kubelet Pulling image "mcr.microsoft.com/mssql-tools:latest"
Normal Pulled 3s kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 713.951656ms
Normal Created 3s (x3 over 48s) kubelet Created container mssql-tools
Normal Started 3s (x3 over 48s) kubelet Started container mssql-tools
Warning BackOff 2s (x5 over 47s) kubelet Back-off restarting failed container
The same container works perfectly if I run it via docker and I can use its shell to execute sqlcmd properly.
I can't figure out any reason for this.
Any help would be really appreciated.
Thanks
CrashLoopBackOff is a common error indicating that a pod failed to start and continued to fail repeatedly when Kubernetes tried to restart it.
To troubleshoot this issue, follow the steps below:
Check for "Back-off restarting failed container" by running kubectl describe pod [name].
If you get Liveness probe failed and Back-off restarting failed container messages from the kubelet, this indicates the container is not responding and is in the process of restarting.
Check the previous container instance. Run kubectl get pods to identify the pod that causes the CrashLoopBackOff error. You can run kubectl logs --previous --tail 10 to get the last ten log lines from the pod.
Check deployment logs by running: kubectl logs -f deploy/<deployment-name> -n <namespace>
Refer to this link for more detailed troubleshooting steps.
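For the pod in this question, those steps would look something like:

```shell
# Look for "Back-off restarting failed container" in the events
kubectl describe pod mssql-tools

# Last ten log lines from the previous (crashed) container instance
kubectl logs mssql-tools --previous --tail 10
```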
So after trying and digging through multiple options, it finally worked by executing the command sleep 3600000, i.e. giving the container a long-running process so the pod stays up instead of exiting immediately.
Here is the working YAML:
apiVersion: v1
kind: Pod
metadata:
  name: mssql-tools
  labels:
    name: mssql-tools
spec:
  containers:
  - name: mssql-tools
    image: mcr.microsoft.com/mssql-tools:latest
    command: ["sleep"]
    args:
    - "3600000"
    imagePullPolicy: IfNotPresent
The command and its arguments can also be specified like this:
apiVersion: v1
...
...
spec:
  containers:
  - name: mssql-tools
    image: mcr.microsoft.com/mssql-tools:latest
    command:
    - sleep
    - "3600000"
...
By the way, you can also deploy a container by passing a command on the kubectl run command line (note that extra arguments must come after the -- separator), e.g.:
kubectl run mssql --image=mcr.microsoft.com/mssql-tools -n myNameSpace --command -- sleep 3600000
Note: you can omit -n myNameSpace if you are not deploying it in a specific namespace or are deploying it in the default namespace.

Microk8s ImagePullBackOff cannot be fixed by modifying the config

I have installed microk8s on Ubuntu (arm64). I would like to access my local image registry provided by microk8s enable registry, but I get an ImagePullBackOff error. I have tried to modify the /var/snap/microk8s/current/args/containerd.toml config, but it does not work:
[plugins.cri.registry]
  [plugins.cri.registry.mirrors]
    [plugins.cri.registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io"]
    [plugins.cri.registry.mirrors."localhost:32000"]
      endpoint = ["http://localhost:32000"]
    [plugins.cri.registry.mirrors."192.168.0.45:32000"]
      endpoint = ["http://192.168.0.45:32000"]
  [plugins.cri.registry.configs."192.168.0.45:32000".tls]
    insecure_skip_verify = true
My pod status:
microk8s.kubectl describe pod myapp-7d655f6ccd-gpgkx
Name: myapp-7d655f6ccd-gpgkx
Namespace: default
Priority: 0
Node: 192.168.0.66/192.168.0.66
Start Time: Mon, 15 Mar 2021 16:53:30 +0000
Labels: app=myapp
pod-template-hash=7d655f6ccd
Annotations: <none>
Status: Pending
IP: 10.1.54.7
IPs:
IP: 10.1.54.7
Controlled By: ReplicaSet/myapp-7d655f6ccd
Containers:
myapp:
Container ID:
Image: 192.168.0.45:32000/myapp:latest
Image ID:
Port: 9000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Limits:
memory: 384Mi
Requests:
memory: 128Mi
Environment:
REDIS: redis
MYSQL: mysql
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dn4bk (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-dn4bk:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dn4bk
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/myapp-7d655f6ccd-gpgkx to 192.168.0.66
Normal Pulling 14m (x4 over 15m) kubelet Pulling image "192.168.0.45:32000/myapp:latest"
Warning Failed 14m (x4 over 15m) kubelet Failed to pull image "192.168.0.45:32000/myapp:latest": rpc error: code = Unknown desc = failed to resolve image "192.168.0.45:32000/myapp:latest": no available registry endpoint: failed to do request: Head "https://192.168.0.45:32000/v2/myapp/manifests/latest": http: server gave HTTP response to HTTPS client
Warning Failed 14m (x4 over 15m) kubelet Error: ErrImagePull
Warning Failed 13m (x6 over 15m) kubelet Error: ImagePullBackOff
Normal BackOff 20s (x63 over 15m) kubelet Back-off pulling image "192.168.0.45:32000/myapp:latest"
version info:
microk8s.kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:22:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:14:05Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/arm64"}
It seems that it wants to use HTTPS instead of HTTP.
How can I use an insecure (HTTP) registry in microk8s with containerd?
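One thing worth checking (a sketch, not verified on this exact setup): newer microk8s releases configure registries through per-registry hosts.toml files under a certs.d directory rather than containerd.toml, and an HTTP registry would be declared there like this:

```toml
# /var/snap/microk8s/current/args/certs.d/192.168.0.45:32000/hosts.toml
server = "http://192.168.0.45:32000"

[host."http://192.168.0.45:32000"]
  capabilities = ["pull", "resolve"]
```

After editing, microk8s stop followed by microk8s start applies the change.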

My pod is in Container Creating state, showing TLS handshake timeout

I can pull the image correctly with the docker pull command, but when I use the kubectl run command, my pod stays in the ContainerCreating state. How can I fix it?
[root@centos-master etc]# kubectl run my-nginx --image=nginx
deployment "my-nginx" created
[root@centos-master etc]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-2723453542-5s33f 0/1 ContainerCreating 0 7s
[root@centos-master etc]# kubectl describe pod my-nginx-2723453542-5s33f
Name: my-nginx-2723453542-5s33f
Namespace: default
Node: centos-minion-2/104.21.51.35
Start Time: Fri, 30 Aug 2019 16:11:57 +0800
Labels: pod-template-hash=2723453542
run=my-nginx
Status: Pending
IP:
Controllers: ReplicaSet/my-nginx-2723453542
Containers:
my-nginx:
Container ID:
Image: nginx
Image ID:
Port:
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
No volumes.
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 5m 1 {default-scheduler } Normal Scheduled Successfully assigned my-nginx-2723453542-5s33f to centos-minion-2
<invalid> <invalid> 5 {kubelet centos-minion-2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (Get https://registry.access.redhat.com/v1/_ping: proxyconnect tcp: net/http: TLS handshake timeout)"
<invalid> <invalid> 11 {kubelet centos-minion-2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
As recommended by @char and @prometherion, in order to sort out this issue you probably need to supply the KUBELET_ARGS parameters with the appropriate --pod-infra-container-image flag, as per the link provided:
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
You can also take into consideration the solution mentioned by @Matthew: installing the subscription-manager package and subscribing the host OS, as described here.
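Since the error mentions proxyconnect, it is also worth making sure the Docker daemon's proxy settings on the node are correct. A systemd drop-in sketch (the proxy address is a placeholder):

```
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"

# Afterwards:
#   systemctl daemon-reload && systemctl restart docker
```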

kubernetes local cluster create pods got errors like ‘ErrImagePull’ and ‘ImagePullBackOff’

I just installed a local Kubernetes cluster, but when I tried the command
cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
to create and run pods, here is what I got:
NAME READY STATUS RESTARTS AGE
my-nginx-00t7f 0/1 ContainerCreating 0 23m
my-nginx-spy2b 0/1 ContainerCreating 0 23m
and when I used kubectl logs, I got:
Pod "my-nginx-00t7f" in namespace "default" : pod is not in 'Running', 'Succeeded' or 'Failed' state - State: "Pending"
It seems to be stuck in the 'Pending' status.
Then I used 'kubectl describe' and got
Name: my-nginx-00t7f
Namespace: default
Image(s): nginx
Node: 127.0.0.1/127.0.0.1
Start Time: Thu, 17 Dec 2015 22:27:18 +0800
Labels: run=my-nginx
Status: Pending
Reason:
Message:
IP:
Replication Controllers: my-nginx (2/2 replicas created)
Containers:
my-nginx:
Container ID:
Image: nginx
Image ID:
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-p09p6:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-p09p6
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
26m 26m 1 {scheduler } Normal Scheduled Successfully assigned my-nginx-00t7f to 127.0.0.1
22m 1m 79 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: ImagePullBackOff
24m 5s 8 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: ErrImagePull
It seems my Docker cannot pull images, but actually it can; there is no problem when I run docker pull nginx.
I assume that you figured out from the kubelet logs that it was the pause container that couldn't be pulled.
Kubernetes needs to create a container for the pod in order to hold shared resources, such as the network namespace. It uses the pause container for this, which is a very small container that just sleeps forever.
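A quick way to confirm that hypothesis is to find the exact pause image in the kubelet logs and try pulling it by hand on the node (the image name and tag below are only an example for clusters of that era):

```shell
# Find the exact infra/pause image the kubelet tried to pull
journalctl -u kubelet | grep -i pause

# Then try pulling that image directly with Docker
docker pull gcr.io/google_containers/pause:2.0
```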
If your pod remains in Pending status, check the kube-scheduler service. If it is in a stopped state, start it and check again.
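On a systemd-based install, that check might look like this (the unit name can vary by distribution):

```shell
systemctl status kube-scheduler
sudo systemctl start kube-scheduler

# On clusters of that era, component health can also be listed with:
kubectl get componentstatuses
```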