Unable to pull from private docker hub registry on kubernetes - kubernetes

I'm running a k8s cluster on Google Container Engine. I'm having trouble getting it to pull images from a private Docker Hub repository.
I get the following when the pod tries to start:
Name: ds-expected-date
Namespace: default
Node: gke-ds-cluster-1-default-pool-8980b100-l64j/10.132.0.3
Start Time: Wed, 24 May 2017 13:24:11 +0100
Labels: <none>
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container ds-expected-date-flask
Status: Pending
IP: 10.40.0.23
Controllers: <none>
Containers:
ds-expected-date-flask:
Container ID:
Image: fluidy/ds-expected-date:latest
Image ID:
Port:
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-h340m (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-h340m:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-h340m
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
21s 21s 1 default-scheduler Normal Scheduled Successfully assigned ds-expected-date to gke-ds-cluster-1-default-pool-8980b100-l64j
18s 18s 1 kubelet, gke-ds-cluster-1-default-pool-8980b100-l64j spec.containers{ds-expected-date-flask} Normal BackOff Back-off pulling image "fluidy/ds-expected-date:latest"
18s 18s 1 kubelet, gke-ds-cluster-1-default-pool-8980b100-l64j Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ds-expected-date-flask" with ImagePullBackOff: "Back-off pulling image \"fluidy/ds-expected-date:latest\""
20s 6s 2 kubelet, gke-ds-cluster-1-default-pool-8980b100-l64j spec.containers{ds-expected-date-flask} Normal Pulling pulling image "fluidy/ds-expected-date:latest"
19s 5s 2 kubelet, gke-ds-cluster-1-default-pool-8980b100-l64j spec.containers{ds-expected-date-flask} Warning Failed Failed to pull image "fluidy/ds-expected-date:latest": Error response from daemon: unauthorized: authentication required
19s 5s 2 kubelet, gke-ds-cluster-1-default-pool-8980b100-l64j Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ds-expected-date-flask" with ErrImagePull: "Error response from daemon: unauthorized: authentication required"
I have followed all the instructions on the docs page. I'm confident my registry secret is being read - if I put duff credentials in it, the error changes to 'invalid user name or password'.

You have not configured your cluster to pull private images from Docker Hub with your credentials.
Read and apply this guide: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
Google Container Engine can pull from Google Container Registry (http://gcr.io) automatically, so consider hosting your images there instead of in a private Docker Hub registry.
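For reference, a minimal sketch of what that guide sets up, assuming a secret named regcred (the name is a placeholder, not taken from the question):
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-dockerhub-username> \
  --docker-password=<your-dockerhub-password> \
  --docker-email=<your-email>
The pod (or the pod template in the deployment) must then reference the secret so the kubelet can authenticate the pull:
apiVersion: v1
kind: Pod
metadata:
  name: ds-expected-date
spec:
  containers:
  - name: ds-expected-date-flask
    image: fluidy/ds-expected-date:latest
  imagePullSecrets:
  - name: regcred
Apply it with kubectl apply -f pod.yaml; the kubelet will present the credentials from regcred when pulling from Docker Hub.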

Related

Pulling an image from gcr.io fails

I am able to create a Kubernetes cluster, and I followed the steps in the docs below to pull a private image from a GCR repository.
https://cloud.google.com/container-registry/docs/advanced-authentication
https://cloud.google.com/container-registry/docs/access-control
I am still unable to pull the image from GCR. I have used the command below:
gcloud auth login
I have authenticated the service accounts, and verified the connection between the local machine and GCR as well.
Below is the error
$ kubectl describe pod test-service-55cc8f947d-5frkl
Name: test-service-55cc8f947d-5frkl
Namespace: default
Priority: 0
Node: gke-test-gke-clus-test-node-poo-c97a8611-91g2/10.128.0.7
Start Time: Mon, 12 Oct 2020 10:01:55 +0530
Labels: app=test-service
pod-template-hash=55cc8f947d
tier=test-service
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container test-service
Status: Pending
IP: 10.48.0.33
IPs:
IP: 10.48.0.33
Controlled By: ReplicaSet/test-service-55cc8f947d
Containers:
test-service:
Container ID:
Image: gcr.io/test-256004/test-service:v2
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment:
test_SERVICE_BUCKET: test-pt-prod
COPY_FILES_DOCKER_IMAGE: gcr.io/test-256004/test-gcs-copy:latest
test_GCP_PROJECT: test-256004
PIXALATE_GCS_DATASET: test_pixalate
PIXALATE_BQ_TABLE: pixalate
APP_ADS_TXT_GCS_DATASET: test_appadstxt
APP_ADS_TXT_BQ_TABLE: appadstxt
Mounts:
/test/output from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6g7nl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
default-token-6g7nl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6g7nl
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 42s default-scheduler Successfully assigned default/test-service-55cc8f947d-5frkl to gke-test-gke-clus-test-node-poo-c97a8611-91g2
Normal SuccessfulAttachVolume 38s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-25025b4c-2e89-4400-8e0e-335298632e74"
Normal SandboxChanged 31s kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Pod sandbox changed, it will be killed and re-created.
Normal Pulling 15s (x2 over 32s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Pulling image "gcr.io/test-256004/test-service:v2"
Warning Failed 15s (x2 over 32s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Failed to pull image "gcr.io/test-256004/test-service:v2": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/test-256004/test-service, repository does not exist or may require 'docker login': denied: Permission denied for "v2" from request "/v2/test-256004/test-service/manifests/v2".
Warning Failed 15s (x2 over 32s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Error: ErrImagePull
Normal BackOff 3s (x4 over 29s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Back-off pulling image "gcr.io/test-256004/test-service:v2"
Warning Failed 3s (x4 over 29s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Error: ImagePullBackOff
If you don't use Workload Identity, the service account used by your pods is the one of the nodes, and the nodes, by default, use the Compute Engine default service account.
Make sure to grant it the correct permission to access GCR.
If you use another service account, grant it the Storage Object Viewer role (when you pull an image, you read a blob stored in Cloud Storage, or at least it's the same permission).
Note: even though it's the default, I don't recommend using the Compute Engine service account without changing its roles. By default it has the project Editor role, which is a lot of responsibility.
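As a sketch, the grant looks roughly like this, assuming test-256004 is the project from the describe output and NODE_SA_EMAIL is a placeholder for the service account the node pool runs as:
# grant the node service account read access to the bucket backing gcr.io
gcloud projects add-iam-policy-binding test-256004 \
  --member="serviceAccount:NODE_SA_EMAIL" \
  --role="roles/storage.objectViewer"
You can look up which service account the nodes use with, for example:
gcloud container node-pools describe NODE_POOL --cluster CLUSTER --zone ZONE \
  --format="value(config.serviceAccount)"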

My pod is in Container Creating state, showing TLS handshake timeout

I can pull the image correctly with the docker pull command, but when I use the kubectl run command my pod stays in the ContainerCreating state. How can I fix it?
[root@centos-master etc]# kubectl run my-nginx --image=nginx
deployment "my-nginx" created
[root@centos-master etc]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-2723453542-5s33f 0/1 ContainerCreating 0 7s
[root@centos-master etc]# kubectl describe pod my-nginx-2723453542-5s33f
Name: my-nginx-2723453542-5s33f
Namespace: default
Node: centos-minion-2/104.21.51.35
Start Time: Fri, 30 Aug 2019 16:11:57 +0800
Labels: pod-template-hash=2723453542
run=my-nginx
Status: Pending
IP:
Controllers: ReplicaSet/my-nginx-2723453542
Containers:
my-nginx:
Container ID:
Image: nginx
Image ID:
Port:
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
No volumes.
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 5m 1 {default-scheduler } Normal Scheduled Successfully assigned my-nginx-2723453542-5s33f to centos-minion-2
<invalid> <invalid> 5 {kubelet centos-minion-2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (Get https://registry.access.redhat.com/v1/_ping: proxyconnect tcp: net/http: TLS handshake timeout)"
<invalid> <invalid> 11 {kubelet centos-minion-2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
As recommended by @char and @prometherion, to sort this issue out you probably need to add the appropriate --pod-infra-container-image flag to the kubelet arguments (KUBELET_ARGS), as per the linked documentation:
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
You can also take into consideration the solution mentioned by @Matthew: installing the subscription-manager package and subscribing the host OS, as described here.
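As a sketch, on a CentOS node installed from packages the flag usually goes into the kubelet sysconfig file and takes effect after a restart (the exact file path depends on the installation, so treat it as an assumption):
# /etc/kubernetes/kubelet (path may vary)
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# restart the kubelet so the new flag is picked up
systemctl restart kubelet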

Kafka Pod doesn't start on GKE

I followed this tutorial, and when I tried to run it on GKE I was not able to start the kafka pod.
It returns CrashLoopBackOff all the time, and I don't know how to show the pod's error logs.
Here is the result when I run kubectl describe pod my-pod-xxx:
Name: kafka-broker1-54cb95fb44-hlj5b
Namespace: default
Node: gke-xxx-default-pool-f9e313ed-zgcx/10.146.0.4
Start Time: Thu, 25 Oct 2018 11:40:21 +0900
Labels: app=kafka
id=1
pod-template-hash=1076519600
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container kafka
Status: Running
IP: 10.48.8.10
Controlled By: ReplicaSet/kafka-broker1-54cb95fb44
Containers:
kafka:
Container ID: docker://88ee6a1df4157732fc32b7bd8a81e329dbdxxxx9cbe614689e775d183dbcd61
Image: wurstmeister/kafka
Image ID: docker-pullable://wurstmeister/kafka#sha256:4f600a95fa1288f7b1xxxxxa32ca00b4fb13b83b31533fa6b40499bd9bdf192f
Port: 9092/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Thu, 25 Oct 2018 14:35:32 +0900
Finished: Thu, 25 Oct 2018 14:35:51 +0900
Ready: False
Restart Count: 37
Requests:
cpu: 100m
Environment:
KAFKA_ADVERTISED_PORT: 9092
KAFKA_ADVERTISED_HOST_NAME: 35.194.100.32
KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
KAFKA_BROKER_ID: 1
KAFKA_CREATE_TOPICS: topic1:3:3
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-w6s7n (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-w6s7n:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-w6s7n
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 5m (x716 over 2h) kubelet, gke-xxx-default-pool-f9e313ed-zgcx Back-off restarting failed container
Normal Pulling 36s (x38 over 2h) kubelet, gke-xxxdefault-pool-f9e313ed-zgcx pulling image "wurstmeister/kafka"
I noticed that the first run goes well, but after that the node changes status to NotReady and the kafka pod enters the CrashLoopBackOff state.
Here is the log before it goes down:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m default-scheduler Successfully assigned kafka-broker1-54cb95fb44-wwf2h to gke-xxx-default-pool-f9e313ed-8mr6
Normal SuccessfulMountVolume 5m kubelet, gke-xxx-default-pool-f9e313ed-8mr6 MountVolume.SetUp succeeded for volume "default-token-w6s7n"
Normal Pulling 5m kubelet, gke-xxx-default-pool-f9e313ed-8mr6 pulling image "wurstmeister/kafka"
Normal Pulled 5m kubelet, gke-xxx-default-pool-f9e313ed-8mr6 Successfully pulled image "wurstmeister/kafka"
Normal Created 5m kubelet, gke-xxx-default-pool-f9e313ed-8mr6 Created container
Normal Started 5m kubelet, gke-xxx-default-pool-f9e313ed-8mr6 Started container
Normal NodeControllerEviction 38s node-controller Marking for deletion Pod kafka-broker1-54cb95fb44-wwf2h from Node gke-dev-centurion-default-pool-f9e313ed-8mr6
Could anyone tell me what's wrong with my pod and how I can catch the error behind the pod failure?
I just figured out that my cluster's nodes did not have enough resources.
After creating a new cluster with more memory, it works.
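To answer the side question about catching the error, two standard commands are useful here (the pod and node names are the ones from the describe output above):
# logs of the current container and of the previously crashed one
kubectl logs kafka-broker1-54cb95fb44-hlj5b
kubectl logs kafka-broker1-54cb95fb44-hlj5b --previous
# check node conditions and allocatable CPU/memory
kubectl describe node gke-xxx-default-pool-f9e313ed-zgcx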

Kubernetes reports ImagePullBackOff for pod on minikube

I've built a docker image within the minikube VM. However, I don't understand why Kubernetes is not finding it.
minikube ssh
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
diyapploopback latest 9590c4dc2ed1 2 hours ago 842MB
And if I describe the pod:
kubectl describe pods abcxyz12-6b4d85894-fhb2p
Name: abcxyz12-6b4d85894-fhb2p
Namespace: diyclientapps
Node: minikube/192.168.99.100
Start Time: Wed, 07 Mar 2018 13:49:51 +0000
Labels: appId=abcxyz12
pod-template-hash=260841450
Annotations: <none>
Status: Pending
IP: 172.17.0.6
Controllers: <none>
Containers:
nginx:
Container ID:
Image: diyapploopback:latest
Image ID:
Port: 80/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
mariadb:
Container ID: docker://fe09e08f98a9f972f2d086b56b55982e96772a2714ad3b4c2adf4f2f06c2986a
Image: mariadb:10.3
Image ID: docker-pullable://mariadb#sha256:8d4b8fd12c86f343b19e29d0fdd0c63a7aa81d4c2335317085ac973a4782c1f5
Port:
State: Running
Started: Wed, 07 Mar 2018 14:21:00 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 07 Mar 2018 13:49:54 +0000
Finished: Wed, 07 Mar 2018 14:18:43 +0000
Ready: True
Restart Count: 1
Environment:
MYSQL_ROOT_PASSWORD: passwordTempXyz
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: abcxyz12
ReadOnly: false
default-token-c62fx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c62fx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
31m 31m 1 default-scheduler Normal Scheduled Successfully assigned abcxyz12-6b4d85894-fhb2p to minikube
31m 31m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
31m 31m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-689f3067-220e-11e8-a244-0800279a9a04"
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Pulled Container image "mariadb:10.3" already present on machine
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Created Created container
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Started Started container
31m 30m 3 kubelet, minikube spec.containers{nginx} Warning Failed Failed to pull image "diyapploopback:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for diyapploopback, repository does not exist or may require 'docker login'
31m 30m 3 kubelet, minikube spec.containers{nginx} Warning Failed Error: ErrImagePull
31m 29m 4 kubelet, minikube spec.containers{nginx} Normal Pulling pulling image "diyapploopback:latest"
31m 16m 63 kubelet, minikube spec.containers{nginx} Normal BackOff Back-off pulling image "diyapploopback:latest"
31m 6m 105 kubelet, minikube spec.containers{nginx} Warning Failed Error: ImagePullBackOff
21s 21s 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-689f3067-220e-11e8-a244-0800279a9a04"
20s 20s 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
20s 20s 1 kubelet, minikube Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
17s 17s 1 kubelet, minikube spec.containers{nginx} Warning Failed Failed to pull image "diyapploopback:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for diyapploopback, repository does not exist or may require 'docker login'
17s 17s 1 kubelet, minikube spec.containers{nginx} Warning Failed Error: ErrImagePull
17s 17s 1 kubelet, minikube spec.containers{mariadb} Normal Pulled Container image "mariadb:10.3" already present on machine
17s 17s 1 kubelet, minikube spec.containers{mariadb} Normal Created Created container
16s 16s 1 kubelet, minikube spec.containers{mariadb} Normal Started Started container
16s 15s 2 kubelet, minikube spec.containers{nginx} Normal BackOff Back-off pulling image "diyapploopback:latest"
16s 15s 2 kubelet, minikube spec.containers{nginx} Warning Failed Error: ImagePullBackOff
19s 1s 2 kubelet, minikube spec.containers{nginx} Normal Pulling pulling image "diyapploopback:latest"
It seems I'm able to run it directly (only for debugging/diagnosis purposes):
kubectl run abcxyz123 --image=diyapploopback --image-pull-policy=Never
If I describe the above deployment/container I get:
Name: abcxyz123-6749977548-stvsm
Namespace: diyclientapps
Node: minikube/192.168.99.100
Start Time: Wed, 07 Mar 2018 14:26:33 +0000
Labels: pod-template-hash=2305533104
run=abcxyz123
Annotations: <none>
Status: Running
IP: 172.17.0.9
Controllers: <none>
Containers:
abcxyz123:
Container ID: docker://c9b71667feba21ef259a395c9b8504e3e4968e5b9b35a191963f0576d0631d11
Image: diyapploopback
Image ID: docker://sha256:9590c4dc2ed16cb70a21c3385b7e0519ad0b1fece79e343a19337131600aa866
Port:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 07 Mar 2018 14:42:45 +0000
Finished: Wed, 07 Mar 2018 14:42:48 +0000
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-c62fx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c62fx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
17m 17m 1 default-scheduler Normal Scheduled Successfully assigned abcxyz123-6749977548-stvsm to minikube
17m 17m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Pulled Container image "diyapploopback" already present on machine
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Created Created container
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Started Started container
16m 1m 66 kubelet, minikube spec.containers{abcxyz123} Warning BackOff Back-off restarting failed container
imagePullPolicy: IfNotPresent
The above was not present (and it is required) in my container spec within the deployment.
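For context, a minimal sketch of where that line lives in the Deployment manifest (the structure mirrors the pod described above; anything not shown there is a placeholder):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abcxyz12
spec:
  replicas: 1
  selector:
    matchLabels:
      appId: abcxyz12
  template:
    metadata:
      labels:
        appId: abcxyz12
    spec:
      containers:
      - name: nginx
        image: diyapploopback:latest
        # IfNotPresent makes the kubelet use the image built inside the minikube VM
        # instead of trying to pull it from Docker Hub (the default for :latest is Always)
        imagePullPolicy: IfNotPresent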

Error imagepullbackoff when scaling working deployment Kubernetes

I have a working deployment of a docker image on Kubernetes. However, when I want to scale it I receive an error that it cannot find my image (even though I'm scaling something that's already running from that image).
Here is the command I'm using to scale the deployment.
./kubectl scale deployments/mautic --replicas=2
Here are the logs when I run kubectl describe:
Name: mautic-3389378641-jgm9b
Namespace: default
Node: minikube/192.168.99.101
Start Time: Fri, 01 Sep 2017 14:34:08 +0100
Labels: app=mautic
pod-template-hash=3389378641
tier=frontend
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"mautic-3389378641","uid":"52a87ff6-8f06-11e7-8fbc-080027cd66fa",...
Status: Pending
IP: 172.17.0.8
Created By: ReplicaSet/mautic-3389378641
Controlled By: ReplicaSet/mautic-3389378641
Containers:
mautic:
Container ID:
Image: mautic/mautic:latest
Image ID:
Port: 80/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment:
MAUTIC_DB_HOST: mautic-mysql
MAUTIC_DB_PASSWORD: <set to the key 'password.txt' in secret 'mysql-pass'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9drh0 (ro)
/var/www/html from mautic-local-storage (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
mautic-local-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mautic-lv-claim
ReadOnly: false
default-token-9drh0:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9drh0
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
47m 47m 1 default-scheduler Normal Scheduled Successfully assigned mautic-3389378641-jgm9b to minikube
47m 47m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-5b7c14a3-8f03-11e7-8fbc-080027cd66fa"
47m 47m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-9drh0"
47m 7m 12 kubelet, minikube spec.containers{mautic} Normal Pulling pulling image "mautic/mautic:latest"
47m 6m 12 kubelet, minikube spec.containers{mautic} Warning Failed Failed to pull image "mautic/mautic:latest": rpc error: code = 2 desc = Network timed out while trying to connect to https://index.docker.io/v1/repositories/mautic/mautic/images. You may want to check your internet connection or if you are behind a proxy.
47m 6m 170 kubelet, minikube Warning FailedSync Error syncing pod
47m 6m 158 kubelet, minikube spec.containers{mautic} Normal BackOff Back-off pulling image "mautic/mautic:latest"
But the Mautic image referenced is right here and is already in use with the deployment I want to scale.
REPOSITORY TAG IMAGE ID CREATED SIZE
testimage v0 0e0d4b13c0c2 10 days ago 611MB
mautic/mautic latest 730d2796f904 2 weeks ago 611MB
mysql 5.6 cdfa8cc50c33 5 weeks ago 298MB
mysql latest c73c7527c03a 5 weeks ago 412MB
gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.4 38bac66034a6 2 months ago 41.8MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.4 a8e00546bcf3 2 months ago 49.4MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.4 f7f45b9cb733 2 months ago 41.4MB
gcr.io/google-containers/kube-addon-manager v6.4-beta.2 0a951668696f 2 months ago 79.2MB
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.6.1 71dfe833ce74 3 months ago 134MB
autoize/mautic latest 6c99d7ce1a07 4 months ago 665MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 16 months ago 747kB
Does anyone have any idea why this isn't working?
Your host has lost connectivity to Docker Hub - try running "docker pull mautic/mautic:latest" and see if that works. This may be a network issue on your host, a problem with an intermediate proxy between your host and Docker Hub or (less likely) a temporary outage at Docker Hub or something else.
Since you're using the :latest tag, it is likely that your Deployment is using imagePullPolicy: Always (docs: "Defaults to Always if :latest tag is specified, or IfNotPresent otherwise"). I'd suggest explicitly setting imagePullPolicy: IfNotPresent in your Deployment spec so that the local image that already exists is used when starting a new container.
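As a sketch, assuming the deployment is named mautic, the relevant part of the spec would look like this (the rest of the manifest is omitted):
spec:
  template:
    spec:
      containers:
      - name: mautic
        image: mautic/mautic:latest
        # reuse the image already present on the node instead of contacting Docker Hub
        imagePullPolicy: IfNotPresent
You can also make the change in place with kubectl edit deployment mautic and then re-run the scale command.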