Failed to pull image with "x509: certificate signed by unknown authority" error - Kubernetes

I am using k3s Kubernetes with Harbor as a private container registry, and Harbor uses a self-signed certificate. I have a sample image in Harbor from which I want to create a sample pod in Kubernetes.
I created a file called testPod.yml with the following content to create the pod:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: harbor-server/t_project/test:001
  imagePullSecrets:
  - name: testcred
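For completeness, the testcred secret referenced above was created roughly like this (a sketch; the username and password are placeholders for the actual Harbor credentials):
kubectl create secret docker-registry testcred \
  --docker-server=harbor-server \
  --docker-username=<harbor-user> \
  --docker-password=<harbor-password>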
However, after I applied this yml file the pod failed with an x509: certificate signed by unknown authority error, shown below:
Name: test
Namespace: default
Priority: 0
Node: server/10.1.0.11
Start Time: Thu, 07 Jul 2022 15:20:32 +0800
Labels: <none>
Annotations: <none>
Status: Pending
IP: 10.42.2.164
IPs:
IP: 10.42.2.164
Containers:
test:
Container ID:
Image: harbor-server/t_project/test:001
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-47cgb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-47cgb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19s default-scheduler Successfully assigned default/test to server
Normal BackOff 19s kubelet Back-off pulling image "harbor-server/t_project/test:001"
Warning Failed 19s kubelet Error: ImagePullBackOff
Normal Pulling 4s (x2 over 19s) kubelet Pulling image "harbor-server/t_project/test:001"
Warning Failed 4s (x2 over 19s) kubelet Failed to pull image "harbor-server/t_project/test:001": rpc error: code = Unknown desc = failed to pull and unpack image "harbor-server/t_project/test:001": failed to resolve reference "harbor-server/t_project/test:001": failed to do request: Head "https://harbor-server:443/v2/t_project/test/manifests/001?ns=harbor-server": x509: certificate signed by unknown authority
Warning Failed 4s (x2 over 19s) kubelet Error: ErrImagePull
How can I solve this x509 error? Is there any step I have missed?

The CA certificate needs to be trusted first.
Add the CA certificate to the host system's trusted CA chain by running the following commands:
sudo mkdir -p /usr/local/share/ca-certificates/myregistry
sudo cp registry/myca.pem /usr/local/share/ca-certificates/myregistry/myca.crt
sudo update-ca-certificates
Note that the certificate in this directory must have a .crt extension. Then restart the K3s service for the change to take effect (see the sketch below).
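On a systemd-based install, the restart is typically:
sudo systemctl restart k3s
Alternatively, k3s can be pointed at the CA explicitly through its containerd registry configuration; a minimal sketch, assuming the registry host harbor-server and the CA path from above:
# /etc/rancher/k3s/registries.yaml
mirrors:
  "harbor-server":
    endpoint:
      - "https://harbor-server"
configs:
  "harbor-server":
    tls:
      ca_file: /usr/local/share/ca-certificates/myregistry/myca.crt
k3s reads this file at startup, so restart the service after creating it as well.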

Related

HashiCorp Vault Enable REST Call

I am following the HashiCorp tutorial and it all looks fine until I try to launch the "webapp" pod - a simple pod whose only function is to demonstrate that it can start and mount a secret volume.
The error (permission denied on a REST call) is shown at the bottom of this command output:
kubectl describe pod webapp
Name: webapp
Namespace: default
Priority: 0
Service Account: webapp-sa
Node: docker-desktop/192.168.65.4
Start Time: Tue, 14 Feb 2023 09:32:07 -0500
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
webapp:
Container ID:
Image: jweissig/app:0.0.1
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt/secrets-store from secrets-store-inline (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5b76r (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
secrets-store-inline:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: secrets-store.csi.k8s.io
FSType:
ReadOnly: true
VolumeAttributes: secretProviderClass=vault-database
kube-api-access-5b76r:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 42m default-scheduler Successfully assigned default/webapp to docker-desktop
Warning FailedMount 20m (x8 over 40m) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[secrets-store-inline kube-api-access-5b76r]: timed out waiting for the condition
Warning FailedMount 12m (x23 over 42m) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/webapp, err: rpc error: code = Unknown desc = error making mount request: couldn't read secret "db-password": Error making API request.
URL: GET http://vault.default:8200/v1/secret/data/db-pass
Code: 403. Errors:
* 1 error occurred:
* permission denied
Warning FailedMount 2m19s (x4 over 38m) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[kube-api-access-5b76r secrets-store-inline]: timed out waiting for the condition
So it seems that this REST call fails: GET http://vault.default:8200/v1/secret/data/db-pass. Indeed, it fails from curl as well:
curl -vik -H "X-Vault-Token: root" http://localhost:8200/v1/secret/data/db-pass
* Trying 127.0.0.1:8200...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8200 failed: Connection refused
* Failed to connect to localhost port 8200: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8200: Connection refused
At this point I am a bit lost. I am not sure that the REST call is configured correctly, i.e. in such a way that Vault will accept it; but I am also not sure how to configure it differently.
The Vault logs show the information below, so it seems that the port and token I use are correct:
2023-02-14 09:07:14 You may need to set the following environment variables:
2023-02-14 09:07:14 $ export VAULT_ADDR='http://[::]:8200'
2023-02-14 09:07:14 The root token is displayed below
2023-02-14 09:07:14 Root Token: root
Vault seems to be running fine in Kubernetes:
kubectl get pods
NAME READY STATUS RESTARTS AGE
vault-0 1/1 Running 1 (22m ago) 32m
vault-agent-injector-77fd4cb69f-mf66p 1/1 Running 1 (22m ago) 32m
If I try to show the Vault status:
vault status
Error checking seal status: Get "http://[::]:8200/v1/sys/seal-status": dial tcp [::]:8200: connect: connection refused
I don't think the Vault is sealed, but if I try to unseal it:
vault operator unseal
Unseal Key (will be hidden):
Error unsealing: Put "http://[::]:8200/v1/sys/unseal": dial tcp [::]:8200: connect: connection refused
Any ideas?
As far as the tutorial is concerned, it works. I am not sure what I was doing wrong, but I ran it all again and it worked. If I had to guess, I would suspect that some of the YAML involved in configuring the pods got malformed (since whitespace is significant).
The vault status command works, but only from a terminal running inside the Vault pod. The Kubernetes-in-Docker-on-Docker-Desktop cluster does not expose any ports for these pods, so even though I have the vault CLI installed on my PC, I cannot use vault status from outside the pods.
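That said, a port-forward should make the CLI usable from the host; a sketch, assuming the vault-0 pod name from above:
kubectl port-forward vault-0 8200:8200 &
export VAULT_ADDR='http://127.0.0.1:8200'
vault status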

CrashLoopBackOff: Back-off restarting failed container for Flask application

I am a beginner in Kubernetes and was trying to deploy my Flask application following this guide: https://medium.com/analytics-vidhya/build-a-python-flask-app-and-deploy-with-kubernetes-ccc99bbec5dc
I have successfully built a Docker image and pushed it to Docker Hub: https://hub.docker.com/repository/docker/beatrix1997/kubernetes_flask_app
However, I am having trouble debugging a pod.
This is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetesflaskapp-deploy
  labels:
    app: kubernetesflaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetesflaskapp
  template:
    metadata:
      labels:
        app: kubernetesflaskapp
    spec:
      containers:
      - name: kubernetesflaskapp
        image: beatrix1997/kubernetes_flask_app
        ports:
        - containerPort: 5000
And this is the description of the pod:
Name: kubernetesflaskapp-deploy-5764bbbd44-8696k
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Fri, 20 May 2022 11:26:33 +0100
Labels: app=kubernetesflaskapp
pod-template-hash=5764bbbd44
Annotations: <none>
Status: Running
IP: 172.17.0.12
IPs:
IP: 172.17.0.12
Controlled By: ReplicaSet/kubernetesflaskapp-deploy-5764bbbd44
Containers:
kubernetesflaskapp:
Container ID: docker://d500dc15e389190670a9273fea1d70e6bd6ab2e7053bd2480d114ad6150830f1
Image: beatrix1997/kubernetes_flask_app
Image ID: docker-pullable://beatrix1997/kubernetes_flask_app@sha256:1bfa98229f55b04f32a6b85d72860886abcc0f17295b14e173151a8e4b0f0334
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 20 May 2022 11:58:38 +0100
Finished: Fri, 20 May 2022 11:58:38 +0100
Ready: False
Restart Count: 11
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zq8n7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-zq8n7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 33m default-scheduler Successfully assigned default/kubernetesflaskapp-deploy-5764bbbd44-8696k to minikube
Normal Pulled 33m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 14.783413947s
Normal Pulled 33m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.243534487s
Normal Pulled 32m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.373217701s
Normal Pulling 32m (x4 over 33m) kubelet Pulling image "beatrix1997/kubernetes_flask_app"
Normal Created 32m (x4 over 33m) kubelet Created container kubernetesflaskapp
Normal Pulled 32m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.239794774s
Normal Started 32m (x4 over 33m) kubelet Started container kubernetesflaskapp
Warning BackOff 3m16s (x138 over 33m) kubelet Back-off restarting failed container
I am using Ubuntu as my OS, if it matters at all.
Any help would be appreciated!
Many thanks!
I would check the following:
Check whether your Docker image works in Docker itself; you can run it with the docker run command (see the official docs and the sketch below).
If it doesn't work, check what is wrong in your app first.
If it does, check the readiness and liveness probes (see the official documentation).
You can find more hints about debugging failing pods in the official documentation.
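A minimal local test, assuming the image name from the question and Flask's default port:
docker run --rm -p 5000:5000 beatrix1997/kubernetes_flask_app
# If the container exits immediately with code 0, the app's main process is
# not staying in the foreground, which would match the Completed / exit code 0
# state in the describe output above.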
The error can be due to an issue in the application, as the reported reason is "Back-off restarting failed container". Please paste the output of the following command in the question for further clarification:
kubectl logs -n <namespace> <pod-name>

How do you install mayastor for openebs with microk8s to use as PV/SC?

I have a 3-node microk8s cluster running on VirtualBox Ubuntu VMs, and I am trying to get Mayastor for OpenEBS working with PVCs. I have followed the steps in this guide:
https://mayastor.gitbook.io/introduction/quickstart/preparing-the-cluster
https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor
https://mayastor.gitbook.io/introduction/quickstart/configure-mayastor
An example of my MayastorPool from step 3 looks like this:
apiVersion: "openebs.io/v1alpha1"
kind: MayastorPool
metadata:
name: pool-on-node1-n2
namespace: mayastor
spec:
node: node1
disks: [ "/dev/nvme0n2" ]
And my StorageClass looks like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3
provisioner: io.openebs.csi-mayastor
parameters:
  repl: '3'
  protocol: 'nvmf'
  ioTimeout: '60'
  local: 'true'
volumeBindingMode: WaitForFirstConsumer
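The PVC from the test-application step looks roughly like this (a sketch based on the guide; the claim name matches the describe output below):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ms-volume-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: mayastor-3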
All the checks seem fine according to the guide, but when I create that PVC and use it according to https://mayastor.gitbook.io/introduction/quickstart/deploy-a-test-application, the test application fio pod doesn't come up. When I look at it with describe I see the following:
$ kubectl describe pods fio -n mayastor
Name: fio
Namespace: mayastor
Priority: 0
Node: node2/192.168.40.12
Start Time: Wed, 02 Jun 2021 22:56:03 +0000
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
fio:
Container ID:
Image: nixery.dev/shell/fio
Image ID:
Port: <none>
Host Port: <none>
Args:
sleep
1000000
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l6cdf (ro)
/volume from ms-volume (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ms-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ms-volume-claim
ReadOnly: false
kube-api-access-l6cdf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: openebs.io/engine=mayastor
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44m default-scheduler Successfully assigned mayastor/fio to node2
Normal SuccessfulAttachVolume 44m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-ec6ce101-fb3e-4a5a-8d61-1d228f8f8199"
Warning FailedMount 24m (x4 over 40m) kubelet Unable to attach or mount volumes: unmounted volumes=[ms-volume], unattached volumes=[kube-api-access-l6cdf ms-volume]: timed out waiting for the condition
Warning FailedMount 13m (x23 over 44m) kubelet MountVolume.SetUp failed for volume "pvc-ec6ce101-fb3e-4a5a-8d61-1d228f8f8199" : rpc error: code = Internal desc = Failed to find parent dir for mountpoint /var/snap/microk8s/common/var/lib/kubelet/pods/b1166af6-1ade-4a3a-9b1d-653151418695/volumes/kubernetes.io~csi/pvc-ec6ce101-fb3e-4a5a-8d61-1d228f8f8199/mount, volume ec6ce101-fb3e-4a5a-8d61-1d228f8f8199
Warning FailedMount 4m3s (x13 over 42m) kubelet Unable to attach or mount volumes: unmounted volumes=[ms-volume], unattached volumes=[ms-volume kube-api-access-l6cdf]: timed out waiting for the condition
Any ideas where to look or what to do to get mayastor working with microk8s? Happy to post more information.
Thanks to Kiran Mova's comments and Niladri from the OpenEBS Slack channel:
Replace the step:
https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor#csi-node-plugin
kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/csi-daemonset.yaml
with
curl -fSs https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/csi-daemonset.yaml | sed "s|/var/lib/kubelet|/var/snap/microk8s/common/var/lib/kubelet|g" - | kubectl apply -f -
So replace the path with the microk8s-specific installation path. Even though there is a symlink, things don't seem to work out right without this change.
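To confirm the pools come up healthy after redeploying, the quickstart's check can be rerun; a sketch, assuming the standard plural resource name for the MayastorPool CRD:
kubectl get mayastorpools -n mayastor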

Pulling an image from gcr.io fails

I am able to create a Kubernetes cluster, and I followed the steps in the guides below to pull a private image from a GCR repository:
https://cloud.google.com/container-registry/docs/advanced-authentication
https://cloud.google.com/container-registry/docs/access-control
I am unable to pull the image from GCR. I have used the commands below:
gcloud auth login
I have authenticated the service account, and the connection between the local machine and GCR works as well.
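Roughly, the full authentication sequence was along these lines (the key file name is a placeholder):
gcloud auth login
gcloud auth activate-service-account --key-file=<service-account-key>.json
gcloud auth configure-docker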
Below is the error:
$ kubectl describe pod test-service-55cc8f947d-5frkl
Name: test-service-55cc8f947d-5frkl
Namespace: default
Priority: 0
Node: gke-test-gke-clus-test-node-poo-c97a8611-91g2/10.128.0.7
Start Time: Mon, 12 Oct 2020 10:01:55 +0530
Labels: app=test-service
pod-template-hash=55cc8f947d
tier=test-service
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container test-service
Status: Pending
IP: 10.48.0.33
IPs:
IP: 10.48.0.33
Controlled By: ReplicaSet/test-service-55cc8f947d
Containers:
test-service:
Container ID:
Image: gcr.io/test-256004/test-service:v2
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment:
test_SERVICE_BUCKET: test-pt-prod
COPY_FILES_DOCKER_IMAGE: gcr.io/test-256004/test-gcs-copy:latest
test_GCP_PROJECT: test-256004
PIXALATE_GCS_DATASET: test_pixalate
PIXALATE_BQ_TABLE: pixalate
APP_ADS_TXT_GCS_DATASET: test_appadstxt
APP_ADS_TXT_BQ_TABLE: appadstxt
Mounts:
/test/output from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6g7nl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
test-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: test-pvc
ReadOnly: false
default-token-6g7nl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6g7nl
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 42s default-scheduler Successfully assigned default/test-service-55cc8f947d-5frkl to gke-test-gke-clus-test-node-poo-c97a8611-91g2
Normal SuccessfulAttachVolume 38s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-25025b4c-2e89-4400-8e0e-335298632e74"
Normal SandboxChanged 31s kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Pod sandbox changed, it will be killed and re-created.
Normal Pulling 15s (x2 over 32s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Pulling image "gcr.io/test-256004/test-service:v2"
Warning Failed 15s (x2 over 32s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Failed to pull image "gcr.io/test-256004/test-service:v2": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/test-256004/test-service, repository does not exist or may require 'docker login': denied: Permission denied for "v2" from request "/v2/test-256004/test-service/manifests/v2".
Warning Failed 15s (x2 over 32s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Error: ErrImagePull
Normal BackOff 3s (x4 over 29s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Back-off pulling image "gcr.io/test-256004/test-service:v2"
Warning Failed 3s (x4 over 29s) kubelet, gke-test-gke-clus-test-node-poo-c97a8611-91g2 Error: ImagePullBackOff
If you don't use Workload Identity, the default service account of your pod is the one of the nodes, and the nodes, by default, use the Compute Engine service account.
Make sure to grant it the correct permission to access GCR.
If you use another service account, grant it the Storage Object Viewer role (when you pull an image, you read a blob stored in Cloud Storage; at least it's the same permission).
Note: even if it's the default, I don't recommend using the Compute Engine service account without changing its roles. Indeed, it is Project Editor by default, and that is a lot of responsibility.
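A sketch of the grant, assuming the project ID from the question and a placeholder service-account email:
gcloud projects add-iam-policy-binding test-256004 \
    --member="serviceAccount:<sa-name>@test-256004.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"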

How to use a private registry provider, Service Account - from Kubernetes deployments

Update: I suspect this to be a Google issue; I have created a new, cleaner question here.
Update: yes, this is different from the suggested "This question may already have an answer here:", as this is about a "Service Account", not a "User account".
Do you know how to use a private registry like Google Container Registry from DigitalOcean or any other Kubernetes cluster not running on the same provider?
I tried following this, but unfortunately it did not work for me.
Update: I suspect it to be a Google SA issue, I will go and try using Docker Hub and get back if that succeeds. I am still curious to see the solution for this, so please let me know - thanks!
Update: Also tried this
Update: tried to activate Google Service Account
Update: tried to download Google Service Account key
Update: the linked description says:
kubectl create secret docker-registry $SECRETNAME \
  --docker-server=https://gcr.io \
  --docker-username=_json_key \
  --docker-email=user@example.com \
  --docker-password="$(cat k8s-gcr-auth-ro.json)"
Is the --docker-password="$(cat k8s-gcr-auth-ro.json)" really the password?
If I do cat k8s-gcr-auth-ro.json the format is:
{
  "type": "service_account",
  "project_id": "<xxx>",
  "private_key_id": "<xxx>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n<xxx>\n-----END PRIVATE KEY-----\n",
  "client_email": "k8s-gcr-auth-ro@<xxx>.iam.gserviceaccount.com",
  "client_id": "<xxx>",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/k8s-gcr-auth-ro%40<xxx>.iam.gserviceaccount.com"
}
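A way to sanity-check the key itself outside Kubernetes, assuming the same key file, would be:
docker login -u _json_key --password-stdin https://gcr.io < k8s-gcr-auth-ro.json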
When I run kubectl get pods I get: ...is waiting to start: image can't be pulled
from a deployment with:
image: gcr.io/<project-name>/<image-name>:v1
deployment.yaml
# K8s - Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <image-name>-deployment-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: <image-name>-deployment
        version: v1
    spec:
      containers:
      - name: <image-name>
        image: gcr.io/<project-name>/<image-name>:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: <name-of-secret>
I can see from the following kubectl describe pod output that it logs repository does not exist or may require 'docker login':
k describe pod <image-name>-deployment-v1-844568c768-5b2rt
Name: <image-name>-deployment-v1-844568c768-5b2rt
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: my-cluster-digitalocean-1-7781/10.135.153.236
Start Time: Mon, 25 Mar 2019 15:51:37 +0100
Labels: app=<image-name>-deployment
pod-template-hash=844568c768
version=v1
Annotations: <none>
Status: Pending
IP: <ip address>
Controlled By: ReplicaSet/<image-name>-deployment-v1-844568c768
Containers:
chat-server:
Container ID:
Image: gcr.io/<project-name>/<image-name>:v1
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dh8dh (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-dh8dh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dh8dh
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned default/<image-name>-deployment-v1-844568c768-5b2rt to my-cluster-digitalocean-1-7781
Normal Pulling 37s (x2 over 48s) kubelet, my-cluster-digitalocean-1-7781 pulling image "gcr.io/<project-name>/<image-name>:v1"
Warning Failed 37s (x2 over 48s) kubelet, my-cluster-digitalocean-1-7781 Failed to pull image "gcr.io/<project-name>/<image-name>:v1": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/<project-name>/<image-name>, repository does not exist or may require 'docker login'
Warning Failed 37s (x2 over 48s) kubelet, my-cluster-digitalocean-1-7781 Error: ErrImagePull
Normal SandboxChanged 31s (x7 over 47s) kubelet, my-cluster-digitalocean-1-7781 Pod sandbox changed, it will be killed and re-created.
Normal BackOff 29s (x6 over 45s) kubelet, my-cluster-digitalocean-1-7781 Back-off pulling image "gcr.io/<project-name>/<image-name>:v1"
Warning Failed 29s (x6 over 45s) kubelet, my-cluster-digitalocean-1-7781 Error: ImagePullBackOff
Just a note: docker pull on the local machine pulls the image without problems.