ImagePullBackOff Error from mongo (Kubernetes)

I am trying to create a Kubernetes pod with the following config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
However, I get an ImagePullBackOff, and when I use kubectl describe pod, here's what's shown:
Name: mongodb-deployment-8f6675bc5-jzmvw
Namespace: default
Priority: 0
Node: minikube/192.168.64.2
Start Time: Thu, 10 Dec 2020 16:30:21 +0800
Labels: app=mongodb
pod-template-hash=8f6675bc5
Annotations: <none>
Status: Pending
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: ReplicaSet/mongodb-deployment-8f6675bc5
Containers:
mongodb:
Container ID:
Image: mongo
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment:
MONGO_INITDB_ROOT_USERNAME: <set to the key 'mongo-root-username' in secret 'mongodb-secret'> Optional: false
MONGO_INITDB_ROOT_PASSWORD: <set to the key 'mongo-root-password' in secret 'mongodb-secret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-w5ltt (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-w5ltt:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-w5ltt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 115m default-scheduler Successfully assigned default/mongodb-deployment-8f6675bc5-jzmvw to minikube
Normal Pulling 115m kubelet Pulling image "mongo"
Warning Failed 114m kubelet Failed to pull image "mongo": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 114m kubelet Error: ErrImagePull
Normal BackOff 114m kubelet Back-off pulling image "mongo"
Warning Failed 114m kubelet Error: ImagePullBackOff
I don't think it's a problem with the image/image name. Is there something wrong with my config file?
Any advice would be greatly appreciated!

There are a few possible reasons for an ImagePullBackOff error in Kubernetes; to determine the cause, run kubectl describe pod. In your case, the key event is:
Failed to pull image "mongo": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
This error means that the image cannot be pulled from the container registry: the Docker daemon on the node timed out trying to reach registry-1.docker.io, so it is a connectivity problem rather than a problem with your config file.
You can check connectivity by logging in to Docker Hub with the docker login command.
If that fails, try adding nameserver 8.8.8.8 to /etc/resolv.conf and then restarting Docker with sudo systemctl restart docker.
Another possible cause is a proxy or VPN connection. You can refer to the answers to this question.
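The checks above can be sketched as a short command sequence, to be run on the node (or inside the minikube VM). This is a diagnostic sketch, not a definitive fix; the pod name is taken from the describe output above:

```shell
# Confirm the pull error in the pod's events
kubectl describe pod mongodb-deployment-8f6675bc5-jzmvw

# Verify the node can reach Docker Hub at all
docker login

# If DNS is the problem, add a public resolver and restart Docker
echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
sudo systemctl restart docker
```

If docker login succeeds but the kubelet still times out, the problem is more likely a proxy or VPN in the pull path.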

Related

ImagePulloff error while deploying on minikube

Hey there, I was trying to deploy my first simple web app (no database) on minikube, but the pod keeps hitting this ImagePullBackOff error.
Yes, I have checked the image name and tag several times.
Here are the logs and YAML files.
Namespace: default
Priority: 0
Service Account: default
Labels: app=nodeapp1
pod-template-hash=589c6bd468
Annotations: <none>
Status: Pending
Controlled By: ReplicaSet/nodeapp1-deployment-589c6bd468
Containers:
nodeserver:
Container ID:
Image: ayushftw/nodeapp1:latest
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k6mkb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-k6mkb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapOptional: <nil>
DownwardAPI: true
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m3s default-scheduler Successfully assigned default/nodeapp1-deployment-589c6bd468-5lg2n to minikube
Normal Pulling 2m2s kubelet Pulling image "ayushftw/nodeapp1:latest"
Warning Failed 3s kubelet Failed to pull image "ayushftw/nodeapp1:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 3s kubelet Error: ErrImagePull
Normal BackOff 2s kubelet Back-off pulling image "ayushftw/nodeapp1:latest"
Warning Failed 2s kubelet Error: ImagePullBackOff
deployment.yml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp1-deployment
  labels:
    app: nodeapp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodeapp1
  template:
    metadata:
      labels:
        app: nodeapp1
    spec:
      containers:
      - name: nodeserver
        image: ayushftw/nodeapp1:latest
        ports:
        - containerPort: 3000
service.yml file
apiVersion: v1
kind: Service
metadata:
  name: nodeapp1-service
spec:
  selector:
    app: nodeapp1
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 31011
Please help if anybody knows anything about this.
I think your internet connection is too slow. The image pull timeout is 120 seconds, and the kubelet could not pull the image in under 120 seconds.
First, pull the image via Docker:
docker image pull ayushftw/nodeapp1:latest
Then load the downloaded image into minikube:
minikube image load ayushftw/nodeapp1:latest
Everything should then work, because the kubelet will use the image that is stored locally.
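One caveat worth knowing: with a :latest tag, Kubernetes defaults imagePullPolicy to Always, so the kubelet may still try to contact the registry even after minikube image load. A hedged sketch of the tweak, using the container spec from the deployment.yml above:

```yaml
# In deployment.yml: prefer the locally loaded image over a registry pull.
spec:
  template:
    spec:
      containers:
      - name: nodeserver
        image: ayushftw/nodeapp1:latest
        # Don't re-pull if the image is already in the node's cache
        imagePullPolicy: IfNotPresent
```

With IfNotPresent, the pull step is skipped entirely once the image is in the node's local cache.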
This seems to be an issue reaching the container registry used for your images. Can you try pulling the image manually from the node?

CrashLoopBackOff - Back-off restarting failed container

I have my image hosted on GCR.
I want to create a Kubernetes cluster on my local system (Mac).
Steps I followed:
Create an imagePullSecretKey
Create a generic key to communicate with GCP: kubectl create secret generic gcp-key --from-file=key.json
I have this deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sv-premier
spec:
  selector:
    matchLabels:
      app: sv-premier
  template:
    metadata:
      labels:
        app: sv-premier
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: gcp-key
      containers:
      - name: sv-premier
        image: gcr.io/proto/premiercore1:latest
        imagePullPolicy: Always
        command: ["echo", "Done deploying sv-premier"]
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: imagepullsecretkey
When I execute kubectl apply -f deployment.yaml, I get a CrashLoopBackOff error.
Logs for -
kubectl describe pods podname
=======================
Name: sv-premier-6b77ddd747-cvdr5
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Tue, 04 Feb 2020 14:18:47 +0530
Labels: app=sv-premier
pod-template-hash=6b77ddd747
Annotations:
Status: Running
IP: 10.1.0.43
IPs:
Controlled By: ReplicaSet/sv-premier-6b77ddd747
Containers:
sv-premierleague:
Container ID: docker://141126d732409427fe39b405865f88856ac4e1d8586112797fc5bf4fdfbe317c
Image: gcr.io/proto/premiercore1:latest
Image ID: docker-pullable://gcr.io/proto/premiercore1#sha256:b3800ccca3f30725d5c9235dd349548f0fcfe309f51883d8af16397aef2c3953
Port: 8080/TCP
Host Port: 0/TCP
Command:
echo
Done deploying sv-premier
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 04 Feb 2020 15:00:51 +0530
Finished: Tue, 04 Feb 2020 15:00:51 +0530
Ready: False
Restart Count: 13
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /var/secrets/google/key.json
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s4jgd (ro)
/var/secrets/google from google-cloud-key (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
google-cloud-key:
Type: Secret (a volume populated by a Secret)
SecretName: gcp-key
Optional: false
default-token-s4jgd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s4jgd
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From
Message
---- ------ ---- ----
Normal Scheduled 46m default-scheduler
Successfully assigned default/sv-premier-6b77ddd747-cvdr5 to
docker-desktop
Normal Pulled 45m (x4 over 46m) kubelet, docker-desktop
Successfully pulled image
"gcr.io/proto/premiercore1:latest"
Normal Created 45m (x4 over 46m) kubelet, docker-desktop
Created container sv-premier
Normal Started 45m (x4 over 46m) kubelet, docker-desktop
Started container sv-premier
Normal Pulling 45m (x5 over 46m) kubelet, docker-desktop
Pulling image "gcr.io/proto/premiercore1:latest"
Warning BackOff 92s (x207 over 46m) kubelet, docker-desktop
Back-off restarting failed container
=======================
And output for -
kubectl logs podname --> Done Deploying sv-premier
I am confused about why my container is exiting and not able to start.
Kindly guide me.
Your container exits immediately because its command (echo) completes, and Kubernetes then restarts it; that cycle is exactly what CrashLoopBackOff reports. Update your deployment.yaml with a long-running task, for example:
command: ["/bin/sh"]
args: ["-c", "while true; do echo Done Deploying sv-premier; sleep 3600;done"]
This keeps your container alive after deployment and logs the message every hour.
Read more about the pod lifecycle and container states here
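Put together with the rest of the container spec from the question's deployment.yaml, the fix would look roughly like this (a sketch, keeping the original image and port):

```yaml
containers:
- name: sv-premier
  image: gcr.io/proto/premiercore1:latest
  imagePullPolicy: Always
  # Replace the one-shot echo with a long-running loop so the
  # container's main process never exits.
  command: ["/bin/sh"]
  args: ["-c", "while true; do echo Done Deploying sv-premier; sleep 3600; done"]
  ports:
  - containerPort: 8080
```

In practice the command/args pair is usually only a stopgap for debugging; the real image entrypoint should be a long-running server process.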

Some Kubernetes pods consistently not able to resolve internal DNS on only one node

I have just moved my first cluster from minikube up to AWS EKS. All went pretty smoothly so far, except I'm running into some DNS issues I think, but only on one of the cluster nodes.
I have two nodes in the cluster running v1.14, with 4 pods of one type and 4 of another. Three of each work, but one of each (both on the same node) start and then error out with CrashLoopBackOff: the script inside the container fails because it can't resolve the hostname of the database. Deleting the errored pod, or even all pods, results in one pod on the same node failing every time.
The database is in its own pod and has a service assigned, none of the other pods of the same type have problems resolving the name or connecting. The database pod is on the same node as the pods that can't resolve the hostname. I'm not sure how to migrate the pod to a different node, but that might be worth trying to see if the problem follows.
No errors in the coredns pods. I'm not sure where to start looking to discover the issue from here, and any help or suggestions would be appreciated.
The configs are provided below. As mentioned, they all work on minikube, and they also work on one of the two nodes.
kubectl get pods (note the ages: all pod1 replicas were deleted at the same time and recreated; three work fine, the fourth does not):
NAME READY STATUS RESTARTS AGE
pod1-85f7968f7-2cjwt 1/1 Running 0 34h
pod1-85f7968f7-cbqn6 1/1 Running 0 34h
pod1-85f7968f7-k9xv2 0/1 CrashLoopBackOff 399 34h
pod1-85f7968f7-qwcrz 1/1 Running 0 34h
postgresql-865db94687-cpptb 1/1 Running 0 3d14h
rabbitmq-667cfc4cc-t92pl 1/1 Running 0 34h
pod2-94b9bc6b6-6bzf7 1/1 Running 0 34h
pod2-94b9bc6b6-6nvkr 1/1 Running 0 34h
pod2-94b9bc6b6-jcjtb 0/1 CrashLoopBackOff 140 11h
pod2-94b9bc6b6-t4gfq 1/1 Running 0 34h
postgresql service
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  ports:
  - port: 5432
  selector:
    app: postgresql
pod1 deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod1
spec:
  replicas: 4
  selector:
    matchLabels:
      app: pod1
  template:
    metadata:
      labels:
        app: pod1
    spec:
      containers:
      - name: pod1
        image: us.gcr.io/gcp-project-8888888/pod1:latest
        env:
        - name: rabbitmquser
          valueFrom:
            secretKeyRef:
              name: rabbitmq-secrets
              key: rmquser
        volumeMounts:
        - mountPath: /data/files
          name: datafiles
      volumes:
      - name: datafiles
        persistentVolumeClaim:
          claimName: datafiles-pv-claim
      imagePullSecrets:
      - name: container-readonly
pod2 deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod2
spec:
  replicas: 4
  selector:
    matchLabels:
      app: pod2
  template:
    metadata:
      labels:
        app: pod2
    spec:
      containers:
      - name: pod2
        image: us.gcr.io/gcp-project-8888888/pod2:latest
        env:
        - name: rabbitmquser
          valueFrom:
            secretKeyRef:
              name: rabbitmq-secrets
              key: rmquser
        volumeMounts:
        - mountPath: /data/files
          name: datafiles
      volumes:
      - name: datafiles
        persistentVolumeClaim:
          claimName: datafiles-pv-claim
      imagePullSecrets:
      - name: container-readonly
CoreDNS ConfigMap that forwards DNS to an external resolver when a name doesn't resolve internally. This is the only place I can think of that could be causing the issue, but as said, it works for one node.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . 8.8.8.8
        cache 30
        loop
        reload
        loadbalance
    }
Errored pod output. It is the same for both pod types, as the error occurs in library code common to both. As mentioned, it does not occur for all pods, so the issue likely doesn't lie with the code.
Error connecting to database (psycopg2.OperationalError) could not translate host name "postgresql" to address: Try again
Errored Pod1 description:
Name: xyz-94b9bc6b6-jcjtb
Namespace: default
Priority: 0
Node: ip-192-168-87-230.us-east-2.compute.internal/192.168.87.230
Start Time: Tue, 15 Oct 2019 19:43:11 +1030
Labels: app=pod1
pod-template-hash=94b9bc6b6
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.70.63
Controlled By: ReplicaSet/xyz-94b9bc6b6
Containers:
pod1:
Container ID: docker://f7dc735111bd94b7c7b698e69ad302ca19ece6c72b654057627626620b67d6de
Image: us.gcr.io/xyz/xyz:latest
Image ID: docker-pullable://us.gcr.io/xyz/xyz#sha256:20110cf126b35773ef3a8656512c023b1e8fe5c81dd88f19a64c5bfbde89f07e
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 16 Oct 2019 07:21:40 +1030
Finished: Wed, 16 Oct 2019 07:21:46 +1030
Ready: False
Restart Count: 139
Environment:
xyz: <set to the key 'xyz' in secret 'xyz-secrets'> Optional: false
Mounts:
/data/xyz from xyz (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-m72kz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
xyz:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: xyz-pv-claim
ReadOnly: false
default-token-m72kz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-m72kz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 2m22s (x3143 over 11h) kubelet, ip-192-168-87-230.us-east-2.compute.internal Back-off restarting failed container
Errored Pod 2 description:
Name: xyz-85f7968f7-k9xv2
Namespace: default
Priority: 0
Node: ip-192-168-87-230.us-east-2.compute.internal/192.168.87.230
Start Time: Mon, 14 Oct 2019 21:19:42 +1030
Labels: app=pod2
pod-template-hash=85f7968f7
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.84.69
Controlled By: ReplicaSet/pod2-85f7968f7
Containers:
pod2:
Container ID: docker://f7c7379f92f57ea7d381ae189b964527e02218dc64337177d6d7cd6b70990143
Image: us.gcr.io/xyz-217300/xyz:latest
Image ID: docker-pullable://us.gcr.io/xyz-217300/xyz#sha256:b9cecdbc90c5c5f7ff6170ee1eccac83163ac670d9df5febd573c2d84a4d628d
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 16 Oct 2019 07:23:35 +1030
Finished: Wed, 16 Oct 2019 07:23:41 +1030
Ready: False
Restart Count: 398
Environment:
xyz: <set to the key 'xyz' in secret 'xyz-secrets'> Optional: false
Mounts:
/data/xyz from xyz (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-m72kz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
xyz:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: xyz-pv-claim
ReadOnly: false
default-token-m72kz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-m72kz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m28s (x9208 over 34h) kubelet, ip-192-168-87-230.us-east-2.compute.internal Back-off restarting failed container
At the suggestion of a k8s community member, I applied the following change to my CoreDNS configuration to bring it in line with best practice:
the line proxy . 8.8.8.8 was changed to forward . /etc/resolv.conf 8.8.8.8
I then deleted the pods, and after they were recreated by k8s, the issue did not appear again.
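The resulting Corefile block would look roughly like this (a sketch; only the proxy line changes relative to the ConfigMap shown in the question):

```yaml
Corefile: |
  .:53 {
      errors
      health
      kubernetes cluster.local in-addr.arpa ip6.arpa {
         pods insecure
         upstream
         fallthrough in-addr.arpa ip6.arpa
      }
      prometheus :9153
      # was: proxy . 8.8.8.8
      forward . /etc/resolv.conf 8.8.8.8
      cache 30
      loop
      reload
      loadbalance
  }
```

The forward plugin replaced proxy in newer CoreDNS releases; pointing it at /etc/resolv.conf first keeps the node's own resolvers in the path, with 8.8.8.8 as a fallback.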
EDIT:
Turns out, that was not the issue at all as shortly afterwards the issue re-occurred and persisted. In the end, it was this: https://github.com/aws/amazon-vpc-cni-k8s/issues/641
Rolled back to 1.5.3 as recommended by Amazon, restarted the cluster, and the issue was resolved.

GKE with gcloud sql postgres: the sidecar proxy setup does not work

I am trying to set up a Node.js app on GKE with a Cloud SQL Postgres database accessed through a sidecar proxy. I am following along with the docs but can't get it working. The proxy container does not seem to start (the app container does). I have no idea why the proxy container cannot start, and also no idea how to debug this (e.g., how do I get an error message?).
mysecret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: [base64_username]
  password: [base64_password]
Output of kubectl get secrets:
NAME TYPE DATA AGE
default-token-tbgsv kubernetes.io/service-account-token 3 5d
mysecret Opaque 2 7h
app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: gcr.io/myproject/firstapp:v2
        ports:
        - containerPort: 8080
        env:
        - name: POSTGRES_DB_HOST
          value: 127.0.0.1:5432
        - name: POSTGRES_DB_USER
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: POSTGRES_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=myproject:europe-west4:databasename=tcp:5432",
                  "-credential_file=/secrets/cloudsql/mysecret.json"]
        securityContext:
          runAsUser: 2
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: mysecret
output of kubectl create -f ./kubernetes/app-deployment.json:
deployment.apps/myapp created
output of kubectl get deployments:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
myapp 1 1 1 0 5s
output of kubectl get pods:
NAME READY STATUS RESTARTS AGE
myapp-5bc965f688-5rxwp 1/2 CrashLoopBackOff 1 10s
output of kubectl describe pod/myapp-5bc955f688-5rxwp -n default:
Name: myapp-5bc955f688-5rxwp
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-standard-cluster-1-default-pool-1ec52705-186n/10.164.0.4
Start Time: Sat, 15 Dec 2018 21:46:03 +0100
Labels: app=myapp
pod-template-hash=1675219244
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container app; cpu request for container cloudsql-proxy
Status: Running
IP: 10.44.1.9
Controlled By: ReplicaSet/myapp-5bc965f688
Containers:
app:
Container ID: docker://d3ba7ff9c581534a4d55a5baef2d020413643e0c2361555eac6beba91b38b120
Image: gcr.io/myproject/firstapp:v2
Image ID: docker-pullable://gcr.io/myproject/firstapp#sha256:80168b43e3d0cce6d3beda6c3d1c679cdc42e88b0b918e225e7679252a59a73b
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 15 Dec 2018 21:46:04 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment:
POSTGRES_DB_HOST: 127.0.0.1:5432
POSTGRES_DB_USER: <set to the key 'username' in secret 'mysecret'> Optional: false
POSTGRES_DB_PASSWORD: <set to the key 'password' in secret 'mysecret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
cloudsql-proxy:
Container ID: docker://96e2ed0de8fca21ecd51462993b7083bec2a31f6000bc2136c85842daf17435d
Image: gcr.io/cloudsql-docker/gce-proxy:1.11
Image ID: docker-pullable://gcr.io/cloudsql-docker/gce-proxy#sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
Port: <none>
Host Port: <none>
Command:
/cloud_sql_proxy
-instances=myproject:europe-west4:databasename=tcp:5432
-credential_file=/secrets/cloudsql/mysecret.json
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sat, 15 Dec 2018 22:43:37 +0100
Finished: Sat, 15 Dec 2018 22:43:37 +0100
Ready: False
Restart Count: 16
Requests:
cpu: 100m
Environment: <none>
Mounts:
/secrets/cloudsql from cloudsql-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: mysecret
Optional: false
default-token-tbgsv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tbgsv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 59m default-scheduler Successfully assigned default/myapp-5bc955f688-5rxwp to gke-standard-cluster-1-default-pool-1ec52705-186n
Normal Pulled 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Container image "gcr.io/myproject/firstapp:v2" already present on machine
Normal Created 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Created container
Normal Started 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Started container
Normal Started 59m (x4 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Started container
Normal Pulled 58m (x5 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
Normal Created 58m (x5 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Created container
Warning BackOff 4m46s (x252 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Back-off restarting failed container
EDIT: something seems wrong with my secret, since when I do kubectl logs 5bc955f688-5rxwp cloudsql-proxy I get:
2018/12/16 22:26:28 invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory
I created the secret with:
kubectl create -f ./kubernetes/mysecret.yaml
I presume the secret is turned into JSON... When I change mysecret.json to mysecret.yaml in app-deployment.yaml, I still get a similar error...
I was missing the correct key (credentials.json). It needs to be a key you generate from a service account; then you turn it into a secret. See also this issue.
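A sketch of how that secret is typically created from a downloaded service-account key (the service-account name, secret name, and file names here are assumptions, not taken from the question):

```shell
# Generate a JSON key for a service account that has the Cloud SQL Client role
gcloud iam service-accounts keys create credentials.json \
  --iam-account=my-sa@myproject.iam.gserviceaccount.com

# Store it in a secret under the file name the proxy will look for
kubectl create secret generic cloudsql-instance-credentials \
  --from-file=credentials.json=credentials.json
```

The deployment then mounts that secret and points -credential_file at /secrets/cloudsql/credentials.json, rather than at the username/password Secret from mysecret.yaml, which never contained a JSON key file.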

Istio allowing all outbound traffic

So, putting everything in detail here for better clarification. My service consists of the following resources in a dedicated namespace (not using a ServiceEntry):
Deployment (1 deployment)
Configmaps (1 configmap)
Service
VirtualService
GW
Istio is enabled in the namespace, and when I create/run the deployment it creates 2 pods, as it should. Now, as stated in the subject, I want to allow all outgoing traffic for the deployment, because my services need to connect with 2 service-discovery servers:
vault running on port 8200
spring config server running on http
and also download dependencies and communicate with other services (which are not part of the VPC/k8s)
Using the following deployment file does not open outgoing connections. The only thing that works is a simple HTTPS request on port 443: when I run curl https://google.com it succeeds, but there is no response from curl http://google.com. The logs also show that the connection with Vault is not being established.
I have used almost all combinations in the deployment, but none of them seems to work. Am I missing something, or am I doing this the wrong way? Would really appreciate contributions on this :)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-application-service
  name: my-application-service-deployment
  namespace: temp-nampesapce
  annotations:
    traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: my-application-service-env-variables
        image: image.from.dockerhub:latest
        name: my-application-service-pod
        ports:
        - containerPort: 8080
          name: myappsvc
        resources:
          limits:
            cpu: 700m
            memory: 1.8Gi
          requests:
            cpu: 500m
            memory: 1.7Gi
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-application-service-ingress
  namespace: temp-namespace
spec:
  hosts:
  - my-application.mydomain.com
  gateways:
  - http-gateway
  http:
  - route:
    - destination:
        host: my-application-service
        port:
          number: 80
kind: Service
apiVersion: v1
metadata:
  name: my-application-service
  namespace: temp-namespace
spec:
  selector:
    app: api-my-application-service-deployment
  ports:
  - port: 80
    targetPort: myappsvc
    protocol: TCP
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
  namespace: temp-namespace
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.mydomain.com"
Namespace with istio enabled:
Name: temp-namespace
Labels: istio-injection=enabled
Annotations: <none>
Status: Active
No resource quota.
No resource limits.
kubectl describe pod output showing that Istio and the sidecar are working:
Name: my-application-service-deployment-fb897c6d6-9ztnx
Namespace: temp-namepsace
Node: ip-172-31-231-93.eu-west-1.compute.internal/172.31.231.93
Start Time: Sun, 21 Oct 2018 14:40:26 +0500
Labels: app=my-application-service-deployment
pod-template-hash=964537282
Annotations: sidecar.istio.io/status={"version":"2e0c897425ef3bd2729ec5f9aead7c0566c10ab326454e8e9e2b451404aee9a5","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status: Running
IP: 100.115.0.4
Controlled By: ReplicaSet/my-application-service-deployment-fb897c6d6
Init Containers:
istio-init:
Container ID: docker://a47003a092ec7d3dc3b1d155bca0ec53f00e545ad1b70e1809ad812e6f9aad47
Image: docker.io/istio/proxy_init:1.0.2
Image ID: docker-pullable://istio/proxy_init#sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
Port: <none>
Host Port: <none>
Args:
-p
15001
-u
1337
-m
REDIRECT
-i
*
-x
-b
8080,
-d
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 21 Oct 2018 14:40:26 +0500
Finished: Sun, 21 Oct 2018 14:40:26 +0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts: <none>
Containers:
my-application-service-pod:
Container ID: docker://1a30a837f359d8790fb72e6b8fda040e121fe5f7b1f5ca47a5f3732810fd4f39
Image: image.from.dockerhub:latest
Image ID: docker-pullable://848569320300.dkr.ecr.eu-west-1.amazonaws.com/k8_api_env#sha256:98abee8d955cb981636fe7a81843312e6d364a6eabd0c3dd6b3ff66373a61359
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 21 Oct 2018 14:40:28 +0500
Ready: True
Restart Count: 0
Limits:
cpu: 700m
memory: 1932735283200m
Requests:
cpu: 500m
memory: 1825361100800m
Environment Variables from:
my-application-service-env-variables ConfigMap Optional: false
Environment:
vault.token: <set to the key 'vault_token' in secret 'vault.token'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rc8kc (ro)
istio-proxy:
Container ID: docker://3ae851e8ded8496893e5b70fc4f2671155af41c43e64814779935ea6354a8225
Image: docker.io/istio/proxyv2:1.0.2
Image ID: docker-pullable://istio/proxyv2#sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
Port: <none>
Host Port: <none>
Args:
proxy
sidecar
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
my-application-service-deployment
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15007
--discoveryRefreshDelay
1s
--zipkinAddress
zipkin.istio-system:9411
--connectTimeout
10s
--statsdUdpAddress
istio-statsd-prom-bridge.istio-system:9125
--proxyAdminPort
15000
--controlPlaneAuthPolicy
NONE
State: Running
Started: Sun, 21 Oct 2018 14:40:28 +0500
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
POD_NAME: my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
POD_NAMESPACE: temp-namepsace (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from istio-certs (ro)
/etc/istio/proxy from istio-envoy (rw)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-rc8kc:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rc8kc
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.default
Optional: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "istio-certs"
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "default-token-rc8kc"
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "istio-envoy"
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "docker.io/istio/proxy_init:1.0.2" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Scheduled 3m default-scheduler Successfully assigned my-application-service-deployment-fb897c6d6-9ztnx to ip-172-42-231-93.eu-west-1.compute.internal
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "image.from.dockerhub:latest" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "docker.io/istio/proxyv2:1.0.2" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
The issue was that I had added the sidecar annotation to the Deployment's own metadata instead of to the pod template; adding it to the pod template resolved the issue. Got help from here:
https://github.com/istio/istio/issues/9304
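In other words, traffic.sidecar.istio.io/* annotations must sit under spec.template.metadata so they land on the pods that the sidecar injector sees, not on the Deployment object. A sketch using the names from the deployment above:

```yaml
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
      annotations:
        # The sidecar injector reads this from the *pod*, not the Deployment
        traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
    spec:
      containers:
      - name: my-application-service-pod
        image: image.from.dockerhub:latest
```

Anything placed only in metadata.annotations at the top of the Deployment never propagates to the pods, so Envoy keeps intercepting all outbound traffic.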