I'm facing an issue on my Ubuntu 16.04 desktop, using minikube v0.35.0 and kubectl 1.13.4 (both client and server).
When I run a ConfigMap example I get the error kubelet, minikube MountVolume.SetUp failed every time. I tried to debug it but got nowhere on my end. I've been at this for the last 2 days and have failed to make it work.
$ kubectl apply -f ./kube-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: kuard-config
spec:
  containers:
    - name: test-container
      image: gcr.io/kuar-demo/kuard-amd64:1
      imagePullPolicy: Always
      command:
        - "/kuard"
        - "$(EXTRA_PARAM)"
      env:
        - name: ANOTHER_PARAM
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: another-param
        - name: EXTRA_PARAM
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: extra-param
      volumeMounts:
        - name: config-volume
          mountPath: "/config"
  volumes:
    - name: config-volume
      configMap:
        name: my-config
  restartPolicy: Never
I get the following error when I run this command:
$ kubectl describe pods/kuard-config
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Thu, 20 Jun 2019 10:44:37 +0600
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"ann
otations":{},"name":"kuard-config","namespace":"default"},"spec":{"containers":[{"command"...
Status: Pending
IP:
Containers:
test-container:
Container ID:
Image: gcr.io/kuar-demo/kuard-amd64:1
Image ID:
Port: <none>
Host Port: <none>
Command:
/kuard
$(EXTRA_PARAM)
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
ANOTHER_PARAM: <set to the key 'another-param' of config map 'my-config'> Optional: false
EXTRA_PARAM: <set to the key 'extra-param' of config map 'my-config'> Optional: false
Mounts:
/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-q42jl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: my-config
Optional: false
default-token-q42jl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-q42jl
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12s default-scheduler Successfully assigned default/kuard-config to minikube
Warning FailedMount 4s (x5 over 11s) kubelet, minikube MountVolume.SetUp failed for volume "config-volume" : configmap "my-config" not found
I'd appreciate help figuring out why the `kubelet, minikube MountVolume.SetUp failed` error occurs.
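The event message says configmap "my-config" not found, so the ConfigMap the pod references has to exist in the default namespace before the volume can be mounted. For context, a minimal way to create it with the keys the manifest expects (the values here are placeholders) would be:
kubectl create configmap my-config \
  --from-literal=another-param=placeholder-value \
  --from-literal=extra-param=placeholder-value
kubectl get configmap my-config -o yaml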
Related
I followed this guide: https://www.linode.com/docs/guides/how-to-setup-a-private-docker-registry-with-lke-and-object-storage/
The registry itself works: I can push and pull from my local machine with the username/password. The docker-registry secret regcred was created and verified again and again.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  namespace: apps
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: privatrepo.ddns.net/hello
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: regcred
Any help?
Here is the kubectl describe output for the pod:
Name:
Namespace: apps
Priority: 0
Node:
Start Time: Thu, 20 May 2021 14:13:27 +0200
Labels: app=hello
pod-template-hash=7d6674fdff
Annotations: cni.projectcalico.org/podIP: 10.2.0.88/32
Status: Pending
IP: 10.2.0.88
IPs:
IP: 10.2.0.88
Controlled By: ReplicaSet/
Containers:
hello:
Container ID:
Image: privatrepo.ddns.net/hello
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jlt9v (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-jlt9v:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jlt9v
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 24m (x680 over 179m) kubelet Back-off pulling image "privatrepo.ddns.net/hello"
Warning Failed 4m5s (x767 over 179m) kubelet Error: ImagePullBackOff
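Since image pull secrets are namespace-scoped, one thing worth verifying (a common cause of ImagePullBackOff with private registries) is that regcred exists in the apps namespace used by the deployment and that its docker config actually points at privatrepo.ddns.net. A quick check might look like:
kubectl get secret regcred -n apps
# decode the stored docker config and confirm the registry host and auth entry
kubectl get secret regcred -n apps -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d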
I have an fdb cluster created with https://github.com/FoundationDB/fdb-kubernetes-operator and am now trying to run a pod with https://github.com/FoundationDB/fdb-document-layer
The result is a CrashLoopBackOff of the pod.
The description of the pod:
Name: fdb-doc-layer-84c4b84595-9rv8c
Namespace: default
Priority: 0
Node: faraday/5.188.158.233
Start Time: Sat, 21 Nov 2020 03:10:06 +0300
Labels: app=fdb-doc-layer
pod-template-hash=84c4b84595
Annotations: cni.projectcalico.org/podIP: 10.1.80.235/32
cni.projectcalico.org/podIPs: 10.1.80.235/32
Status: Running
IP: 10.1.80.235
IPs:
IP: 10.1.80.235
Controlled By: ReplicaSet/fdb-doc-layer-84c4b84595
Containers:
fdb-doc-layer:
Container ID: containerd://86f599ef8bd0684023a093f0e725fde02ac60f3899681053857e411b7c8c4b3b
Image: foundationdb/fdb-document-layer-build:latest
Image ID: docker.io/foundationdb/fdb-document-layer-build#sha256:5d1e84c5954141ce67be3fa28a428f572c3d8bbff1541ec8588fe82da600cb97
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 21 Nov 2020 03:36:23 +0300
Finished: Sat, 21 Nov 2020 03:36:23 +0300
Ready: False
Restart Count: 10
Limits:
cpu: 200m
memory: 128Mi
Requests:
cpu: 200m
memory: 128Mi
Environment:
FDB_CLUSTER_FILE: /etc/foundationdb/fdb.cluster
Mounts:
/etc/foundationdb from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mf8pp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: sample-cluster-config
Optional: false
default-token-mf8pp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mf8pp
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 28m default-scheduler Successfully assigned default/fdb-doc-layer-84c4b84595-9rv8c to faraday
Normal Pulled 28m kubelet Successfully pulled image "foundationdb/fdb-document-layer-build:latest" in 1.421941982s
Normal Pulled 28m kubelet Successfully pulled image "foundationdb/fdb-document-layer-build:latest" in 1.223503462s
Normal Pulled 27m kubelet Successfully pulled image "foundationdb/fdb-document-layer-build:latest" in 1.253710381s
Normal Pulled 27m kubelet Successfully pulled image "foundationdb/fdb-document-layer-build:latest" in 1.672481437s
Normal Created 27m (x4 over 28m) kubelet Created container fdb-doc-layer
Normal Started 27m (x4 over 28m) kubelet Started container fdb-doc-layer
Normal Pulling 26m (x5 over 28m) kubelet Pulling image "foundationdb/fdb-document-layer-build:latest"
Normal Pulled 26m kubelet Successfully pulled image "foundationdb/fdb-document-layer-build:latest" in 1.270867366s
Warning BackOff 3m2s (x116 over 28m) kubelet Back-off restarting failed container
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fdb-doc-layer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fdb-doc-layer
  template:
    metadata:
      labels:
        app: fdb-doc-layer
    spec:
      containers:
        - name: fdb-doc-layer
          image: foundationdb/fdb-document-layer-build:latest
          env:
            - name: FDB_CLUSTER_FILE
              value: /etc/foundationdb/fdb.cluster
          volumeMounts:
            - name: config-volume
              mountPath: /etc/foundationdb
          resources:
            limits:
              memory: "128Mi"
              cpu: "200m"
          ports:
            - containerPort: 27017
      volumes:
        - name: config-volume
          configMap:
            name: sample-cluster-config
How can I make fdb-document-layer work with Kubernetes?
There are two issues:
1. You are using the wrong image: instead of foundationdb/fdb-document-layer-build:latest you should use foundationdb/fdb-document-layer:latest. The build image is only used to build the document layer.
2. The ConfigMap contains the fdb.cluster file under the key cluster-file, so you need to remap that key or adjust the environment variable.
The following config works:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fdb-doc-layer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fdb-doc-layer
  template:
    metadata:
      labels:
        app: fdb-doc-layer
    spec:
      containers:
        - name: fdb-doc-layer
          image: foundationdb/fdb-document-layer:latest
          env:
            - name: FDB_CLUSTER_FILE
              value: /etc/foundationdb/fdb.cluster
          volumeMounts:
            - name: config-volume
              mountPath: /etc/foundationdb
          resources:
            limits:
              memory: "128Mi"
              cpu: "200m"
          ports:
            - containerPort: 27017
      volumes:
        - name: config-volume
          configMap:
            name: sample-cluster-config
            items:
              - key: cluster-file
                path: fdb.cluster
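One way to confirm the remapping worked after applying the deployment (the label selector comes from the manifest above):
kubectl apply -f deployment.yaml
kubectl exec "$(kubectl get pod -l app=fdb-doc-layer -o jsonpath='{.items[0].metadata.name}')" -- cat /etc/foundationdb/fdb.cluster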
I am trying to deploy a StatefulSet of IPFS replicas on three Kubernetes worker nodes (based on this repo). The first three replicas work properly, but when it comes to the fourth one, it appears that the persistentVolumeClaims all point to the same shared physical storage, so the fourth replica cannot acquire the repo lock. What would be the standard way to deploy many IPFS replicas in Kubernetes?
The fourth node printed the following log:
08:44:19.785 DEBUG cmd/ipfs: config path is /data/ipfs main.go:257
08:44:19.785 INFO cmd/ipfs: IPFS_PATH /data/ipfs main.go:301
08:44:19.785 DEBUG cmd/ipfs: Command cannot run on daemon. Checking if daemon is locked main.go:434
08:44:19.785 DEBUG lock: Checking lock lock.go:32
08:44:19.785 DEBUG lock: Can't lock file: /data/ipfs/repo.lock.
reason: cannot acquire lock: Lock FcntlFlock of /data/ipfs/repo.lock failed: resource temporarily unavailable lock.go:44
08:44:19.785 DEBUG fsrepo: (true)<->Lock is held at /data/ipfs fsrepo.go:302
Error: ipfs daemon is running. please stop it to run this command
Use 'ipfs daemon --help' for information about this command
Here is the yaml file for the stateful set:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ipfs
  namespace: ipfs
spec:
  selector:
    matchLabels:
      app: ipfs
  serviceName: ipfs
  replicas: 6
  template:
    metadata:
      labels:
        app: ipfs
    spec:
      initContainers:
        - name: init-repo
          image: ipfs/go-ipfs:v0.4.11@sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          command: ['/bin/sh', '/etc/ipfs-config/init.sh']
          volumeMounts:
            - name: www
              mountPath: /data/ipfs
            - name: secrets
              mountPath: /etc/ipfs-secrets
            - name: config
              mountPath: /etc/ipfs-config
        - name: init-peers
          image: ipfs/go-ipfs:v0.4.11@sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054
          command: ['/bin/sh', '/etc/ipfs-config/peers-kubernetes-refresh.sh']
          volumeMounts:
            - name: www
              mountPath: /data/ipfs
            - name: config
              mountPath: /etc/ipfs-config
      containers:
        - name: ipfs
          image: ipfs/go-ipfs:v0.4.11@sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054
          env:
            - name: IPFS_LOGGING
              value: debug
          command:
            - ipfs
            - daemon
          ports:
            - containerPort: 4001
              name: swarm
            - containerPort: 5001
              name: api
            - containerPort: 8080
              name: readonly
          volumeMounts:
            - name: www
              mountPath: /data/ipfs
      volumes:
        - name: secrets
          secret:
            secretName: ipfs
        - name: config
          configMap:
            name: ipfs-config
        - name: www
          persistentVolumeClaim:
            claimName: ipfs-pvc
Here is the PersistentVolume definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ipfs-pv
  namespace: ipfs
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
And the persistent volume claim definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ipfs-pvc
  namespace: ipfs
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
The kubectl describe output of the failing pod is as follows:
Name: ipfs-3
Namespace: ipfs
Priority: 0
Node: swift-153/10.70.20.153
Start Time: Tue, 27 Oct 2020 14:38:11 -0400
Labels: app=ipfs
controller-revision-hash=ipfs-74bb88dbb6
statefulset.kubernetes.io/pod-name=ipfs-3
Annotations: <none>
Status: Running
IP: 10.244.3.43
IPs:
IP: 10.244.3.43
Controlled By: StatefulSet/ipfs
Containers:
ipfs:
Container ID: docker://81349e969be9ffcafeb4d65adf9d0b2de7311e46068e36dd4f227f169f6dfcab
Image: ipfs/go-ipfs:v0.4.11#sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054
Image ID: docker-pullable://ipfs/go-ipfs#sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054
Ports: 4001/TCP, 5001/TCP, 8080/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
ipfs
daemon
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 27 Oct 2020 14:39:51 -0400
Finished: Tue, 27 Oct 2020 14:39:51 -0400
Ready: False
Restart Count: 4
Environment:
IPFS_LOGGING: debug
Mounts:
/data/ipfs from www (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hb785 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
secrets:
Type: Secret (a volume populated by a Secret)
SecretName: ipfs
Optional: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: ipfs-config
Optional: false
www:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ipfs-pvc
ReadOnly: false
default-token-hb785:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hb785
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m24s default-scheduler Successfully assigned ipfs/ipfs-3 to swift-153
Normal Pulled 2m2s (x3 over 2m21s) kubelet Container image "ipfs/go-ipfs:v0.4.11#sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054" already present on machine
Normal Created 2m (x3 over 2m19s) kubelet Created container ipfs
Normal Started 2m (x3 over 2m19s) kubelet Started container ipfs
Warning DNSConfigForming 103s (x10 over 2m24s) kubelet Search Line limits were exceeded, some search paths have been omitted, the applied search line is: ipfs.svc.cluster.local svc.cluster.local cluster.local search syslab.sandbox cs.toronto.edu
Warning BackOff 103s (x6 over 2m15s) kubelet Back-off restarting failed container
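For reference, the usual StatefulSet pattern for giving every replica its own storage is a volumeClaimTemplates section instead of a single shared PVC, so each pod gets a dedicated claim (www-ipfs-0, www-ipfs-1, ...). A minimal sketch reusing the size and access mode from above (it assumes either a storage class that provisions volumes dynamically or one pre-created PV per replica):
# inside the StatefulSet spec, as a sibling of template, replacing the shared ipfs-pvc volume
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        storageClassName: manual
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 200Mi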
I have my image hosted on GCR.
I want to create a Kubernetes cluster on my local system (Mac).
Steps I followed:
1. Create an image pull secret (a sketch of the command is below).
2. Create a generic secret to communicate with GCP: kubectl create secret generic gcp-key --from-file=key.json
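A sketch of the pull-secret command for GCR, assuming the same key.json service-account key; the secret name matches the imagePullSecrets entry in the manifest below and the email is a placeholder:
kubectl create secret docker-registry imagepullsecretkey \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=placeholder@example.com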
I have this deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sv-premier
spec:
  selector:
    matchLabels:
      app: sv-premier
  template:
    metadata:
      labels:
        app: sv-premier
    spec:
      volumes:
        - name: google-cloud-key
          secret:
            secretName: gcp-key
      containers:
        - name: sv-premier
          image: gcr.io/proto/premiercore1:latest
          imagePullPolicy: Always
          command: ["echo", "Done deploying sv-premier"]
          volumeMounts:
            - name: google-cloud-key
              mountPath: /var/secrets/google
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: imagepullsecretkey
When I execute the command kubectl apply -f deployment.yaml, I get a CrashLoopBackOff error.
Output of kubectl describe pods podname:
=======================
Name: sv-premier-6b77ddd747-cvdr5
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Tue, 04 Feb 2020 14:18:47 +0530
Labels: app=sv-premier
pod-template-hash=6b77ddd747
Annotations:
Status: Running
IP: 10.1.0.43
IPs:
Controlled By: ReplicaSet/sv-premier-6b77ddd747
Containers:
sv-premierleague:
Container ID: docker://141126d732409427fe39b405865f88856ac4e1d8586112797fc5bf4fdfbe317c
Image: gcr.io/proto/premiercore1:latest
Image ID: docker-pullable://gcr.io/proto/premiercore1#sha256:b3800ccca3f30725d5c9235dd349548f0fcfe309f51883d8af16397aef2c3953
Port: 8080/TCP
Host Port: 0/TCP
Command:
echo
Done deploying sv-premier
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 04 Feb 2020 15:00:51 +0530
Finished: Tue, 04 Feb 2020 15:00:51 +0530
Ready: False
Restart Count: 13
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /var/secrets/google/key.json
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s4jgd (ro)
/var/secrets/google from google-cloud-key (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
google-cloud-key:
Type: Secret (a volume populated by a Secret)
SecretName: gcp-key
Optional: false
default-token-s4jgd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s4jgd
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason     Age                  From                     Message
----     ------     ----                 ----                     -------
Normal   Scheduled  46m                  default-scheduler        Successfully assigned default/sv-premier-6b77ddd747-cvdr5 to docker-desktop
Normal   Pulled     45m (x4 over 46m)    kubelet, docker-desktop  Successfully pulled image "gcr.io/proto/premiercore1:latest"
Normal   Created    45m (x4 over 46m)    kubelet, docker-desktop  Created container sv-premier
Normal   Started    45m (x4 over 46m)    kubelet, docker-desktop  Started container sv-premier
Normal   Pulling    45m (x5 over 46m)    kubelet, docker-desktop  Pulling image "gcr.io/proto/premiercore1:latest"
Warning  BackOff    92s (x207 over 46m)  kubelet, docker-desktop  Back-off restarting failed container
=======================
And the output of kubectl logs podname is: Done Deploying sv-premier
I am confused about why my container keeps exiting and is not able to stay running. Please guide me.
Your container runs echo, prints its message, and exits immediately; once the process exits, the kubelet restarts it and the pod ends up in CrashLoopBackOff. Update your deployment.yaml with a long-running command instead, for example:
command: ["/bin/sh"]
args: ["-c", "while true; do echo Done Deploying sv-premier; sleep 3600;done"]
This keeps the container running after deployment: it logs the message and then sleeps for an hour, in a loop.
Read more about the pod lifecycle and container states in the Kubernetes documentation.
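In context, the container section of the Deployment would look roughly like this (everything else unchanged from the manifest above):
containers:
  - name: sv-premier
    image: gcr.io/proto/premiercore1:latest
    imagePullPolicy: Always
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo Done Deploying sv-premier; sleep 3600; done"]
    ports:
      - containerPort: 8080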
I am trying to set up a Node.js app on GKE with a Cloud SQL Postgres database accessed through a sidecar proxy. I am following along with the docs but cannot get it working. The proxy container does not seem to be able to start (the app container does start). I have no idea why the proxy container cannot start, and also no idea how to debug this (e.g. how do I get an error message?).
mysecret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: [base64_username]
  password: [base64_password]
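The values under data must be base64-encoded; assuming plain-text credentials, they can be generated with something like (values here are placeholders):
echo -n 'myuser' | base64
echo -n 'mypassword' | base64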
Output of kubectl get secrets:
NAME TYPE DATA AGE
default-token-tbgsv kubernetes.io/service-account-token 3 5d
mysecret Opaque 2 7h
app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: gcr.io/myproject/firstapp:v2
          ports:
            - containerPort: 8080
          env:
            - name: POSTGRES_DB_HOST
              value: 127.0.0.1:5432
            - name: POSTGRES_DB_USER
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: POSTGRES_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=myproject:europe-west4:databasename=tcp:5432",
                    "-credential_file=/secrets/cloudsql/mysecret.json"]
          securityContext:
            runAsUser: 2
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: mysecret
output of kubectl create -f ./kubernetes/app-deployment.yaml:
deployment.apps/myapp created
output of kubectl get deployments:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
myapp 1 1 1 0 5s
output of kubectl get pods:
NAME READY STATUS RESTARTS AGE
myapp-5bc965f688-5rxwp 1/2 CrashLoopBackOff 1 10s
output of kubectl describe pod/myapp-5bc955f688-5rxwp -n default:
Name: myapp-5bc955f688-5rxwp
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-standard-cluster-1-default-pool-1ec52705-186n/10.164.0.4
Start Time: Sat, 15 Dec 2018 21:46:03 +0100
Labels: app=myapp
pod-template-hash=1675219244
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container app; cpu request for container cloudsql-proxy
Status: Running
IP: 10.44.1.9
Controlled By: ReplicaSet/myapp-5bc965f688
Containers:
app:
Container ID: docker://d3ba7ff9c581534a4d55a5baef2d020413643e0c2361555eac6beba91b38b120
Image: gcr.io/myproject/firstapp:v2
Image ID: docker-pullable://gcr.io/myproject/firstapp#sha256:80168b43e3d0cce6d3beda6c3d1c679cdc42e88b0b918e225e7679252a59a73b
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 15 Dec 2018 21:46:04 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment:
POSTGRES_DB_HOST: 127.0.0.1:5432
POSTGRES_DB_USER: <set to the key 'username' in secret 'mysecret'> Optional: false
POSTGRES_DB_PASSWORD: <set to the key 'password' in secret 'mysecret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
cloudsql-proxy:
Container ID: docker://96e2ed0de8fca21ecd51462993b7083bec2a31f6000bc2136c85842daf17435d
Image: gcr.io/cloudsql-docker/gce-proxy:1.11
Image ID: docker-pullable://gcr.io/cloudsql-docker/gce-proxy#sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
Port: <none>
Host Port: <none>
Command:
/cloud_sql_proxy
-instances=myproject:europe-west4:databasename=tcp:5432
-credential_file=/secrets/cloudsql/mysecret.json
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sat, 15 Dec 2018 22:43:37 +0100
Finished: Sat, 15 Dec 2018 22:43:37 +0100
Ready: False
Restart Count: 16
Requests:
cpu: 100m
Environment: <none>
Mounts:
/secrets/cloudsql from cloudsql-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: mysecret
Optional: false
default-token-tbgsv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tbgsv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 59m default-scheduler Successfully assigned default/myapp-5bc955f688-5rxwp to gke-standard-cluster-1-default-pool-1ec52705-186n
Normal Pulled 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Container image "gcr.io/myproject/firstapp:v2" already present on machine
Normal Created 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Created container
Normal Started 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Started container
Normal Started 59m (x4 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Started container
Normal Pulled 58m (x5 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
Normal Created 58m (x5 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Created container
Warning BackOff 4m46s (x252 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Back-off restarting failed container
EDIT: something seems wrong with my secret since when I do kubectl logs 5bc955f688-5rxwp cloudsql-proxy I get:
2018/12/16 22:26:28 invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory
I created the secret by doing:
kubectl create -f ./kubernetes/mysecret.yaml
I presume the secret is turned into JSON... When I change mysecret.json to mysecret.yaml in app-deployment.yaml I still get a similar error...
I was missing the correct key (credentials.json). It needs to be a key you generate from a service account; then you turn it into a secret. Since a secret volume projects each data key as a file named after that key, the filename passed to -credential_file has to match the key stored in the secret. See also this issue.
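A minimal sketch of that flow (service-account name and project are placeholders; the data key credentials.json becomes the file name under the mount path, so the proxy's -credential_file flag must match it):
# generate a key file for a service account with Cloud SQL Client access
gcloud iam service-accounts keys create credentials.json \
  --iam-account=sql-proxy@myproject.iam.gserviceaccount.com
# store it in a secret; the key name is the file name inside the pod
kubectl create secret generic cloudsql-instance-credentials \
  --from-file=credentials.json=./credentials.json
# then point the sidecar at the matching path:
#   -credential_file=/secrets/cloudsql/credentials.json
# and mount secretName: cloudsql-instance-credentials at /secrets/cloudsql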