I have an FDB cluster deployed using https://github.com/FoundationDB/fdb-kubernetes-operator and am now trying to bring up a pod with https://github.com/FoundationDB/fdb-document-layer
The result is a CrashLoopBackOff of the pod.
The description of the pod:
Name: fdb-doc-layer-84c4b84595-9rv8c
Namespace: default
Priority: 0
Node: faraday/5.188.158.233
Start Time: Sat, 21 Nov 2020 03:10:06 +0300
Labels: app=fdb-doc-layer
pod-template-hash=84c4b84595
Annotations: cni.projectcalico.org/podIP: 10.1.80.235/32
cni.projectcalico.org/podIPs: 10.1.80.235/32
Status: Running
IP: 10.1.80.235
IPs:
IP: 10.1.80.235
Controlled By: ReplicaSet/fdb-doc-layer-84c4b84595
Containers:
fdb-doc-layer:
Container ID: containerd://86f599ef8bd0684023a093f0e725fde02ac60f3899681053857e411b7c8c4b3b
Image: foundationdb/fdb-document-layer-build:latest
Image ID: docker.io/foundationdb/fdb-document-layer-build@sha256:5d1e84c5954141ce67be3fa28a428f572c3d8bbff1541ec8588fe82da600cb97
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 21 Nov 2020 03:36:23 +0300
Finished: Sat, 21 Nov 2020 03:36:23 +0300
Ready: False
Restart Count: 10
Limits:
cpu: 200m
memory: 128Mi
Requests:
cpu: 200m
memory: 128Mi
Environment:
FDB_CLUSTER_FILE: /etc/foundationdb/fdb.cluster
Mounts:
/etc/foundationdb from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mf8pp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: sample-cluster-config
Optional: false
default-token-mf8pp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mf8pp
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 28m default-scheduler Successfully assigned default/fdb-doc-layer-84c4b84595-9rv8c to faraday
Normal Pulled 28m kubelet Successfully pulled image "foundationdb/fdb-document-layer-build:latest" in 1.421941982s
Normal Pulled 28m kubelet Successfully pulled image "foundationdb/fdb-document-layer-build:latest" in 1.223503462s
Normal Pulled 27m kubelet Successfully pulled image "foundationdb/fdb-document-layer-build:latest" in 1.253710381s
Normal Pulled 27m kubelet Successfully pulled image "foundationdb/fdb-document-layer-build:latest" in 1.672481437s
Normal Created 27m (x4 over 28m) kubelet Created container fdb-doc-layer
Normal Started 27m (x4 over 28m) kubelet Started container fdb-doc-layer
Normal Pulling 26m (x5 over 28m) kubelet Pulling image "foundationdb/fdb-document-layer-build:latest"
Normal Pulled 26m kubelet Successfully pulled image "foundationdb/fdb-document-layer-build:latest" in 1.270867366s
Warning BackOff 3m2s (x116 over 28m) kubelet Back-off restarting failed container
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fdb-doc-layer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fdb-doc-layer
  template:
    metadata:
      labels:
        app: fdb-doc-layer
    spec:
      containers:
        - name: fdb-doc-layer
          image: foundationdb/fdb-document-layer-build:latest
          env:
            - name: FDB_CLUSTER_FILE
              value: /etc/foundationdb/fdb.cluster
          volumeMounts:
            - name: config-volume
              mountPath: /etc/foundationdb
          resources:
            limits:
              memory: "128Mi"
              cpu: "200m"
          ports:
            - containerPort: 27017
      volumes:
        - name: config-volume
          configMap:
            name: sample-cluster-config
How do I make fdb-document-layer work with Kubernetes?
There are two issues:
You are using the wrong image: instead of foundationdb/fdb-document-layer-build:latest you should use foundationdb/fdb-document-layer:latest. The first image is only used to build the Document Layer.
The ConfigMap contains the fdb.cluster file under the key cluster-file, so you need to remap this key or adjust the env variable.
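You can verify which key the ConfigMap actually uses before wiring it up (sample-cluster-config is assumed to be the name generated for your cluster):
kubectl describe configmap sample-cluster-config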
The following config works:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fdb-doc-layer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fdb-doc-layer
  template:
    metadata:
      labels:
        app: fdb-doc-layer
    spec:
      containers:
        - name: fdb-doc-layer
          image: foundationdb/fdb-document-layer:latest
          env:
            - name: FDB_CLUSTER_FILE
              value: /etc/foundationdb/fdb.cluster
          volumeMounts:
            - name: config-volume
              mountPath: /etc/foundationdb
          resources:
            limits:
              memory: "128Mi"
              cpu: "200m"
          ports:
            - containerPort: 27017
      volumes:
        - name: config-volume
          configMap:
            name: sample-cluster-config
            items:
              - key: cluster-file
                path: fdb.cluster
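After applying it, a quick way to confirm the pod stays up and the layer connects (names taken from the manifest above):
kubectl apply -f deployment.yaml
kubectl get pods -l app=fdb-doc-layer
kubectl logs deployment/fdb-doc-layer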
Related
I am trying to get a volume mounted as a non-root user in one of my containers. I'm trying an approach from this SO post using an initContainer to set the correct user, but when I try to start the configuration I get an "unbound immediate PersistentVolumeClaims" error. I suspect it's because the volume is mounted in both my initContainer and container, but I'm not sure why that would be the issue: I can see the initContainer taking the claim, but I would have thought that when it exited it would release it, letting the normal container take the claim. Any ideas or alternatives for getting the directory mounted as a non-root user? I did try using securityContext/fsGroup, but that seemed to have no effect. The /var/rdf4j directory below is the one that is being mounted as root.
Configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: triplestore-data-storage-dir
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: /run/desktop/mnt/host/d/workdir/k8s-data/triplestore
    type: DirectoryOrCreate
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: triplestore-data-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
  volumeName: "triplestore-data-storage-dir"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: triplestore
  labels:
    app: demo
    role: triplestore
spec:
  selector:
    matchLabels:
      app: demo
      role: triplestore
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
        role: triplestore
    spec:
      containers:
        - name: triplestore
          image: eclipse/rdf4j-workbench:amd64-3.5.0
          imagePullPolicy: Always
          ports:
            - name: http
              protocol: TCP
              containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: storage
              mountPath: /var/rdf4j
      initContainers:
        - name: take-data-dir-ownership
          image: eclipse/rdf4j-workbench:amd64-3.5.0
          command:
            - chown
            - -R
            - 100:65533
            - /var/rdf4j
          volumeMounts:
            - name: storage
              mountPath: /var/rdf4j
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: "triplestore-data-storage"
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
triplestore-data-storage Bound triplestore-data-storage-dir 10Gi RWX local-storage 13s
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
triplestore-data-storage-dir 10Gi RWX Delete Bound default/triplestore-data-storage local-storage 17s
kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
21s Warning FailedScheduling pod/triplestore-6d6876f49-2s84c 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
19s Normal Scheduled pod/triplestore-6d6876f49-2s84c Successfully assigned default/triplestore-6d6876f49-2s84c to docker-desktop
3s Normal Pulled pod/triplestore-6d6876f49-2s84c Container image "eclipse/rdf4j-workbench:amd64-3.5.0" already present on machine
3s Normal Created pod/triplestore-6d6876f49-2s84c Created container take-data-dir-ownership
3s Normal Started pod/triplestore-6d6876f49-2s84c Started container take-data-dir-ownership
2s Warning BackOff pod/triplestore-6d6876f49-2s84c Back-off restarting failed container
46m Normal Pulled pod/triplestore-6d6876f49-9n5kt Container image "eclipse/rdf4j-workbench:amd64-3.5.0" already present on machine
79s Warning BackOff pod/triplestore-6d6876f49-9n5kt Back-off restarting failed container
21s Normal SuccessfulCreate replicaset/triplestore-6d6876f49 Created pod: triplestore-6d6876f49-2s84c
21s Normal ScalingReplicaSet deployment/triplestore Scaled up replica set triplestore-6d6876f49 to 1
kubectl describe pods/triplestore-6d6876f49-tw8r8
Name: triplestore-6d6876f49-tw8r8
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Mon, 17 Jan 2022 10:17:20 -0500
Labels: app=demo
pod-template-hash=6d6876f49
role=triplestore
Annotations: <none>
Status: Pending
IP: 10.1.2.133
IPs:
IP: 10.1.2.133
Controlled By: ReplicaSet/triplestore-6d6876f49
Init Containers:
take-data-dir-ownership:
Container ID: docker://89e7b1e3ae76c30180ee5083624e1bf5f30b55fd95bf1c24422fabe41ae74408
Image: eclipse/rdf4j-workbench:amd64-3.5.0
Image ID: docker-pullable://registry.com/publicrepos/docker_cache/eclipse/rdf4j-workbench@sha256:14621ad610b0d0269dedd9939ea535348cc6c147f9bd47ba2039488b456118ed
Port: <none>
Host Port: <none>
Command:
chown
-R
100:65533
/var/rdf4j
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 17 Jan 2022 10:22:59 -0500
Finished: Mon, 17 Jan 2022 10:22:59 -0500
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/rdf4j from storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8wdv (ro)
Containers:
triplestore:
Container ID:
Image: eclipse/rdf4j-workbench:amd64-3.5.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 200Mi
Environment: <none>
Mounts:
/var/rdf4j from storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8wdv (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: triplestore-data-storage
ReadOnly: false
kube-api-access-s8wdv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6m24s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 6m13s default-scheduler Successfully assigned default/triplestore-6d6876f49-tw8r8 to docker-desktop
Normal Pulled 4m42s (x5 over 6m12s) kubelet Container image "eclipse/rdf4j-workbench:amd64-3.5.0" already present on machine
Normal Created 4m42s (x5 over 6m12s) kubelet Created container take-data-dir-ownership
Normal Started 4m42s (x5 over 6m12s) kubelet Started container take-data-dir-ownership
Warning BackOff 70s (x26 over 6m10s) kubelet Back-off restarting failed container
Solution
As it turns out, the problem was that the initContainer wasn't running as root; it was running as the default user of the container, and so didn't have the permissions to run the chown command. In the linked SO post this was the first comment on the answer, with the response being that the initContainer ran as root - this has apparently changed in newer versions of Kubernetes. There is a solution though: you can set the securityContext on the initContainer to run it as root, giving it permission to run the chown command, and that successfully allows the volume to be mounted as a non-root user. Here's the final configuration of the initContainer.
initContainers:
  - name: take-data-dir-ownership
    image: eclipse/rdf4j-workbench:amd64-3.5.0
    securityContext:
      runAsUser: 0
    command:
      - chown
      - -R
      - 100:65533
      - /var/rdf4j
    volumeMounts:
      - name: storage
        mountPath: /var/rdf4j
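A quick sanity check once the pod is running, to confirm the chown actually took effect (assuming the deployment name from above):
kubectl exec deploy/triplestore -- ls -ld /var/rdf4j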
1 pod has unbound immediate PersistentVolumeClaims - this error means the pod cannot bind to the PVC on the node where it has been scheduled to run. This can happen when the PVC is bound to a PV that refers to a location that is not valid on the node the pod is scheduled to run on. It would help if you could add the complete output of kubectl get nodes -o wide, kubectl describe pvc triplestore-data-storage and kubectl describe pv triplestore-data-storage-dir to the question.
In the meantime, the PVC/PV is optional when using hostPath; can you try the following spec and see if the pod can come online:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: triplestore
  labels:
    app: demo
    role: triplestore
spec:
  selector:
    matchLabels:
      app: demo
      role: triplestore
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
        role: triplestore
    spec:
      containers:
        - name: triplestore
          image: eclipse/rdf4j-workbench:amd64-3.5.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              protocol: TCP
              containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: storage
              mountPath: /var/rdf4j
      initContainers:
        - name: take-data-dir-ownership
          image: eclipse/rdf4j-workbench:amd64-3.5.0
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 0
          command:
            - chown
            - -R
            - 100:65533
            - /var/rdf4j
          volumeMounts:
            - name: storage
              mountPath: /var/rdf4j
      volumes:
        - name: storage
          hostPath:
            path: /run/desktop/mnt/host/d/workdir/k8s-data/triplestore
            type: DirectoryOrCreate
I have my image hosted on GCR.
I want to create a Kubernetes cluster on my local system (Mac).
Steps I followed:
Create an imagePullSecret key (see the example after these steps)
Create a generic key to communicate with GCP (kubectl create secret generic gcp-key --from-file=key.json)
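For reference, an imagePullSecret for GCR can be created from the same service-account key file; a sketch, assuming the secret name imagepullsecretkey used in the deployment below:
kubectl create secret docker-registry imagepullsecretkey \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)"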
I have a deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sv-premier
spec:
  selector:
    matchLabels:
      app: sv-premier
  template:
    metadata:
      labels:
        app: sv-premier
    spec:
      volumes:
        - name: google-cloud-key
          secret:
            secretName: gcp-key
      containers:
        - name: sv-premier
          image: gcr.io/proto/premiercore1:latest
          imagePullPolicy: Always
          command: ["echo", "Done deploying sv-premier"]
          volumeMounts:
            - name: google-cloud-key
              mountPath: /var/secrets/google
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: imagepullsecretkey
When I execute the command kubectl apply -f deployment.yaml, I get a CrashLoopBackOff error.
Output of kubectl describe pods podname:
=======================
Name: sv-premier-6b77ddd747-cvdr5
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Tue, 04 Feb 2020 14:18:47 +0530
Labels: app=sv-premier
pod-template-hash=6b77ddd747
Annotations:
Status: Running
IP: 10.1.0.43
IPs:
Controlled By: ReplicaSet/sv-premier-6b77ddd747
Containers:
sv-premierleague:
Container ID: docker://141126d732409427fe39b405865f88856ac4e1d8586112797fc5bf4fdfbe317c
Image: gcr.io/proto/premiercore1:latest
Image ID: docker-pullable://gcr.io/proto/premiercore1@sha256:b3800ccca3f30725d5c9235dd349548f0fcfe309f51883d8af16397aef2c3953
Port: 8080/TCP
Host Port: 0/TCP
Command:
echo
Done deploying sv-premier
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 04 Feb 2020 15:00:51 +0530
Finished: Tue, 04 Feb 2020 15:00:51 +0530
Ready: False
Restart Count: 13
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /var/secrets/google/key.json
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-s4jgd (ro)
/var/secrets/google from google-cloud-key (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
google-cloud-key:
Type: Secret (a volume populated by a Secret)
SecretName: gcp-key
Optional: false
default-token-s4jgd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-s4jgd
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 46m default-scheduler Successfully assigned default/sv-premier-6b77ddd747-cvdr5 to docker-desktop
Normal Pulled 45m (x4 over 46m) kubelet, docker-desktop Successfully pulled image "gcr.io/proto/premiercore1:latest"
Normal Created 45m (x4 over 46m) kubelet, docker-desktop Created container sv-premier
Normal Started 45m (x4 over 46m) kubelet, docker-desktop Started container sv-premier
Normal Pulling 45m (x5 over 46m) kubelet, docker-desktop Pulling image "gcr.io/proto/premiercore1:latest"
Warning BackOff 92s (x207 over 46m) kubelet, docker-desktop Back-off restarting failed container
=======================
And output for -
kubectl logs podname --> Done Deploying sv-premier
I am confused about why my container is exiting and will not stay running.
Kindly guide me, please.
Your container runs echo, which completes immediately and exits with code 0; a Deployment expects the container's main process to keep running, so kubelet keeps restarting it and the pod ends up in CrashLoopBackOff. Update your deployment.yaml with a long-running task instead, for example:
command: ["/bin/sh"]
args: ["-c", "while true; do echo Done Deploying sv-premier; sleep 3600; done"]
This keeps your container running after deployment, logging the message every hour.
Read more about the pod lifecycle and container states here
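In context, the container section of the deployment would look like this (a sketch; only command and args change, everything else stays as in the original spec):
containers:
  - name: sv-premier
    image: gcr.io/proto/premiercore1:latest
    imagePullPolicy: Always
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo Done Deploying sv-premier; sleep 3600; done"]
    # volumeMounts, env and ports unchanged from the original spec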
I'm facing an issue on my Ubuntu 16.04 desktop, using minikube version v0.35.0 and kubectl 1.13.4 for both client and server.
When I try running a ConfigMap example I get the error kubelet, minikube MountVolume.SetUp failed every time. I tried to debug but got no result on my end. It's been about the last 2 days and I have failed to make it work.
$ kubectl apply -f ./kube-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: kuard-config
spec:
  containers:
    - name: test-container
      image: gcr.io/kuar-demo/kuard-amd64:1
      imagePullPolicy: Always
      command:
        - "/kuard"
        - "$(EXTRA_PARAM)"
      env:
        - name: ANOTHER_PARAM
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: another-param
        - name: EXTRA_PARAM
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: extra-param
      volumeMounts:
        - name: config-volume
          mountPath: "/config"
  volumes:
    - name: config-volume
      configMap:
        name: my-config
  restartPolicy: Never
I get the following error when I run this command:
$ kubectl describe pods/kuard-config
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Thu, 20 Jun 2019 10:44:37 +0600
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"ann
otations":{},"name":"kuard-config","namespace":"default"},"spec":{"containers":[{"command"...
Status: Pending
IP:
Containers:
test-container:
Container ID:
Image: gcr.io/kuar-demo/kuard-amd64:1
Image ID:
Port: <none>
Host Port: <none>
Command:
/kuard
$(EXTRA_PARAM)
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
ANOTHER_PARAM: <set to the key 'another-param' of config map 'my-config'> Optional: false
EXTRA_PARAM: <set to the key 'extra-param' of config map 'my-config'> Optional: false
Mounts:
/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-q42jl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: my-config
Optional: false
default-token-q42jl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-q42jl
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12s default-scheduler Successfully assigned default/kuard-config to minikube
Warning FailedMount 4s (x5 over 11s) kubelet, minikube MountVolume.SetUp failed for volume "config-volume" : configmap "my-config" not found
I'd appreciate it if someone could help me figure out why `kubelet, minikube MountVolume.SetUp failed` occurs.
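The last event points at the cause: the pod references a ConfigMap named my-config that does not exist yet in the default namespace. A minimal sketch of creating it with the two keys the pod spec expects (the values here are placeholders):
kubectl create configmap my-config \
  --from-literal=another-param=some-value \
  --from-literal=extra-param=some-flag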
I am trying to set up a Node.js app on GKE with a Google Cloud SQL Postgres database and a sidecar proxy. I am following along with the docs but do not get it working. The proxy does not seem to be able to start (the app container does start). I have no idea why the proxy container cannot start and also no idea how to debug this (e.g. how do I get an error message!?).
mysecret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: [base64_username]
  password: [base64_password]
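The data values are base64-encoded strings, produced for example like this (placeholder value; -n avoids encoding a trailing newline):
echo -n 'myuser' | base64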
Output of kubectl get secrets:
NAME TYPE DATA AGE
default-token-tbgsv kubernetes.io/service-account-token 3 5d
mysecret Opaque 2 7h
app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: gcr.io/myproject/firstapp:v2
          ports:
            - containerPort: 8080
          env:
            - name: POSTGRES_DB_HOST
              value: 127.0.0.1:5432
            - name: POSTGRES_DB_USER
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: POSTGRES_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=myproject:europe-west4:databasename=tcp:5432",
                    "-credential_file=/secrets/cloudsql/mysecret.json"]
          securityContext:
            runAsUser: 2
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: mysecret
output of kubectl create -f ./kubernetes/app-deployment.json:
deployment.apps/myapp created
output of kubectl get deployments:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
myapp 1 1 1 0 5s
output of kubectl get pods:
NAME READY STATUS RESTARTS AGE
myapp-5bc965f688-5rxwp 1/2 CrashLoopBackOff 1 10s
output of kubectl describe pod/myapp-5bc955f688-5rxwp -n default:
Name: myapp-5bc955f688-5rxwp
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-standard-cluster-1-default-pool-1ec52705-186n/10.164.0.4
Start Time: Sat, 15 Dec 2018 21:46:03 +0100
Labels: app=myapp
pod-template-hash=1675219244
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container app; cpu request for container cloudsql-proxy
Status: Running
IP: 10.44.1.9
Controlled By: ReplicaSet/myapp-5bc965f688
Containers:
app:
Container ID: docker://d3ba7ff9c581534a4d55a5baef2d020413643e0c2361555eac6beba91b38b120
Image: gcr.io/myproject/firstapp:v2
Image ID: docker-pullable://gcr.io/myproject/firstapp@sha256:80168b43e3d0cce6d3beda6c3d1c679cdc42e88b0b918e225e7679252a59a73b
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 15 Dec 2018 21:46:04 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment:
POSTGRES_DB_HOST: 127.0.0.1:5432
POSTGRES_DB_USER: <set to the key 'username' in secret 'mysecret'> Optional: false
POSTGRES_DB_PASSWORD: <set to the key 'password' in secret 'mysecret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
cloudsql-proxy:
Container ID: docker://96e2ed0de8fca21ecd51462993b7083bec2a31f6000bc2136c85842daf17435d
Image: gcr.io/cloudsql-docker/gce-proxy:1.11
Image ID: docker-pullable://gcr.io/cloudsql-docker/gce-proxy@sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
Port: <none>
Host Port: <none>
Command:
/cloud_sql_proxy
-instances=myproject:europe-west4:databasename=tcp:5432
-credential_file=/secrets/cloudsql/mysecret.json
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sat, 15 Dec 2018 22:43:37 +0100
Finished: Sat, 15 Dec 2018 22:43:37 +0100
Ready: False
Restart Count: 16
Requests:
cpu: 100m
Environment: <none>
Mounts:
/secrets/cloudsql from cloudsql-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: mysecret
Optional: false
default-token-tbgsv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tbgsv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 59m default-scheduler Successfully assigned default/myapp-5bc955f688-5rxwp to gke-standard-cluster-1-default-pool-1ec52705-186n
Normal Pulled 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Container image "gcr.io/myproject/firstapp:v2" already present on machine
Normal Created 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Created container
Normal Started 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Started container
Normal Started 59m (x4 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Started container
Normal Pulled 58m (x5 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
Normal Created 58m (x5 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Created container
Warning BackOff 4m46s (x252 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Back-off restarting failed container
EDIT: something seems wrong with my secret, since when I do kubectl logs 5bc955f688-5rxwp cloudsql-proxy I get:
2018/12/16 22:26:28 invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory
I created the secret by doing:
kubectl create -f ./kubernetes/mysecret.yaml
I presume the secret is turned into JSON... When I change mysecret.json to mysecret.yaml in app-deployment.yaml, I still get a similar error...
I was missing the correct key (credentials.json). It needs to be a key you generate from a service account; then you turn it into a secret. See also this issue.
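In practice that means creating the secret from the downloaded service-account key file, so the mounted file name matches what the proxy's -credential_file flag points at (a sketch; key.json is the file downloaded from GCP, and the secret name matches the deployment above):
kubectl create secret generic mysecret --from-file=credentials.json=key.json
and then using -credential_file=/secrets/cloudsql/credentials.json in the proxy command.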
So, putting everything in detail here for better clarification. My service consists of the following attributes in a dedicated namespace (not using ServiceEntry):
Deployment (1 deployment)
Configmaps (1 configmap)
Service
VirtualService
Gateway
Istio is enabled in the namespace, and when I create/run the deployment it creates 2 pods as it should. Now, as stated in the issue subject, I want to allow all outgoing traffic for the deployment because my services need to connect with 2 service discovery servers:
vault running on port 8200
spring config server running on http
download dependencies and communicate with other services (which are not part of the VPC/K8s)
Using the following deployment file does not open outgoing connections. The only thing that works is a simple HTTPS request on port 443: when I run curl https://google.com it succeeds, but there is no response from curl http://google.com. The logs also show that the connection with vault is not being established.
I have used almost all combinations in the deployment, but none of them seem to work. Is there anything I am missing, or am I doing this the wrong way? I would really appreciate contributions on this :)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-application-service
  name: my-application-service-deployment
  namespace: temp-namespace
  annotations:
    traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
    spec:
      containers:
        - envFrom:
            - configMapRef:
                name: my-application-service-env-variables
          image: image.from.dockerhub:latest
          name: my-application-service-pod
          ports:
            - containerPort: 8080
              name: myappsvc
          resources:
            limits:
              cpu: 700m
              memory: 1.8Gi
            requests:
              cpu: 500m
              memory: 1.7Gi
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-application-service-ingress
  namespace: temp-namespace
spec:
  hosts:
    - my-application.mydomain.com
  gateways:
    - http-gateway
  http:
    - route:
        - destination:
            host: my-application-service
            port:
              number: 80
kind: Service
apiVersion: v1
metadata:
  name: my-application-service
  namespace: temp-namespace
spec:
  selector:
    app: api-my-application-service-deployment
  ports:
    - port: 80
      targetPort: myappsvc
      protocol: TCP
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
  namespace: temp-namespace
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*.mydomain.com"
Namespace with istio enabled:
Name: temp-namespace
Labels: istio-injection=enabled
Annotations: <none>
Status: Active
No resource quota.
No resource limits.
Describe pods output showing that Istio and the sidecar are working:
Name: my-application-service-deployment-fb897c6d6-9ztnx
Namespace: temp-namespace
Node: ip-172-31-231-93.eu-west-1.compute.internal/172.31.231.93
Start Time: Sun, 21 Oct 2018 14:40:26 +0500
Labels: app=my-application-service-deployment
pod-template-hash=964537282
Annotations: sidecar.istio.io/status={"version":"2e0c897425ef3bd2729ec5f9aead7c0566c10ab326454e8e9e2b451404aee9a5","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status: Running
IP: 100.115.0.4
Controlled By: ReplicaSet/my-application-service-deployment-fb897c6d6
Init Containers:
istio-init:
Container ID: docker://a47003a092ec7d3dc3b1d155bca0ec53f00e545ad1b70e1809ad812e6f9aad47
Image: docker.io/istio/proxy_init:1.0.2
Image ID: docker-pullable://istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
Port: <none>
Host Port: <none>
Args:
-p
15001
-u
1337
-m
REDIRECT
-i
*
-x
-b
8080,
-d
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 21 Oct 2018 14:40:26 +0500
Finished: Sun, 21 Oct 2018 14:40:26 +0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts: <none>
Containers:
my-application-service-pod:
Container ID: docker://1a30a837f359d8790fb72e6b8fda040e121fe5f7b1f5ca47a5f3732810fd4f39
Image: image.from.dockerhub:latest
Image ID: docker-pullable://848569320300.dkr.ecr.eu-west-1.amazonaws.com/k8_api_env@sha256:98abee8d955cb981636fe7a81843312e6d364a6eabd0c3dd6b3ff66373a61359
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 21 Oct 2018 14:40:28 +0500
Ready: True
Restart Count: 0
Limits:
cpu: 700m
memory: 1932735283200m
Requests:
cpu: 500m
memory: 1825361100800m
Environment Variables from:
my-application-service-env-variables ConfigMap Optional: false
Environment:
vault.token: <set to the key 'vault_token' in secret 'vault.token'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rc8kc (ro)
istio-proxy:
Container ID: docker://3ae851e8ded8496893e5b70fc4f2671155af41c43e64814779935ea6354a8225
Image: docker.io/istio/proxyv2:1.0.2
Image ID: docker-pullable://istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
Port: <none>
Host Port: <none>
Args:
proxy
sidecar
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
my-application-service-deployment
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15007
--discoveryRefreshDelay
1s
--zipkinAddress
zipkin.istio-system:9411
--connectTimeout
10s
--statsdUdpAddress
istio-statsd-prom-bridge.istio-system:9125
--proxyAdminPort
15000
--controlPlaneAuthPolicy
NONE
State: Running
Started: Sun, 21 Oct 2018 14:40:28 +0500
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
POD_NAME: my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
POD_NAMESPACE: temp-namespace (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from istio-certs (ro)
/etc/istio/proxy from istio-envoy (rw)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-rc8kc:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rc8kc
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.default
Optional: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "istio-certs"
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "default-token-rc8kc"
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "istio-envoy"
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "docker.io/istio/proxy_init:1.0.2" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Scheduled 3m default-scheduler Successfully assigned my-application-service-deployment-fb897c6d6-9ztnx to ip-172-42-231-93.eu-west-1.compute.internal
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "image.from.dockerhub:latest" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "docker.io/istio/proxyv2:1.0.2" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
The issue was that I added the sidecar annotation to the Deployment metadata, not to the pod template; adding it to the pod template resolved the issue. Got help from here:
https://github.com/istio/istio/issues/9304
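For reference, the annotation then lives under spec.template.metadata so it is applied to the pods themselves (a sketch of the relevant part of the deployment above):
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
      annotations:
        traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0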