When I run kubectl describe pod, I can see
Environment: <none>
just after the secrets. I wonder what it means. Is it possible to assign secrets to an environment (local, dev, staging, prod, for instance)?
➜ espace-client git:(master) ✗ kubectl describe pod -n espace-client espace-client-client-6b7b994b4c-gx58t
Name: espace-client-client-6b7b994b4c-gx58t
Namespace: espace-client
Priority: 0
Node: minikube/192.168.0.85
Start Time: Fri, 27 Sep 2019 11:37:06 +0200
Labels: app=espace-client-client
pod-template-hash=6b7b994b4c
Annotations: kubectl.kubernetes.io/restartedAt: 2019-09-27T11:37:06+02:00
Status: Running
IP: 172.17.0.21
IPs: <none>
Controlled By: ReplicaSet/espace-client-client-6b7b994b4c
Containers:
espace-client-client:
Container ID: docker://b3ee1efe45bb8ed9f27aca60e3bfecc1d7e29bc12600787d8d674ffb62ffc3f4
Image: espace_client_client:local
Image ID: docker://sha256:4cf73af7615ebfd30e7a8b0126154fa12b605dd34ead7cb0eefc43cd3ccc869b
Port: 3000/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 27 Sep 2019 11:37:09 +0200
Ready: True
Restart Count: 0
Environment Variables from:
espace-client-client-env Secret Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lzb8h (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-lzb8h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lzb8h
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
The environment section contains any environment variables defined as part of the PodSpec:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
Most likely it shows <none> because no env vars were defined for the Pod. You can also expose Secrets as environment variables. They would show up in the Environment section like this:
SECURITY_JWT_PRIVATEKEY: <set to the key 'privateKey' in secret 'tokens'> Optional: false
For example:
apiVersion: v1
kind: Pod
metadata:
  name: secrets-demo
  labels:
    purpose: demonstrate-secrets-in-env
spec:
  containers:
  - name: secret-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
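For this Pod to start, the referenced Secret must already exist; otherwise the kubelet reports CreateContainerConfigError. A minimal sketch of creating it (the password value is a placeholder):
kubectl create secret generic mysecret --from-literal=password=changeme
With that in place, kubectl describe pod secrets-demo shows SECRET_PASSWORD under Environment in the <set to the key 'password' in secret 'mysecret'> form quoted above.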
Related
I'm learning Kubernetes and below are the YAML configuration files:
mongo-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
mongo-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
mongo.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongodb
        image: mongo:5.0
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
spec:
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
webapp.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nanajanashia/k8s-demo-app:v1.0
        ports:
        - containerPort: 3000
        env:
        - name: USER_NAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: USER_PWD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
        - name: DB_URL
          valueFrom:
            secretKeyRef:
              name: mongo-config
              key: mongo-url
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30100
After starting the test webapp I came across the error below:
NAME READY STATUS RESTARTS AGE
mongo-deployment-7875498c-psn56 1/1 Running 0 100m
my-go-app-664f7475d4-jgnsk 1/1 Running 1 (7d20h ago) 7d20h
webapp-deployment-7dc5b857df-6bx4s 0/1 CreateContainerConfigError 0 29m
If I try to get more details about the CreateContainerConfigError, I get:
~/K8s/K8s-demo$ kubectl describe pod webapp-deployment-7dc5b857df-6bx4s
Name: webapp-deployment-7dc5b857df-6bx4s
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Thu, 06 Jan 2022 12:20:02 +0200
Labels: app=webapp
pod-template-hash=7dc5b857df
Annotations: <none>
Status: Pending
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/webapp-deployment-7dc5b857df
Containers:
webapp:
Container ID:
Image: nanajanashia/k8s-demo-app:v1.0
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CreateContainerConfigError
Ready: False
Restart Count: 0
Environment:
USER_NAME: <set to the key 'mongo-user' in secret 'mongo-secret'> Optional: false
USER_PWD: <set to the key 'mongo-password' in secret 'mongo-secret'> Optional: false
DB_URL: <set to the key 'mongo-url' in secret 'mongo-config'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wkflh (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-wkflh:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned default/webapp-deployment-7dc5b857df-6bx4s to minikube
Warning Failed 28m (x12 over 30m) kubelet Error: secret "mongo-config" not found
Normal Pulled 27s (x142 over 30m) kubelet Container image "nanajanashia/k8s-demo-app:v1.0" already present on machine
which suggests the configuration issue is:
Warning Failed 28m (x12 over 30m) kubelet Error: secret "mongo-config" not found
I don't have a Secret named "mongo-config", but there is a ConfigMap named "mongo-config":
>:~/K8s/K8s-demo$ kubectl get secret
NAME TYPE DATA AGE
default-token-gs25h kubernetes.io/service-account-token 3 5m57s
mongo-secret Opaque 2 5m48s
>:~/K8s/K8s-demo$ kubectl get configmap
NAME DATA AGE
kube-root-ca.crt 1 6m4s
mongo-config 1 6m4s
Could you please advise what the issue is here?
You have secretKeyRef in:
- name: DB_URL
  valueFrom:
    secretKeyRef:
      name: mongo-config
      key: mongo-url
You have to use configMapKeyRef instead:
- name: DB_URL
  valueFrom:
    configMapKeyRef:
      name: mongo-config
      key: mongo-url
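After correcting the manifest, reapplying it triggers a new rollout; a quick way to watch the fix take effect (a sketch):
kubectl apply -f webapp.yaml
kubectl get pods -w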
Basically, your ConfigMap is what's called a ConfigMap created from literals. Another way to use that ConfigMap data as your env variables is:
- name: webapp
  image: nanajanashia/k8s-demo-app:v1.0
  envFrom:
  - configMapRef:
      name: mongo-config
Modify your webapp Deployment YAML this way. Also modify the ConfigMap itself: use DB_URL instead of mongo-url as the key, as sketched below.
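For reference, a sketch of the reworked ConfigMap under that suggestion:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  DB_URL: mongo-service
With envFrom, every key in data becomes an environment variable of the same name, which is why the key itself has to be called DB_URL.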
The mongo secret has to be in the same namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
  namespace: mongodb
  labels:
    app.kubernetes.io/component: mongodb
type: Opaque
data:
  mongodb-root-password: ""
  mongodb-passwords: ""
  mongodb-metrics-password: ""
  mongodb-replica-set-key: ""
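If in doubt about which namespace the Secret actually lives in, a quick check (a suggested command, not from the original answer):
kubectl get secrets --all-namespaces | grep mongo-secret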
I followed this: https://www.linode.com/docs/guides/how-to-setup-a-private-docker-registry-with-lke-and-object-storage/
The repo works; I can push and pull from my local machine with user/pass. The docker-registry secret regcred was created and verified again and again.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  namespace: apps
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: privatrepo.ddns.net/hello
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regcred
Any help?
Here is the pod description:
Name:
Namespace: apps
Priority: 0
Node:
Start Time: Thu, 20 May 2021 14:13:27 +0200
Labels: app=hello
pod-template-hash=7d6674fdff
Annotations: cni.projectcalico.org/podIP: 10.2.0.88/32
Status: Pending
IP: 10.2.0.88
IPs:
IP: 10.2.0.88
Controlled By: ReplicaSet/
Containers:
hello:
Container ID:
Image: privatrepo.ddns.net/hello
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jlt9v (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-jlt9v:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jlt9v
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 24m (x680 over 179m) kubelet Back-off pulling image "privatrepo.ddns.net/hello"
Warning Failed 4m5s (x767 over 179m) kubelet Error: ImagePullBackOff
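Two quick checks for an ImagePullBackOff from a private registry, sketched here with the names from the question: confirm the pull secret actually exists in the Deployment's namespace, and inspect the credentials it carries. Note also that image: privatrepo.ddns.net/hello has no tag, so Kubernetes will try to pull hello:latest.
kubectl get secret regcred -n apps
kubectl get secret regcred -n apps -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d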
I am trying to deploy a StatefulSet of IPFS replicas on three Kubernetes worker nodes (based on this repo). The first three replicas work properly, but when it comes to the fourth one, it appears that the persistentVolumeClaims all point to the same physical storage. Therefore, the fourth replica cannot acquire the lock. What would be the standard way to deploy many IPFS replicas in Kubernetes?
The fourth replica printed the following log:
08:44:19.785 DEBUG cmd/ipfs: config path is /data/ipfs main.go:257
08:44:19.785 INFO cmd/ipfs: IPFS_PATH /data/ipfs main.go:301
08:44:19.785 DEBUG cmd/ipfs: Command cannot run on daemon. Checking if daemon is locked main.go:434
08:44:19.785 DEBUG lock: Checking lock lock.go:32
08:44:19.785 DEBUG lock: Can't lock file: /data/ipfs/repo.lock.
reason: cannot acquire lock: Lock FcntlFlock of /data/ipfs/repo.lock failed: resource temporarily unavailable lock.go:44
08:44:19.785 DEBUG fsrepo: (true)<->Lock is held at /data/ipfs fsrepo.go:302
Error: ipfs daemon is running. please stop it to run this command
Use 'ipfs daemon --help' for information about this command
Here is the YAML file for the StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ipfs
  namespace: ipfs
spec:
  selector:
    matchLabels:
      app: ipfs
  serviceName: ipfs
  replicas: 6
  template:
    metadata:
      labels:
        app: ipfs
    spec:
      initContainers:
      - name: init-repo
        image: ipfs/go-ipfs:v0.4.11@sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        command: ['/bin/sh', '/etc/ipfs-config/init.sh']
        volumeMounts:
        - name: www
          mountPath: /data/ipfs
        - name: secrets
          mountPath: /etc/ipfs-secrets
        - name: config
          mountPath: /etc/ipfs-config
      - name: init-peers
        image: ipfs/go-ipfs:v0.4.11@sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054
        command: ['/bin/sh', '/etc/ipfs-config/peers-kubernetes-refresh.sh']
        volumeMounts:
        - name: www
          mountPath: /data/ipfs
        - name: config
          mountPath: /etc/ipfs-config
      containers:
      - name: ipfs
        image: ipfs/go-ipfs:v0.4.11@sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054
        env:
        - name: IPFS_LOGGING
          value: debug
        command:
        - ipfs
        - daemon
        ports:
        - containerPort: 4001
          name: swarm
        - containerPort: 5001
          name: api
        - containerPort: 8080
          name: readonly
        volumeMounts:
        - name: www
          mountPath: /data/ipfs
      volumes:
      - name: secrets
        secret:
          secretName: ipfs
      - name: config
        configMap:
          name: ipfs-config
      - name: www
        persistentVolumeClaim:
          claimName: ipfs-pvc
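Because every replica mounts the single claim ipfs-pvc, all of the pods share one volume and collide on repo.lock. The standard StatefulSet approach is a volumeClaimTemplates entry, which gives each replica its own PersistentVolumeClaim (www-ipfs-0, www-ipfs-1, and so on). A minimal sketch, which would replace the www entry in the volumes: list (volumeClaimTemplates sits directly under the StatefulSet spec):
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: manual
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 200Mi
With the manual storage class used here, this also means provisioning one PersistentVolume per replica instead of a single shared one.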
Here is the PersistentVolume definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ipfs-pv
  namespace: ipfs
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 200Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
And the PersistentVolumeClaim definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ipfs-pvc
  namespace: ipfs
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
kubectl describe of the failing pod is as follows:
Name: ipfs-3
Namespace: ipfs
Priority: 0
Node: swift-153/10.70.20.153
Start Time: Tue, 27 Oct 2020 14:38:11 -0400
Labels: app=ipfs
controller-revision-hash=ipfs-74bb88dbb6
statefulset.kubernetes.io/pod-name=ipfs-3
Annotations: <none>
Status: Running
IP: 10.244.3.43
IPs:
IP: 10.244.3.43
Controlled By: StatefulSet/ipfs
Containers:
ipfs:
Container ID: docker://81349e969be9ffcafeb4d65adf9d0b2de7311e46068e36dd4f227f169f6dfcab
Image: ipfs/go-ipfs:v0.4.11@sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054
Image ID: docker-pullable://ipfs/go-ipfs@sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054
Ports: 4001/TCP, 5001/TCP, 8080/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
ipfs
daemon
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 27 Oct 2020 14:39:51 -0400
Finished: Tue, 27 Oct 2020 14:39:51 -0400
Ready: False
Restart Count: 4
Environment:
IPFS_LOGGING: debug
Mounts:
/data/ipfs from www (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hb785 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
secrets:
Type: Secret (a volume populated by a Secret)
SecretName: ipfs
Optional: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: ipfs-config
Optional: false
www:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ipfs-pvc
ReadOnly: false
default-token-hb785:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hb785
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m24s default-scheduler Successfully assigned ipfs/ipfs-3 to swift-153
Normal Pulled 2m2s (x3 over 2m21s) kubelet Container image "ipfs/go-ipfs:v0.4.11@sha256:e977e1560b960933061efc694c937d711ce1a51aa4a5239acfdff01504b11054" already present on machine
Normal Created 2m (x3 over 2m19s) kubelet Created container ipfs
Normal Started 2m (x3 over 2m19s) kubelet Started container ipfs
Warning DNSConfigForming 103s (x10 over 2m24s) kubelet Search Line limits were exceeded, some search paths have been omitted, the applied search line is: ipfs.svc.cluster.local svc.cluster.local cluster.local search syslab.sandbox cs.toronto.edu
Warning BackOff 103s (x6 over 2m15s) kubelet Back-off restarting failed container
I'm facing an issue on my Ubuntu 16.04 desktop, using minikube v0.35.0 and kubectl 1.13.4 for both client and server.
When I try running a ConfigMap example, I get the error kubelet, minikube MountVolume.SetUp failed every time. I tried to debug but got nowhere; it's been about two days and I have failed to make it work.
$ kubectl apply -f ./kube-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: kuard-config
spec:
  containers:
  - name: test-container
    image: gcr.io/kuar-demo/kuard-amd64:1
    imagePullPolicy: Always
    command:
    - "/kuard"
    - "$(EXTRA_PARAM)"
    env:
    - name: ANOTHER_PARAM
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: another-param
    - name: EXTRA_PARAM
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: extra-param
    volumeMounts:
    - name: config-volume
      mountPath: "/config"
  volumes:
  - name: config-volume
    configMap:
      name: my-config
  restartPolicy: Never
I get the following error when I run this command:
$ kubectl describe pods/kuard-config
Name: kuard-config
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Thu, 20 Jun 2019 10:44:37 +0600
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"ann
otations":{},"name":"kuard-config","namespace":"default"},"spec":{"containers":[{"command"...
Status: Pending
IP:
Containers:
test-container:
Container ID:
Image: gcr.io/kuar-demo/kuard-amd64:1
Image ID:
Port: <none>
Host Port: <none>
Command:
/kuard
$(EXTRA_PARAM)
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
ANOTHER_PARAM: <set to the key 'another-param' of config map 'my-config'> Optional: false
EXTRA_PARAM: <set to the key 'extra-param' of config map 'my-config'> Optional: false
Mounts:
/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-q42jl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: my-config
Optional: false
default-token-q42jl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-q42jl
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12s default-scheduler Successfully assigned default/kuard-config to minikube
Warning FailedMount 4s (x5 over 11s) kubelet, minikube MountVolume.SetUp failed for volume "config-volume" : configmap "my-config" not found
I'd appreciate it if someone could help me figure out why `kubelet, minikube MountVolume.SetUp failed` occurs.
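The FailedMount event spells out the cause: configmap "my-config" not found, i.e. the Pod references a ConfigMap that was never created in the default namespace. A minimal sketch of creating it so that both the env references and the volume mount resolve (the values are placeholders, not from the original):
kubectl create configmap my-config \
  --from-literal=another-param=some-value \
  --from-literal=extra-param=another-value
Once the ConfigMap exists, the kubelet can mount config-volume and the Pod should move past ContainerCreating.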
I'm running Spinnaker on Kubernetes 1.10.11¹. One of the Spinnaker services is a Pod running a service called Clouddriver. This Pod was running fine, but then the readinessProbe started erroring continuously. Kubernetes docs say
readinessProbe: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod.
— https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
But this Pod's IP is still in the Service's endpoints. Why?
Clouddriver Pod YAML
kubectl -n spinnaker-test get pods spin-clouddriver-5559d44484-mp8q9 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/psp: spotify.backend-service
  creationTimestamp: 2019-02-15T20:46:38Z
  generateName: spin-clouddriver-5559d44484-
  labels:
    app: spin
    app.kubernetes.io/managed-by: halyard
    app.kubernetes.io/name: clouddriver
    app.kubernetes.io/part-of: spinnaker
    app.kubernetes.io/version: 1.12.1
    cluster: spin-clouddriver
    pod-template-hash: "1115800040"
  name: spin-clouddriver-5559d44484-mp8q9
  namespace: spinnaker-test
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: spin-clouddriver-5559d44484
    uid: ce79561c-3161-11e9-acdf-42010a800082
  resourceVersion: "53541277"
  selfLink: /api/v1/namespaces/spinnaker-test/pods/spin-clouddriver-5559d44484-mp8q9
  uid: caa66d7c-3162-11e9-acdf-42010a800082
spec:
  containers:
  - env:
    - name: JAVA_OPTS
      value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2
    - name: SPRING_PROFILES_ACTIVE
      value: local
    image: gcr.io/spinnaker-marketplace/clouddriver:4.3.1-20190130095322
    imagePullPolicy: IfNotPresent
    lifecycle: {}
    name: clouddriver
    ports:
    - containerPort: 7002
      protocol: TCP
    readinessProbe:
      exec:
        command:
        - wget
        - --no-check-certificate
        - --spider
        - -q
        - http://localhost:7002/health
      failureThreshold: 3
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        cpu: "20"
        memory: 5000Mi
      requests:
        cpu: "20"
        memory: 5000Mi
    securityContext:
      allowPrivilegeEscalation: false
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /opt/spinnaker/config
      name: spin-clouddriver-files-1952526246
    - mountPath: /home/halyard/.hal/k8s-spinnaker/staging/dependencies
      name: spin-clouddriver-files-1757773194
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-w2lt5
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: gke-production-us-ce-terraform-201812-d63606d6-9vq9
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 720
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: spin-clouddriver-files-1952526246
    secret:
      defaultMode: 420
      secretName: spin-clouddriver-files-1952526246
  - name: spin-clouddriver-files-1757773194
    secret:
      defaultMode: 420
      secretName: spin-clouddriver-files-1757773194
  - name: default-token-w2lt5
    secret:
      defaultMode: 420
      secretName: default-token-w2lt5
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-02-15T20:46:38Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-02-15T20:53:40Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2019-02-15T20:46:38Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://3509b48511b1ea7bc97812cb82831c559d9410cb9eaaa26b4f492d881603fb31
    image: gcr.io/spinnaker-marketplace/clouddriver:4.3.1-20190130095322
    imageID: docker-pullable://gcr.io/spinnaker-marketplace/clouddriver@sha256:466228b97b8c4a61a0270c53ae4c397eb04bc3661bc4f1ee9ef4d5fce70d187d
    lastState: {}
    name: clouddriver
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2019-02-15T20:47:26Z
  hostIP: 10.178.32.98
  phase: Running
  podIP: 10.179.34.24
  qosClass: Guaranteed
  startTime: 2019-02-15T20:46:38Z
Describing the Pod shows the readinessProbe has been continuously erroring for over a day.
kubectl -n spinnaker-test describe pods spin-clouddriver-5559d44484-mp8q9
Name: spin-clouddriver-5559d44484-mp8q9
Namespace: spinnaker-test
Node: gke-production-us-ce-terraform-201812-d63606d6-9vq9/10.178.32.98
Start Time: Fri, 15 Feb 2019 15:46:38 -0500
Labels: app=spin
app.kubernetes.io/managed-by=halyard
app.kubernetes.io/name=clouddriver
app.kubernetes.io/part-of=spinnaker
app.kubernetes.io/version=1.12.1
cluster=spin-clouddriver
pod-template-hash=1115800040
Annotations: kubernetes.io/psp=spotify.backend-service
Status: Running
IP: 10.179.34.24
Controlled By: ReplicaSet/spin-clouddriver-5559d44484
Containers:
clouddriver:
Container ID: docker://3509b48511b1ea7bc97812cb82831c559d9410cb9eaaa26b4f492d881603fb31
Image: gcr.io/spinnaker-marketplace/clouddriver:4.3.1-20190130095322
Image ID: docker-pullable://gcr.io/spinnaker-marketplace/clouddriver@sha256:466228b97b8c4a61a0270c53ae4c397eb04bc3661bc4f1ee9ef4d5fce70d187d
Port: 7002/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 15 Feb 2019 15:47:26 -0500
Ready: True
Restart Count: 0
Limits:
cpu: 20
memory: 5000Mi
Requests:
cpu: 20
memory: 5000Mi
Readiness: exec [wget --no-check-certificate --spider -q http://localhost:7002/health] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
JAVA_OPTS: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2
SPRING_PROFILES_ACTIVE: local
Mounts:
/home/halyard/.hal/k8s-spinnaker/staging/dependencies from spin-clouddriver-files-1757773194 (rw)
/opt/spinnaker/config from spin-clouddriver-files-1952526246 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-w2lt5 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
spin-clouddriver-files-1952526246:
Type: Secret (a volume populated by a Secret)
SecretName: spin-clouddriver-files-1952526246
Optional: false
spin-clouddriver-files-1757773194:
Type: Secret (a volume populated by a Secret)
SecretName: spin-clouddriver-files-1757773194
Optional: false
default-token-w2lt5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-w2lt5
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 3m (x321 over 1d) kubelet, gke-production-us-ce-terraform-201812-d63606d6-9vq9 Readiness probe errored: rpc error: code = DeadlineExceeded desc = context deadline exceeded
But Service still has the Pod's IP of 10.179.34.24 in its Endpoints.
kubectl -n spinnaker-test describe services spin-clouddriver
Name: spin-clouddriver
Namespace: spinnaker-test
Labels: app=spin
cluster=spin-clouddriver
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"spin","cluster":"spin-clouddriver"},"name":"spin-clouddriver","namesp...
Selector: app=spin,cluster=spin-clouddriver
Type: ClusterIP
IP: 10.178.65.100
Port: <unset> 7002/TCP
TargetPort: 7002/TCP
Endpoints: 10.179.34.24:7002
Session Affinity: None
Events: <none>
kubectl -n spinnaker-test describe endpoints spin-clouddriver
Name: spin-clouddriver
Namespace: spinnaker-test
Labels: app=spin
cluster=spin-clouddriver
Annotations: <none>
Subsets:
Addresses: 10.179.34.24
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 7002 TCP
Events: <none>
Footnotes
¹ GKE 1.10.11-gke.1 to be exact, but the fact that it's GKE shouldn't matter.
A probe by the kubelet can end in one of three states:
- successful
- failed (the command returned a non-zero exit code)
- errored (the command did not return before the timeout elapsed, the command does not exist inside the container, etc.)
Here is the code (in 1.10.11) where the probe errored event is recorded; note that err != nil.
Here is the code that calls the above function: when err != nil (the probe errored), the result is discarded.
Only probes that fail will actually cause the pod's ready state to be changed.
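Given that an errored result is discarded rather than treated as a failure, one plausible mitigation (an assumption on my part, not something confirmed in the thread) is to give the exec probe a bigger budget than its current timeoutSeconds: 1, so the wget can actually return and produce a real pass/fail:
readinessProbe:
  exec:
    command: ["wget", "--no-check-certificate", "--spider", "-q", "http://localhost:7002/health"]
  timeoutSeconds: 5
  periodSeconds: 10
  failureThreshold: 3
Only once the probe genuinely fails (non-zero exit) will the endpoints controller remove 10.179.34.24 from the Service.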