Kubernetes init containers run every hour

I recently set up Redis via https://github.com/tarosky/k8s-redis-ha. This repo includes an init container, and I have added an extra init container of my own to set up passwords etc.
I am seeing some strange (and seemingly undocumented) behavior: the init containers run as expected before the Redis container starts, but they then re-run roughly every hour. I have reproduced this with a do-nothing busybox init container on both Deployments and StatefulSets, so it is not specific to this Redis pod.
I have tested this on bare metal with k8s 1.6 and 1.8 with the same results; however, when applying init containers on GKE (k8s 1.7) this behavior does not happen. I can't see any kubelet flags on GKE that would dictate this behavior.
See below for the output of kubectl describe pod, showing that the init containers re-run even though the main containers have not exited or crashed.
Name: redis-sentinel-1
Namespace: (redacted)
Node: (redacted)/(redacted)
Start Time: Mon, 12 Mar 2018 06:20:55 +0000
Labels: app=redis-sentinel
controller-revision-hash=redis-sentinel-7cc557cf7c
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"(redacted)","name":"redis-sentinel","uid":"759a3a3b-25bd-11e8-a8ce-0242ac110...
security.alpha.kubernetes.io/unsafe-sysctls=net.core.somaxconn=1024
Status: Running
IP: (redacted)
Controllers: StatefulSet/redis-sentinel
Init Containers:
redis-ha-server:
Container ID: docker://557d777a7c660b062662426ebe9bbf6f9725fb9d88f89615a8881346587c1835
Image: tarosky/k8s-redis-ha:sentinel-3.0.1
Image ID: docker-pullable://tarosky/k8s-redis-ha@sha256:98e09ef5fbea5bfd2eb1858775c967fa86a92df48e2ec5d0b405f7ca3f5ada1c
Port:
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 13 Mar 2018 03:01:12 +0000
Finished: Tue, 13 Mar 2018 03:01:12 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/opt from opt (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hkj6d (ro)
redis-init:
Container ID: docker://18c4e353233a6827999ae4a16adf1f408754a21d80a8e3374750fdf9b54f9b1a
Image: gcr.io/(redacted)/redis-init
Image ID: docker-pullable://gcr.io/(redacted)/redis-init@sha256:42042093d58aa597cce4397148a2f1c7967db689256ed4cc8d9f42b34d53aca2
Port:
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 13 Mar 2018 03:01:25 +0000
Finished: Tue, 13 Mar 2018 03:01:25 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/opt from opt (rw)
/secrets/redis-password from redis-password (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hkj6d (ro)
Containers:
redis-sentinel:
Container ID: docker://a54048cbb7ec535c841022c543a0d566c9327f37ede3a6232516721f0e37404d
Image: redis:3.2
Image ID: docker-pullable://redis@sha256:474fb41b08bcebc933c6337a7db1dc7131380ee29b7a1b64a7ab71dad03ad718
Port: 26379/TCP
Command:
/opt/bin/k8s-redis-ha-sentinel
Args:
/opt/sentinel.conf
State: Running
Started: Mon, 12 Mar 2018 06:21:02 +0000
Ready: True
Restart Count: 0
Readiness: exec [redis-cli -p 26379 info server] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVICE: redis-server
SERVICE_PORT: redis-server
Mounts:
/opt from opt (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hkj6d (ro)
redis-sword:
Container ID: docker://50279448bbbf175b6f56f96dab59061c4652c2117452ed15b3a5380681c7176f
Image: tarosky/k8s-redis-ha:sword-3.0.1
Image ID: docker-pullable://tarosky/k8s-redis-ha@sha256:2315c7a47d9e47043d030da270c9a1252c2cfe29c6e381c8f50ca41d3065db6d
Port:
State: Running
Started: Mon, 12 Mar 2018 06:21:03 +0000
Ready: True
Restart Count: 0
Environment:
SERVICE: redis-server
SERVICE_PORT: redis-server
SENTINEL: redis-sentinel
SENTINEL_PORT: redis-sentinel
Mounts:
/opt from opt (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hkj6d (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
opt:
Type: HostPath (bare host directory volume)
Path: /store/redis-sentinel/opt
redis-password:
Type: Secret (a volume populated by a Secret)
SecretName: redis-password
Optional: false
default-token-hkj6d:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hkj6d
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
20h 30m 21 kubelet, 10.1.3.102 spec.initContainers{redis-ha-server} Normal Pulling pulling image "tarosky/k8s-redis-ha:sentinel-3.0.1"
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-ha-server} Normal Started Started container
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-ha-server} Normal Created Created container
20h 30m 21 kubelet, 10.1.3.102 spec.initContainers{redis-ha-server} Normal Pulled Successfully pulled image "tarosky/k8s-redis-ha:sentinel-3.0.1"
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-init} Normal Pulling pulling image "gcr.io/(redacted)/redis-init"
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-init} Normal Pulled Successfully pulled image "gcr.io/(redacted)/redis-init"
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-init} Normal Created Created container
21h 30m 22 kubelet, 10.1.3.102 spec.initContainers{redis-init} Normal Started Started container
Note that the main containers in the pod started at Mon, 12 Mar 2018 06:21:02 +0000 (with 0 restarts), while the init containers last started at Tue, 13 Mar 2018 03:01:12 +0000. They seem to re-run roughly every hour.
Is our bare-metal cluster misconfigured for init containers somewhere? Can anyone shed any light on this strange behavior?
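For reference, the do-nothing busybox test looked roughly like this (a minimal sketch rather than the exact manifest I ran; names and images are illustrative, and older clusters may need apps/v1beta1 instead of apps/v1):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: init-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: init-test
  template:
    metadata:
      labels:
        app: init-test
    spec:
      initContainers:
      - name: noop-init
        image: busybox
        command: ["sh", "-c", "true"]   # does nothing and exits 0
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "while true; do sleep 3600; done"]   # keeps the pod running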

If you are pruning away exited containers on the nodes, that pruning/removal is the likely cause. In my testing, exited init containers that are removed from the Docker Engine (hourly or otherwise), for example with "docker system prune -f", cause Kubernetes to re-launch the init containers. Is this the issue in your case, if it is still persisting?
Also see https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/ for the kubelet garbage collection documentation, which handles this kind of cleanup for you (rather than you needing to implement it yourself).
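If the hourly prune is a cron job on the nodes, one option is to drop it and let the kubelet's own garbage collection keep exited containers under control. A rough sketch of the relevant kubelet flags (the values here are illustrative; check the documentation above for your version's defaults):
--minimum-container-ttl-duration=1h        # keep exited containers for at least an hour
--maximum-dead-containers-per-container=2  # exited instances retained per container
--maximum-dead-containers=240              # total exited containers kept on the node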

Related

What is the kubernetes equivalent of docker inspect?

When a Docker container is running it is sometimes helpful to look at its runtime configuration.
What is the equivalent command for Kubernetes?
I did a search on SO for this and came up with some similar questions (see https://stackoverflow.com/search?q=What+is+the+kubernetes+equivalent), but not this question.
What's the kubectl equivalent of docker exec bash in Kubernetes?
Docker volume and kubernetes volume
Kubernetes is a container orchestrator, so you won't find container-level commands there.
You can check the container logs:
kubectl logs pod-name
Mon Jan 1 00:00:00 UTC 2001 INFO 0
Mon Jan 1 00:00:01 UTC 2001 INFO 1
Mon Jan 1 00:00:02 UTC 2001 INFO 2
You can describe a pod to see pod details, as well as possible image pull errors:
kubectl describe pod nginx-deployment-1006230814-6winp
Name: nginx-deployment-1006230814-6winp
Namespace: default
Node: kubernetes-node-wul5/10.240.0.9
Start Time: Thu, 24 Mar 2016 01:39:49 +0000
Labels: app=nginx,pod-template-hash=1006230814
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nginx-deployment-1956810328","uid":"14e607e7-8ba1-11e7-b5cb-fa16" ...
Status: Running
IP: 10.244.0.6
Controllers: ReplicaSet/nginx-deployment-1006230814
Containers:
nginx:
Container ID: docker://90315cc9f513c724e9957a4788d3e625a078de84750f244a40f97ae355eb1149
Image: nginx
Image ID: docker://6f62f48c4e55d700cf3eb1b5e33fa051802986b77b874cc351cce539e5163707
Port: 80/TCP
QoS Tier:
cpu: Guaranteed
memory: Guaranteed
Limits:
cpu: 500m
memory: 128Mi
Requests:
memory: 128Mi
cpu: 500m
State: Running
Started: Thu, 24 Mar 2016 01:39:51 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5kdvl (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-4bcbi:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4bcbi
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
54s 54s 1 {default-scheduler } Normal Scheduled Successfully assigned nginx-deployment-1006230814-6winp to kubernetes-node-wul5
54s 54s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Pulling pulling image "nginx"
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Pulled Successfully pulled image "nginx"
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Created Created container with docker id 90315cc9f513
53s 53s 1 {kubelet kubernetes-node-wul5} spec.containers{nginx} Normal Started Started container with docker id 90315cc9f513
If you need to see low-level details about a container, use the Docker client (or the client for whatever other container runtime you run) directly on the node.
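If what you are looking for is the docker inspect-style dump of configuration and status, the closest equivalent is asking the API server for the full object (the pod name below is the one from the example above):
kubectl get pod nginx-deployment-1006230814-6winp -o yaml
# or as JSON, which is convenient to filter with jq
kubectl get pod nginx-deployment-1006230814-6winp -o json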

How to see Pod logs: a container name must be specified for pod... choose one of: [wait main]

I am running an Argo workflow and getting the following error in the pod's log:
error: a container name must be specified for pod <name>, choose one of: [wait main]
This error only happens some of the time and only with some of my templates, but when it does, it is a template that is run later in the workflow (i.e. not the first template run). I have not yet been able to identify the parameters that will run successfully, so I will be happy with tips for debugging. I have pasted the output of describe below.
Based on searches, I think the solution is simply that I need to attach "-c main" somewhere, but I do not know where and cannot find information in the Argo docs.
Describe:
Name: message-passing-1-q8jgn-607612432
Namespace: argo
Priority: 0
Node: REDACTED
Start Time: Wed, 17 Mar 2021 17:16:37 +0000
Labels: workflows.argoproj.io/completed=false
workflows.argoproj.io/workflow=message-passing-1-q8jgn
Annotations: cni.projectcalico.org/podIP: 192.168.40.140/32
cni.projectcalico.org/podIPs: 192.168.40.140/32
workflows.argoproj.io/node-name: message-passing-1-q8jgn.e
workflows.argoproj.io/outputs: {"exitCode":"6"}
workflows.argoproj.io/template:
{"name":"egress","arguments":{},"inputs":{...
Status: Failed
IP: 192.168.40.140
IPs:
IP: 192.168.40.140
Controlled By: Workflow/message-passing-1-q8jgn
Containers:
wait:
Container ID: docker://26d6c30440777add2af7ef3a55474d9ff36b8c562d7aecfb911ce62911e5fda3
Image: argoproj/argoexec:v2.12.10
Image ID: docker-pullable://argoproj/argoexec@sha256:6edb85a84d3e54881404d1113256a70fcc456ad49c6d168ab9dfc35e4d316a60
Port: <none>
Host Port: <none>
Command:
argoexec
wait
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 17 Mar 2021 17:16:43 +0000
Finished: Wed, 17 Mar 2021 17:17:03 +0000
Ready: False
Restart Count: 0
Environment:
ARGO_POD_NAME: message-passing-1-q8jgn-607612432 (v1:metadata.name)
Mounts:
/argo/podmetadata from podmetadata (rw)
/mainctrfs/mnt/logs from log-p1-vol (rw)
/mainctrfs/mnt/processed from processed-p1-vol (rw)
/var/run/docker.sock from docker-sock (ro)
/var/run/secrets/kubernetes.io/serviceaccount from argo-token-v2w56 (ro)
main:
Container ID: docker://67e6d6d3717ab1080f14cac6655c90d990f95525edba639a2d2c7b3170a7576e
Image: REDACTED
Image ID: REDACTED
Port: <none>
Host Port: <none>
Command:
/bin/bash
-c
Args:
State: Terminated
Reason: Error
Exit Code: 6
Started: Wed, 17 Mar 2021 17:16:43 +0000
Finished: Wed, 17 Mar 2021 17:17:03 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt/logs/ from log-p1-vol (rw)
/mnt/processed/ from processed-p1-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from argo-token-v2w56 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
podmetadata:
Type: DownwardAPI (a volume populated by information about the pod)
Items:
metadata.annotations -> annotations
docker-sock:
Type: HostPath (bare host directory volume)
Path: /var/run/docker.sock
HostPathType: Socket
processed-p1-vol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: message-passing-1-q8jgn-processed-p1-vol
ReadOnly: false
log-p1-vol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: message-passing-1-q8jgn-log-p1-vol
ReadOnly: false
argo-token-v2w56:
Type: Secret (a volume populated by a Secret)
SecretName: argo-token-v2w56
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m35s default-scheduler Successfully assigned argo/message-passing-1-q8jgn-607612432 to ack1
Normal Pulled 7m31s kubelet Container image "argoproj/argoexec:v2.12.10" already present on machine
Normal Created 7m31s kubelet Created container wait
Normal Started 7m30s kubelet Started container wait
Normal Pulled 7m30s kubelet Container image already present on machine
Normal Created 7m30s kubelet Created container main
Normal Started 7m30s kubelet Started container main
This happens when you try to view logs for a pod with multiple containers without specifying which container you want the logs from. The typical command to see logs is:
kubectl logs <podname>
But your Pod has two containers, one named "wait" and one named "main". You can see the logs from the container named "main" with:
kubectl logs <podname> -c main
or you can see the logs from all containers with
kubectl logs <podname> --all-containers
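If the step keeps failing, the logs of the Argo executor sidecar can also be useful. A couple of variants (the pod and workflow names are taken from the describe output above; --prefix needs a reasonably recent kubectl, and the last command assumes the argo CLI is installed):
kubectl logs message-passing-1-q8jgn-607612432 -n argo -c wait
# all containers at once, each line prefixed with the container name
kubectl logs message-passing-1-q8jgn-607612432 -n argo --all-containers --prefix
# or let the Argo CLI resolve the right container for the workflow
argo logs -n argo message-passing-1-q8jgn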

why the kubernetes pod shows Back-off restarting failed container [duplicate]

This question already has answers here:
How can I keep a container running on Kubernetes?
(14 answers)
My kubernetes pods keep crashing with "CrashLoopBackOff" but I can't find any log
(21 answers)
Closed 2 years ago.
I want to build a troubleshooting pod; this is my Dockerfile:
FROM alpine:3.11
MAINTAINER jiangxiaoqiang (jiangtingqiang@gmail.com)
ENV LANG=en_US.UTF-8 \
LC_ALL=en_US.UTF-8 \
TZ=Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime \
&& echo $TZ > /etc/timezone \
&& apk add --no-cache curl jq \
nmap \
bind-tools \
busybox-extras \
bash
CMD ["/bin/bash","-l"]
but when I start it in the Kubernetes cluster, it shows Back-off restarting failed container and it keeps restarting. Why does such a simple Docker container give me this message? This is the describe output:
[root@k8smaster ~]# kubectl describe pod ts-7d754488b9-jqqh9
Name: ts-7d754488b9-jqqh9
Namespace: default
Priority: 0
Node: k8sslave2/192.168.31.31
Start Time: Wed, 02 Sep 2020 12:28:48 -0400
Labels: k8s-app=ts
pod-template-hash=7d754488b9
Annotations: cni.projectcalico.org/podIP: 10.11.125.135/32
Status: Running
IP: 10.11.125.135
IPs:
IP: 10.11.125.135
Controlled By: ReplicaSet/ts-7d754488b9
Containers:
ts:
Container ID: docker://0c810ed8f8ec1cde6c0249edde59fc28a169d5730e87c423403f802cd12df6dd
Image: registry.cn-shanghai.aliyuncs.com/jiangxiaoqiang/dolphin/k8s-ts:v0.0.1
Image ID: docker-pullable://registry.cn-shanghai.aliyuncs.com/jiangxiaoqiang/dolphin/k8s-ts@sha256:68edaed45c1fadee71abbe7bdaad23f2400f352f1b6309142689a197367f3ae9
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 02 Sep 2020 12:30:13 -0400
Finished: Wed, 02 Sep 2020 12:30:13 -0400
Ready: False
Restart Count: 4
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-79w95 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-79w95:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-79w95
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/ts-7d754488b9-jqqh9 to k8sslave2
Normal Created 96s (x4 over 2m17s) kubelet, k8sslave2 Created container ts
Normal Started 95s (x4 over 2m16s) kubelet, k8sslave2 Started container ts
Warning BackOff 69s (x7 over 2m15s) kubelet, k8sslave2 Back-off restarting failed container
Normal Pulling 54s (x5 over 2m17s) kubelet, k8sslave2 Pulling image "registry.cn-shanghai.aliyuncs.com/jiangxiaoqiang/dolphin/k8s-ts:v0.0.1"
Normal Pulled 54s (x5 over 2m17s) kubelet, k8sslave2 Successfully pulled image "registry.cn-shanghai.aliyuncs.com/jiangxiaoqiang/dolphin/k8s-ts:v0.0.1"
The container shows Completed, which means it finished its task and exited: the CMD is bash -l with no attached TTY, so it exits immediately, and Kubernetes keeps restarting it until it backs off. If you want the container to keep running for a specific time, pass a long-running command such as sleep 3600 as the argument, or run it as a plain Pod with restartPolicy: Never so it is not restarted after it exits (note that a Deployment only allows restartPolicy: Always).
something like this
spec:
  containers:
  - name: alpine
    image: alpine
    command:
    - /bin/sh
    - "-c"
    - "sleep 60m"
    imagePullPolicy: Always
  restartPolicy: Never
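Alternatively, for a throwaway troubleshooting shell you can skip the Deployment entirely and run the image interactively; a sketch using the image from the question (--rm deletes the pod when you exit the shell):
kubectl run ts --rm -it --restart=Never \
  --image=registry.cn-shanghai.aliyuncs.com/jiangxiaoqiang/dolphin/k8s-ts:v0.0.1 \
  -- /bin/bash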

Readiness probe failed: timeout: failed to connect service ":8080" within 1s

I am trying to build and deploy microservices images to a single-node Kubernetes cluster running on my development machine using minikube. I am using the cloud-native microservices demo application Online Boutique by Google to understand the use of technologies like Kubernetes, Istio etc.
Link to github repo: microservices-demo
I have followed all the installation steps to build and deploy the microservices locally, and am able to access the web frontend through my browser. However, when I click on any of the product images, for example, I see this error page.
HTTP Status: 500 Internal Server Error
On doing a check using kubectl get pods, I realize that one of my pods (the recommendation service) has status CrashLoopBackOff.
Running kubectl describe pods recommendationservice-55b4d6c477-kxv8r:
Namespace: default
Priority: 0
Node: minikube/192.168.99.116
Start Time: Thu, 23 Jul 2020 19:58:38 +0530
Labels: app=recommendationservice
app.kubernetes.io/managed-by=skaffold-v1.11.0
pod-template-hash=55b4d6c477
skaffold.dev/builder=local
skaffold.dev/cleanup=true
skaffold.dev/deployer=kubectl
skaffold.dev/docker-api-version=1.40
skaffold.dev/run-id=49913ced-e8df-40a7-9336-a227b56bcb5f
skaffold.dev/tag-policy=git-commit
Annotations: <none>
Status: Running
IP: 172.17.0.14
IPs:
IP: 172.17.0.14
Controlled By: ReplicaSet/recommendationservice-55b4d6c477
Containers:
server:
Container ID: docker://2d92aa966a82fbe58c8f40f6ecf9d6d55c29f8081cb40e0423a2397e1419350f
Image: recommendationservice:2216d526d249cc8363129aed9a09d752f9ad8f458e61e50a2a99c59d000606cb
Image ID: docker://sha256:2216d526d249cc8363129aed9a09d752f9ad8f458e61e50a2a99c59d000606cb
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Thu, 23 Jul 2020 21:09:33 +0530
Finished: Thu, 23 Jul 2020 21:09:53 +0530
Ready: False
Restart Count: 29
Limits:
cpu: 200m
memory: 450Mi
Requests:
cpu: 100m
memory: 220Mi
Liveness: exec [/bin/grpc_health_probe -addr=:8080] delay=0s timeout=1s period=5s #success=1 #failure=3
Readiness: exec [/bin/grpc_health_probe -addr=:8080] delay=0s timeout=1s period=5s #success=1 #failure=3
Environment:
PORT: 8080
PRODUCT_CATALOG_SERVICE_ADDR: productcatalogservice:3550
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sbpcx (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-sbpcx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sbpcx
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 44m (x15 over 74m) kubelet, minikube Container image "recommendationservice:2216d526d249cc8363129aed9a09d752f9ad8f458e61e50a2a99c59d000606cb" already present on machine
Warning Unhealthy 9m33s (x99 over 74m) kubelet, minikube Readiness probe failed: timeout: failed to connect service ":8080" within 1s
Warning BackOff 4m25s (x294 over 72m) kubelet, minikube Back-off restarting failed container
In Events, I see Readiness probe failed: timeout: failed to connect service ":8080" within 1s.
What is the reason and how can I resolve this?
Thanks for the help!
Answer
The timeout of the Readiness Probe (1 second) was too short.
More Info
The relevant Readiness Probe is defined such that /bin/grpc_health_probe -addr=:8080 is run inside the server container.
You would expect a 1-second timeout to be sufficient for such a probe, but this is running on Minikube, where constrained resources can slow the probe down enough to exceed that timeout.
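A sketch of the kind of change that helps, assuming the probes are defined on the server container of the recommendationservice Deployment (the probe command matches the describe output above; the timeout value is illustrative):
readinessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:8080"]
  periodSeconds: 5
  timeoutSeconds: 5   # raised from the 1s that was timing out on Minikube
livenessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:8080"]
  periodSeconds: 5
  timeoutSeconds: 5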

why kubernetes pods error tips not refresh after fix problem

My pod shows a red (error) status with this message:
Usage of EmptyDir volume "agent" exceeds the limit "100Mi".
After I fixed the problem, the error still did not disappear.
How do I make this error message disappear? This is the pod info:
dolphin@dolphins-MacBook-Pro ~ % kubectl describe pods soa-task-745d48d955-bd4j8
Name: soa-task-745d48d955-bd4j8
Namespace: dabai-fat
Priority: 0
Node: azshara-k8s03/172.19.150.82
Start Time: Tue, 17 Aug 2021 15:23:18 +0800
Labels: k8s-app=soa-task
pod-template-hash=745d48d955
Annotations: kubectl.kubernetes.io/restartedAt: 2021-04-20T07:42:03Z
Status: Running
IP: 172.30.184.3
IPs: <none>
Controlled By: ReplicaSet/soa-task-745d48d955
Init Containers:
init-agent:
Container ID: docker://25d947147300edba8bc1861d40cea314047674b74f82d7de9013eead41f1f20f
Image: registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/skywalking-agent:6.5.0
Image ID: docker-pullable://registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/skywalking-agent@sha256:eda5426bc7bc06fc184e740f4783f263f151ae25e55aae37eec8b67e5dbb2fb0
Port: <none>
Host Port: <none>
Command:
sh
-c
set -ex;mkdir -p /skywalking/agent;cp -r /opt/skywalking/agent/* /skywalking/agent;
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 17 Aug 2021 15:23:19 +0800
Finished: Tue, 17 Aug 2021 15:23:19 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/skywalking/agent from agent (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xnrwt (ro)
Containers:
soa-task:
Container ID: docker://5406a90606e0a3905fa8fa4827e19db0d8d58609c06c2fd4e756b718df5db3b9
Image: registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai-fat/soa-task:v1.0.0
Image ID: docker-pullable://registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai-fat/soa-task@sha256:aece36589aae4fdedcfb82d7e64e451e32ebb1169dccd2485f8fe4bd451944a8
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 17 Aug 2021 15:23:34 +0800
Ready: True
Restart Count: 0
Liveness: http-get http://:11028/actuator/liveness delay=120s timeout=30s period=10s #success=1 #failure=3
Readiness: http-get http://:11028/actuator/health delay=90s timeout=30s period=10s #success=1 #failure=3
Environment:
SKYWALKING_ADDR: dabai-skywalking-skywalking-oap.apm.svc.cluster.local:11800
APOLLO_META: <set to the key 'apollo.meta' of config map 'fat-config'> Optional: false
ENV: <set to the key 'env' of config map 'fat-config'> Optional: false
Mounts:
/opt/skywalking/agent from agent (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xnrwt (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
agent:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: 100Mi
default-token-xnrwt:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xnrwt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 360s
node.kubernetes.io/unreachable:NoExecute op=Exists for 360s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned dabai-fat/soa-task-745d48d955-bd4j8 to azshara-k8s03
Normal Pulled 13m kubelet Container image "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/skywalking-agent:6.5.0" already present on machine
Normal Created 13m kubelet Created container init-agent
Normal Started 13m kubelet Started container init-agent
Normal Pulling 13m kubelet Pulling image "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai-fat/soa-task:v1.0.0"
Normal Pulled 13m kubelet Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai-fat/soa-task:v1.0.0"
Normal Created 13m kubelet Created container soa-task
Normal Started 13m kubelet Started container soa-task