Container-Optimized OS performance - Kubernetes

After upgrading my cluster nodes' image from CONTAINER_VM to CONTAINER_OPTIMIZED_OS, I ran into a performance degradation of the PHP application of up to 10 times.
Did I miss something in my configuration, or is this a common issue?
I tried machines with more CPU and memory, but it only affected the performance slightly.
Terraform configuration:
resource "google_compute_address" "dev-cluster-address" {
name = "dev-cluster-address"
region = "europe-west1"
}
resource "google_container_cluster" "dev-cluster" {
name = "dev-cluster"
zone = "europe-west1-d"
initial_node_count = 2
node_version = "1.7.5"
master_auth {
username = "*********-dev"
password = "*********"
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/devstorage.full_control",
"https://www.googleapis.com/auth/sqlservice.admin"
]
machine_type = "n1-standard-1"
disk_size_gb = 20
image_type = "COS"
}
}
Kubernetes deployment for the Symfony application:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: deployment-dev
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: dev
spec:
containers:
- name: nginx
image: nginx:1.13.5-alpine
volumeMounts:
- name: application
mountPath: /var/www/web
- name: nginx-config
mountPath: /etc/nginx/conf.d
ports:
- containerPort: 80
resources:
limits:
cpu: "20m"
memory: "64M"
requests:
cpu: "5m"
memory: "16M"
- name: php
image: ********
lifecycle:
postStart:
exec:
command:
- "bash"
- "/var/www/provision/files/init_php.sh"
envFrom:
- configMapRef:
name: symfony-config-dev
volumeMounts:
- name: application
mountPath: /application
- name: logs
mountPath: /var/www/var/logs
- name: lexik-jwt-keys
mountPath: /var/www/var/jwt
ports:
- containerPort: 9000
resources:
limits:
cpu: "400m"
memory: "1536M"
requests:
cpu: "300m"
memory: "1024M"
- name: cloudsql-proxy-mysql
image: gcr.io/cloudsql-docker/gce-proxy:1.09
resources:
limits:
cpu: "10m"
memory: "64M"
requests:
cpu: "5m"
memory: "16M"
command:
- "/cloud_sql_proxy"
- "-instances=***:europe-west1:dev1=tcp:0.0.0.0:3306"
- name: cloudsql-proxy-analytics
image: gcr.io/cloudsql-docker/gce-proxy:1.09
resources:
limits:
cpu: "20m"
memory: "64M"
requests:
cpu: "10m"
memory: "16M"
command:
- "/cloud_sql_proxy"
- "-instances=***:europe-west1:analytics-dev1=tcp:0.0.0.0:3307"
- name: sidecar-logging
image: alpine:3.6
args: [/bin/sh, -c, 'tail -n+1 -f /var/www/var/logs/prod.log']
volumeMounts:
- name: logs
mountPath: /var/www/var/logs
resources:
limits:
cpu: "5m"
memory: "20M"
requests:
cpu: "5m"
memory: "20M"
volumes:
- name: application
emptyDir: {}
- name: logs
emptyDir: {}
- name: nginx-config
configMap:
name: config-dev
items:
- key: nginx
path: default.conf
- name: lexik-jwt-keys
configMap:
name: config-dev
items:
- key: lexik_jwt_private_key
path: private.pem
- key: lexik_jwt_public_key
path: public.pem

One of the reasons could be that Kubernetes actually started enforcing CPU limits with Container-Optimized OS:
resources:
  limits:
    cpu: "20m"
These were not enforced on the older ContainerVM images.
Could you try removing or relaxing the CPU limits in your pod spec and see if it helps?
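For example, a minimal sketch of the nginx container's resources block with the CPU limit dropped while keeping the existing requests and the memory limit (the same change would apply to the other tightly limited sidecars):
resources:
  requests:
    cpu: "5m"
    memory: "16M"
  limits:
    # CPU limit removed so the container is no longer throttled by the
    # CPU quota enforced on Container-Optimized OS; memory limit kept.
    memory: "64M"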

Related

Prometheus & Alert Manager keeps crashing after updating the EKS version to 1.16

prometheus-prometheus-kube-prometheus-prometheus-0 0/2 Terminating 0 4s
alertmanager-prometheus-kube-prometheus-alertmanager-0 0/2 Terminating 0 10s
After updating the EKS cluster from 1.15 to 1.16, everything works fine except these two pods; they keep terminating and are unable to initialise. Hence, Prometheus monitoring does not work. I am getting the errors below when describing the pods.
Error: failed to start container "prometheus": Error response from daemon: OCI runtime create failed: container_linux.go:362: creating new parent process caused: container_linux.go:1941: running lstat on namespace path "/proc/29271/ns/ipc" caused: lstat /proc/29271/ns/ipc: no such file or directory: unknown
Error: failed to start container "config-reloader": Error response from daemon: cannot join network of a non running container: 7e139521980afd13dad0162d6859352b0b2c855773d6d4062ee3e2f7f822a0b3
Error: cannot find volume "config" to mount into container "config-reloader"
Error: cannot find volume "config" to mount into container "prometheus"
Here is my YAML file for the deployment:
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/psp: eks.privileged
creationTimestamp: "2021-04-30T16:39:14Z"
deletionGracePeriodSeconds: 600
deletionTimestamp: "2021-04-30T16:49:14Z"
generateName: prometheus-prometheus-kube-prometheus-prometheus-
labels:
app: prometheus
app.kubernetes.io/instance: prometheus-kube-prometheus-prometheus
app.kubernetes.io/managed-by: prometheus-operator
app.kubernetes.io/name: prometheus
app.kubernetes.io/version: 2.26.0
controller-revision-hash: prometheus-prometheus-kube-prometheus-prometheus-56d9fcf57
operator.prometheus.io/name: prometheus-kube-prometheus-prometheus
operator.prometheus.io/shard: "0"
prometheus: prometheus-kube-prometheus-prometheus
statefulset.kubernetes.io/pod-name: prometheus-prometheus-kube-prometheus-prometheus-0
name: prometheus-prometheus-kube-prometheus-prometheus-0
namespace: mo
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: prometheus-prometheus-kube-prometheus-prometheus
uid: 326a09f2-319c-449d-904a-1dd0019c6d80
resourceVersion: "9337443"
selfLink: /api/v1/namespaces/monitoring/pods/prometheus-prometheus-kube-prometheus-prometheus-0
uid: e2be062f-749d-488e-a6cc-42ef1396851b
spec:
containers:
- args:
- --web.console.templates=/etc/prometheus/consoles
- --web.console.libraries=/etc/prometheus/console_libraries
- --config.file=/etc/prometheus/config_out/prometheus.env.yaml
- --storage.tsdb.path=/prometheus
- --storage.tsdb.retention.time=10d
- --web.enable-lifecycle
- --storage.tsdb.no-lockfile
- --web.external-url=http://prometheus-kube-prometheus-prometheus.monitoring:9090
- --web.route-prefix=/
image: quay.io/prometheus/prometheus:v2.26.0
imagePullPolicy: IfNotPresent
name: prometheus
ports:
- containerPort: 9090
name: web
protocol: TCP
readinessProbe:
failureThreshold: 120
httpGet:
path: /-/ready
port: web
scheme: HTTP
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 3
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /etc/prometheus/config_out
name: config-out
readOnly: true
- mountPath: /etc/prometheus/certs
name: tls-assets
readOnly: true
- mountPath: /prometheus
name: prometheus-prometheus-kube-prometheus-prometheus-db
- mountPath: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-kube-prometheus-prometheus-token-mh66q
readOnly: true
- args:
- --listen-address=:8080
- --reload-url=http://localhost:9090/-/reload
- --config-file=/etc/prometheus/config/prometheus.yaml.gz
- --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
- --watched-dir=/etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
command:
- /bin/prometheus-config-reloader
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: SHARD
value: "0"
image: quay.io/prometheus-operator/prometheus-config-reloader:v0.47.0
imagePullPolicy: IfNotPresent
name: config-reloader
ports:
- containerPort: 8080
name: reloader-web
protocol: TCP
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /etc/prometheus/config
name: config
- mountPath: /etc/prometheus/config_out
name: config-out
- mountPath: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-kube-prometheus-prometheus-token-mh66q
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: prometheus-prometheus-kube-prometheus-prometheus-0
nodeName: ip-10-1-49-45.ec2.internal
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 2000
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
serviceAccount: prometheus-kube-prometheus-prometheus
serviceAccountName: prometheus-kube-prometheus-prometheus
subdomain: prometheus-operated
terminationGracePeriodSeconds: 600
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: config
secret:
defaultMode: 420
secretName: prometheus-prometheus-kube-prometheus-prometheus
- name: tls-assets
secret:
defaultMode: 420
secretName: prometheus-prometheus-kube-prometheus-prometheus-tls-assets
- emptyDir: {}
name: config-out
- configMap:
defaultMode: 420
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- emptyDir: {}
name: prometheus-prometheus-kube-prometheus-prometheus-db
- name: prometheus-kube-prometheus-prometheus-token-mh66q
secret:
defaultMode: 420
secretName: prometheus-kube-prometheus-prometheus-token-mh66q
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-04-30T16:39:14Z"
status: "True"
type: PodScheduled
phase: Pending
qosClass: Burstable
In case someone needs the answer: in my situation (described above) there were two Prometheus operators running in different namespaces, one in the default namespace and another in the monitoring namespace. I removed the one in the default namespace, and that resolved the pod crashing issue.
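For anyone checking for the same situation, a quick sketch (the operator's deployment name depends on how it was installed, so adjust the grep and the name accordingly):
kubectl get deployments --all-namespaces | grep -i prometheus-operator
# If an operator shows up in the default namespace in addition to the
# monitoring one, delete the duplicate, for example:
kubectl delete deployment <duplicate-operator-deployment> -n default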

Kubernetes CronJob - Multiple CronJob configuration is not working

I have to run two CronJobs in Kubernetes (AWS EKS) and I have the configuration below. When I apply the template, only one CronJob gets created, and it is always the second one, so it looks like the first one is being overwritten by the second. I am unable to figure out what I am doing wrong.
# Source: deploy-k8s-app/templates/multicron.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
labels:
app: my-app
name: my-app
namespace: commercial
spec:
schedule: '5/15 * * * *'
concurrencyPolicy: Forbid
jobTemplate:
spec:
parallelism: 1
completions: 1
activeDeadlineSeconds: 900
template:
metadata:
labels:
app: my-app
name: my-app
namespace: commercial
spec:
containers:
- env:
- name: SERVER_SERVLET_CONTEXT_PATH
value: "/my-app"
- name: IS_JACOCO_ENABLED
value: "false"
- name: SPRING_PROFILES_ACTIVE
value: "int-dc4"
- name: METRICS_ADDRESS
value: "NA"
- name: APP_MODULE
value: "expand"
- name: JAVA_TOOL_OPTIONS
value: "-Xms256M -Xmx512M"
image: "xxxxx.dkr.ecr.us-east-1.amazonaws.com/my-ecr:my-app-latest-10"
imagePullPolicy: IfNotPresent
name: my-app
ports:
- name: http
containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 160m
memory: 1024Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: apps-logs
mountPath: /var/log/containers
- name: fluentdconf
mountPath: /fluentd/etc
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.11.2-debian-cloudwatch-1.0
env:
- name: REGION
value: us-east-1
- name: AWS_REGION
value: us-east-1
- name: CLUSTER_NAME
value: MY-EKS-Cluster
- name: CI_VERSION
value: "k8s/1.0.1"
- name: LOG_GROUP_NAME
value: /aws/containerinsights/MY-EKS-Cluster/springapp
resources:
limits:
cpu: 160m
memory: 1024Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: fluentdconf
mountPath: /fluentd/etc
- name: apps-logs
mountPath: /var/log/containers
volumes:
- name: fluentdconf
configMap:
name: fluentd-spring-config
- name: apps-logs
emptyDir: {}
- name: my-app-shared
emptyDir: {}
restartPolicy: OnFailure
apiVersion: batch/v1beta1
kind: CronJob
metadata:
labels:
app: my-app
name: my-app-addl
namespace: commercial
spec:
schedule: '15/30 * * * *'
concurrencyPolicy: Forbid
jobTemplate:
spec:
parallelism: 1
completions: 1
activeDeadlineSeconds: 1800
template:
metadata:
labels:
app: my-app
name: my-app
namespace: commercial
spec:
containers:
- env:
- name: SERVER_SERVLET_CONTEXT_PATH
value: "/my-app"
- name: IS_JACOCO_ENABLED
value: "false"
- name: SPRING_PROFILES_ACTIVE
value: "int-dc4"
- name: METRICS_ADDRESS
value: "NA"
- name: APP_MODULE
value: "expand"
- name: JAVA_TOOL_OPTIONS
value: "-Xms256M -Xmx512M"
image: "xxxxx.dkr.ecr.us-east-1.amazonaws.com/my-ecr:my-app-latest-10"
imagePullPolicy: IfNotPresent
name: my-app
ports:
- name: http
containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 160m
memory: 1024Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: apps-logs
mountPath: /var/log/containers
- name: fluentdconf
mountPath: /fluentd/etc
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.11.2-debian-cloudwatch-1.0
env:
- name: REGION
value: us-east-1
- name: AWS_REGION
value: us-east-1
- name: CLUSTER_NAME
value: MY-EKS-Cluster
- name: CI_VERSION
value: "k8s/1.0.1"
- name: LOG_GROUP_NAME
value: /aws/containerinsights/MY-EKS-Cluster/springapp
resources:
limits:
cpu: 160m
memory: 1024Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: fluentdconf
mountPath: /fluentd/etc
- name: apps-logs
mountPath: /var/log/containers
volumes:
- name: fluentdconf
configMap:
name: fluentd-spring-config
- name: apps-logs
emptyDir: {}
- name: my-app-shared
emptyDir: {}
restartPolicy: OnFailure
kubectl apply -f multicron.yaml
cronjob.batch/my-app-addl created
(Expectation: Two CronJobs to be created. Actual: Only one is created, and that is the second one)
kubectl get cronjob -n commercial
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
my-app-addl 15/30 * * * * False 0 <none> 9s
Thanks!
Abhilash
I could solve this by separating the YAML documents with --- between the CronJob entries.
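For reference, a minimal sketch of how the two documents need to be split, trimmed down to the essentials of the manifests above; without the --- the file is read as a single document, which is why only the second CronJob showed up:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-app
  namespace: commercial
spec:
  schedule: '5/15 * * * *'
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-app
            image: "xxxxx.dkr.ecr.us-east-1.amazonaws.com/my-ecr:my-app-latest-10"
          restartPolicy: OnFailure
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-app-addl
  namespace: commercial
spec:
  schedule: '15/30 * * * *'
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-app
            image: "xxxxx.dkr.ecr.us-east-1.amazonaws.com/my-ecr:my-app-latest-10"
          restartPolicy: OnFailure
With the separator in place, kubectl apply -f multicron.yaml should report both cronjob.batch/my-app and cronjob.batch/my-app-addl as created.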

How to mount a kubernetes.io/dockerconfigjson

I have a secret of type kubernetes.io/dockerconfigjson:
$ kubectl describe secrets dockerjson
Name: dockerjson
Namespace: my-prd
Labels: <none>
Annotations: <none>
Type: kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson: 1335 bytes
When I try to mount this secret into a container, I cannot find a config.json:
- name: dump
image: kaniko-executor:debug
imagePullPolicy: Always
command: ["/busybox/find", "/", "-name", "config.json"]
volumeMounts:
- name: docker-config
mountPath: /foobar
volumes:
- name: docker-config
secret:
secretName: dockerjson
defaultMode: 256
which only prints:
/kaniko/.docker/config.json
Is this supported at all, or am I doing something wrong?
I am using OpenShift 3.9, which should be Kubernetes 1.9.
apiVersion: v1
kind: Pod
metadata:
name: kaniko
spec:
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:debug-v0.9.0
command:
- /busybox/cat
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 0.5
memory: 500Mi
tty: true
volumeMounts:
- name: docker-config
mountPath: /kaniko/.docker/
volumes:
- name: docker-config
secret:
secretName: dockerjson
items:
- key: .dockerconfigjson
path: config.json
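With the items mapping above, the .dockerconfigjson key of the secret is projected into the volume as a file named config.json, so it ends up at /kaniko/.docker/config.json inside the container. A quick sanity check, assuming the pod is named kaniko as in the spec above:
kubectl exec kaniko -- /busybox/ls /kaniko/.docker/
kubectl exec kaniko -- /busybox/cat /kaniko/.docker/config.json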

Update Kafka in Kubernetes causes downtime

I'm running a 4-broker Kafka cluster in Kubernetes. The replication factor is 3 and the minimum ISR is 2.
In addition, there's a producer service (running Spring stream) generating messages and a consumer service reading from the topic. Now I tried to update the Kafka cluster with a rolling update, hoping for no downtime, but during the update, the producer's log was filled with this error:
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
According to my calculation, there shouldn't be a problem when one broker is down, because the min ISR is 2. However, it seems like the producer service is unaware of the rolling update and keeps sending messages to the same broker...
Any ideas on how to solve it?
This is my kafka.yaml:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: kafka
namespace: default
labels:
app: kafka
spec:
serviceName: kafka
replicas: 4
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: kafka
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9308"
spec:
nodeSelector:
middleware.node: "true"
imagePullSecrets:
- name: nexus-registry
terminationGracePeriodSeconds: 300
containers:
- name: kafka
image: kafka:2.12-2.1.0
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 3000m
memory: 1800Mi
requests:
cpu: 2000m
memory: 1800Mi
env:
# Replication
- name: KAFKA_DEFAULT_REPLICATION_FACTOR
value: "3"
- name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "3"
- name: KAFKA_MIN_INSYNC_REPLICAS
value: "2"
# Protocol Version
- name: KAFKA_INTER_BROKER_PROTOCOL_VERSION
value: "2.1"
- name: KAFKA_LOG_MESSAGE_FORMAT_VERSION
value: "2.1"
- name: ENABLE_AUTO_EXTEND
value: "true"
- name: KAFKA_DELETE_TOPIC_ENABLE
value: "true"
- name: KAFKA_RESERVED_BROKER_MAX_ID
value: "999999999"
- name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
value: "true"
- name: KAFKA_PORT
value: "9092"
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_NUM_RECOVERY_THREADS_PER_DATA_DIR
value: "10"
- name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
value: "3"
- name: KAFKA_LOG_RETENTION_BYTES
value: "1800000000000"
- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: KAFKA_OFFSETS_RETENTION_MINUTES
value: "10080"
- name: KAFKA_ZOOKEEPER_CONNECT
valueFrom:
configMapKeyRef:
name: zk-config
key: zk.endpoints
- name: KAFKA_LOG_DIRS
value: /kafka/kafka-logs
ports:
- name: kafka
containerPort: 9092
- name: prometheus
containerPort: 7071
volumeMounts:
- name: data
mountPath: /kafka
readinessProbe:
tcpSocket:
port: 9092
timeoutSeconds: 1
failureThreshold: 12
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
- name: kafka-exporter
image: danielqsj/kafka-exporter:latest
resources:
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 500m
memory: 500Mi
ports:
- containerPort: 9308
volumeClaimTemplates:
- metadata:
name: data
labels:
app: kafka
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2000Gi

How to change user and group owner for VolumeMount

I want to set up a pod with two containers running inside it, both of which access a mounted file, /var/run/udspath.
In container serviceC, I need to change the user and group owner of /var/run/udspath, so I added a command to the YAML file, but it does not work.
kubectl apply does not complain, but container serviceC is not created.
Without the "command: ['/bin/sh', '-c', 'sudo chown 1337:1337 /var/run/udspath']" entry, the container is created.
apiVersion: v1
kind: Service
metadata:
name: clitool
labels:
app: httpbin
spec:
ports:
- name: http
port: 8000
selector:
app: httpbin
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
name: clitool
spec:
replicas: 1
strategy: {}
template:
metadata:
annotations:
sidecar.istio.io/status: '{"version":"1c09c07e5751560367349d807c164267eaf5aea4018b4588d884f7d265cf14a4","initContainers":["istio-init"],"containers":["serviceC"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
creationTimestamp: null
labels:
app: httpbin
version: v1
spec:
containers:
- image:
name: serviceA
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /var/run/udspath
name: sdsudspath
- image:
imagePullPolicy: IfNotPresent
name: serviceB
ports:
- containerPort: 8000
resources: {}
- args:
- proxy
- sidecar
- --configPath
- /etc/istio/proxy
- --binaryPath
- /usr/local/bin/envoy
- --serviceCluster
- httpbin
- --drainDuration
- 45s
- --parentShutdownDuration
- 1m0s
- --discoveryAddress
- istio-pilot.istio-system:15007
- --discoveryRefreshDelay
- 1s
- --zipkinAddress
- zipkin.istio-system:9411
- --connectTimeout
- 10s
- --statsdUdpAddress
- istio-statsd-prom-bridge.istio-system:9125
- --proxyAdminPort
- "15000"
- --controlPlaneAuthPolicy
- NONE
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: ISTIO_META_INTERCEPTION_MODE
value: REDIRECT
image:
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "sudo chown 1337:1337 /var/run/udspath"]
name: serviceC
resources:
requests:
cpu: 10m
securityContext:
privileged: false
readOnlyRootFilesystem: true
runAsUser: 1337
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/certs/
name: istio-certs
readOnly: true
- mountPath: /var/run/udspath
name: sdsudspath
initContainers:
- args:
- -p
- "15001"
- -u
- "1337"
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- 8000,
- -d
- ""
image: docker.io/quanlin/proxy_init:180712-1038
imagePullPolicy: IfNotPresent
name: istio-init
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
privileged: true
volumes:
- name: sdsudspath
hostPath:
path: /var/run/udspath
- emptyDir:
medium: Memory
name: istio-envoy
- name: istio-certs
secret:
optional: true
secretName: istio.default
status: {}
---
kubectl describe pod xxx shows:
serviceC:
Container ID:
Image:
Image ID:
Port: <none>
Command:
/bin/sh
Args:
-c
sudo chown 1337:1337 /var/run/udspath
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 30 Jul 2018 10:30:04 -0700
Finished: Mon, 30 Jul 2018 10:30:04 -0700
Ready: False
Restart Count: 2
Requests:
cpu: 10m
Environment:
POD_NAME: clitool-5d548b856-6v9p9 (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: clitool-5d548b856-6v9p9 (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from certs (ro)
/etc/istio/proxy from envoy (rw)
/var/run/udspath from sdsudspath (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-g2zzv (ro)
More information would be helpful, such as what error you are getting.
Nevertheless, it really depends on what is defined in serviceC's Dockerfile ENTRYPOINT or CMD.
Mapping between Docker and Kubernetes:
Docker ENTRYPOINT --> Pod command (the command run by the container)
Docker CMD --> Pod args (the arguments passed to the command)
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
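In other words, setting command on serviceC replaces the image's ENTRYPOINT, so the container runs the chown, exits with code 0 and goes into CrashLoopBackOff, which matches the describe output above. A minimal sketch of chaining the chown with the original process instead; the entrypoint path and its arguments below are hypothetical placeholders, so substitute whatever the serviceC image actually defines:
command: ["/bin/sh", "-c"]
args:
  # chown first, then hand the process over to the image's real entrypoint;
  # /usr/local/bin/real-entrypoint and its args are placeholders.
  - "chown 1337:1337 /var/run/udspath && exec /usr/local/bin/real-entrypoint proxy sidecar"
Note that chown itself may also require root privileges, which runAsUser: 1337 does not provide, so the ownership change may need to happen in a privileged init container instead.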