Here's the Jenkinsfile I'm spinning up:
pipeline {
agent {
kubernetes {
yaml '''
apiVersion: v1
kind: Pod
metadata:
name: kaniko
namespace: jenkins
spec:
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:v1.8.1-debug
imagePullPolicy: IfNotPresent
command:
- /busybox/cat
tty: true
volumeMounts:
- name: jenkins-docker-cfg
mountPath: /kaniko/.docker
- name: image-cache
mountPath: /cache
imagePullSecrets:
- name: regcred
volumes:
- name: image-cache
persistentVolumeClaim:
claimName: kaniko-cache-pvc
- name: jenkins-docker-cfg
projected:
sources:
- secret:
name: regcred
items:
- key: .dockerconfigjson
path: config.json
'''
}
}
stages {
stage('Build & Cache Image'){
steps{
container(name: 'kaniko', shell: '/busybox/sh') {
withEnv(['PATH+EXTRA=/busybox']) {
sh '''#!/busybox/sh -xe
/kaniko/executor \
--cache \
--cache-dir=/cache \
--dockerfile Dockerfile \
--context `pwd`/Dockerfile \
--insecure \
--skip-tls-verify \
--destination testrepo/kaniko-test:0.0.1'''
}
}
}
}
}
}
The problem is that the executor doesn't dump the cache anywhere I can find. If I rerun the pod and the stage, the executor logs say there is no cache. As you can see, I want to retain the cache using a PVC. Any thoughts? Am I missing something?
Thanks in advance.
You should use a separate kaniko-warmer pod, which pre-downloads the specific base images you need into the cache directory. The executor treats --cache-dir as a read-only base-image cache and does not populate it itself, which is why nothing shows up on the PVC after your builds; it is the warmer that fills that directory (layer caching, by contrast, is pushed to a registry via --cache-repo).
- name: kaniko-warmer
image: gcr.io/kaniko-project/warmer:latest
args: ["--cache-dir=/cache",
"--image=nginx:1.17.1-alpine",
"--image=node:17"]
volumeMounts:
- name: kaniko-cache
mountPath: /cache
volumes:
- name: kaniko-cache
hostPath:
path: /opt/volumes/database/qazexam-front-cache
type: DirectoryOrCreate
Then the kaniko-cache volume can be mounted into the kaniko executor container.
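For example, since your pipeline already mounts kaniko-cache-pvc at /cache in the executor, you can point a warmer pod at the same claim. A minimal sketch, not tested against your cluster; the image list is illustrative, warm whatever base images your Dockerfile actually uses:
# One-off warmer pod that fills the existing PVC used by the Jenkins agent
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-warmer
  namespace: jenkins
spec:
  restartPolicy: Never
  containers:
  - name: kaniko-warmer
    image: gcr.io/kaniko-project/warmer:latest
    args:
    - --cache-dir=/cache
    - --image=nginx:1.17.1-alpine   # illustrative; list your real base images
    volumeMounts:
    - name: image-cache
      mountPath: /cache
  volumes:
  - name: image-cache
    persistentVolumeClaim:
      claimName: kaniko-cache-pvc
Run it once (or on a schedule) before the Jenkins builds; subsequent executor runs should then find the warmed base images under /cache.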
I'm trying to use kustomize to patch a Kubernetes resource, but the order of the initContainers list is different in the output.
For example, the input is:
apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox:1.28
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: init-mydb
image: busybox:1.28
command: ['sh', '-c', "sleep 3600"]
- name: init-myservice
image: busybox:1.28
command: ['sh', '-c', "sleep 7200"]
After the patch, the output becomes:
apiVersion: v1
kind: Pod
metadata:
labels:
app: myapp
name: myapp-pod
spec:
containers:
- command:
- sh
- -c
- echo The app is running! && sleep 3600
image: busybox:1.28
name: myapp-container
initContainers:
- command:
- sh
- -c
- sleep 7200
env:
- name: HTTP_ADDR
value: https://[$(HOST_IP)]:8501
image: busybox:1.28
name: init-myservice
- command:
- sh
- -c
- sleep 3600
env:
- name: HTTP_ADDR
value: https://[$(HOST_IP)]:8501
image: busybox:1.28
name: init-mydb
I have tried the --reorder argument, but it doesn't help.
Version tested:
{Version:kustomize/v4.1.3 GitCommit:0f614e92f72f1b938a9171b964d90b197ca8fb68 BuildDate:2021-05-20T20:52:40Z GoOs:linux GoArch:amd64}
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- source.yaml
patches:
- path: ./pod-patch.yaml
target:
kind: Pod
name: ".*"
pod-patch.yaml
apiVersion: apps/v1
kind: Pod
metadata:
name: doesNotMatter
spec:
initContainers:
- name: init-myservice
env:
- name: HTTP_ADDR
value: https://[$(HOST_IP)]:8501
- name: init-mydb
env:
- name: HTTP_ADDR
value: https://[$(HOST_IP)]:8501
This is a non-issue. The order is different because you've inverted it in your pod-patch.yaml.
In source.yaml, the order of the initContainers is [init-mydb, init-myservice]. In pod-patch.yaml it's [init-myservice, init-mydb].
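Following that diagnosis, keeping the entries in pod-patch.yaml in the same order as source.yaml preserves the original ordering in the output. For example (only the order changes; as an aside, Pod belongs to the core v1 API rather than apps/v1):
apiVersion: v1
kind: Pod
metadata:
  name: doesNotMatter
spec:
  initContainers:
  - name: init-mydb
    env:
    - name: HTTP_ADDR
      value: https://[$(HOST_IP)]:8501
  - name: init-myservice
    env:
    - name: HTTP_ADDR
      value: https://[$(HOST_IP)]:8501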
I found this question, Pass date command as parameter to kubernetes cronjob, which is similar, but it did not solve my problem.
I'm trying to back up etcd using a CronJob, but the etcd image doesn't have the "date" command.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: backup
namespace: kube-system
spec:
concurrencyPolicy: Allow
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- args:
- -c
- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
--cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
snapshot save /backup/etcd-snapshot-$(DATE_CURR).db
command:
- /bin/sh
env:
- name: ETCDCTL_API
value: "3"
- name: DATE_CURR
#value: $(date --date= +"%Y-%m-%d_%H:%M:%S_%Z")
value: $(date +"%Y-%m-%d_%H:%M:%S_%Z")
image: k8s.gcr.io/etcd:3.4.13-0
imagePullPolicy: IfNotPresent
name: backup
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
readOnly: true
- mountPath: /backup
name: backup
- args:
- -c
- find /backup -type f -mtime +30 -exec rm -f {} \;
command:
- /bin/sh
env:
- name: ETCDCTL_API
value: "3"
image: k8s.gcr.io/etcd:3.4.13-0
imagePullPolicy: IfNotPresent
name: cleanup
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /backup
name: backup
dnsPolicy: ClusterFirst
hostNetwork: true
nodeName: homelab-a
restartPolicy: OnFailure
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /etc/kubernetes/pki/etcd
type: DirectoryOrCreate
name: etcd-certs
- hostPath:
path: /opt/etcd_backups
type: DirectoryOrCreate
name: backup
schedule: 0 */6 * * *
successfulJobsHistoryLimit: 3
suspend: false
When this runs, it produces a file named "etcd-snapshot-.db", with no date. When I can catch the logs, they say that "date" is not a known command. Sure enough, when I exec into the pod while it's running, date does not work in the etcd image. How can I pass the date from the system as a variable so that it just uses the text instead of trying to invoke the "date" command?
Edit: Thanks to @Andrew, here is the solution:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: backup
namespace: kube-system
spec:
concurrencyPolicy: Allow
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- args:
- -c
- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
--cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
snapshot save /backup/etcd-snapshot-$(echo `printf "%(%Y-%m-%d_%H:%M:%S_%Z)T\n"`).db
command:
- /bin/sh
env:
- name: ETCDCTL_API
value: "3"
image: k8s.gcr.io/etcd:3.4.13-0
imagePullPolicy: IfNotPresent
name: backup
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
readOnly: true
- mountPath: /backup
name: backup
- args:
- -c
- find /backup -type f -mtime +30 -exec rm -f {} \;
command:
- /bin/sh
env:
- name: ETCDCTL_API
value: "3"
image: k8s.gcr.io/etcd:3.4.13-0
imagePullPolicy: IfNotPresent
name: cleanup
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /backup
name: backup
dnsPolicy: ClusterFirst
hostNetwork: true
nodeName: homelab-a
restartPolicy: OnFailure
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /etc/kubernetes/pki/etcd
type: DirectoryOrCreate
name: etcd-certs
- hostPath:
path: /opt/etcd_backups
type: DirectoryOrCreate
name: backup
schedule: 0 */6 * * *
successfulJobsHistoryLimit: 3
suspend: false
You can use the shell built-in printf instead of the missing date binary. (Your env var approach fails because the value of DATE_CURR is never evaluated by a shell on the host; the literal string $(date ...) is passed into the container, where /bin/sh then tries to run date, which the etcd image doesn't have.) Use it like this:
printf "%(%Y-%m-%d_%H:%M:%S_%Z)T\n"
Use man strftime to get conversion specification sequences.
I just tried it inside the etcd container on Kubernetes 1.19:
etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key snapshot save /tmp/etcd-snapshot-$(printf "%(%Y-%m-%d_%H:%M:%S_%Z)T\n").db
{"level":"info","ts":1603748998.7475638,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/tmp/etcd-snapshot-2020-10-26_21:49:58_UTC.db.part"}
{"level":"info","ts":"2020-10-26T21:49:58.760Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1603748998.7616487,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":"2020-10-26T21:49:58.889Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1603748998.9136698,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"5.9 MB","took":0.165411663}
{"level":"info","ts":1603748998.914325,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/tmp/etcd-snapshot-2020-10-26_21:49:58_UTC.db"}
Snapshot saved at /tmp/etcd-snapshot-2020-10-26_21:49:58_UTC.db
Edit: a reproducible example using a Job, without passing any env vars:
apiVersion: batch/v1
kind: Job
metadata:
name: printf
spec:
template:
spec:
containers:
- name: printf
image: k8s.gcr.io/etcd:3.4.9-1
command: ["/bin/sh"]
args:
- -c
- echo `printf "%(%Y-%m-%d_%H:%M:%S_%Z)T\n"`
restartPolicy: Never
backoffLimit: 4
# kubectl create -f job.yaml
job.batch/printf created
# kubectl logs printf-lfcdh
2020-10-27_10:03:59_UTC
Notice the use of backticks (``) instead of $().
I created a pod following a Red Hat blog post, and then created a second pod using the YAML file generated from it.
Post: https://www.redhat.com/sysadmin/compose-podman-pods
When creating the pod using the commands, it works fine (I can access localhost:8080).
When creating the pod using the YAML file, I get a 403 Forbidden error.
I have tried this on two different hosts (both creating the pod from scratch and from the YAML file), deleting all images and the pod each time to make sure nothing was influencing the process.
I'm using podman 2.0.4 on Ubuntu 20.04
Commands:
podman pod create --name wptestpod -p 8080:80
podman run \
-d --restart=always --pod=wptestpod \
-e MYSQL_ROOT_PASSWORD="myrootpass" \
-e MYSQL_DATABASE="wp" \
-e MYSQL_USER="wordpress" \
-e MYSQL_PASSWORD="w0rdpr3ss" \
--name=wptest-db mariadb
podman run \
-d --restart=always --pod=wptestpod \
-e WORDPRESS_DB_NAME="wp" \
-e WORDPRESS_DB_USER="wordpress" \
-e WORDPRESS_DB_PASSWORD="w0rdpr3ss" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
--name wptest-web wordpress
Original YAML file from podman generate kube wptestpod > wptestpod.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.0.4
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: '2020-08-26T17:02:56Z'
labels:
app: wptestpod
name: wptestpod
spec:
containers:
- command:
- apache2-foreground
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_USER
value: wordpress
- name: APACHE_CONFDIR
value: /etc/apache2
- name: PHP_LDFLAGS
value: -Wl,-O1 -pie
- name: PHP_VERSION
value: 7.4.9
- name: PHP_EXTRA_CONFIGURE_ARGS
value: --with-apxs2 --disable-cgi
- name: GPG_KEYS
value: 42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
- name: WORDPRESS_DB_PASSWORD
value: t3stp4ssw0rd
- name: APACHE_ENVVARS
value: /etc/apache2/envvars
- name: PHP_ASC_URL
value: https://www.php.net/distributions/php-7.4.9.tar.xz.asc
- name: PHP_SHA256
value: 23733f4a608ad1bebdcecf0138ebc5fd57cf20d6e0915f98a9444c3f747dc57b
- name: PHP_URL
value: https://www.php.net/distributions/php-7.4.9.tar.xz
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: PHP_CPPFLAGS
value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
- name: PHP_MD5
- name: PHP_EXTRA_BUILD_DEPS
value: apache2-dev
- name: PHP_CFLAGS
value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
- name: WORDPRESS_SHA1
value: 03fe1a139b3cd987cc588ba95fab2460cba2a89e
- name: PHPIZE_DEPS
value: "autoconf \t\tdpkg-dev \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkg-config \t\tre2c"
- name: WORDPRESS_VERSION
value: '5.5'
- name: PHP_INI_DIR
value: /usr/local/etc/php
- name: HOSTNAME
value: wptestpod
image: docker.io/library/wordpress:latest
name: wptest-web
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- command:
- mysqld
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: MYSQL_PASSWORD
value: t3stp4ssw0rd
- name: GOSU_VERSION
value: '1.12'
- name: GPG_KEYS
value: 177F4010FE56CA3336300305F1656F24C74CD1D8
- name: MARIADB_MAJOR
value: '10.5'
- name: MYSQL_ROOT_PASSWORD
value: t3stp4ssw0rd
- name: MARIADB_VERSION
value: 1:10.5.5+maria~focal
- name: MYSQL_DATABASE
value: wp
- name: MYSQL_USER
value: wordpress
- name: HOSTNAME
value: wptestpod
image: docker.io/library/mariadb:latest
name: wptest-db
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
---
metadata:
creationTimestamp: null
spec: {}
status:
loadBalancer: {}
YAML file with certain envs removed (taken from blog post):
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.9.3
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-07-01T20:17:42Z"
labels:
app: wptestpod
name: wptestpod
spec:
containers:
- name: wptest-web
env:
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: w0rdpr3ss
image: docker.io/library/wordpress:latest
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- name: wptest-db
env:
- name: MYSQL_ROOT_PASSWORD
value: myrootpass
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: w0rdpr3ss
- name: MYSQL_DATABASE
value: wp
image: docker.io/library/mariadb:latest
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
Can anyone see why this pod would not work when created using the YAML file, but works fine when created using the commands? It seems like a good workflow, but it's useless if the pods produced with the YAML are non-functional.
I found the same article, and hit the same problem as you. None of the following tests worked for me:
Add and remove environment variables
Add and remove restartPolicy part
Play with the capabilities part
As soon as you put the command part back, everything fires up again.
Check it with the following wordpress.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.2.1
apiVersion: v1
kind: Pod
metadata:
labels:
app: wordpress-pod
name: wordpress-pod
spec:
containers:
- command:
- apache2-foreground
name: wptest-web
env:
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: w0rdpr3ss
image: docker.io/library/wordpress:latest
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- command:
- mysqld
name: wptest-db
env:
- name: MYSQL_ROOT_PASSWORD
value: myrootpass
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: w0rdpr3ss
- name: MYSQL_DATABASE
value: wp
image: docker.io/library/mariadb:latest
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
Play & checks:
# Create containers, pod and run everything
$ podman play kube wordpress.yaml
# Output
Pod:
5a211c35419b4fcf0deda718e47eec2dd10653a5c5bacc275c312ae75326e746
Containers:
bfd087b5649f8d1b3c62ef86f28f4bcce880653881bcda21823c09e0cca1c85b
5aceb11500db0a91b4db2cc4145879764e16ed0e8f95a2f85d9a55672f65c34b
# Check running state
$ podman container ls; podman pod ls
# Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aceb11500db docker.io/library/mariadb:latest mysqld 13 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-db
bfd087b5649f docker.io/library/wordpress:latest apache2-foregroun... 16 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-web
d8bf33eede43 k8s.gcr.io/pause:3.2 19 seconds ago Up 11 seconds ago 0.0.0.0:8080->80/tcp 5a211c35419b-infra
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
5a211c35419b wordpress-pod Running 20 seconds ago d8bf33eede43 3
A bit more explanation about the bug:
The problem is that the ENTRYPOINT and CMD are not parsed correctly from the images, as they should be and as you would expect. This worked in previous versions, and the bug has already been identified and fixed for upcoming releases.
For complete reference:
A comment at podman#8710-comment.748672710 breaks this problem into two pieces:
"make podman play use ENVs from image" (podman#8654, already fixed in mainstream)
"podman play should honour both ENTRYPOINT and CMD from image" (podman#8666)
The latter was superseded by "play kube: fix args/command handling" (podman#8807, the one already merged into mainstream).
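If you are stuck on an affected podman version, you can recover each image's ENTRYPOINT and CMD yourself and paste them back into the generated YAML as command/args. An illustrative check (not from the original thread):
podman image inspect docker.io/library/wordpress:latest --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'
podman image inspect docker.io/library/mariadb:latest --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'
For these two images the CMDs should show apache2-foreground and mysqld, matching the command entries in the working YAML above.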
I have a secret of type kubernetes.io/dockerconfigjson:
$ kubectl describe secrets dockerjson
Name: dockerjson
Namespace: my-prd
Labels: <none>
Annotations: <none>
Type: kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson: 1335 bytes
When I try to mount this secret into a container, I cannot find a config.json:
- name: dump
image: kaniko-executor:debug
imagePullPolicy: Always
command: ["/busybox/find", "/", "-name", "config.json"]
volumeMounts:
- name: docker-config
mountPath: /foobar
volumes:
- name: docker-config
secret:
secretName: dockerjson
defaultMode: 256
which only prints:
/kaniko/.docker/config.json
Is this supported at all or am I doing something wrong?
I am using OpenShift 3.9, which should be Kubernetes 1.9.
It is supported; the trick is to project the .dockerconfigjson key to a config.json path using items in the secret volume, as in the following pod spec:
apiVersion: v1
kind: Pod
metadata:
name: kaniko
spec:
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:debug-v0.9.0
command:
- /busybox/cat
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 0.5
memory: 500Mi
tty: true
volumeMounts:
- name: docker-config
mountPath: /kaniko/.docker/
volumes:
- name: docker-config
secret:
secretName: dockerjson
items:
- key: .dockerconfigjson
path: config.json
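Once that pod is running, you can confirm that the projected key landed where kaniko expects it (assuming the pod and the secret live in the same namespace):
kubectl exec kaniko -- /busybox/ls /kaniko/.docker/
which should list config.json.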
I want to set up a pod with two containers running inside it, both of which access a mounted path, /var/run/udspath.
In container serviceC, I need to change the user and group owner of /var/run/udspath, so I added a command to the YAML file, but it does not work.
kubectl apply does not complain, but container serviceC never comes up.
Without the "command: ['/bin/sh', '-c', 'sudo chown 1337:1337 /var/run/udspath']", the container can be created.
apiVersion: v1
kind: Service
metadata:
name: clitool
labels:
app: httpbin
spec:
ports:
- name: http
port: 8000
selector:
app: httpbin
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
name: clitool
spec:
replicas: 1
strategy: {}
template:
metadata:
annotations:
sidecar.istio.io/status: '{"version":"1c09c07e5751560367349d807c164267eaf5aea4018b4588d884f7d265cf14a4","initContainers":["istio-init"],"containers":["serviceC"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
creationTimestamp: null
labels:
app: httpbin
version: v1
spec:
containers:
- image:
name: serviceA
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /var/run/udspath
name: sdsudspath
- image:
imagePullPolicy: IfNotPresent
name: serviceB
ports:
- containerPort: 8000
resources: {}
- args:
- proxy
- sidecar
- --configPath
- /etc/istio/proxy
- --binaryPath
- /usr/local/bin/envoy
- --serviceCluster
- httpbin
- --drainDuration
- 45s
- --parentShutdownDuration
- 1m0s
- --discoveryAddress
- istio-pilot.istio-system:15007
- --discoveryRefreshDelay
- 1s
- --zipkinAddress
- zipkin.istio-system:9411
- --connectTimeout
- 10s
- --statsdUdpAddress
- istio-statsd-prom-bridge.istio-system:9125
- --proxyAdminPort
- "15000"
- --controlPlaneAuthPolicy
- NONE
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: ISTIO_META_INTERCEPTION_MODE
value: REDIRECT
image:
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "sudo chown 1337:1337 /var/run/udspath"]
name: serviceC
resources:
requests:
cpu: 10m
securityContext:
privileged: false
readOnlyRootFilesystem: true
runAsUser: 1337
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/certs/
name: istio-certs
readOnly: true
- mountPath: /var/run/udspath
name: sdsudspath
initContainers:
- args:
- -p
- "15001"
- -u
- "1337"
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- 8000,
- -d
- ""
image: docker.io/quanlin/proxy_init:180712-1038
imagePullPolicy: IfNotPresent
name: istio-init
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
privileged: true
volumes:
- name: sdsudspath
hostPath:
path: /var/run/udspath
- emptyDir:
medium: Memory
name: istio-envoy
- name: istio-certs
secret:
optional: true
secretName: istio.default
status: {}
---
kubectl describe pod xxx shows:
serviceC:
Container ID:
Image:
Image ID:
Port: <none>
Command:
/bin/sh
Args:
-c
sudo chown 1337:1337 /var/run/udspath
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 30 Jul 2018 10:30:04 -0700
Finished: Mon, 30 Jul 2018 10:30:04 -0700
Ready: False
Restart Count: 2
Requests:
cpu: 10m
Environment:
POD_NAME: clitool-5d548b856-6v9p9 (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: clitool-5d548b856-6v9p9 (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from certs (ro)
/etc/istio/proxy from envoy (rw)
/var/run/udspath from sdsudspath (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-g2zzv (ro)
More information would be helpful, like what error you are getting.
That said, it really depends on what serviceC's Dockerfile defines as its ENTRYPOINT and CMD. Because you set both command and args, you replace them entirely, so the container's only process is the chown; it finishes immediately (Reason: Completed, Exit Code: 0), and since the pod keeps restarting it, you end up in CrashLoopBackOff.
Mapping between Docker and Kubernetes:
Docker ENTRYPOINT --> Pod command (the command run by the container)
Docker CMD --> Pod args (the arguments passed to the command)
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
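If the goal is simply that /var/run/udspath is owned by 1337 before serviceC uses it, one common alternative is to do the chown in an extra init container instead of overriding serviceC's command. A sketch, not taken from the original manifests; the busybox image and root user are assumptions:
initContainers:
- name: fix-udspath-owner   # hypothetical name; runs to completion before the app containers start
  image: busybox:1.28       # assumed utility image; anything providing chown works
  command: ['sh', '-c', 'chown 1337:1337 /var/run/udspath']
  securityContext:
    runAsUser: 0            # chown needs root; serviceC can keep runAsUser: 1337
  volumeMounts:
  - mountPath: /var/run/udspath
    name: sdsudspath
serviceC can then drop the command/args override entirely and start with its normal entrypoint.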