Why are podman pods not reproducible using a Kubernetes YAML file?

I created a pod following a Red Hat blog post and then created a second pod from the YAML file that podman generated.
Post: https://www.redhat.com/sysadmin/compose-podman-pods
When I create the pod using the commands, it works fine (I can access localhost:8080).
When I create the pod from the YAML file, I get a 403 Forbidden error.
I have tried this on two different hosts (both creating the pod from scratch and from the YAML), deleting all images and the pod each time to make sure nothing was influencing the process.
I'm using podman 2.0.4 on Ubuntu 20.04.
Commands:
podman pod create --name wptestpod -p 8080:80
podman run \
-d --restart=always --pod=wptestpod \
-e MYSQL_ROOT_PASSWORD="myrootpass" \
-e MYSQL_DATABASE="wp" \
-e MYSQL_USER="wordpress" \
-e MYSQL_PASSWORD="w0rdpr3ss" \
--name=wptest-db mariadb
podman run \
-d --restart=always --pod=wptestpod \
-e WORDPRESS_DB_NAME="wp" \
-e WORDPRESS_DB_USER="wordpress" \
-e WORDPRESS_DB_PASSWORD="w0rdpr3ss" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
--name wptest-web wordpress
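For reference, this is roughly the export-and-replay cycle I'm using (a sketch from memory; the exact teardown commands may have differed):
# export the running pod to Kubernetes YAML
podman generate kube wptestpod > wptestpod.yaml
# remove the running pod so the name is free again
podman pod stop wptestpod
podman pod rm wptestpod
# recreate everything from the YAML
podman play kube wptestpod.yaml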
Original YAML file from podman generate kube wptestpod > wptestpod.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.0.4
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: '2020-08-26T17:02:56Z'
labels:
app: wptestpod
name: wptestpod
spec:
containers:
- command:
- apache2-foreground
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_USER
value: wordpress
- name: APACHE_CONFDIR
value: /etc/apache2
- name: PHP_LDFLAGS
value: -Wl,-O1 -pie
- name: PHP_VERSION
value: 7.4.9
- name: PHP_EXTRA_CONFIGURE_ARGS
value: --with-apxs2 --disable-cgi
- name: GPG_KEYS
value: 42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
- name: WORDPRESS_DB_PASSWORD
value: t3stp4ssw0rd
- name: APACHE_ENVVARS
value: /etc/apache2/envvars
- name: PHP_ASC_URL
value: https://www.php.net/distributions/php-7.4.9.tar.xz.asc
- name: PHP_SHA256
value: 23733f4a608ad1bebdcecf0138ebc5fd57cf20d6e0915f98a9444c3f747dc57b
- name: PHP_URL
value: https://www.php.net/distributions/php-7.4.9.tar.xz
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: PHP_CPPFLAGS
value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
- name: PHP_MD5
- name: PHP_EXTRA_BUILD_DEPS
value: apache2-dev
- name: PHP_CFLAGS
value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
- name: WORDPRESS_SHA1
value: 03fe1a139b3cd987cc588ba95fab2460cba2a89e
- name: PHPIZE_DEPS
value: "autoconf \t\tdpkg-dev \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkg-config \t\tre2c"
- name: WORDPRESS_VERSION
value: '5.5'
- name: PHP_INI_DIR
value: /usr/local/etc/php
- name: HOSTNAME
value: wptestpod
image: docker.io/library/wordpress:latest
name: wptest-web
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- command:
- mysqld
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: MYSQL_PASSWORD
value: t3stp4ssw0rd
- name: GOSU_VERSION
value: '1.12'
- name: GPG_KEYS
value: 177F4010FE56CA3336300305F1656F24C74CD1D8
- name: MARIADB_MAJOR
value: '10.5'
- name: MYSQL_ROOT_PASSWORD
value: t3stp4ssw0rd
- name: MARIADB_VERSION
value: 1:10.5.5+maria~focal
- name: MYSQL_DATABASE
value: wp
- name: MYSQL_USER
value: wordpress
- name: HOSTNAME
value: wptestpod
image: docker.io/library/mariadb:latest
name: wptest-db
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
---
metadata:
creationTimestamp: null
spec: {}
status:
loadBalancer: {}
YAML file with certain envs removed (taken from blog post):
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.9.3
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-07-01T20:17:42Z"
labels:
app: wptestpod
name: wptestpod
spec:
containers:
- name: wptest-web
env:
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: w0rdpr3ss
image: docker.io/library/wordpress:latest
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- name: wptest-db
env:
- name: MYSQL_ROOT_PASSWORD
value: myrootpass
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: w0rdpr3ss
- name: MYSQL_DATABASE
value: wp
image: docker.io/library/mariadb:latest
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
Can anyone see why this pod would not work when created using the YAML file, but works fine when created using the commands? It seems like a good workflow, but it's useless if the pods produced with the YAML are non-functional.

I found the same article and hit the same problem as you. None of the following tests worked for me:
Adding and removing environment variables
Adding and removing the restartPolicy part
Playing with the capabilities part
As soon as you put the command part back, everything fires up again.
Check it with the following wordpress.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.2.1
apiVersion: v1
kind: Pod
metadata:
labels:
app: wordpress-pod
name: wordpress-pod
spec:
containers:
- command:
- apache2-foreground
name: wptest-web
env:
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: w0rdpr3ss
image: docker.io/library/wordpress:latest
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- command:
- mysqld
name: wptest-db
env:
- name: MYSQL_ROOT_PASSWORD
value: myrootpass
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: w0rdpr3ss
- name: MYSQL_DATABASE
value: wp
image: docker.io/library/mariadb:latest
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
Play & checks:
# Create containers, pod and run everything
$ podman play kube wordpress.yaml
# Output
Pod:
5a211c35419b4fcf0deda718e47eec2dd10653a5c5bacc275c312ae75326e746
Containers:
bfd087b5649f8d1b3c62ef86f28f4bcce880653881bcda21823c09e0cca1c85b
5aceb11500db0a91b4db2cc4145879764e16ed0e8f95a2f85d9a55672f65c34b
# Check running state
$ podman container ls; podman pod ls
# Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aceb11500db docker.io/library/mariadb:latest mysqld 13 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-db
bfd087b5649f docker.io/library/wordpress:latest apache2-foregroun... 16 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-web
d8bf33eede43 k8s.gcr.io/pause:3.2 19 seconds ago Up 11 seconds ago 0.0.0.0:8080->80/tcp 5a211c35419b-infra
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
5a211c35419b wordpress-pod Running 20 seconds ago d8bf33eede43 3
A bit more explanation about the bug:
The problem is that the entrypoint and cmd are not parsed correctly from the images, as they should be and as you would expect. It was working in previous versions, and it has already been identified and fixed for future ones.
For complete reference:
A comment at podman#8710-comment.748672710 breaks this problem into two pieces:
"make podman play use ENVs from image" (podman#8654, already fixed in mainstream)
"podman play should honour both ENTRYPOINT and CMD from image" (podman#8666)
The latter is superseded by "play kube: fix args/command handling" (podman#8807, already merged into mainstream)
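If you want to check which command an image would normally run (so you know what to put back in the command: part), something like this should do it — a sketch; the field names come from podman's image-inspect JSON and may differ slightly between versions:
podman image inspect docker.io/library/wordpress:latest \
    --format 'ENTRYPOINT={{.Config.Entrypoint}} CMD={{.Config.Cmd}}'
podman image inspect docker.io/library/mariadb:latest \
    --format 'ENTRYPOINT={{.Config.Entrypoint}} CMD={{.Config.Cmd}}'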

Related

Kaniko Image Cache in Jenkins Kubernetes Agents

Here's the Jenkinsfile I'm spinning up:
pipeline {
agent {
kubernetes {
yaml '''
apiVersion: v1
kind: Pod
metadata:
name: kaniko
namespace: jenkins
spec:
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:v1.8.1-debug
imagePullPolicy: IfNotPresent
command:
- /busybox/cat
tty: true
volumeMounts:
- name: jenkins-docker-cfg
mountPath: /kaniko/.docker
- name: image-cache
mountPath: /cache
imagePullSecrets:
- name: regcred
volumes:
- name: image-cache
persistentVolumeClaim:
claimName: kaniko-cache-pvc
- name: jenkins-docker-cfg
projected:
sources:
- secret:
name: regcred
items:
- key: .dockerconfigjson
path: config.json
'''
}
}
stages {
stage('Build & Cache Image'){
steps{
container(name: 'kaniko', shell: '/busybox/sh') {
withEnv(['PATH+EXTRA=/busybox']) {
sh '''#!/busybox/sh -xe
/kaniko/executor \
--cache \
--cache-dir=/cache \
--dockerfile Dockerfile \
--context `pwd`/Dockerfile \
--insecure \
--skip-tls-verify \
--destination testrepo/kaniko-test:0.0.1'''
}
}
}
}
}
}
The problem is that the executor doesn't dump the cache anywhere I can find. If I rerun the pod and the stage, the executor logs say there's no cache. As you can see, I want to retain the cache using a PVC. Any thoughts? Am I missing something?
Thanks in advance.
You should use a separate kaniko-warmer pod, which will pre-download the specific images for you:
- name: kaniko-warmer
image: gcr.io/kaniko-project/warmer:latest
args: ["--cache-dir=/cache",
"--image=nginx:1.17.1-alpine",
"--image=node:17"]
volumeMounts:
- name: kaniko-cache
mountPath: /cache
volumes:
- name: kaniko-cache
hostPath:
path: /opt/volumes/database/qazexam-front-cache
type: DirectoryOrCreate
The kaniko-cache volume can then be mounted into the kaniko executor.
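For completeness, here is a minimal sketch of the executor side reusing that cache; the volume and hostPath names follow the warmer snippet above, and the remaining values are assumptions rather than a generated manifest:
- name: kaniko
  image: gcr.io/kaniko-project/executor:v1.8.1-debug
  args: ["--cache=true",
         "--cache-dir=/cache",
         "--dockerfile=Dockerfile",
         "--context=dir:///workspace",
         "--destination=testrepo/kaniko-test:0.0.1"]
  volumeMounts:
    - name: kaniko-cache
      mountPath: /cache
volumes:
  - name: kaniko-cache
    hostPath:
      path: /opt/volumes/database/qazexam-front-cache
      type: DirectoryOrCreate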

jhipster kubernetes deployment has no redis

I have designed several microservices using JHipster JDL Studio, with Redis as the cache provider.
I want to deploy them using the JHipster kubernetes and docker-compose generators.
With the docker-compose deployment generation, I see a Redis container in the generated docker-compose.yml.
But in the kubernetes deployment, no Redis service or app is generated.
I read the jhipster kubernetes generator source, but I don't see any Redis generation in the jhipster kubernetes generators and templates.
Is there an issue, or is there a reason for that?
Thanks a lot.
Here is a sample of one microservice:
app.jdl
application {
config {
applicationType microservice
authenticationType jwt
baseName msbooklibrary
blueprints []
buildTool maven
cacheProvider redis
clientPackageManager npm
creationTimestamp 1606242682385
databaseType sql
devDatabaseType h2Memory
dtoSuffix DTO
embeddableLaunchScript false
enableHibernateCache true
enableSwaggerCodegen true
enableTranslation false
jhiPrefix jhi
jhipsterVersion "6.10.5"
languages [en, fr]
messageBroker kafka
nativeLanguage en
otherModules []
packageName fr.XXXX
prodDatabaseType postgresql
searchEngine elasticsearch
serverPort 9000
serviceDiscoveryType eureka
skipClient true
skipUserManagement true
testFrameworks [gatling, cucumber]
websocket false
}
entities Book
}
docker-compose.yml
msbooklibrary:
image: msbooklibrary
environment:
- _JAVA_OPTIONS=-Xmx512m -Xms256m
- 'SPRING_PROFILES_ACTIVE=prod,swagger'
- MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED=true
- 'EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/eureka'
- 'SPRING_CLOUD_CONFIG_URI=http://admin:$${jhipster.registry.password}#jhipster-registry:8761/config'
- 'SPRING_DATASOURCE_URL=jdbc:postgresql://msbooklibrary-postgresql:5432/msbooklibrary'
- 'JHIPSTER_CACHE_REDIS_SERVER=redis://msbooklibrary-redis:6379'
- JHIPSTER_CACHE_REDIS_CLUSTER=false
- JHIPSTER_SLEEP=30
- 'SPRING_DATA_JEST_URI=http://msbooklibrary-elasticsearch:9200'
- 'SPRING_ELASTICSEARCH_REST_URIS=http://msbooklibrary-elasticsearch:9200'
- 'KAFKA_BOOTSTRAPSERVERS=kafka:9092'
- JHIPSTER_REGISTRY_PASSWORD=admin
msbooklibrary-postgresql:
image: 'postgres:12.3'
environment:
- POSTGRES_USER=msbooklibrary
- POSTGRES_PASSWORD=
- POSTGRES_HOST_AUTH_METHOD=trust
msbooklibrary-elasticsearch:
image: 'docker.elastic.co/elasticsearch/elasticsearch:6.8.8'
environment:
- ES_JAVA_OPTS=-Xms1024m -Xmx1024m
- discovery.type=single-node
msbooklibrary-redis:
image: 'redis:6.0.4'
msbooklibrary-deployment.yml (Kubernetes)
apiVersion: apps/v1
kind: Deployment
metadata:
name: msbooklibrary
namespace: msdmall
spec:
replicas: 1
selector:
matchLabels:
app: msbooklibrary
version: 'v1'
template:
metadata:
labels:
app: msbooklibrary
version: 'v1'
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- msbooklibrary
topologyKey: kubernetes.io/hostname
weight: 100
initContainers:
- name: init-ds
image: busybox:latest
command:
- '/bin/sh'
- '-c'
- |
while true
do
rt=$(nc -z -w 1 msbooklibrary-postgresql 5432)
if [ $? -eq 0 ]; then
echo "DB is UP"
break
fi
echo "DB is not yet reachable;sleep for 10s before retry"
sleep 10
done
containers:
- name: msbooklibrary-app
image: dockerregistry/msbooklibrary
env:
- name: SPRING_PROFILES_ACTIVE
value: prod
- name: SPRING_CLOUD_CONFIG_URI
value: http://admin:${jhipster.registry.password}#jhipster-registry.msdmall.svc.cluster.local:8761/config
- name: JHIPSTER_REGISTRY_PASSWORD
valueFrom:
secretKeyRef:
name: registry-secret
key: registry-admin-password
- name: EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE
value: http://admin:${jhipster.registry.password}#jhipster-registry.msdmall.svc.cluster.local:8761/eureka/
- name: SPRING_DATASOURCE_URL
value: jdbc:postgresql://msbooklibrary-postgresql.msdmall.svc.cluster.local:5432/msbooklibrary
- name: SPRING_DATASOURCE_USERNAME
value: msbooklibrary
- name: SPRING_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: msbooklibrary-postgresql
key: postgresql-password
- name: SPRING_DATA_JEST_URI
value: http://msbooklibrary-elasticsearch.msdmall.svc.cluster.local:9200
- name: SPRING_ELASTICSEARCH_REST_URIS
value: http://msbooklibrary-elasticsearch.msdmall.svc.cluster.local:9200
- name: KAFKA_CONSUMER_KEY_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_CONSUMER_VALUE_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_CONSUMER_BOOTSTRAP_SERVERS
value: 'jhipster-kafka.msdmall.svc.cluster.local:9092'
- name: KAFKA_CONSUMER_GROUP_ID
value: 'msbooklibrary'
- name: KAFKA_CONSUMER_AUTO_OFFSET_RESET
value: 'earliest'
- name: KAFKA_PRODUCER_BOOTSTRAP_SERVERS
value: 'jhipster-kafka.msdmall.svc.cluster.local:9092'
- name: KAFKA_PRODUCER_KEY_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: KAFKA_PRODUCER_VALUE_DESERIALIZER
value: 'org.apache.kafka.common.serialization.StringDeserializer'
- name: SPRING_SLEUTH_PROPAGATION_KEYS
value: 'x-request-id,x-ot-span-context'
- name: JAVA_OPTS
value: ' -Xmx256m -Xms256m'
resources:
requests:
memory: '512Mi'
cpu: '500m'
limits:
memory: '1Gi'
cpu: '1'
ports:
- name: http
containerPort: 9000
readinessProbe:
httpGet:
path: /management/health
port: http
initialDelaySeconds: 20
periodSeconds: 15
failureThreshold: 6
livenessProbe:
httpGet:
path: /management/health
port: http
initialDelaySeconds: 120
msbooklibrary-service.yml
apiVersion: v1
kind: Service
metadata:
name: msbooklibrary
namespace: msdmall
labels:
app: msbooklibrary
spec:
selector:
app: msbooklibrary
ports:
- name: http
port: 9000
I can't recall any specific reason. I guess it was just forgotten. Can you open an issue on GitHub?
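In the meantime, a hand-written manifest can fill the gap. Here is a minimal sketch of a Redis Deployment and Service matching the docker-compose settings above (the service name, namespace and image come from that file; everything else is an assumption):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: msbooklibrary-redis
  namespace: msdmall
spec:
  replicas: 1
  selector:
    matchLabels:
      app: msbooklibrary-redis
  template:
    metadata:
      labels:
        app: msbooklibrary-redis
    spec:
      containers:
        - name: redis
          image: redis:6.0.4
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: msbooklibrary-redis
  namespace: msdmall
spec:
  selector:
    app: msbooklibrary-redis
  ports:
    - port: 6379
The application would then point JHIPSTER_CACHE_REDIS_SERVER at redis://msbooklibrary-redis:6379 (or the fully qualified msbooklibrary-redis.msdmall.svc.cluster.local).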

passing system date as variable to Kubernetes CronJob

I found "Pass date command as parameter to kubernetes cronjob", which is similar, but it did not solve my problem.
I'm trying to back up etcd using a CronJob, but the etcd image doesn't have the "date" command.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: backup
namespace: kube-system
spec:
concurrencyPolicy: Allow
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- args:
- -c
- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
--cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
snapshot save /backup/etcd-snapshot-$(DATE_CURR).db
command:
- /bin/sh
env:
- name: ETCDCTL_API
value: "3"
- name: DATE_CURR
#value: $(date --date= +"%Y-%m-%d_%H:%M:%S_%Z")
value: $(date +"%Y-%m-%d_%H:%M:%S_%Z")
image: k8s.gcr.io/etcd:3.4.13-0
imagePullPolicy: IfNotPresent
name: backup
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
readOnly: true
- mountPath: /backup
name: backup
- args:
- -c
- find /backup -type f -mtime +30 -exec rm -f {} \;
command:
- /bin/sh
env:
- name: ETCDCTL_API
value: "3"
image: k8s.gcr.io/etcd:3.4.13-0
imagePullPolicy: IfNotPresent
name: cleanup
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /backup
name: backup
dnsPolicy: ClusterFirst
hostNetwork: true
nodeName: homelab-a
restartPolicy: OnFailure
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /etc/kubernetes/pki/etcd
type: DirectoryOrCreate
name: etcd-certs
- hostPath:
path: /opt/etcd_backups
type: DirectoryOrCreate
name: backup
schedule: 0 */6 * * *
successfulJobsHistoryLimit: 3
suspend: false
When this runs, it produces a file named "etcd-snapshot-.db" with no date. If I manage to catch the logs, they say that "date" is not a known command. Sure enough, when I exec into the pod while it's running, date does not work in the etcd image. How can I pass the date from the system as a variable so that only the resulting text is used, instead of trying to actually invoke the "date" command?
Edit: Thanks to @Andrew, here is the solution:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: backup
namespace: kube-system
spec:
concurrencyPolicy: Allow
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- args:
- -c
- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
--cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
snapshot save /backup/etcd-snapshot-$(echo `printf "%(%Y-%m-%d_%H:%M:%S_%Z)T\n"`).db
command:
- /bin/sh
env:
- name: ETCDCTL_API
value: "3"
image: k8s.gcr.io/etcd:3.4.13-0
imagePullPolicy: IfNotPresent
name: backup
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
readOnly: true
- mountPath: /backup
name: backup
- args:
- -c
- find /backup -type f -mtime +30 -exec rm -f {} \;
command:
- /bin/sh
env:
- name: ETCDCTL_API
value: "3"
image: k8s.gcr.io/etcd:3.4.13-0
imagePullPolicy: IfNotPresent
name: cleanup
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /backup
name: backup
dnsPolicy: ClusterFirst
hostNetwork: true
nodeName: homelab-a
restartPolicy: OnFailure
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /etc/kubernetes/pki/etcd
type: DirectoryOrCreate
name: etcd-certs
- hostPath:
path: /opt/etcd_backups
type: DirectoryOrCreate
name: backup
schedule: 0 */6 * * *
successfulJobsHistoryLimit: 3
suspend: false
You can use printf like this:
printf "%(%Y-%m-%d_%H:%M:%S_%Z)T\n"
Use man strftime to see the conversion specification sequences.
I just tried it inside the etcd container on Kubernetes 1.19:
etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key snapshot save /tmp/etcd-snapshot-$(printf "%(%Y-%m-%d_%H:%M:%S_%Z)T\n").db
{"level":"info","ts":1603748998.7475638,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/tmp/etcd-snapshot-2020-10-26_21:49:58_UTC.db.part"}
{"level":"info","ts":"2020-10-26T21:49:58.760Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1603748998.7616487,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":"2020-10-26T21:49:58.889Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1603748998.9136698,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"5.9 MB","took":0.165411663}
{"level":"info","ts":1603748998.914325,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/tmp/etcd-snapshot-2020-10-26_21:49:58_UTC.db"}
Snapshot saved at /tmp/etcd-snapshot-2020-10-26_21:49:58_UTC.db
Edit: A reproducible example using jobs, without passing any env vars:
apiVersion: batch/v1
kind: Job
metadata:
name: printf
spec:
template:
spec:
containers:
- name: printf
image: k8s.gcr.io/etcd:3.4.9-1
command: ["/bin/sh"]
args:
- -c
- echo `printf "%(%Y-%m-%d_%H:%M:%S_%Z)T\n"`
restartPolicy: Never
backoffLimit: 4
# kubectl create -f job.yaml
job.batch/printf created
# kubectl logs printf-lfcdh
2020-10-27_10:03:59_UTC
Notice the use of backticks `` instead of $()

How to change user and group owner for VolumeMount

I want to set up a pod with two containers running inside it, which both try to access a mounted file /var/run/udspath.
In container serviceC, I need to change the owner and group of /var/run/udspath, so I added a command to the YAML file. But it does not work.
kubectl apply does not complain, but container serviceC is not created.
Without "command: ['/bin/sh', '-c', 'sudo chown 1337:1337 /var/run/udspath']", the container is created fine.
apiVersion: v1
kind: Service
metadata:
name: clitool
labels:
app: httpbin
spec:
ports:
- name: http
port: 8000
selector:
app: httpbin
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
name: clitool
spec:
replicas: 1
strategy: {}
template:
metadata:
annotations:
sidecar.istio.io/status: '{"version":"1c09c07e5751560367349d807c164267eaf5aea4018b4588d884f7d265cf14a4","initContainers":["istio-init"],"containers":["serviceC"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
creationTimestamp: null
labels:
app: httpbin
version: v1
spec:
containers:
- image:
name: serviceA
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /var/run/udspath
name: sdsudspath
- image:
imagePullPolicy: IfNotPresent
name: serviceB
ports:
- containerPort: 8000
resources: {}
- args:
- proxy
- sidecar
- --configPath
- /etc/istio/proxy
- --binaryPath
- /usr/local/bin/envoy
- --serviceCluster
- httpbin
- --drainDuration
- 45s
- --parentShutdownDuration
- 1m0s
- --discoveryAddress
- istio-pilot.istio-system:15007
- --discoveryRefreshDelay
- 1s
- --zipkinAddress
- zipkin.istio-system:9411
- --connectTimeout
- 10s
- --statsdUdpAddress
- istio-statsd-prom-bridge.istio-system:9125
- --proxyAdminPort
- "15000"
- --controlPlaneAuthPolicy
- NONE
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: ISTIO_META_INTERCEPTION_MODE
value: REDIRECT
image:
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "sudo chown 1337:1337 /var/run/udspath"]
name: serviceC
resources:
requests:
cpu: 10m
securityContext:
privileged: false
readOnlyRootFilesystem: true
runAsUser: 1337
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/certs/
name: istio-certs
readOnly: true
- mountPath: /var/run/udspath
name: sdsudspath
initContainers:
- args:
- -p
- "15001"
- -u
- "1337"
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- 8000,
- -d
- ""
image: docker.io/quanlin/proxy_init:180712-1038
imagePullPolicy: IfNotPresent
name: istio-init
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
privileged: true
volumes:
- name: sdsudspath
hostPath:
path: /var/run/udspath
- emptyDir:
medium: Memory
name: istio-envoy
- name: istio-certs
secret:
optional: true
secretName: istio.default
status: {}
---
kubectl describe pod xxx shows that
serviceC:
Container ID:
Image:
Image ID:
Port: <none>
Command:
/bin/sh
Args:
-c
sudo chown 1337:1337 /var/run/udspath
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 30 Jul 2018 10:30:04 -0700
Finished: Mon, 30 Jul 2018 10:30:04 -0700
Ready: False
Restart Count: 2
Requests:
cpu: 10m
Environment:
POD_NAME: clitool-5d548b856-6v9p9 (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: clitool-5d548b856-6v9p9 (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from certs (ro)
/etc/istio/proxy from envoy (rw)
/var/run/udspath from sdsudspath (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-g2zzv (ro)
More information would be helpful, like what error you are getting.
Nevertheless, it really depends on what is defined in serviceC's Dockerfile ENTRYPOINT or CMD. In your spec, command and args replace the image's entrypoint entirely, so serviceC only runs the chown and then exits with code 0 (which matches the "Terminated / Completed" state in your describe output), and kubelet keeps restarting it — hence the CrashLoopBackOff.
Mapping between Docker and Kubernetes:
Docker ENTRYPOINT --> Pod command (the command run by the container)
Docker CMD --> Pod args (the arguments passed to the command)
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
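If the goal is simply to fix ownership of the mount before the sidecar starts, one common pattern (not from the original answer, just a sketch) is to do the chown in an initContainer running as root and leave serviceC's own command and args untouched:
initContainers:
  - name: chown-udspath
    image: busybox
    # runs once before the app containers start; adjust the UID/GID to match runAsUser
    command: ['sh', '-c', 'chown 1337:1337 /var/run/udspath']
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /var/run/udspath
        name: sdsudspath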

Kubernetes Helm Chart - Debugging

I'm unable to find good information describing these errors:
[sarah#localhost helm] helm install statefulset --name statefulset --debug
[debug] Created tunnel using local port: '33172'
[debug] SERVER: "localhost:33172"
[debug] Original chart version: ""
[debug] CHART PATH: /home/helm/statefulset/
Error: error validating "": error validating data: [field spec.template for v1beta1.StatefulSetSpec is required, field spec.serviceName for v1beta1.StatefulSetSpec is required, found invalid field containers for v1beta1.StatefulSetSpec]
I'm still new to Helm; I've built two working charts that were similar to this template and didn't have these errors, even though the code isn't much different. I'm thinking there might be some kind of formatting error that I'm not noticing. Either that, or it's due to the different kind (the others were Pods, this is a StatefulSet).
The YAML file it's referencing is here:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
#serviceAccount: "{{.Values.PrimaryName}}-sa"
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
Would someone be able to a) point me in the right direction on how to implement the required spec.template and spec.serviceName fields, b) help me understand why the field 'containers' is invalid, and/or c) mention any tool that can help debug Helm charts? I've tried 'helm lint' and the '--debug' flag, but 'helm lint' shows no errors, and the '--debug' output is what's shown with the errors above.
Is it possible the errors are coming from a different file, also?
StatefulSet objects have a different structure than Pods do. You need to modify your YAML file a little:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
selector:
matchLabels:
app: "" # has to match .spec.template.metadata.labels
serviceName: "" # put your serviceName here
replicas: 1 # by default is 1
template:
metadata:
labels:
app: "" # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
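As for tooling (point c): besides helm lint, rendering the chart locally and pushing the result through a dry run usually surfaces schema errors like the one above. A sketch using Helm 2 syntax, since the debug output shows a Tiller tunnel; the chart path and release name are the ones from the question:
# render the templates locally without installing anything
helm template /home/helm/statefulset/

# validate the rendered manifests against the API server's schema
helm template /home/helm/statefulset/ | kubectl apply --dry-run --validate -f -

# or do a dry run of the release itself with full debug output
helm install /home/helm/statefulset/ --name statefulset --dry-run --debug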