Disable the order/sorting in Kubernetes kustomize build - kubernetes

I'm trying to use kustomize to patch a Kubernetes resource. However, the order of the initContainers list is different in the output.
For example, the input is:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "sleep 3600"]
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "sleep 7200"]
After the patch, the output becomes:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: myapp
  name: myapp-pod
spec:
  containers:
  - command:
    - sh
    - -c
    - echo The app is running! && sleep 3600
    image: busybox:1.28
    name: myapp-container
  initContainers:
  - command:
    - sh
    - -c
    - sleep 7200
    env:
    - name: HTTP_ADDR
      value: https://[$(HOST_IP)]:8501
    image: busybox:1.28
    name: init-myservice
  - command:
    - sh
    - -c
    - sleep 3600
    env:
    - name: HTTP_ADDR
      value: https://[$(HOST_IP)]:8501
    image: busybox:1.28
    name: init-mydb
I have tried the --reorder argument, but it doesn't help.
Version tested:
{Version:kustomize/v4.1.3 GitCommit:0f614e92f72f1b938a9171b964d90b197ca8fb68 BuildDate:2021-05-20T20:52:40Z GoOs:linux GoArch:amd64}
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- source.yaml
patches:
- path: ./pod-patch.yaml
  target:
    kind: Pod
    name: ".*"
pod-patch.yaml
apiVersion: apps/v1
kind: Pod
metadata:
  name: doesNotMatter
spec:
  initContainers:
  - name: init-myservice
    env:
    - name: HTTP_ADDR
      value: https://[$(HOST_IP)]:8501
  - name: init-mydb
    env:
    - name: HTTP_ADDR
      value: https://[$(HOST_IP)]:8501

This is a non-issue. The order is different because you've inverted it in your pod-patch.yaml.
In source.yaml, the order of the initContainers is [init-mydb, init-myservice]. In pod-patch.yaml it's [init-myservice, init-mydb].
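If you want the rendered output to keep the original [init-mydb, init-myservice] order, the simplest fix is to list the entries in pod-patch.yaml in the same order as source.yaml; the patch entries are merged by name, so only their position changes. A corrected patch might look like this (note that Pod is a core type, so apiVersion: v1 is also the more accurate choice here):
apiVersion: v1
kind: Pod
metadata:
  name: doesNotMatter
spec:
  initContainers:
  - name: init-mydb
    env:
    - name: HTTP_ADDR
      value: https://[$(HOST_IP)]:8501
  - name: init-myservice
    env:
    - name: HTTP_ADDR
      value: https://[$(HOST_IP)]:8501
With the patch reordered, kustomize build should emit the initContainers in the same order as the source manifest.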

Related

Use environment variable as default for another env variable in Kubernetes

Is there a way to use an environment variable as the default for another? For example:
apiVersion: v1
kind: Pod
metadata:
  name: Work
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: ALWAYS_SET
      value: "default"
    - name: SOMETIMES_SET
      value: "custom"
    - name: RESULT
      value: "$(SOMETIMES_SET) ? $(SOMETIMES_SET) : $(ALWAYS_SET)"
I don't think there is a way to do that, but you can try something like this:
apiVersion: v1
kind: Pod
metadata:
  name: Work
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    args:
    - RESULT=${SOMETIMES_SET:-${ALWAYS_SET}}; command_to_run_app
    command:
    - sh
    - -c
    env:
    - name: ALWAYS_SET
      value: "default"
    - name: SOMETIMES_SET
      value: "custom"
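For clarity: the trick relies on the shell's ${VAR:-default} parameter expansion, which is evaluated inside the container at startup; Kubernetes' own $(VAR) substitution has no fallback syntax. Here is a minimal, self-contained sketch (the pod and container names are made up for the demo, and the echo stands in for the real app command) that you can check with kubectl logs:
apiVersion: v1
kind: Pod
metadata:
  name: env-default-demo        # hypothetical name, used only for this demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.28
    command:
    - sh
    - -c
    # ${SOMETIMES_SET:-${ALWAYS_SET}} expands to SOMETIMES_SET when it is set
    # and non-empty, and falls back to ALWAYS_SET otherwise.
    args:
    - 'RESULT=${SOMETIMES_SET:-${ALWAYS_SET}}; echo "RESULT=$RESULT"'
    env:
    - name: ALWAYS_SET
      value: "default"
    - name: SOMETIMES_SET        # remove this entry to see the fallback
      value: "custom"
kubectl logs env-default-demo should print RESULT=custom; delete the SOMETIMES_SET entry and recreate the pod to see it fall back to RESULT=default.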

Why are podman pods not reproducible using kubernetes yaml file?

I created a pod following a Red Hat blog post, and then created a second pod using the generated YAML file.
Post: https://www.redhat.com/sysadmin/compose-podman-pods
When creating the pod using the commands, the pod works fine (I can access localhost:8080).
When creating the pod using the YAML file, I get a 403 Forbidden error.
I have tried this on two different hosts (both creating the pod from scratch and from the YAML), deleting all images and the pod each time to make sure nothing was influencing the process.
I'm using podman 2.0.4 on Ubuntu 20.04.
Commands:
podman pod create --name wptestpod -p 8080:80
podman run \
-d --restart=always --pod=wptestpod \
-e MYSQL_ROOT_PASSWORD="myrootpass" \
-e MYSQL_DATABASE="wp" \
-e MYSQL_USER="wordpress" \
-e MYSQL_PASSWORD="w0rdpr3ss" \
--name=wptest-db mariadb
podman run \
-d --restart=always --pod=wptestpod \
-e WORDPRESS_DB_NAME="wp" \
-e WORDPRESS_DB_USER="wordpress" \
-e WORDPRESS_DB_PASSWORD="w0rdpr3ss" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
--name wptest-web wordpress
Original YAML file from podman generate kube wptestpod > wptestpod.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.0.4
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: '2020-08-26T17:02:56Z'
  labels:
    app: wptestpod
  name: wptestpod
spec:
  containers:
  - command:
    - apache2-foreground
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: container
      value: podman
    - name: WORDPRESS_DB_NAME
      value: wp
    - name: WORDPRESS_DB_USER
      value: wordpress
    - name: APACHE_CONFDIR
      value: /etc/apache2
    - name: PHP_LDFLAGS
      value: -Wl,-O1 -pie
    - name: PHP_VERSION
      value: 7.4.9
    - name: PHP_EXTRA_CONFIGURE_ARGS
      value: --with-apxs2 --disable-cgi
    - name: GPG_KEYS
      value: 42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
    - name: WORDPRESS_DB_PASSWORD
      value: t3stp4ssw0rd
    - name: APACHE_ENVVARS
      value: /etc/apache2/envvars
    - name: PHP_ASC_URL
      value: https://www.php.net/distributions/php-7.4.9.tar.xz.asc
    - name: PHP_SHA256
      value: 23733f4a608ad1bebdcecf0138ebc5fd57cf20d6e0915f98a9444c3f747dc57b
    - name: PHP_URL
      value: https://www.php.net/distributions/php-7.4.9.tar.xz
    - name: WORDPRESS_DB_HOST
      value: 127.0.0.1
    - name: PHP_CPPFLAGS
      value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
    - name: PHP_MD5
    - name: PHP_EXTRA_BUILD_DEPS
      value: apache2-dev
    - name: PHP_CFLAGS
      value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
    - name: WORDPRESS_SHA1
      value: 03fe1a139b3cd987cc588ba95fab2460cba2a89e
    - name: PHPIZE_DEPS
      value: "autoconf \t\tdpkg-dev \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkg-config \t\tre2c"
    - name: WORDPRESS_VERSION
      value: '5.5'
    - name: PHP_INI_DIR
      value: /usr/local/etc/php
    - name: HOSTNAME
      value: wptestpod
    image: docker.io/library/wordpress:latest
    name: wptest-web
    ports:
    - containerPort: 80
      hostPort: 8080
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    workingDir: /var/www/html
  - command:
    - mysqld
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: container
      value: podman
    - name: MYSQL_PASSWORD
      value: t3stp4ssw0rd
    - name: GOSU_VERSION
      value: '1.12'
    - name: GPG_KEYS
      value: 177F4010FE56CA3336300305F1656F24C74CD1D8
    - name: MARIADB_MAJOR
      value: '10.5'
    - name: MYSQL_ROOT_PASSWORD
      value: t3stp4ssw0rd
    - name: MARIADB_VERSION
      value: 1:10.5.5+maria~focal
    - name: MYSQL_DATABASE
      value: wp
    - name: MYSQL_USER
      value: wordpress
    - name: HOSTNAME
      value: wptestpod
    image: docker.io/library/mariadb:latest
    name: wptest-db
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    workingDir: /
status: {}
---
metadata:
  creationTimestamp: null
spec: {}
status:
  loadBalancer: {}
YAML file with certain envs removed (taken from blog post):
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.9.3
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-07-01T20:17:42Z"
  labels:
    app: wptestpod
  name: wptestpod
spec:
  containers:
  - name: wptest-web
    env:
    - name: WORDPRESS_DB_NAME
      value: wp
    - name: WORDPRESS_DB_HOST
      value: 127.0.0.1
    - name: WORDPRESS_DB_USER
      value: wordpress
    - name: WORDPRESS_DB_PASSWORD
      value: w0rdpr3ss
    image: docker.io/library/wordpress:latest
    ports:
    - containerPort: 80
      hostPort: 8080
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    workingDir: /var/www/html
  - name: wptest-db
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: myrootpass
    - name: MYSQL_USER
      value: wordpress
    - name: MYSQL_PASSWORD
      value: w0rdpr3ss
    - name: MYSQL_DATABASE
      value: wp
    image: docker.io/library/mariadb:latest
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    workingDir: /
status: {}
Can anyone see why this pod would not work when created using the YAML file, but works fine when created using the commands? It seems like a good workflow, but it's useless if the pods produced with the YAML are non-functional.
I found the same article and hit the same problem as you. None of the following tests worked for me:
Adding and removing environment variables
Adding and removing the restartPolicy part
Playing with the capabilities part
As soon as you add the command part back, everything fires up again.
Check it with the following wordpress.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.2.1
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: wordpress-pod
  name: wordpress-pod
spec:
  containers:
  - command:
    - apache2-foreground
    name: wptest-web
    env:
    - name: WORDPRESS_DB_NAME
      value: wp
    - name: WORDPRESS_DB_HOST
      value: 127.0.0.1
    - name: WORDPRESS_DB_USER
      value: wordpress
    - name: WORDPRESS_DB_PASSWORD
      value: w0rdpr3ss
    image: docker.io/library/wordpress:latest
    ports:
    - containerPort: 80
      hostPort: 8080
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    workingDir: /var/www/html
  - command:
    - mysqld
    name: wptest-db
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: myrootpass
    - name: MYSQL_USER
      value: wordpress
    - name: MYSQL_PASSWORD
      value: w0rdpr3ss
    - name: MYSQL_DATABASE
      value: wp
    image: docker.io/library/mariadb:latest
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
      seLinuxOptions: {}
    workingDir: /
status: {}
Play & checks:
# Create containers, pod and run everything
$ podman play kube wordpress.yaml
# Output
Pod:
5a211c35419b4fcf0deda718e47eec2dd10653a5c5bacc275c312ae75326e746
Containers:
bfd087b5649f8d1b3c62ef86f28f4bcce880653881bcda21823c09e0cca1c85b
5aceb11500db0a91b4db2cc4145879764e16ed0e8f95a2f85d9a55672f65c34b
# Check running state
$ podman container ls; podman pod ls
# Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aceb11500db docker.io/library/mariadb:latest mysqld 13 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-db
bfd087b5649f docker.io/library/wordpress:latest apache2-foregroun... 16 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-web
d8bf33eede43 k8s.gcr.io/pause:3.2 19 seconds ago Up 11 seconds ago 0.0.0.0:8080->80/tcp 5a211c35419b-infra
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
5a211c35419b wordpress-pod Running 20 seconds ago d8bf33eede43 3
A bit more explanation about the bug:
The problem is that ENTRYPOINT and CMD are not parsed correctly from the images, as they should be and as you would expect. It worked in previous versions, and it has already been identified and fixed for future ones.
For complete reference:
The comment found at podman#8710-comment.748672710 breaks this problem into two pieces:
"make podman play use ENVs from image" (podman#8654, already fixed in mainstream)
"podman play should honour both ENTRYPOINT and CMD from image" (podman#8666)
The latter is superseded by "play kube: fix args/command handling" (podman#8807, the one already merged to mainstream)
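If you want to check what play kube should be picking up from the images, you can inspect the ENTRYPOINT and CMD baked into them (a quick sketch; the --format template follows the podman image inspect JSON layout, so adjust the field names if your podman version differs):
$ podman image inspect docker.io/library/wordpress:latest \
    --format 'ENTRYPOINT={{.Config.Entrypoint}} CMD={{.Config.Cmd}}'
$ podman image inspect docker.io/library/mariadb:latest \
    --format 'ENTRYPOINT={{.Config.Entrypoint}} CMD={{.Config.Cmd}}'
Whatever CMD these print (apache2-foreground and mysqld for the images above) is what the affected podman versions fail to apply, which is why spelling it out as command: in the YAML makes the pod work again.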

K8s: Error in applying yaml file after adding env values

The following YAML file works fine:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: something
spec:
  replicas: 2
  selector:
    matchLabels:
      app: something
  template:
    metadata:
      labels:
        app: something
    spec:
      volumes:
      - name: shared-logs
        emptyDir: {}
      containers:
      - name: something
        image: docker.io/manuchadha25/something
        volumeMounts:
        - name: shared-logs
          mountPath: /deploy/codingjediweb-1.0/logs/
        env:
        - name: DB_CASSANDRA_URI
          value: cassandra://34.91.5.44
        - name: DB_PASSWORD
          value: something
        - name: DB_KEYSPACE_NAME
          value: something
        - name: DB_USERNAME
          value: something
        - name: EMAIL_SERVER
          value: something
        - name: EMAIL_USER
          value: something
        - name: EMAIL_PASSWORD
          value: something
        - name: ALLOWED_NODES
          value: 34.105.134.5
        ports:
        - containerPort: 9000
      #- name: logging
      #  image: busybox
      #  volumeMounts:
      #  - name: shared-logs
      #    mountPath: /deploy/codingjediweb-1.0/logs/
      #  command: ['sh', '-c', "while true; do sleep 86400; done"]
But when I add the following two lines to the env section, I get an error:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: something
spec:
  replicas: 2
  selector:
    matchLabels:
      app: something
  template:
    metadata:
      labels:
        app: something
    spec:
      volumes:
      - name: shared-logs
        emptyDir: {}
      containers:
      - name: something
        image: docker.io/manuchadha25/something
        volumeMounts:
        - name: shared-logs
          mountPath: /deploy/codingjediweb-1.0/logs/
        env:
        - name: DB_CASSANDRA_URI
          value: cassandra://34.91.5.44
        - name: DB_CASSANDRA_PORT   # <--- NEW LINE
          value: 9042               # <--- NEW LINE
        - name: DB_PASSWORD
          value: something
        - name: DB_KEYSPACE_NAME
          value: something
        - name: DB_USERNAME
          value: something
        - name: EMAIL_SERVER
          value: something
        - name: EMAIL_USER
          value: something
        - name: EMAIL_PASSWORD
          value: something
        - name: ALLOWED_NODES
          value: 34.105.134.5
        ports:
        - containerPort: 9000
      #- name: logging
      #  image: busybox
      #  volumeMounts:
      #  - name: shared-logs
      #    mountPath: /deploy/codingjediweb-1.0/logs/
      #  command: ['sh', '-c', "while true; do sleep 86400; done"]
$ kubectl apply -f codingjediweb-nodes.yaml
Error from server (BadRequest): error when creating "codingjediweb-nodes.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found 9, error found in #10 byte of ...|,"value":9042},{"nam|..., bigger context ...|.1.85.10"},{"name":"DB_CASSANDRA_PORT","value":9042},{"name":"DB_PASSWORD","value":"1GFGc1Q|...
An online YAML validator says the file is valid.
What am I doing wrong?
Could you please put 9042 in double quotes ("9042") and try again? The field expects a string but is getting a number instead, so the value needs to be quoted.
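In other words, env[].value must be a string in the Kubernetes API; a bare 9042 is parsed as a YAML integer, which is what the "expects " or n, but found 9" part of the error is complaining about. The two new lines then become:
- name: DB_CASSANDRA_PORT
  value: "9042"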

Pass json string to environment variable in a k8s deployment for Envoy

I have a K8s deployment with one pod running, among others, a container with Envoy. I have defined the image in such a way that if an environment variable EXTRA_OPTS is defined, it is appended to the command line that starts Envoy.
I want to use that variable to override the default configuration, as explained in
https://www.envoyproxy.io/docs/envoy/latest/operations/cli#cmdoption-config-yaml
The environment variable works fine for other command-line options, such as "-l debug".
I have also tested the expected final command line directly, and it works.
The Dockerfile sets Envoy to run this way:
CMD ["/bin/bash", "-c", "envoy -c envoy.yaml $EXTRA_OPTS"]
What I want is to set this:
...
- image: envoy-proxy:1.10.0
  imagePullPolicy: IfNotPresent
  name: envoy-proxy
  env:
  - name: EXTRA_OPTS
    value: ' --config-yaml "admin: { address: { socket_address: { address: 0.0.0.0, port_value: 9902 } } }"'
...
I have successfully tested running Envoy with the final command line:
envoy -c /etc/envoy/envoy.yaml --config-yaml "admin: { address: { socket_address: { address: 0.0.0.0, port_value: 9902 } } }"
And I have also tested a "simpler" option in EXTRA_OPTS and it works:
...
- image: envoy-proxy:1.10.0
  imagePullPolicy: IfNotPresent
  name: envoy-proxy
  env:
  - name: EXTRA_OPTS
    value: ' -l debug'
...
I would expect Envoy to run with this new admin port; instead I'm getting parameter errors:
PARSE ERROR: Argument: {
Couldn't find match for argument
It looks like the quotes are not being passed through to the actual environment variable inside the container...
Any clue?
Thanks to all
You should set ["/bin/bash", "-c", "envoy -c envoy.yaml"] as the ENTRYPOINT in your Dockerfile, or use command in Kubernetes and then use args to add additional arguments.
You can find more information in the Docker documentation.
Let me explain by example:
$ docker build -t fl3sh/test:bash .
$ cat Dockerfile
FROM ubuntu
RUN echo '#!/bin/bash' > args.sh && \
    echo 'echo "$@"' >> args.sh && \
    chmod +x args.sh
CMD ["args","from","docker","cmd"]
ENTRYPOINT ["/bin/bash", "args.sh", "$ENV_ARG"]
cat args.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: args
  name: args
spec:
  containers:
  - args:
    - args
    - from
    - k8s
    image: fl3sh/test:bash
    name: args
    imagePullPolicy: Always
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Output:
pod/args $ENV_ARG args from k8s
cat command-args.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: command-args
  name: command-args
spec:
  containers:
  - command:
    - /bin/bash
    - -c
    args:
    - 'echo args'
    image: fl3sh/test:bash
    imagePullPolicy: Always
    name: args
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Output:
pod/command-args args
cat command-env-args.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: command-env-args
  name: command-env-args
spec:
  containers:
  - env:
    - name: ENV_ARG
      value: "arg from env"
    command:
    - /bin/bash
    - -c
    - exec echo "$ENV_ARG"
    image: fl3sh/test:bash
    imagePullPolicy: Always
    name: args
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Output:
pod/command-env-args arg from env
cat command-no-args.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: command-no-args
  name: command-no-args
spec:
  containers:
  - command:
    - /bin/bash
    - -c
    - 'echo "no args";echo "$@"'
    image: fl3sh/test:bash
    name: args
    imagePullPolicy: Always
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Output:
pod/command-no-args no args
#notice ^ empty line above
cat no-args.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: no-args
  name: no-args
spec:
  containers:
  - image: fl3sh/test:bash
    name: no-args
    imagePullPolicy: Always
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Output:
pod/no-args $ENV_ARG args from docker cmd
If you need to recreate my example, you can use this loop to get the output shown above:
for p in `kubectl get po -oname`; do echo cat ${p#*/}.yaml; echo ""; \
cat ${p#*/}.yaml; echo -e "\nOutput:"; printf "$p "; \
kubectl logs $p;echo "";done
Conclusion: if you need to pass an env variable as an argument, use:
command:
- /bin/bash
- -c
- exec echo "$ENV_ARG"
I hope now it is clear.
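Applied to the Envoy deployment from the question, that approach could look roughly like this (a sketch reusing the image and config path mentioned above, and assuming the envoy binary is on the container's PATH; because the whole admin override is a single args item, no shell word-splitting happens and the quotes survive):
- image: envoy-proxy:1.10.0
  imagePullPolicy: IfNotPresent
  name: envoy-proxy
  command:
  - envoy
  args:
  - -c
  - /etc/envoy/envoy.yaml
  - --config-yaml
  - 'admin: { address: { socket_address: { address: 0.0.0.0, port_value: 9902 } } }'
Because command: and args: replace the image's ENTRYPOINT/CMD, the $EXTRA_OPTS indirection, and its quoting problems, disappear entirely.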

Helm appears to parse my chart differently depending on if I use --dry-run --debug?

So I was deploying a new cronjob today and got the following error:
Error: release acs-export-cronjob failed: CronJob.batch "acs-export-cronjob" is invalid: [spec.jobTemplate.spec.template.spec.containers: Required value, spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"]
here's some output from running helm on the same chart, no changes made, but with the --debug --dry-run flags:
NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
  cpu: 100m
  memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
  resources: ["deployments"]
  verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
  name: acs-export-cronjob-sa
roleRef:
  kind: Role
  name: acs-export-cronjob-manager
  apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: acs-export-cronjob
  labels:
    app: generic-job
    chart: "generic-job-0.1.0"
    release: "acs-export-cronjob"
    heritage: "Tiller"
spec:
  schedule: 0 * * * *
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 5
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 120
  jobTemplate:
    spec:
      metadata:
        name: acs-export-cronjob
        labels:
          jobgroup: acs-export-jobs
          app: generic-job
          chart: "generic-job-0.1.0"
          release: "acs-export-cronjob"
          heritage: "Tiller"
      spec:
        template:
          metadata:
            labels:
              jobgroup: acs-export-jobs
              app: generic-job
              chart: "generic-job-0.1.0"
              release: "acs-export-cronjob"
              heritage: "Tiller"
            annotations:
              iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
          spec:
            restartPolicy: Never #<----------this is not 'Always'!!
            serviceAccountName: acs-export-cronjob-sa
            tolerations:
            - key: sonic-node-group
              operator: Equal
              value: api
              effect: NoSchedule
            nodeSelector:
              sonic-node-group: api
            volumes:
            - name: config
              emptyDir: {}
            initContainers:
            - name: "get-users-vmargs-from-deployment"
              image: <censored>.amazonaws.com/utils/kubectl-helm:latest
              command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
              volumeMounts:
              - mountPath: /config
                name: config
            - name: "get-users-yaml-appconfig-from-deployment"
              image: <censored>.amazonaws.com/utils/kubectl-helm:latest
              command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
              volumeMounts:
              - mountPath: /config
                name: config
            containers: #<--------this field is not missing!
            - image: <censored>.amazonaws.com/sonic/acs-export:latest
              imagePullPolicy: Always
              name: "users-batch"
              command:
              - "bash"
              - "-c"
              - 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
              env:
              - name: FRENV
                value: "batch"
              - name: STACKNAME
                value: eu1-test
              - name: SPRING_PROFILES
                value: "export-job"
              - name: NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
              volumeMounts:
              - mountPath: /config
                name: config
              resources:
                limit:
                  cpu: 100m
                  memory: 1Gi
If you paid attention, you may have noticed line 101 (I added the comment afterwards) in the debug output, which sets restartPolicy to Never, quite the opposite of the Always that the error message claims.
You may also have noticed line 126 (again, I added the comment after the fact) of the debug output, where the mandatory containers field is specified, again in contradiction to the error message.
What's going on here?
Hah! Found it! It was actually a simple mistake: I had an extra spec: metadata: section under jobTemplate which was duplicated. Removing one of the dupes fixed my issue.
I really wish Helm's error messages were more helpful.
The corrected chart looks like:
NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
  cpu: 100m
  memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
  resources: ["deployments"]
  verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
  name: acs-export-cronjob-sa
roleRef:
  kind: Role
  name: acs-export-cronjob-manager
  apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: acs-export-cronjob
  labels:
    app: generic-job
    chart: "generic-job-0.1.0"
    release: "acs-export-cronjob"
    heritage: "Tiller"
spec:
  schedule: 0 * * * *
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 5
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 120
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            jobgroup: acs-export-jobs
            app: generic-job
            chart: "generic-job-0.1.0"
            release: "acs-export-cronjob"
            heritage: "Tiller"
          annotations:
            iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
        spec:
          restartPolicy: Never
          serviceAccountName: acs-export-cronjob-sa
          tolerations:
          - key: sonic-node-group
            operator: Equal
            value: api
            effect: NoSchedule
          nodeSelector:
            sonic-node-group: api
          volumes:
          - name: config
            emptyDir: {}
          initContainers:
          - name: "get-users-vmargs-from-deployment"
            image: <censored>.amazonaws.com/utils/kubectl-helm:latest
            command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
            volumeMounts:
            - mountPath: /config
              name: config
          - name: "get-users-yaml-appconfig-from-deployment"
            image: <censored>.amazonaws.com/utils/kubectl-helm:latest
            command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(@.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
            volumeMounts:
            - mountPath: /config
              name: config
          containers:
          - image: <censored>.amazonaws.com/sonic/acs-export:latest
            imagePullPolicy: Always
            name: "users-batch"
            command:
            - "bash"
            - "-c"
            - 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
            env:
            - name: FRENV
              value: "batch"
            - name: STACKNAME
              value: eu1-test
            - name: SPRING_PROFILES
              value: "export-job"
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - mountPath: /config
              name: config
            resources:
              limit:
                cpu: 100m
                memory: 1Gi
This may be due to a formatting error.
Look at the examples here and here.
The structure is
jobTemplate:
  spec:
    template:
      spec:
        restartPolicy: Never
As per the provided output, you have spec and restartPolicy at the same level:
jobTemplate:
  spec:
    template:
      spec:
      restartPolicy: Never #<----------this is not 'Always'!!
The same goes for spec.jobTemplate.spec.template.spec.containers.
Presumably Helm falls back to default values instead of yours.
You can also try generating the YAML file, converting it to JSON, and applying that.
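For example (a rough sketch with Helm 2 syntax, since the chart above uses Tiller; the chart path is a placeholder), you can render the templates locally and hand the result to kubectl, which makes it much easier to spot a mis-nested jobTemplate before installing:
$ helm template ./generic-job --name acs-export-cronjob > rendered.yaml
$ kubectl apply --dry-run --validate -f rendered.yaml
With Helm 3 the release name is positional instead: helm template acs-export-cronjob ./generic-job.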