Error parsing yaml to json: did not find expected key

I am trying to create a k8s Job with the below YAML:
apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "test1234"
        command: ["/bin/sh","-c"]
        args: ["mysql --defaults-extra-file=/home/mysql-config/mysql-config --host yyy.mysql.database.azure.com -e"create database test; show databases" && mysql --defaults-extra-file=/home/mysql-config/mysql-config --host yyy.mysql.database.azure.com test < /home/schema/test-schema.sql"]
        volumeMounts:
        - name: mysql-config-vol
          mountPath: /home/mysql-config
        - name: schema-config-vol
          mountPath: /home/schema
      volumes:
      - name: mysql-config-vol
        configMap:
          name: mysql-config
      - name: schema-config-vol
        configMap:
          name: test-schema
      restartPolicy: Never
There is some issue with the args given above, so I am getting the below error:
error: error parsing k8s-job.yaml: error converting YAML to JSON: yaml: line 15: did not find expected ',' or ']'
I have to pass the commands in args to 1) log in to the MySQL server, 2) create a database called "test", and 3) import the SQL schema into the created database. But there's an error with the syntax and I am unable to figure out where exactly the issue is.
Can anyone please help me fix this? Thanks in advance!

Figured out the way; the following args works. Please refer to it if needed:
args: ["mysql --defaults-extra-file=/home/mysql-config/mysql-config --host yyy.mysql.database.azure.com -e 'create database obortech_qa; ' && mysql --defaults-extra-file=/home/mysql-config/mysql-config --host yyy.mysql.database.azure.com obortech_qa < /home/schema/test-schema.sql"]

Related

Script in a pod is not getting executed

I have an EKS cluster and an RDS (mariadb). I am trying to make a backup of given databases through a script in a CronJob. The CronJob object looks like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysqldump
  namespace: mysqldump
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: mysql-backup
            image: viejo/debian-mysqldump:latest
            envFrom:
            - configMapRef:
                name: mysqldump-config
            args:
            - /bin/bash
            - -c
            - /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
            resources:
              limits:
                cpu: "0.5"
                memory: "0.5Gi"
          restartPolicy: OnFailure
The script is called mysqldump.sh and gets all necessary details from a ConfigMap object. It dumps the databases listed in the environment variable MYSQLDUMP_DATABASES and moves the dump to an S3 bucket.
Note: I am going to move some variables to a Secret, but before I need this to work.
What happens is NOTHING. The script never gets executed. I tried putting an "echo starting the backup" before the script and an "echo backup ended" after it, but I don't see either of them. If I access the container and execute the exact same command manually, it works:
root@mysqldump-27550908-sjwfm:/# /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
root@mysqldump-27550908-sjwfm:/#
Can anyone point out a possible issue?
Try changing args to command:
...
command:
- /bin/bash
- -c
- /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
...
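The reason this matters: args on its own is only appended to the image's ENTRYPOINT, and unless that entrypoint happens to be a shell (an assumption about this particular image), the strings /bin/bash, -c and the script line are never run as a shell invocation. Setting command overrides the ENTRYPOINT itself. A minimal sketch of the container section with both fields spelled out explicitly:
          - name: mysql-backup
            image: viejo/debian-mysqldump:latest
            # command overrides the image ENTRYPOINT, args overrides CMD
            command:
            - /bin/bash
            - -c
            args:
            - /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck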

Tekton: yq Task gives safelyRenameFile [ERRO] Failed copying from /tmp/temp & [ERRO] open /workspace/source permission denied error

We have a Tekton pipeline and want to replace the image tag inside our deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-api-spring-boot
spec:
  replicas: 3
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: microservice-api-spring-boot
  template:
    metadata:
      labels:
        app: microservice-api-spring-boot
    spec:
      containers:
      - image: registry.gitlab.com/jonashackt/microservice-api-spring-boot@sha256:5d8a03755d3c45a3d79d32ab22987ef571a65517d0edbcb8e828a4e6952f9bcd
        name: microservice-api-spring-boot
        ports:
        - containerPort: 8098
      imagePullSecrets:
      - name: gitlab-container-registry
Our Tekton pipeline uses the yq Task from Tekton Hub to replace the .spec.template.spec.containers[0].image with the "$(params.IMAGE):$(params.SOURCE_REVISION)" name like this:
- name: substitute-config-image-name
  taskRef:
    name: yq
  runAfter:
    - fetch-config-repository
  workspaces:
    - name: source
      workspace: config-workspace
  params:
    - name: files
      value:
        - "./deployment/deployment.yml"
    - name: expression
      value: .spec.template.spec.containers[0].image = \"$(params.IMAGE)\":\"$(params.SOURCE_REVISION)\"
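Independent of the Task, it can help to validate the expression with a local yq (v4) run first; the image and tag below are just hypothetical stand-ins for the Tekton params:
yq e '.spec.template.spec.containers[0].image = "registry.gitlab.com/jonashackt/microservice-api-spring-boot:abc123"' -i deployment/deployment.yml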
Sadly the yq Task doesn't seem to work: it produces a green Step completed successfully, but shows the following errors:
16:50:43 safelyRenameFile [ERRO] Failed copying from /tmp/temp3555913516 to /workspace/source/deployment/deployment.yml
16:50:43 safelyRenameFile [ERRO] open /workspace/source/deployment/deployment.yml: permission denied
Here's also a screenshot from our Tekton Dashboard:
Any idea on how to solve the error?
The problem seems to be related to the way the Dockerfile of https://github.com/mikefarah/yq now handles file permissions (for example this fix, among others). The 0.3 version of the Tekton yq Task uses the image https://hub.docker.com/layers/mikefarah/yq/4.16.2/images/sha256-c6ef1bc27dd9cee57fa635d9306ce43ca6805edcdab41b047905f7835c174005, which produces the error.
One work-around could be to use version 0.2 of the yq Task, which you can apply via:
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/yq/0.2/yq.yaml
This one uses the older docker.io/mikefarah/yq:4@sha256:34f1d11ad51dc4639fc6d8dd5ade019fe57cf6084bb6a99a2f11ea522906033b and works without the error.
Alternatively you can simply create your own yq-based Task that won't have the problem, like this:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: replace-image-name-with-yq
spec:
  workspaces:
    - name: source
      description: A workspace that contains the file which needs to be dumped.
  params:
    - name: IMAGE_NAME
      description: The image name to substitute
    - name: FILE_PATH
      description: The file path relative to the workspace dir.
    - name: YQ_VERSION
      description: Version of https://github.com/mikefarah/yq
      default: v4.2.0
  steps:
    - name: substitute-with-yq
      image: alpine
      workingDir: $(workspaces.source.path)
      command:
        - /bin/sh
      args:
        - '-c'
        - |
          set -ex
          echo "--- Download yq & add to path"
          wget https://github.com/mikefarah/yq/releases/download/$(params.YQ_VERSION)/yq_linux_amd64 -O /usr/bin/yq &&\
          chmod +x /usr/bin/yq
          echo "--- Run yq expression"
          yq e ".spec.template.spec.containers[0].image = \"$(params.IMAGE_NAME)\"" -i $(params.FILE_PATH)
          echo "--- Show file with replacement"
          cat $(params.FILE_PATH)
      resources: {}
This custom Task simply uses the alpine image as base and installs yq via the plain binary wget download. Also, it uses yq exactly as you would on the command line locally, which makes development of your expression so much easier!
As a bonus it outputs the file contents so you can check the replacement results right in the Tekton pipeline!
You need to apply it with
kubectl apply -f tekton-ci-config/replace-image-name-with-yq.yml
And you should now be able to use it like this:
- name: replace-config-image-name
  taskRef:
    name: replace-image-name-with-yq
  runAfter:
    - dump-contents
  workspaces:
    - name: source
      workspace: config-workspace
  params:
    - name: IMAGE_NAME
      value: "$(params.IMAGE):$(params.SOURCE_REVISION)"
    - name: FILE_PATH
      value: "./deployment/deployment.yml"
Inside the Tekton dashboard it will look somewhat like this and output the processed file:

How to solve this helm error "Error: UPGRADE FAILED: error validating "": error validating data"?

This is a follow-up question to my former question on chart validation here
While trying to deploy a helm chart, I have an error that shows thus:
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.initContainers[1]): unknown field "mountPath" in io.k8s.api.core.v1.Container
make: *** [upgrade] Error 1
FWIW, these are the initContainer spec details below:
spec:
  initContainers:
  {{- if .Values.libp2p.staticKeypair.enabled}}
  - name: libp2p-init-my-service
    image: busybox:1.28
    command: ['sh', '-c', '/bin/cp /libp2p-keys/* /root/libp2p-keys && /bin/chmod -R 0700 /root/libp2p-keys/']
    volumeMounts:
    - mountPath: /libp2p-keys
      name: source-libp2p-keys
    - mountPath: /root/libp2p-keys
      name: destination-libp2p
  {{- end }}
  - name: config-dir
    mountPath: /root/.mina-config
  - name: fix-perms
    image: busybox:1.28
    command: [ 'sh', '-c', 'for dir in keys echo-keys faucet-keys; do [ -d /$dir ] && /bin/cp /$dir/* /wallet-keys; done; /bin/chmod 0700 /wallet-keys']
    volumeMounts:
    - mountPath: "/keys/"
      name: private-keys
      readOnly: true
    - mountPath: /wallet-keys
      name: wallet-keys
  containers:
What could be the possible causes and how can I handle them?
You're working with YAML, so take care with the indentation, since it's really important.
Since you're declaring initContainers, at the first level you define Containers; but you included the following at that level:
- name: config-dir
  mountPath: /root/.mina-config
Since name is actually an attribute of Container, it complains about mountPath.
I don't know where you want to mount .mina-config, but it should be nested inside the volumeMounts attribute within a Container, not at the same level as the containers.
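A sketch of what that might look like, assuming .mina-config is meant to be mounted into one of the existing init containers and that a volume named config-dir is declared elsewhere in the chart:
  - name: fix-perms
    image: busybox:1.28
    command: [ 'sh', '-c', 'for dir in keys echo-keys faucet-keys; do [ -d /$dir ] && /bin/cp /$dir/* /wallet-keys; done; /bin/chmod 0700 /wallet-keys']
    volumeMounts:
    - mountPath: /wallet-keys
      name: wallet-keys
    # the stray entry becomes a regular mount inside the container
    - mountPath: /root/.mina-config
      name: config-dir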

Kubernetes - How to execute a command inside the .yml file

My problem is the following:
I need to execute the "envsubst" command from inside a Pod; I'm using Kubernetes.
Currently I'm executing the command manually by accessing the Pod, but I would like to do it automatically inside my configuration file, which is a .yml file.
I've found some references on the web and I've tried some examples, but the result was always that the Pod didn't start correctly, failing with a CrashLoopBackOff error.
I would execute the following command:
envsubst < /usr/share/nginx/html/env_token.js > /usr/share/nginx/html/env.js
There's the content of my .yml file (not all, just the most relevant part)
spec:
  containers:
  - name: example 1
    image: imagename/docker_console:${deploy.version}
    env:
    - name: PIPPO_ID
      valueFrom:
        secretKeyRef:
          name: pippo-${deploy.env}-secret
          key: accessKey
    - name: PIPPO
      valueFrom:
        secretKeyRef:
          name: pippo-${deploy.env}-secret
          key: secretAccessKey
    - name: ENV
      value: ${deploy.env}
    - name: CREATION_TIMESTAMP
      value: ${deploy.creation_timestamp}
    - name: TEST
      value: ${consoleenv}
    command: ["/bin/sh"]
    args: ["envsubst", "/usr/share/nginx/html/assets/env_token.js /usr/share/nginx/html/assets/env.js"]
Should the final two rows, "command" and "args", be written this way? I've already tried putting "envsubst" in the command, but it didn't work. I've also tried using commas in the args row to separate each parameter, with the same error.
Do you have some suggestions you know they work for sure?
Thanks
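One common pattern (only a sketch, not tested against this image) is to run the substitution through a shell and then hand control back to the image's normal foreground process; nginx is assumed here because of the /usr/share/nginx/html path, and without a long-running foreground process the container exits right away, which is what CrashLoopBackOff usually indicates:
    command: ["/bin/sh", "-c"]
    args:
      - |
        envsubst < /usr/share/nginx/html/assets/env_token.js > /usr/share/nginx/html/assets/env.js
        exec nginx -g 'daemon off;'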

Trying to mount existing volume in k8s generates error

I have a very simple test.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: petter-dummy-pod
spec:
  volumes:
    - name: recovery
      persistentVolumeClaim:
        claimName: petter-test
  containers:
    - name: petter-dummy-pod
      image: ubuntu
      command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5 ; done"]
      volumeMounts:
        - name: petter-test
          mounthPath: "/tmp/recovery"
          subPath: recovery
  restartPolicy: Never
When I apply this one it generates an error that I am a bit stuck with:
/home/ubuntu# kubectl apply -f test.yaml
error: error validating "test.yaml": error validating data: [ValidationError(Pod.spec.containers[0].volumeMounts[0]): unknown field "mounthPath" in io.k8s.api.core.v1.VolumeMount, ValidationError(Pod.spec.containers[0].volumeMounts[0]): missing required field "mountPath" in io.k8s.api.core.v1.VolumeMount]; if you choose to ignore these errors, turn validation off with --validate=false
Any ideas how to solve this one?
You have got a typo: mounthPath: "/tmp/recovery" should be mountPath rather than mounthPath.
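For reference, a corrected volumeMounts block might look like the sketch below; note that, beyond the typo, the mount's name has to reference a volume declared under spec.volumes, which in this manifest is recovery rather than petter-test:
      volumeMounts:
        - name: recovery
          mountPath: "/tmp/recovery"
          subPath: recovery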