I have a very simple test.yaml file:
apiVersion: v1
metadata:
  name: petter-dummy-pod
spec:
  volumes:
  - name: recovery
    persistentVolumeClaim:
      claimName: petter-test
  containers:
  - name: petter-dummy-pod
    image: ubuntu
    command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5 ; done"]
    volumeMounts:
    - name: petter-test
      mounthPath: "/tmp/recovery"
      subPath: recovery
  restartPolicy: Never
When I apply it, it generates an error that I am a bit stuck with:
/home/ubuntu# kubectl apply -f test.yaml
error: error validating "test.yaml": error validating data: [ValidationError(Pod.spec.containers[0].volumeMounts[0]): unknown field "mounthPath" in io.k8s.api.core.v1.VolumeMount, ValidationError(Pod.spec.containers[0].volumeMounts[0]): missing required field "mountPath" in io.k8s.api.core.v1.VolumeMount]; if you choose to ignore these errors, turn validation off with --validate=false
Any ideas how to solve this one?
You have a typo in mounthPath: "/tmp/recovery" - it should be mountPath rather than mounthPath.
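For reference, a minimal corrected snippet (assuming the mount is meant to use the recovery volume declared under spec.volumes):

    volumeMounts:
    - name: recovery               # must match a volume name from spec.volumes
      mountPath: "/tmp/recovery"   # mountPath, not mounthPath
      subPath: recovery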
According to the Kubernetes documentation, we should specify type: DirectoryOrCreate if we want a directory to be created on the host. The default (empty) type means "no checks will be performed before mounting the hostPath volume".
However, I am seeing the directory get created on the host even when no type is specified:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-user-hostpath
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-user-local-storage1
  template:
    metadata:
      labels:
        app: busybox-user-local-storage1
    spec:
      containers:
      - name: busybox
        image: busybox:latest
        command: ["/bin/sh", "-ec", "while :; do echo $(date '+%Y-%m-%d %H:%M:%S') deployment1 >> /home/test.txt; sleep 5 ; done"]
        volumeMounts:
        - name: busybox-hostpath
          mountPath: /home
      volumes:
      - name: busybox-hostpath
        hostPath:
          path: /home/maintainer/data
The /home/maintainer/data directory did not exist before running the pod. After the deployment, I can see that the directory has been created. This goes against the documentation, unless I am missing something. I was expecting the pod to crash, but instead I can see the files being created. Any ideas, please?
This is something that goes back in time, to before type was even implemented for hostPath volumes. When type is unset, Kubernetes simply defaults to creating an empty directory. That is a backward-compatible implementation: nobody had the option to set type before it existed, so forcing an error when it is not defined would have broken all previously created pods. You can take a look at the actual design proposal: https://github.com/kubernetes/design-proposals-archive/blob/main/storage/volume-hostpath-qualifiers.md#host-volume
The design proposal clearly specifies the unset behavior: "If nothing exists at the given path, an empty directory will be created there. Otherwise, behaves like exists".
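If you want the stricter behaviour you expected, you can make it explicit with type; a minimal sketch based on your Deployment's volume (Directory fails the mount when the path does not already exist, while DirectoryOrCreate spells out the create-if-missing default):

      volumes:
      - name: busybox-hostpath
        hostPath:
          path: /home/maintainer/data
          type: Directory   # pod fails to start if this path is missing on the node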
I am trying to create a k8s Job with the YAML below:
apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "test1234"
        command: ["/bin/sh","-c"]
        args: ["mysql --defaults-extra-file=/home/mysql-config/mysql-config --host yyy.mysql.database.azure.com -e"create database test; show databases" && mysql --defaults-extra-file=/home/mysql-config/mysql-config --host yyy.mysql.database.azure.com test < /home/schema/test-schema.sql"]
        volumeMounts:
        - name: mysql-config-vol
          mountPath: /home/mysql-config
        - name: schema-config-vol
          mountPath: /home/schema
      volumes:
      - name: mysql-config-vol
        configMap:
          name: mysql-config
      - name: schema-config-vol
        configMap:
          name: test-schema
      restartPolicy: Never
There is some issue with the args given above, so I am getting the error below:
error: error parsing k8s-job.yaml: error converting YAML to JSON: yaml: line 15: did not find expected ',' or ']'
I have to pass commands in args to 1) log in to the MySQL server, 2) create a database called "test", and 3) import the SQL schema into the created database. But there's an error in the syntax and I am unable to figure out where exactly the issue is.
Can anyone please help me fix this? Thanks in advance!
Figured out the way; the following args works. Please refer to it if needed:
args: ["mysql --defaults-extra-file=/home/mysql-config/mysql-config --host yyy.mysql.database.azure.com -e 'create database obortech_qa; ' && mysql --defaults-extra-file=/home/mysql-config/mysql-config --host yyy.mysql.database.azure.com obortech_qa < /home/schema/test-schema.sql"]
This is a follow-up question to my former question on chart validation here.
While trying to deploy a Helm chart, I get an error that reads:
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.initContainers[1]): unknown field "mountPath" in io.k8s.api.core.v1.Container
make: *** [upgrade] Error 1
FWIW, the initContainers spec details are below:
spec:
  initContainers:
  {{- if .Values.libp2p.staticKeypair.enabled}}
  - name: libp2p-init-my-service
    image: busybox:1.28
    command: ['sh', '-c', '/bin/cp /libp2p-keys/* /root/libp2p-keys && /bin/chmod -R 0700 /root/libp2p-keys/']
    volumeMounts:
    - mountPath: /libp2p-keys
      name: source-libp2p-keys
    - mountPath: /root/libp2p-keys
      name: destination-libp2p
  {{- end }}
  - name: config-dir
    mountPath: /root/.mina-config
  - name: fix-perms
    image: busybox:1.28
    command: [ 'sh', '-c', 'for dir in keys echo-keys faucet-keys; do [ -d /$dir ] && /bin/cp /$dir/* /wallet-keys; done; /bin/chmod 0700 /wallet-keys']
    volumeMounts:
    - mountPath: "/keys/"
      name: private-keys
      readOnly: true
    - mountPath: /wallet-keys
      name: wallet-keys
  containers:
What could be the possible causes and how can I handle them?
You're working with YAML, so take care with the indentation; it really matters.
Since you're declaring initContainers, each item at that first level is a Container, but you included the following at that level:
- name: config-dir
  mountPath: /root/.mina-config
Since name actually is an attribute of Container, the validator only complains about mountPath, which is not.
I don't know where you want to mount .mina-config, but it should be nested inside the volumeMounts attribute of a Container, not at the same level as the containers themselves.
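As a sketch of what that could look like (assuming .mina-config belongs to the fix-perms init container and that a volume named config-dir is declared under spec.volumes; adjust to whichever container actually needs it):

  - name: fix-perms
    image: busybox:1.28
    command: [ 'sh', '-c', 'for dir in keys echo-keys faucet-keys; do [ -d /$dir ] && /bin/cp /$dir/* /wallet-keys; done; /bin/chmod 0700 /wallet-keys']
    volumeMounts:
    - mountPath: "/keys/"
      name: private-keys
      readOnly: true
    - mountPath: /wallet-keys
      name: wallet-keys
    - mountPath: /root/.mina-config   # moved here from the stray top-level entry
      name: config-dir                # assumes a matching config-dir volume under spec.volumes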
I have created a Kubernetes cluster on DigitalOcean and deployed k6 as a Job on it.
apiVersion: batch/v1
kind: Job
metadata:
  name: benchmark
spec:
  template:
    spec:
      containers:
      - name: benchmark
        image: loadimpact/k6:0.29.0
        command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=./test.json", "/etc/k6-config/script.js"]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/k6-config
      restartPolicy: Never
      volumes:
      - name: config-volume
        configMap:
          name: k6-config
This is what my k6-job.yaml file looks like. After deploying it to the cluster I checked the pod's logs, and they show a permission denied error:
level=error msg="open ./test.json: permission denied"
How can I solve this issue?
The k6 Docker image runs as an unprivileged user, but unfortunately the default working directory is set to /, so it has no permission to write there.
To work around this, consider changing the JSON output path to /home/k6/test.json, i.e.:
command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=/home/k6/test.json", "/etc/k6-config/script.js"]
I'm one of the maintainers on the team, so I will propose a change to the Dockerfile that sets the WORKDIR to /home/k6, to make the default behavior a bit more intuitive.
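Alternatively, if you want to keep the relative ./test.json path, here is a sketch using the standard workingDir field to point the container at a directory the unprivileged k6 user can write to:

      containers:
      - name: benchmark
        image: loadimpact/k6:0.29.0
        workingDir: /home/k6   # relative output paths now resolve here instead of /
        command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=./test.json", "/etc/k6-config/script.js"]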
I'm attempting to inject a ReplicationController's randomly generated pod ID extension (i.e. multiverse-{replicaID}) into a container's environment variables. I could manually get the hostname and extract it from there, but I'd prefer not to add that special case to the script running inside the container, for compatibility reasons.
If a pod is named multiverse-nffj1, INSTANCE_ID should equal nffj1. I've scoured the docs and found nothing.
apiVersion: v1
kind: ReplicationController
metadata:
  name: multiverse
spec:
  replicas: 3
  template:
    spec:
      containers:
      - env:
        - name: INSTANCE_ID
          value: $(replicaID)
I've tried adding a command to the controller's template configuration to create the environment variable from the hostname, but I couldn't figure out how to make that environment variable available to the running script.
Is there a variable I'm missing, or does this feature not exist? If it doesn't, does anyone have any ideas on how to make this to work without editing the script inside of the container?
There is an answer provided by Anton Kostenko about inserting DB credentials into container environment variables, and it can be applied to your case as well. It all comes down to the content of the initContainers spec.
You can use an init container to take the hash from the container's hostname and put it into a file on a shared volume that you mount into the main container.
In this example the init container puts the Pod name into the INSTANCE_ID environment variable, but you can modify it according to your needs:
Create the init.yaml file with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: init-test
spec:
  containers:
  - name: init-test
    image: ubuntu
    args: [bash, -c, 'source /data/config && echo $INSTANCE_ID && while true ; do sleep 1000; done ']
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: init-init
    image: busybox
    command: ["sh","-c","echo -n INSTANCE_ID=$(hostname) > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
Create the pod using the following command:
kubectl create -f init.yaml
Check that pod initialization is done and the pod is Running:
kubectl get pod init-test
Check the logs to see the results of this example configuration:
$ kubectl logs init-test
init-test
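Here INSTANCE_ID ends up as the full pod name (init-test). Since you only want the random suffix (nffj1 out of multiverse-nffj1), the init container command could strip everything up to the last dash instead; a sketch of that variation (busybox ships awk, and the rest of the spec stays the same):

  initContainers:
  - name: init-init
    image: busybox
    command: ["sh", "-c", "echo -n INSTANCE_ID=$(hostname | awk -F- '{print $NF}') > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data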