I'm trying to run kubectl -f pod.yaml but getting this error. Any hint?
error: error validating "/pod.yaml": error validating data: [ValidationError(Pod): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "nodeSelector" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "tasks" in io.k8s.api.core.v1.Pod]; if you choose to ignore these errors, turn validation off with --validate=false
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-10.0.1
  namespace: e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
  - name: main
    image: bded587f4604
imagePullSecrets: ["testo", "awsecr-cred"]
nodeSelector:
  kubernetes.io/hostname: 11-4730
tasks:
- name: traind
  command: et estimate -e v/lat/exent_sps/enet/default_sql.spec.txt -r /out
  completions: 1
  inputs:
    datasets:
    - name: poa
      version: 2018-
      mountPath: /in/0
You have an indentation error in your pod.yaml definition: imagePullSecrets (and nodeSelector) are Pod spec fields, so they need to be indented under spec rather than sitting at the top level, and you need to specify a - name: entry for each of your imagePullSecrets. (There is also no tasks field in a Pod spec, which is why it is reported as unknown.) It should be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test-test-pod-10.0.1.11-e8b74730
  namespace: test-e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
  - name: main
    image: test.io/tets/maglev-test-bded587f4604
  imagePullSecrets:
  - name: testawsecr-cred
  ...
Note that imagePullSecrets: is plural and an array, so you can specify credentials for multiple registries.
If you are using Docker, you can also specify multiple credentials in ~/.docker/config.json.
If you have the same credentials in imagePullSecrets: and in ~/.docker/config.json, the credentials are merged.
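For example, a minimal sketch with two registries (the secret and image names here are just placeholders; each secret would be created beforehand, e.g. with kubectl create secret docker-registry):
apiVersion: v1
kind: Pod
metadata:
  name: multi-registry-pod      # example name
spec:
  containers:
  - name: main
    image: registry-a.example.com/team/app:latest   # placeholder image
  imagePullSecrets:             # one entry per registry credential
  - name: registry-a-cred
  - name: registry-b-cred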
I'm currently learning Kubernetes and recently learnt about using ConfigMaps for a container's environment variables.
Let's say I have the following simple ConfigMap:
apiVersion: v1
data:
  MYSQL_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mycm
I know that a container of some deployment can consume this environment variable via:
kubectl set env deployment mydb --from=configmap/mycm
or by specifying it manually in the manifest like so:
containers:
- env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      configMapKeyRef:
        key: MYSQL_ROOT_PASSWORD
        name: mycm
However, this isn't what I am after, since I'd have to manually change the environment variables each time the ConfigMap changes.
I am aware that mounting a ConfigMap to the Pod's volume allows for the auto-updating of ConfigMap values. I'm currently trying to find a way to set a Container's environment variables to those stored in the mounted config map.
So far I have the following YAML manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mydb
  name: mydb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mydb
    spec:
      containers:
      - image: mariadb
        name: mariadb
        resources: {}
        args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: temp
      volumes:
      - name: config-volume
        configMap:
          name: mycm
status: {}
I'm attempting to set MYSQL_ROOT_PASSWORD to some temporary value and then update it to the mounted value as soon as the container starts, via args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]
As I somewhat expected, this didn't work, resulting in the following error:
/usr/local/bin/docker-entrypoint.sh: line 539: /export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD): No such file or directory
I assume this is because the volume is mounted after the entrypoint. I tried adding a readiness probe to wait for the mount but this didn't work either:
readinessProbe:
  exec:
    command: ["sh", "-c", "test -f /etc/config/MYSQL_ROOT_PASSWORD"]
  initialDelaySeconds: 5
  periodSeconds: 5
Is there any easy way to achieve what I'm trying to do, or is it impossible?
So I managed to find a solution, with a lot of inspiration from this answer.
Essentially, what I did was create a sidecar container based on the alpine/k8s image that mounts the ConfigMap and constantly watches it for changes, since Kubernetes automatically updates a mounted ConfigMap when the ConfigMap is changed. This required the following script, watch_passwd.sh, which uses inotifywait to watch for changes and then uses the K8s API to roll out the changes accordingly:
update_passwd() {
    kubectl delete secret mysql-root-passwd > /dev/null 2>&1
    kubectl create secret generic mysql-root-passwd --from-file=/etc/config/MYSQL_ROOT_PASSWORD
}

update_passwd

while true
do
    inotifywait -e modify "/etc/config/MYSQL_ROOT_PASSWORD"
    update_passwd
    kubectl rollout restart deployment $1
done
The Dockerfile is then:
FROM docker.io/alpine/k8s:1.25.6
RUN apk update && apk add inotify-tools
COPY watch_passwd.sh .
After building the image (locally in this case) as mysidecar, I create the ServiceAccount, Role, and RoleBinding outlined here, adding rules for deployments so that they can be restarted by the sidecar.
After this, I piece it all together to create the following YAML Manifest (note that imagePullPolicy is set to Never, since I created the image locally):
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mydb
  name: mydb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydb
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mydb
    spec:
      serviceAccountName: secretmaker
      containers:
      - image: mysidecar
        name: mysidecar
        imagePullPolicy: Never
        command:
        - /bin/sh
        - -c
        - |
          ./watch_passwd.sh $(DEPLOYMENT_NAME)
        env:
        - name: DEPLOYMENT_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app']
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
      - image: mariadb
        name: mariadb
        resources: {}
        envFrom:
        - secretRef:
            name: mysql-root-passwd
      volumes:
      - name: config-volume
        configMap:
          name: mycm
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secretmaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: mydb
  name: secretmaker
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "delete", "list"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: mydb
  name: secretmaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secretmaker
subjects:
- kind: ServiceAccount
  name: secretmaker
  namespace: default
---
It all works as expected! Hopefully this is able to help someone out in the future. Also, if anybody comes across this and has a better solution please feel free to let me know :)
We recently ran into an issue with using environment variables inside a container.
OS: windows 10 pro
k8s cluster: minikube
k8s version: 1.18.3
1. The way that doesn't work, though it's the preferred way for us
Here is the deployment.yaml using 'envFrom':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app.kubernetes.io/name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: db
  template:
    metadata:
      labels:
        app.kubernetes.io/name: db
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
      - name: db
        image: "postgres:9.4"
        ports:
        - name: http
          containerPort: 5432
          protocol: TCP
        envFrom:
        - configMapRef:
            name: db-configmap
here is the db.properties:
POSTGRES_HOST_AUTH_METHOD=trust
step 1:
kubectl create configmap db-configmap --from-file=./db.properties
step 2:
kubectl apply -f ./deployment.yaml
step 3:
kubectl get pod
Run the above command, get the following result:
db-8d7f7bcb9-7l788 0/1 CrashLoopBackOff 1 9s
That indicates the environment variable POSTGRES_HOST_AUTH_METHOD is not injected.
2. The way that works (we can't work with this approach)
Here is the deployment.yaml using 'env':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app.kubernetes.io/name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: db
  template:
    metadata:
      labels:
        app.kubernetes.io/name: db
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
      - name: db
        image: "postgres:9.4"
        ports:
        - name: http
          containerPort: 5432
          protocol: TCP
        env:
        - name: POSTGRES_HOST_AUTH_METHOD
          value: trust
step 1:
kubectl apply -f ./deployment.yaml
step 2:
kubectl get pod
Run the above command, get the following result:
db-fc58f998d-nxgnn 1/1 Running 0 32s
The above indicates the environment variable is injected, so the db starts.
What did I do wrong in the first case?
Thank you in advance for the help.
Update:
Here is the ConfigMap:
kubectl describe configmap db-configmap
Name: db-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
db.properties:
----
POSTGRES_HOST_AUTH_METHOD=trust
To create the ConfigMap for use case 1, please use the command below:
kubectl create configmap db-configmap --from-env-file db.properties
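For reference, the two flags produce differently shaped data; a sketch of roughly what kubectl get configmap db-configmap -o yaml would show in each case (metadata trimmed):
# Created with --from-env-file db.properties: each KEY=VALUE line becomes its own key.
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-configmap
data:
  POSTGRES_HOST_AUTH_METHOD: trust
---
# Created with --from-file db.properties: the whole file becomes one key named after
# the file, so envFrom does not yield a POSTGRES_HOST_AUTH_METHOD variable.
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-configmap
data:
  db.properties: |
    POSTGRES_HOST_AUTH_METHOD=trust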
Are you missing the key? (see key: below, no quotes). You also need to provide the name of the environment variable; people usually reuse the key name, but you don't have to. I've used the same value ("POSTGRES_HOST_AUTH_METHOD") below as both the environment variable NAME and the key name of the ConfigMap.
#start env .. where we add environment variables
env:
  # Define the environment variable
  - name: POSTGRES_HOST_AUTH_METHOD
    #value: "UseHardCodedValueToDebugSometimes"
    valueFrom:
      configMapKeyRef:
        # The ConfigMap containing the value you want to assign to environment variable (above "name:")
        name: db-configmap
        # Specify the key associated with the value
        key: POSTGRES_HOST_AUTH_METHOD
My example (using your values) comes from this generic example:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
pods/pod-single-configmap-env-variable.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
      # Define the environment variable
      - name: SPECIAL_LEVEL_KEY
        valueFrom:
          configMapKeyRef:
            # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
            name: special-config
            # Specify the key associated with the value
            key: special.how
  restartPolicy: Never
PS
You can use "describe" to take a look at your config-map after you think you have set it up correctly.
kubectl describe configmap db-configmap --namespace=IfNotDefaultNameSpaceHere
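Alternatively, if the ConfigMap is created with --from-env-file (so its keys are the variable names themselves), the envFrom form from the question should work as intended; a minimal sketch of just the container part:
containers:
- name: db
  image: "postgres:9.4"
  envFrom:
  - configMapRef:
      name: db-configmap   # must contain the key POSTGRES_HOST_AUTH_METHOD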
See what happens when you do it the way you described:
deployment# exb db-7785cdd5d8-6cstw
root#db-7785cdd5d8-6cstw:/# env | grep -i TRUST
db.properties=POSTGRES_HOST_AUTH_METHOD=trust
The env that gets set is not exactly POSTGRES_HOST_AUTH_METHOD; it is actually the file name that ends up as the variable name.
Create the configmap via
kubectl create cm db-configmap --from-env-file db.properties and it will actually put the POSTGRES_HOST_AUTH_METHOD env variable in the pod.
I am learning Kubernetes and creating a Pod with a ConfigMap. I created a YAML file to create the Pod, but I am getting the error below.
error: error parsing yuvi.yaml: error converting YAML to JSON: YAML: line 13: mapping values are not allowed in this context
My configmap name: yuviconfigmap
YAML file:
apiVersion: v1
kind: Pod
metadata:
name: yuvipod1
spec:
containers:
- name: yuvicontain
image: nginx
volumeMounts:
- name: yuvivolume
mountPath: /etc/voulme
volumes:
- name: yuvivolume
configMap:
name: yuviconfigmap
The YAML is not structured the way Kubernetes expects; the indentation is off. The below should work:
apiVersion: v1
kind: Pod
metadata:
  name: yuvipod1
spec:
  containers:
  - name: yuvicontain
    image: nginx
    volumeMounts:
    - name: yuvivolume
      mountPath: /etc/voulme
  volumes:
  - name: yuvivolume
    configMap:
      name: yuviconfigmap
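Note that the Pod also expects the ConfigMap yuviconfigmap to exist already; a minimal sketch of one (the key/value pair is just an example):
apiVersion: v1
kind: ConfigMap
metadata:
  name: yuviconfigmap
data:
  example.key: example-value   # placeholder data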
To get one of our issues resolved, we need the property "CATALINA_OPTS=-Djava.awt.headless=true" in the Kubernetes configuration. I guess this should be added in a .yml file.
Please help us understand where this has to be added. Thanks in advance.
Any sample YAML file, or a link to one, would be great.
CATALINA_OPTS is an environment variable. Here is how to define it and set it to a single value in Deployment configuration yaml:
...
spec:
  ...
  template:
    metadata:
      labels:
        ...
    spec:
      containers:
      - image: ...
        ...
        env:
        - name: CATALINA_OPTS
          value: -Djava.awt.headless=true
...
It could also go in a ConfigMap.
As an example, if you want the variable CATALINA_OPTS to contain the value "-Djava.awt.headless=true", you can do this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  CATALINA_OPTS: "-Djava.awt.headless=true"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: bb
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: my-config
  restartPolicy: Never
# kubectl logs my-pod
...
CATALINA_OPTS=-Djava.awt.headless=true
Is this what you need?
I'm certain I'm missing something obvious. I have looked through the documentation for ScheduledJobs / CronJobs on Kubernetes, but I cannot find a way to do the following on a schedule:
Connect to an existing Pod
Execute a script
Disconnect
I have alternative methods of doing this, but they don't feel right.
Schedule a cron task for: kubectl exec -it $(kubectl get pods --selector=some-selector | head -1) /path/to/script
Create one deployment that has a "Cron Pod" which also houses the application, and many "Non Cron Pods" which are just the application. The Cron Pod would use a different image (one with cron tasks scheduled).
I would prefer to use the Kubernetes ScheduledJobs if possible to prevent the same Job running multiple times at once and also because it strikes me as the more appropriate way of doing it.
Is there a way to do this by ScheduledJobs / CronJobs?
http://kubernetes.io/docs/user-guide/cron-jobs/
As far as I'm aware there is no "official" way to do this the way you want, and that is, I believe, by design. Pods are supposed to be ephemeral and horizontally scalable, and Jobs are designed to exit. Having a cron job "attach" to an existing pod doesn't fit that model. The scheduler would have no idea if the job completed.
Instead, a Job can bring up an instance of your application specifically for running the Job and then take it down once the Job is complete. To do this you can use the same image for the Job as for your Deployment but use a different "entrypoint" by setting command:.
If the job needs access to data created by your application, then that data will need to be persisted outside the application/Pod; you could do this in a few ways, but the obvious ones would be a database or a persistent volume.
For example, using a database would look something like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APP
spec:
  template:
    metadata:
      labels:
        name: THIS
        app: THAT
    spec:
      containers:
      - image: APP:IMAGE
        name: APP
        command:
        - app-start
        env:
        - name: DB_HOST
          value: "127.0.0.1"
        - name: DB_DATABASE
          value: "app_db"
And a job that connects to the same database, but with a different "Entrypoint" :
apiVersion: batch/v1
kind: Job
metadata:
  name: APP-JOB
spec:
  template:
    metadata:
      name: APP-JOB
      labels:
        app: THAT
    spec:
      containers:
      - image: APP:IMAGE
        name: APP-JOB
        command:
        - app-job
        env:
        - name: DB_HOST
          value: "127.0.0.1"
        - name: DB_DATABASE
          value: "app_db"
Or the persistent volume approach would look something like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APP
spec:
  template:
    metadata:
      labels:
        name: THIS
        app: THAT
    spec:
      containers:
      - image: APP:IMAGE
        name: APP
        command:
        - app-start
        volumeMounts:
        - mountPath: "/var/www/html"
          name: APP-VOLUME
      volumes:
      - name: APP-VOLUME
        persistentVolumeClaim:
          claimName: APP-CLAIM
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: APP-VOLUME
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /app
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: APP-CLAIM
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      service: app
With a job like this, attaching to the same volume:
apiVersion: batch/v1
kind: Job
metadata:
  name: APP-JOB
spec:
  template:
    metadata:
      name: APP-JOB
      labels:
        app: THAT
    spec:
      containers:
      - image: APP:IMAGE
        name: APP-JOB
        command:
        - app-job
        volumeMounts:
        - mountPath: "/var/www/html"
          name: APP-VOLUME
      volumes:
      - name: APP-VOLUME
        persistentVolumeClaim:
          claimName: APP-CLAIM
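To actually run that Job on a schedule, the same pod template can sit inside a CronJob; a rough sketch reusing the names above (the CronJob name and schedule are placeholders, and current clusters use apiVersion batch/v1 for CronJobs):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: APP-CRONJOB
spec:
  schedule: "0 3 * * *"        # example schedule
  concurrencyPolicy: Forbid    # avoid overlapping runs
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: APP:IMAGE
            name: APP-JOB
            command:
            - app-job
            volumeMounts:
            - mountPath: "/var/www/html"
              name: APP-VOLUME
          volumes:
          - name: APP-VOLUME
            persistentVolumeClaim:
              claimName: APP-CLAIM
          restartPolicy: OnFailure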
Create a scheduled pod that uses the Kubernetes API to run the command you want on the target pods, via the exec function. The pod image should contain the client libraries to access the API -- many of these are available or you can build your own.
For example, here is a solution using the Python client that execs to each ZooKeeper pod and runs a database maintenance command:
import time
from kubernetes import config
from kubernetes.client import Configuration
from kubernetes.client.apis import core_v1_api
from kubernetes.client.rest import ApiException
from kubernetes.stream import stream
import urllib3

config.load_incluster_config()
configuration = Configuration()
configuration.verify_ssl = False
configuration.assert_hostname = False
urllib3.disable_warnings()
Configuration.set_default(configuration)

api = core_v1_api.CoreV1Api()
label_selector = 'app=zk,tier=backend'
namespace = 'default'
resp = api.list_namespaced_pod(namespace=namespace,
                               label_selector=label_selector)

for x in resp.items:
    name = x.spec.hostname
    resp = api.read_namespaced_pod(name=name,
                                   namespace=namespace)
    exec_command = [
        '/bin/sh',
        '-c',
        'opt/zookeeper/bin/zkCleanup.sh -n 10'
    ]
    resp = stream(api.connect_get_namespaced_pod_exec, name, namespace,
                  command=exec_command,
                  stderr=True, stdin=False,
                  stdout=True, tty=False)
    print("============================ Cleanup %s: ============================\n%s\n" % (name, resp if resp else "<no output>"))
and the associated Dockerfile:
FROM ubuntu:18.04
ADD ./cleanupZk.py /
RUN apt-get update \
&& apt-get install -y python-pip \
&& pip install kubernetes \
&& chmod +x /cleanupZk.py
CMD /cleanupZk.py
Note that if you have an RBAC-enabled cluster, you may need to create a service account and appropriate roles to make this API call possible. A role such as the following is sufficient to list pods and to run exec, such as the example script above requires:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-list-exec
  namespace: default
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods/exec"]
  verbs: ["create", "get"]
An example of the associated cron job:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: zk-maint
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: zk-maint-pod-list-exec
  namespace: default
subjects:
- kind: ServiceAccount
  name: zk-maint
  namespace: default
roleRef:
  kind: Role
  name: pod-list-exec
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: zk-maint
  namespace: default
  labels:
    app: zk-maint
    tier: jobs
spec:
  schedule: "45 3 * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: zk-maint
            image: myorg/zkmaint:latest
          serviceAccountName: zk-maint
          restartPolicy: OnFailure
          imagePullSecrets:
          - name: azure-container-registry
This seems like an anti-pattern. Why can't you just run your worker pod as a job pod?
Regardless you seem pretty convinced you need to do this. Here is what I would do.
Take your worker pod and wrap your shell execution in a simple webservice; it's 10 minutes of work with just about any language. Expose the port and put a Service in front of that worker (or workers). Then your job pods can simply curl <service>.<namespace>.svc.cluster.local:<port>/<path> (unless you've futzed with DNS).
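A rough sketch of the job side, assuming the worker Service is called worker in the default namespace and exposes the trigger on port 8080 at /run (all of these names are made up):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: trigger-worker
spec:
  schedule: "*/30 * * * *"          # example schedule
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: trigger
            image: curlimages/curl   # any small image with curl works
            command: ["curl", "-sf", "http://worker.default.svc.cluster.local:8080/run"]
          restartPolicy: OnFailure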
It sounds as though you might want to run scheduled work within the pod itself rather than doing this at the Kubernetes level. I would approach this as a cronjob within the container, using traditional Linux crontab. Consider:
kind: Pod
apiVersion: v1
metadata:
  name: shell
spec:
  initContainers:
  - name: shell
    image: "nicolaka/netshoot"
    command:
    - /bin/sh
    - -c
    - |
      echo "0 */5 * * * /opt/whatever/bin/do-the-thing" | crontab -
      sleep infinity
If you want to track logs from those processes, that will require a fluentd type of mechanism to track those log files.
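Alternatively, instead of shipping the log files, one common trick (a sketch, not part of the original answer) is to point the cron entry's output at PID 1's stdout so plain kubectl logs picks it up; this also assumes the image provides a cron daemon such as BusyBox crond, since without one the crontab alone never executes:
command:
- /bin/sh
- -c
- |
  echo "0 */5 * * * /opt/whatever/bin/do-the-thing > /proc/1/fd/1 2>&1" | crontab -
  crond -f -l 2   # BusyBox cron in the foreground; assumes the image ships it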
I managed to do this by creating a custom image with doctl (DigitalOcean's command line interface) and kubectl. The CronJob object would use these two commands to download the cluster configuration and run a command against a container.
Here is a sample CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: drupal-cron
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: drupal-cron
            image: juampynr/digital-ocean-cronjob:latest
            env:
            - name: DIGITALOCEAN_ACCESS_TOKEN
              valueFrom:
                secretKeyRef:
                  name: api
                  key: key
            command: ["/bin/bash","-c"]
            args:
            - doctl kubernetes cluster kubeconfig save drupster;
              POD_NAME=$(kubectl get pods -l tier=frontend -o=jsonpath='{.items[0].metadata.name}');
              kubectl exec $POD_NAME -c drupal -- vendor/bin/drush core:cron;
          restartPolicy: OnFailure
Here is the Docker image that the CronJob uses: https://hub.docker.com/repository/docker/juampynr/digital-ocean-cronjob
If you are not using DigitalOcean, figure out how to download the cluster configuration so kubectl can use it. For example, with Google Cloud, you would have to download gcloud.
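For instance, on GKE the args might look roughly like this, assuming an image that has gcloud and kubectl installed and credentials the pod can use (cluster name, zone, and selectors below are placeholders):
command: ["/bin/bash", "-c"]
args:
  - gcloud container clusters get-credentials my-cluster --zone us-central1-a;
    POD_NAME=$(kubectl get pods -l tier=frontend -o=jsonpath='{.items[0].metadata.name}');
    kubectl exec $POD_NAME -c drupal -- vendor/bin/drush core:cron;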
Here is the project repository where I implemented this https://github.com/juampynr/drupal8-do.
This one should help:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - kubectl exec -it <podname> "sh script.sh ";
          restartPolicy: OnFailure
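Note that the stock busybox image does not ship kubectl, and exec'ing into another pod also needs RBAC like the pods/exec Role shown in an earlier answer; a rough sketch of the pod template part with those assumptions (the image and ServiceAccount names are placeholders):
spec:
  serviceAccountName: pod-exec-sa        # bound to a Role that allows pods/exec
  containers:
  - name: hello
    image: bitnami/kubectl:latest        # placeholder: any image that ships kubectl
    command:
    - /bin/sh
    - -c
    - kubectl exec <podname> -- sh script.sh   # no -it, since a Job pod has no TTY
  restartPolicy: OnFailure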