I want to use podSpecPatch to patch volumeMounts, but the workflow fails - argo-workflows

I created a ClusterWorkflowTemplate where one parameter is volume-mounts, so that I can choose which already-created PVCs to mount on the pod instead of mounting all of them.
Then I get spec.containers[1].volumeMounts[0].name: Not found: "${volume-name}", even though ${volume-name} is already defined in the workflow's spec.volumes.
Demo:
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
spec:
  templates:
    - name: main
      inputs:
        parameters:
          - name: volume-mounts
            default: "[]"
      podSpecPatch: |
        containers:
          - name: main
            volumeMounts: {{inputs.parameters.volume-mounts}}
  volumes:
    - name: data1
      persistentVolumeClaim:
        claimName: already-created-pvc1
    - name: data2
      persistentVolumeClaim:
        claimName: already-created-pvc2
# params
volume-mounts: [{name: data1, mountPath: /data}]
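One way this template could be invoked, as a sketch (the template name below is hypothetical since the demo omits metadata; the parameter value is passed as a JSON string):
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: mount-demo-
spec:
  entrypoint: main
  workflowTemplateRef:
    name: my-cluster-template   # hypothetical name; the demo above omits metadata.name
    clusterScope: true
  arguments:
    parameters:
      - name: volume-mounts
        value: '[{"name": "data1", "mountPath": "/data"}]'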

For continuity:
The Argo team is already aware of this and there is an issue open to this effect: https://github.com/argoproj/argo/issues/4623

Related

ConfigMap volume with defaultMode not working?

My YAML file is like below:
apiVersion: v1
kind: Pod
metadata:
  name: fortune-configmap-volume
spec:
  containers:
    - image: luksa/fortune:env
      env:
        - name: INTERVAL
          valueFrom:
            configMapKeyRef:
              name: fortune-config
              key: sleep-interval
      name: html-generator
      volumeMounts:
        - name: html
          mountPath: /var/htdocs
    - image: nginx:alpine
      name: web-server
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
          readOnly: true
        - name: config
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: html
      emptyDir: {}
    - name: config
      configMap:
        name: fortune-config
        defaultMode: 0777
As you can see, I set the configMap volume's defaultMode to 0777, but when I try to modify a file in the container path /etc/nginx/conf.d, it tells me the operation is not allowed. Why?
https://github.com/kubernetes/kubernetes/issues/62099
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location.
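The quoted note suggests writing derived files somewhere else. If the nginx config really needs to be writable in place, one common workaround (a minimal sketch, not from the original question; the pod name is made up) is to copy the ConfigMap contents into a writable emptyDir with an init container and mount that instead:
apiVersion: v1
kind: Pod
metadata:
  name: writable-config-demo   # hypothetical name
spec:
  initContainers:
    - name: copy-config
      image: busybox
      # copy the read-only ConfigMap files into the writable emptyDir
      command: ["sh", "-c", "cp /config-ro/* /config-rw/"]
      volumeMounts:
        - name: config-ro
          mountPath: /config-ro
        - name: config-rw
          mountPath: /config-rw
  containers:
    - name: web-server
      image: nginx:alpine
      volumeMounts:
        - name: config-rw
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: config-ro
      configMap:
        name: fortune-config
    - name: config-rw
      emptyDir: {}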

Mounting a hostPath persistent volume to an Argo Workflow task template

I am working on a small proof-of-concept project for my company and would like to use Argo Workflows to automate some data engineering tasks. It's really easy to get set up, and I've been able to create a number of workflows that process data stored in a Docker image or retrieved from a REST API. However, to work with our sensitive data I would like to mount a hostPath persistent volume into one of my workflow tasks. When I follow the documentation I don't get the desired behavior: the directory appears empty.
OS: Ubuntu 18.04.4 LTS
Kubernetes executor: Minikube v1.20.0
Kubernetes version: v1.20.2
Argo Workflows version: v3.1.0-rc4
My persistent volume (claim) looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: argo-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: argo-hello
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
and I run
kubectl -n argo apply -f pv.yaml
My workflow looks as follows:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-volumes-
spec:
  entrypoint: dag-template
  arguments:
    parameters:
      - name: file
        value: /mnt/vol/test.txt
  volumes:
    - name: datadir
      persistentVolumeClaim:
        claimName: argo-hello
  templates:
    - name: dag-template
      inputs:
        parameters:
          - name: file
      dag:
        tasks:
          - name: readFile
            arguments:
              parameters: [{name: path, value: "{{inputs.parameters.file}}"}]
            template: read-file-template
          - name: print-message
            template: helloargo
            arguments:
              parameters: [{name: msg, value: "{{tasks.readFile.outputs.result}}"}]
            dependencies: [readFile]
    - name: helloargo
      inputs:
        parameters:
          - name: msg
      container:
        image: lambertsbennett/helloargo
        args: ["-msg", "{{inputs.parameters.msg}}"]
    - name: read-file-template
      inputs:
        parameters:
          - name: path
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["find /mnt/vol; ls -a /mnt/vol"]
        volumeMounts:
          - name: datadir
            mountPath: /mnt/vol
When this workflow executes it just prints an empty directory even though I populated the host directory with files. Is there something I am fundamentally missing? Thanks for any help.
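Since the executor here is Minikube, a hostPath path such as /tmp/data resolves on the Minikube node (the VM or container Minikube runs in), not on the workstation where the files were created. A quick way to check what the node actually sees, and to expose a host directory to it (the host path below is just an example):
# list the directory as the Minikube node sees it
minikube ssh -- ls -la /tmp/data

# mount a directory from the host machine into the Minikube node
minikube mount /path/on/host:/tmp/data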

kubernetes mongodb ops manager running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims

I am trying to configure MongoDB Ops Manager on Kubernetes. I have a PersistentVolumeClaim based on dynamic provisioning with Ceph and configured it successfully. What I am trying to do is define the volume mounts and volumes in the MongoDBOpsManager YAML file; I tried different things but couldn't get them defined.
Here is my MongoDBOpsManager YAML file:
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: mongodb
  # podSpec:
  #   podTemplate:
  #     spec:
  #       containers:
  #         - name: mongodb-enterprise-database
  #           volumeMounts:
  #             - name: mongo-persistent-storage
  #               mountPath: /data/db
  #       volumes:
  #         - name: mongo-persistent-storage
  #           persistentVolumeClaim:
  #             claimName: mongo-pvc
spec:
  # the version of Ops Manager distro to use
  version: 4.2.4
  containers:
    - name: mongodb-ops-manager
      volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumes:
    - name: mongo-persistent-storage
      persistentVolumeClaim:
        claimName: mongo-pvc
  # the name of the secret containing admin user credentials.
  adminCredentials: ops-manager-admin-secret
  externalConnectivity:
    type: NodePort
  # the Replica Set backing Ops Manager.
  # appDB has the SCRAM-SHA authentication mode always enabled
  applicationDatabase:
    members: 3
    statefulSet:
      spec:
        # volumeClaimTemplates:letsChangeTheWorld1
        template:
          spec:
            containers:
              - name: mongodb-ops-manager
                volumeMounts:
                  - name: mongo-persistent-storage
                    mountPath: /data/db
            volumes:
              - name: mongo-persistent-storage
                persistentVolumeClaim:
                  claimName: mongo-pvc
I don't know where I should put the volume mounts and volume definitions.
The Ops Manager (om) resource is created successfully, but when I check the pod created for it, I find this error:
running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims
spec:
  containers:
    - image:
      ....
      volumeMounts:
      .....
    - image:
      ....
      volumeMounts:
      ......
  volumes:
    - name:
The volumes key should come parallel to containers.
Volumes are defined once for the whole pod, while volumeMounts are specific to each container.
Example: https://kubernetes.io/docs/concepts/storage/volumes/
Check with this once; a filled-in sketch follows below.
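Filled in as a plain pod spec, that skeleton looks roughly like this (the image is a placeholder); a podTemplate spec follows the same layout, with volumes at the same level as containers:
spec:
  containers:
    - name: mongodb-ops-manager
      image: some-image          # placeholder image
      volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumes:
    - name: mongo-persistent-storage
      persistentVolumeClaim:
        claimName: mongo-pvc
Since the error says the PVC is unbound, it is also worth confirming that the claim actually bound to a volume:
kubectl get pvc -n mongodb
kubectl describe pvc mongo-pvc -n mongodb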

How to mount multiple files / secrets into common directory in kubernetes?

I have multiple secrets created from different files. I'd like to store all of them in the common directory /var/secrets/. Unfortunately, I'm unable to do that because Kubernetes throws an 'Invalid value: "/var/secret": must be unique' error during the pod validation step. Below is an example of my pod definition.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: alpine-secret
  name: alpine-secret
spec:
  containers:
    - command:
        - sleep
        - "3600"
      image: alpine
      name: alpine-secret
      volumeMounts:
        - name: xfile
          mountPath: "/var/secrets/"
          readOnly: true
        - name: yfile
          mountPath: "/var/secrets/"
          readOnly: true
  volumes:
    - name: xfile
      secret:
        secretName: my-secret-one
    - name: yfile
      secret:
        secretName: my-secret-two
How can I store files from multiple secrets in the same directory?
Projected Volume
You can use a projected volume to have two secrets in the same directory.
Example:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: alpine-secret
  name: alpine-secret
spec:
  containers:
    - command:
        - sleep
        - "3600"
      image: alpine
      name: alpine-secret
      volumeMounts:
        - name: xyfiles
          mountPath: "/var/secrets/"
          readOnly: true
  volumes:
    - name: xyfiles
      projected:
        sources:
          - secret:
              name: my-secret-one
          - secret:
              name: my-secret-two
(EDIT: Never mind - I just noticed @Jonas gave the same answer earlier. +1 from me)
Starting with Kubernetes v1.11+ it is possible with projected volumes:
A projected volume maps several existing volume sources into the same
directory.
Currently, the following types of volume sources can be projected:
secret
downwardAPI
configMap
serviceAccountToken
This is an example for "... how to use a projected Volume to mount several existing volume sources into the same directory".
Maybe using subPath will help.
Example:
volumeMounts:
  - name: app-redis-vol
    mountPath: /app/config/redis.yaml
    subPath: redis.yaml
  - name: app-config-vol
    mountPath: /app/config/app.yaml
    subPath: app.yaml
volumes:
  - name: app-redis-vol
    configMap:
      name: config-map-redis
      items:
        - key: yourKey
          path: redis.yaml
  - name: app-config-vol
    configMap:
      name: config-map-app
      items:
        - key: yourKey
          path: app.yaml
Here your ConfigMap named config-map-redis, created from the file redis.yaml, is mounted into /app/config/ as the file redis.yaml.
Likewise, the ConfigMap config-map-app is mounted into /app/config/ as app.yaml.
There is nice article about this here: Injecting multiple Kubernetes volumes to the same directory
Edited:
@Jonas's answer is correct!
However, if you use volumes as I did in the question, then the short answer is that you cannot do that: you have to give each mount an unused mountPath, because volume mounts must be unique and cannot share a common directory.
Solution:
What I did in the end was, instead of keeping the files in separate secrets, create one secret containing multiple files.
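For example, a single secret holding both files can be created like this (the secret and file names are hypothetical) and then mounted once at /var/secrets/:
kubectl create secret generic my-combined-secret \
  --from-file=./file-one.txt \
  --from-file=./file-two.txt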

Is there a way to get UID in pod spec

What I want to do is provide the pod with a unified log store, currently persisted to a hostPath, but I also want the path to include the pod's UID so I can easily find it after the pod is destroyed.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-logging-support
spec:
  containers:
    - image: python:2.7
      name: web-server
      command:
        - "sh"
        - "-c"
        - "python -m SimpleHTTPServer > /logs/http.log 2>&1"
      volumeMounts:
        - mountPath: /logs
          name: log-dir
  volumes:
    - name: log-dir
      hostPath:
        path: /var/log/apps/{metadata.uid}
        type: DirectoryOrCreate
metadata.uid is what I want to fill in, but I do not know how to do it.
For logging it's better to use another strategy.
I suggest you look at this link.
Your logs are best managed if they are streamed to stdout and grabbed by an agent, as sketched below.
Don't persist your logs on the filesystem; gather them using an agent and bring them together for further analysis.
Fluentd is very popular and deserves to be known.
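As a rough sketch of that pattern (using the log path from the question; the sidecar name and image are examples, and an emptyDir is used just for illustration), a streaming sidecar can tail the file to its own stdout so the node's log agent picks it up:
spec:
  containers:
    - name: web-server
      image: python:2.7
      command: ["sh", "-c", "python -m SimpleHTTPServer > /logs/http.log 2>&1"]
      volumeMounts:
        - name: logs
          mountPath: /logs
    # sidecar: stream the application log file to stdout
    - name: log-streamer
      image: busybox
      command: ["sh", "-c", "tail -n+1 -F /logs/http.log"]
      volumeMounts:
        - name: logs
          mountPath: /logs
  volumes:
    - name: logs
      emptyDir: {}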
After searching the Kubernetes docs, I finally found a solution to my specific problem. This feature (expanding environment variables in a mount's subpath via subPathExpr) is exactly what I wanted.
So I can create the pod with:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-logging-support
spec:
  containers:
    - image: python:2.7
      name: web-server
      command:
        - "sh"
        - "-c"
        - "python -m SimpleHTTPServer > /logs/http.log 2>&1"
      env:
        - name: POD_UID
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.uid
      volumeMounts:
        - mountPath: /logs
          name: log-dir
          # subPathExpr (not subPath) expands environment variables like $(POD_UID)
          subPathExpr: $(POD_UID)
  volumes:
    - name: log-dir
      hostPath:
        path: /var/log/apps/
        type: DirectoryOrCreate