What are the differences between these two ways of defining a ConfigMap in volumes? - kubernetes

I'm just curious: what are the differences between these two ways of defining a ConfigMap in the volumes section?
P.S. test-config contains a config.json file.
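For reference, a ConfigMap like that could be created with something like this (assuming config.json sits in the current directory; the ConfigMap name and file come from the question):
kubectl create configmap test-config --from-file=config.json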
newpod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: configmappod
spec:
  containers:
  - name: configmapcontainer
    image: blue
    volumeMounts:
    - name: config-vol
      mountPath: "/config/newConfig.json"
      subPath: "config.json"
      readOnly: true
  volumes:
  - name: config-vol
    projected:
      sources:
      - configMap:
          name: test-config
          items:
          - key: config.json
            path: config.json
newpod2.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: configmappod
spec:
  containers:
  - name: configmapcontainer
    image: blue
    volumeMounts:
    - name: config-vol
      mountPath: "/config/newConfig.json"
      subPath: "config.json"
      readOnly: true
  volumes:
  - name: config-vol
    configMap:
      name: test-config

No difference; they yield the same result. By the way, the readOnly attribute is redundant: it has no effect in either case, because ConfigMap (and projected) volumes are always mounted read-only.
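If you want to verify this yourself, you could create each pod in turn (they share the name configmappod, so only one can exist at a time) and compare the mounted file; a rough sketch:
kubectl apply -f newpod.yaml
kubectl exec configmappod -- cat /config/newConfig.json
kubectl delete pod configmappod
kubectl apply -f newpod2.yaml
kubectl exec configmappod -- cat /config/newConfig.json
Both should print the same config.json content from the test-config ConfigMap.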

Related

$(POD_NAME) in subPath of Statefulset + Kustomize not expanding

I have a StatefulSet with a volume that uses subPath: $(POD_NAME). I've also tried $HOSTNAME, which doesn't work either. How does one set the subPath of a volumeMount to the name of the pod or to $HOSTNAME?
Here's what I have:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ravendb
  namespace: pltfrmd
  labels:
    app: ravendb
spec:
  serviceName: ravendb
  template:
    metadata:
      labels:
        app: ravendb
    spec:
      containers:
      - command:
        # ["/bin/sh", "-ec", "while :; do echo '.'; sleep 6 ; done"]
        - /bin/sh
        - -c
        - /opt/RavenDB/Server/Raven.Server --log-to-console --config-path /configuration/settings.json
        image: ravendb/ravendb:latest
        imagePullPolicy: Always
        name: ravendb
        env:
        - name: POD_HOST_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: RAVEN_Logs_Mode
          value: Information
        ports:
        - containerPort: 8080
          name: http-api
          protocol: TCP
        - containerPort: 38888
          name: tcp-server
          protocol: TCP
        - containerPort: 161
          name: snmp
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
          subPath: $(POD_NAME)
        - mountPath: /configuration
          name: configuration
          subPath: ravendb
        - mountPath: /certificates
          name: certificates
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 120
      volumes:
      - name: certificates
        secret:
          secretName: ravendb-certificate
      - name: configuration
        persistentVolumeClaim:
          claimName: configuration
      - name: data
        persistentVolumeClaim:
          claimName: ravendb
And the Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: pltfrmd
  name: ravendb
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /volumes/ravendb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: pltfrmd
  name: ravendb
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
$HOSTNAME used to work, but doesn't anymore for some reason. I'm wondering if it's a bug in the hostPath storage provider? Here is a simpler example that shows the same problem:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx
          mountPath: /usr/share/nginx/html
          subPath: $(POD_NAME)
      volumes:
      - name: nginx
        configMap:
          name: nginx
Ok, so after great experimentation I found a way that still works:
Step one, map an environment variable:
env:
- name: POD_HOST_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
This creates $(POD_HOST_NAME) based on the field metadata.name.
Then in your mount you do this:
volumeMounts:
- mountPath: /data
  name: data
  subPathExpr: $(POD_HOST_NAME)
It's important to use subPathExpr, because subPath (which worked before) no longer expands variables. The mount will then use the environment variable you created and expand it properly.
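Putting both pieces together, the relevant container section would look roughly like this (a condensed sketch based on the StatefulSet above; everything else stays as in the question's manifest):
containers:
- name: ravendb
  image: ravendb/ravendb:latest
  env:
  - name: POD_HOST_NAME          # expose the pod name to the container and to subPathExpr
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - mountPath: /data
    name: data
    subPathExpr: $(POD_HOST_NAME)   # each pod writes to its own subdirectory of the volume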

ConfigMap volume is not mounting as volume along with secret

I have references to both Secrets and ConfigMaps in my Deployment YAML file. The Secret volume is mounting as expected, but not the ConfigMap.
I created a ConfigMap with multiple files, and when I do a kubectl get configmap ... it shows all of the expected values. Also, when I create just the ConfigMap, the volume mounts fine, but not alongside a Secret.
I have tried different scenarios, from keeping both in the same directory to separating them, but none of them seem to work.
Here is my YAML. Am I referencing the ConfigMap correctly?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 1 # tells deployment to run 1 pod
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: XXX/yyy/image-50
        ports:
        - containerPort: 9009
          protocol: TCP
        volumeMounts:
        - mountPath: /shared
          name: configs-volume
          readOnly: true
        volumeMounts:
        - mountPath: data/secrets
          name: atp-secret
          readOnly: true
      volumes:
      - name: configs-volume
        configMap:
          name: prompts-config
          key: application.properties
      volumes:
      - name: atp-secret
        secret:
          defaultMode: 420
          secretName: dev-secret
      restartPolicy: Always
      imagePullSecrets:
      - name: ocirsecrets
You have two separate volumes: lists and also two separate volumeMounts: lists. When the YAML is parsed, a repeated key replaces the earlier one, so Kubernetes only sees the last volumes: and volumeMounts: list in each set.
volumes:
- name: configs-volume
volumes: # this completely replaces the list of volumes
- name: atp-secret
In both cases you need a single volumes: or volumeMounts: key, and then multiple list items underneath those keys.
volumes:
- name: configs-volume
  configMap: { ... }
- name: atp-secret # do not repeat volumes: key
  secret: { ... }
containers:
- name: hello
  volumeMounts:
  - name: configs-volume
    mountPath: /shared
  - name: atp-secret # do not repeat volumeMounts: key
    mountPath: /data/secrets
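As a side note, this kind of duplicated-key mistake is easy to catch with a generic YAML linter; yamllint, for example, has a key-duplicates rule enabled by default (the file name here is just whatever your manifest is called):
yamllint hello-deployment.yaml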

unable to recognize "filebeat-kubernetes.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

I'm trying to run FileBeat on minikube following this doc with k8s 1.16
https://www.elastic.co/guide/en/beats/filebeat/7.4/running-on-kubernetes.html
I downloaded the manifest file as instructed
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.4/deploy/kubernetes/filebeat-kubernetes.yaml
Contents of the YAML file are below:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      host: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.4.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
When I try the deploy step,
kubectl create -f filebeat-kubernetes.yaml
I get the output + error:
configmap/filebeat-config created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
error: unable to recognize "filebeat-kubernetes.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
As we can see in the Kubernetes 1.16 deprecation notes:
DaemonSet, Deployment, StatefulSet, and ReplicaSet resources will no longer be served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 by default in v1.16. Migrate to the apps/v1 API
You need to change apiVersion
apiVersion: extensions/v1beta1 -> apiVersion: apps/v1
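If you want to confirm which group/version your cluster actually serves DaemonSet from, kubectl can tell you (standard kubectl commands, run against your own cluster):
kubectl api-resources --api-group=apps
kubectl explain daemonset --api-version=apps/v1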
Then there is another error:
missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec;
So we have to add the selector field:
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
Edited DaemonSet yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.4.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
Let me know if that helps you.
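If it's useful, after editing the manifest you can re-apply it and check that the DaemonSet comes up (the file name is the one from the question; kubectl apply is used here because the other objects were already created):
kubectl apply -f filebeat-kubernetes.yaml
kubectl get daemonset -n kube-system filebeat
kubectl get pods -n kube-system -l k8s-app=filebeat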

Copying mounted files with command in Kubernetes is slow

I am creating an OrientDB cluster with Kubernetes (in Minikube) and I use StatefulSets to create the pods. I am trying to mount all the OrientDB cluster configs into a folder named /configs. Using the container command, I copy those files after mounting into the standard /orientdb/config folder. However, after I added this functionality the pods were created at a slower rate, and sometimes I get exceptions like:
Unable to connect to the server: net/http: TLS handshake timeout
Before that, I tried to mount the configs directly into the /orientdb/config folder, but I got a permissions error, and from what I have read, mounting directly over that folder is prohibited.
How can I approach this issue? Is there some workaround?
The StatefulSet looks like this:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: orientdbservice
spec:
  serviceName: orientdbservice
  replicas: 3
  selector:
    matchLabels:
      service: orientdb
      type: container-deployment
  template:
    metadata:
      labels:
        service: orientdb
        type: container-deployment
    spec:
      containers:
      - name: orientdbservice
        image: orientdb:2.2.36
        command: ["/bin/sh","-c", "cp /configs/* /orientdb/config/ ; /orientdb/bin/server.sh -Ddistributed=true" ]
        env:
        - name: ORIENTDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: orientdb-password
              key: password.txt
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: 2424
          name: port-binary
        - containerPort: 2480
          name: port-http
        volumeMounts:
        - name: config
          mountPath: /orientdb/config
        - name: orientdb-config-backups
          mountPath: /configs/backups.json
          subPath: backups.json
        - name: orientdb-config-events
          mountPath: /configs/events.json
          subPath: events.json
        - name: orientdb-config-distributed
          mountPath: /configs/default-distributed-db-config.json
          subPath: default-distributed-db-config.json
        - name: orientdb-config-hazelcast
          mountPath: /configs/hazelcast.xml
          subPath: hazelcast.xml
        - name: orientdb-config-server
          mountPath: /configs/orientdb-server-config.xml
          subPath: orientdb-server-config.xml
        - name: orientdb-config-client-logs
          mountPath: /configs/orientdb-client-log.properties
          subPath: orientdb-client-log.properties
        - name: orientdb-config-server-logs
          mountPath: /configs/orientdb-server-log.properties
          subPath: orientdb-server-log.properties
        - name: orientdb-config-plugin
          mountPath: /configs/pom.xml
          subPath: pom.xml
        - name: orientdb-databases
          mountPath: /orientdb/databases
        - name: orientdb-backup
          mountPath: /orientdb/backup
        - name: orientdb-data
          mountPath: /orientdb/bin/data
      volumes:
      - name: config
        emptyDir: {}
      - name: orientdb-config-backups
        configMap:
          name: orientdb-configmap-backups
      - name: orientdb-config-events
        configMap:
          name: orientdb-configmap-events
      - name: orientdb-config-distributed
        configMap:
          name: orientdb-configmap-distributed
      - name: orientdb-config-hazelcast
        configMap:
          name: orientdb-configmap-hazelcast
      - name: orientdb-config-server
        configMap:
          name: orientdb-configmap-server
      - name: orientdb-config-client-logs
        configMap:
          name: orientdb-configmap-client-logs
      - name: orientdb-config-server-logs
        configMap:
          name: orientdb-configmap-server-logs
      - name: orientdb-config-plugin
        configMap:
          name: orientdb-configmap-plugin
      - name: orientdb-data
        hostPath:
          path: /import_data
          type: Directory
  volumeClaimTemplates:
  - metadata:
      name: orientdb-databases
      labels:
        service: orientdb
        type: pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
  - metadata:
      name: orientdb-backup
      labels:
        service: orientdb
        type: pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
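Stripped down to just the copy mechanics, the pattern used in the manifest above is roughly this (a condensed sketch showing only one of the ConfigMap files, not the full spec):
containers:
- name: orientdbservice
  image: orientdb:2.2.36
  # copy the ConfigMap-provided files over the image defaults, then start the server
  command: ["/bin/sh", "-c", "cp /configs/* /orientdb/config/ ; /orientdb/bin/server.sh -Ddistributed=true"]
  volumeMounts:
  - name: config                      # writable emptyDir layered over /orientdb/config
    mountPath: /orientdb/config
  - name: orientdb-config-hazelcast   # one individual config file projected from a ConfigMap
    mountPath: /configs/hazelcast.xml
    subPath: hazelcast.xml
volumes:
- name: config
  emptyDir: {}
- name: orientdb-config-hazelcast
  configMap:
    name: orientdb-configmap-hazelcast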

How to mount a persistent volume in a pod?

How do I mount a persistent volume? How do I reference it?
Here is a pod config:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
Taken from: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-pod
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
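To check that the claim is bound and that the volume is visible inside the container, something like this should work (the manifest file name is assumed; the pod, claim, and mount path come from the example above):
kubectl apply -f task-pv-pod.yaml
kubectl get pvc task-pv-claim
kubectl exec task-pv-pod -- ls /usr/share/nginx/html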