consul StatefulSet failing - kubernetes

I am trying to deploy Consul on Kubernetes using a StatefulSet with the following manifest:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: consul
labels:
app: consul
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: consul
subjects:
- kind: ServiceAccount
name: consul
namespace: dev-ethernet
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul
namespace: dev-ethernet
labels:
app: consul
---
apiVersion: v1
kind: Secret
metadata:
name: consul-secret
namespace: dev-ethernet
data:
consul-gossip-encryption-key: "aIRpNkHT/8Tkvf757sj2m5AcRlorWNgzcLI4yLEMx7M="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: consul-config
namespace: dev-ethernet
data:
server.json: |
{
"bind_addr": "0.0.0.0",
"client_addr": "0.0.0.0",
"disable_host_node_id": true,
"data_dir": "/consul/data",
"log_level": "INFO",
"datacenter": "us-west-2",
"domain": "cluster.local",
"ports": {
"http": 8500
},
"retry_join": [
"provider=k8s label_selector=\"app=consul,component=server\""
],
"server": true,
"telemetry": {
"prometheus_retention_time": "5m"
},
"ui": true
}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
namespace: dev-ethernet
spec:
selector:
matchLabels:
app: consul
component: server
serviceName: consul
podManagementPolicy: Parallel
replicas: 3
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
template:
metadata:
labels:
app: consul
component: server
annotations:
consul.hashicorp.com/connect-inject: "false"
spec:
serviceAccountName: consul
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- consul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
securityContext:
fsGroup: 1000
containers:
- name: consul
image: "consul:1.8"
args:
- "agent"
- "-advertise=$(POD_IP)"
- "-bootstrap-expect=3"
- "-config-file=/etc/consul/config/server.json"
- "-encrypt=$(GOSSIP_ENCRYPTION_KEY)"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: GOSSIP_ENCRYPTION_KEY
valueFrom:
secretKeyRef:
name: consul-secret
key: consul-gossip-encryption-key
volumeMounts:
- name: data
mountPath: /consul/data
- name: config
mountPath: /etc/consul/config
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave
ports:
- containerPort: 8500
name: ui-port
- containerPort: 8400
name: alt-port
- containerPort: 53
name: udp-port
- containerPort: 8080
name: http-port
- containerPort: 8301
name: serflan
- containerPort: 8302
name: serfwan
- containerPort: 8600
name: consuldns
- containerPort: 8300
name: server
volumes:
- name: config
configMap:
name: consul-config
volumeClaimTemplates:
- metadata:
name: data
labels:
app: consul
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: aws-gp2
resources:
requests:
storage: 3Gi
But I get ==> encrypt has invalid key: illegal base64 data at input byte 1 when the container starts.
I have generated consul-gossip-encryption-key locally using docker run -i -t consul keygen.
Does anyone know what's wrong here?

The values under a Secret's data field must be base64-encoded. Kubernetes decodes data before handing it to the container, so the raw consul keygen output arrives as decoded binary rather than the base64 string Consul expects.
Try
kubectl create secret generic consul-gossip-encryption-key --from-literal=key="$(docker run -i -t consul keygen)" --dry-run -o=yaml
and replace the Secret below with the generated one (adjust the secret name and key so they still match what the StatefulSet references):
apiVersion: v1
kind: Secret
metadata:
  name: consul-secret
  namespace: dev-ethernet
data:
  consul-gossip-encryption-key: "aIRpNkHT/8Tkvf757sj2m5AcRlorWNgzcLI4yLEMx7M="
ref: https://www.consul.io/docs/k8s/helm#v-global-gossipencryption
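If you prefer to keep the Secret inline in the manifest, a minimal sketch (not part of the original answer) is to use stringData so the API server does the base64 encoding for you:
apiVersion: v1
kind: Secret
metadata:
  name: consul-secret
  namespace: dev-ethernet
stringData:
  # the raw consul keygen output goes here; the API server encodes it into .data,
  # so the container receives the original base64 gossip key unchanged
  consul-gossip-encryption-key: "aIRpNkHT/8Tkvf757sj2m5AcRlorWNgzcLI4yLEMx7M="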

Related

Local Airflow Instance with minikube

I'm trying to run a local Airflow instance on my laptop using minikube and a deployment.yml file, applied with the following command: kubectl apply -f ./deployment.yml.
After slightly tweaking this file I was able to end up with all three pods (postgres, webserver, scheduler) running fine.
(The kubectl get pods output screenshot is omitted here.)
The content of the file:
---
# Source: airflow/templates/rbac/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
automountServiceAccountToken: true
---
# Source: airflow/charts/postgresql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
type: Opaque
data:
password: "**************"
# We don't auto-generate LDAP password when it's not provided as we do for other passwords
---
# Source: airflow/templates/config/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
type: Opaque
data:
airflow-password: "*************"
# Airflow keys must be base64-encoded, hence we need to pipe to 'b64enc' twice
# The auto-generation mechanism available at "common.secrets.passwords.manage" isn't compatible with encoding twice
# Therefore, we can only use this function if the secret already exists
airflow-fernet-key: "TldwdU0zRklTREZ0VDFkamVWUjFaMlozWTFKdWNFNUxTRXRxVm5Oa1p6az0="
airflow-secret-key: "VldWaWQySkhSVUZQZDNWQlltbG1UVzUzVkdwWmVVTkxPR1ZCZWpoQ05tUT0="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: airflow-dependencies
namespace: "default"
data:
requirements.txt: |-
apache-airflow==2.2.3
pytest==6.2.4
python-slugify<5.0
funcy==1.16
apache-airflow-providers-mongo
apache-airflow-providers-postgres
apache-airflow-providers-slack
apache-airflow-providers-amazon
airflow_clickhouse_plugin
apache-airflow-providers-sftp
surveymonkey-python
---
# Source: airflow/templates/rbac/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
rules:
- apiGroups:
- ""
resources:
- "pods"
verbs:
- "create"
- "list"
- "get"
- "watch"
- "delete"
- "patch"
- apiGroups:
- ""
resources:
- "pods/log"
verbs:
- "get"
- apiGroups:
- ""
resources:
- "pods/exec"
verbs:
- "create"
- "get"
---
# Source: airflow/templates/rbac/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: release-name-airflow
subjects:
- kind: ServiceAccount
name: release-name-airflow
namespace: default
---
# Source: airflow/charts/postgresql/templates/primary/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-postgresql-hl
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
# Use this annotation in addition to the actual publishNotReadyAddresses
# field below because the annotation will stop being respected soon but the
# field is broken in some versions of Kubernetes:
# https://github.com/kubernetes/kubernetes/issues/58662
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
type: ClusterIP
clusterIP: None
# We want all pods in the StatefulSet to have their addresses published for
# the sake of the other Postgresql pods even before they're ready, since they
# have to be able to talk to each other in order to become ready.
publishNotReadyAddresses: true
ports:
- name: tcp-postgresql
port: 5432
targetPort: tcp-postgresql
selector:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
---
# Source: airflow/charts/postgresql/templates/primary/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
type: ClusterIP
ports:
- name: tcp-postgresql
port: 5432
targetPort: tcp-postgresql
nodePort: null
selector:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
---
# Source: airflow/templates/web/service.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
spec:
type: NodePort
ports:
- name: http
port: 8080
nodePort: 30303
selector:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
---
# Source: airflow/templates/scheduler/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: release-name-airflow-scheduler
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: scheduler
spec:
selector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: scheduler
replicas: 1
strategy:
rollingUpdate: {}
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: scheduler
annotations:
checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
spec:
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: scheduler
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
serviceAccountName: release-name-airflow
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: airflow-scheduler
image: "docker.io/bitnami/airflow-scheduler:2.2.3-debian-10-r57"
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: AIRFLOW_FERNET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-fernet-key
- name: AIRFLOW_SECRET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-secret-key
- name: AIRFLOW_LOAD_EXAMPLES
value: "no"
- name: AIRFLOW_DATABASE_NAME
value: "bitnami_airflow"
- name: AIRFLOW_DATABASE_USERNAME
value: "bn_airflow"
- name: AIRFLOW_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: AIRFLOW_DATABASE_HOST
value: "release-name-postgresql"
- name: AIRFLOW_DATABASE_PORT_NUMBER
value: "5432"
- name: AIRFLOW_EXECUTOR
value: LocalExecutor
- name: AIRFLOW_WEBSERVER_HOST
value: release-name-airflow
- name: AIRFLOW_WEBSERVER_PORT_NUMBER
value: "8080"
- name: AIRFLOW__CORE__DAGS_FOLDER
value: /opt/bitnami/airflow/dags
- name: AIRFLOW__CORE__ENABLE_XCOM_PICKLING
value: "True"
- name: AIRFLOW__CORE__DONOT_PICKLE
value: "False"
resources:
limits: {}
requests: {}
volumeMounts:
- mountPath: /bitnami/python/requirements.txt
name: requirements
subPath: requirements.txt
- mountPath: /opt/bitnami/airflow/dags/src
name: airflow-dags
volumes:
- name: requirements
configMap:
name: airflow-dependencies
- name: airflow-dags
hostPath:
# directory location on host
path: /Users/admin/Desktop/FXC_Airflow/dags/src
# this field is optional
type: Directory
---
# Source: airflow/templates/web/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: release-name-airflow-web
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: web
spec:
selector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
replicas: 1
strategy:
rollingUpdate: {}
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: web
annotations:
checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
spec:
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
serviceAccountName: release-name-airflow
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: airflow-web
image: docker.io/bitnami/airflow:2.2.3-debian-10-r62
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: AIRFLOW_FERNET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-fernet-key
- name: AIRFLOW_SECRET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-secret-key
- name: AIRFLOW_LOAD_EXAMPLES
value: "no"
- name: AIRFLOW_DATABASE_NAME
value: "bitnami_airflow"
- name: AIRFLOW_DATABASE_USERNAME
value: "bn_airflow"
- name: AIRFLOW_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: AIRFLOW_DATABASE_HOST
value: "release-name-postgresql"
- name: AIRFLOW_DATABASE_PORT_NUMBER
value: "5432"
- name: AIRFLOW_EXECUTOR
value: LocalExecutor
- name: AIRFLOW_WEBSERVER_HOST
value: "0.0.0.0"
- name: AIRFLOW_WEBSERVER_PORT_NUMBER
value: "8080"
- name: AIRFLOW_USERNAME
value: airflow
- name: AIRFLOW_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-password
- name: AIRFLOW_BASE_URL
value: "http://127.0.0.1:8080"
- name: AIRFLOW_LDAP_ENABLE
value: "no"
- name: AIRFLOW__CORE__DAGS_FOLDER
value: /opt/bitnami/airflow/dags
- name: AIRFLOW__CORE__ENABLE_XCOM_PICKLING
value: "True"
- name: AIRFLOW__CORE__DONOT_PICKLE
value: "False"
ports:
- name: http
containerPort: 8080
livenessProbe:
failureThreshold: 6
initialDelaySeconds: 180
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 5
tcpSocket:
port: http
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
tcpSocket:
port: http
resources:
limits:
cpu: "2"
memory: 4Gi
requests: {}
volumeMounts:
- mountPath: /bitnami/python/requirements.txt
name: requirements
subPath: requirements.txt
- mountPath: /opt/bitnami/airflow/dags/src
name: airflow-dags
volumes:
- name: requirements
configMap:
name: airflow-dependencies
- name: airflow-dags
hostPath:
# directory location on host
path: /Users/admin/Desktop/FXC_Airflow/dags/src
# this field is optional
type: Directory
---
# Source: airflow/charts/postgresql/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
replicas: 1
serviceName: release-name-postgresql-hl
updateStrategy:
rollingUpdate: {}
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
template:
metadata:
name: release-name-postgresql
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
serviceAccountName: default
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: postgresql
image: docker.io/bitnami/postgresql:14.1.0-debian-10-r80
imagePullPolicy: "IfNotPresent"
securityContext:
runAsUser: 1001
env:
- name: BITNAMI_DEBUG
value: "false"
- name: POSTGRESQL_PORT_NUMBER
value: "5432"
- name: POSTGRESQL_VOLUME_DIR
value: "/bitnami/postgresql"
- name: PGDATA
value: "/bitnami/postgresql/data"
# Authentication
- name: POSTGRES_USER
value: "bn_airflow"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: POSTGRES_DB
value: "bitnami_airflow"
# Replication
# Initdb
# Standby
# LDAP
- name: POSTGRESQL_ENABLE_LDAP
value: "no"
# TLS
- name: POSTGRESQL_ENABLE_TLS
value: "no"
# Audit
- name: POSTGRESQL_LOG_HOSTNAME
value: "false"
- name: POSTGRESQL_LOG_CONNECTIONS
value: "false"
- name: POSTGRESQL_LOG_DISCONNECTIONS
value: "false"
- name: POSTGRESQL_PGAUDIT_LOG_CATALOG
value: "off"
# Others
- name: POSTGRESQL_CLIENT_MIN_MESSAGES
value: "error"
- name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
value: "pgaudit"
ports:
- name: tcp-postgresql
containerPort: 5432
livenessProbe:
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
exec:
command:
- /bin/sh
- -c
- exec pg_isready -U "bn_airflow" -d "dbname=bitnami_airflow" -h 127.0.0.1 -p 5432
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
exec:
command:
- /bin/sh
- -c
- -e
- |
exec pg_isready -U "bn_airflow" -d "dbname=bitnami_airflow" -h 127.0.0.1 -p 5432
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
resources:
limits: {}
requests:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: dshm
mountPath: /dev/shm
- name: data
mountPath: /bitnami/postgresql
volumes:
- name: dshm
emptyDir:
medium: Memory
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
The idea is that after a successful deployment I would be able to access the webserver UI through localhost:30303, but I can't for some reason. It feels like there should be a minor change to fix it...
For now what I've tried is to connect to the webserver pod with kubectl exec -it <webserver pod name> -- /bin/bash and run two commands: airflow db init and airflow webserver -p 8080.
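One quick way to check whether the webserver itself is reachable, independent of the NodePort wiring, is the following sketch (not part of the original question; note that on minikube a NodePort is exposed on the minikube node's IP rather than on localhost):
# ask minikube for the URL of the NodePort service defined above
minikube service release-name-airflow --url
# or port-forward the Service to localhost for a quick check
kubectl port-forward svc/release-name-airflow 8080:8080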

VerneMQ in Kubernetes with persistent volume claim is throwing Forbidden: may not specify more than 1 volume type

I am unable to create a VerneMQ pod in an AWS EKS cluster with a persistent volume claim for authentication and SSL. Below is my yaml file:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: vernemq-storage
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Retain
mountOptions:
- debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: verne-aws-pv
spec:
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: xfs
volumeID: aws://ap-south-1a/vol-xxxxx
capacity:
storage: 1Gi
persistentVolumeReclaimPolicy: Retain
storageClassName: vernemq-storage
volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: mysql
name: verne-aws-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: gp2-retain
volumeMode: Filesystem
volumeName: verne-aws-pv
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: vernemq
spec:
replicas: 1
selector:
matchLabels:
app: vernemq
serviceName: vernemq
template:
metadata:
labels:
app: vernemq
spec:
serviceAccountName: vernemq
terminationGracePeriodSeconds: 200
containers:
- name: vernemq
image: vernemq/vernemq:latest
imagePullPolicy: Always
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- /usr/sbin/vmq-admin cluster leave node=VerneMQ#${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local ; sleep 60 ; /usr/sbin/vmq-admin cluster leave node=VerneMQ#${MY_POD_NAME}.vernemq.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local -k; sleep 60;
ports:
- containerPort: 1883
name: mqtt
hostPort: 1883
- containerPort: 8883
- containerPort: 4369
name: epmd
- containerPort: 44053
name: vmq
- containerPort: 8888
name: health
- containerPort: 9100
- containerPort: 9101
- containerPort: 9102
- containerPort: 9103
- containerPort: 9104
- containerPort: 9105
- containerPort: 9106
- containerPort: 9107
- containerPort: 9108
- containerPort: 9109
- containerPort: 8888
resources:
limits:
cpu: "2"
memory: 3Gi
requests:
cpu: "1"
memory: 1Gi
env:
- name: DOCKER_VERNEMQ_ACCEPT_EULA
value: "yes"
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
value: "1"
- name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
value: "vernemq"
- name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
value: "9100"
- name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
value: "9109"
- name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
value: "on"
- name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
value: "0.0.0.0:1883"
- name: DOCKER_VERNEMQ_VMQ_WEBHOOKS__POOL_timeout
value: "6000"
- name: DOCKER_VERNEMQ_LISTENER__HTTP__DEFAULT
value: "0.0.0.0:8888"
- name: DOCKER_VERNEMQ_LISTENER__MAX_CONNECTIONS
value: "infinity"
- name: DOCKER_VERNEMQ_LISTENER__NR_OF_ACCEPTORS
value: "10000"
- name: DOCKER_VERNEMQ_MAX_INFLIGHT_MESSAGES
value: "0"
- name: DOCKER_VERNEMQ_ALLOW_MULTIPLE_SESSIONS
value: "off"
- name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_PUBLISH_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_SUBSCRIBE_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_ALLOW_UNSUBSCRIBE_DURING_NETSPLIT
value: "on"
- name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
value: "/etc/vernemq/vmq.passwd"
- name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
value: "0.0.0.0:8883"
- name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
value: "/etc/ssl/ca.crt"
- name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
value: "/etc/ssl/server.crt"
- name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
value: "/etc/ssl/server.key"
volumeMounts:
- mountPath: /etc/ssl
name: vernemq-certifications
readOnly: true
- mountPath: /etc/vernemq-passwd
name: vernemq-passwd
readOnly: true
volumes:
- name: vernemq-certifications
persistentVolumeClaim:
claimName: verne-aws-pvc
secret:
secretName: vernemq-certifications
- name: vernemq-passwd
persistentVolumeClaim:
claimName: verne-aws-pvc
secret:
secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
name: vernemq
labels:
app: vernemq
spec:
clusterIP: None
selector:
app: vernemq
ports:
- port: 4369
name: empd
- port: 44053
name: vmq
---
apiVersion: v1
kind: Service
metadata:
name: mqtt
labels:
app: mqtt
spec:
type: LoadBalancer
selector:
app: vernemq
ports:
- name: mqtt
port: 1883
targetPort: 1883
- name: health
port: 8888
targetPort: 8888
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: endpoint-reader
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["endpoints", "deployments", "replicasets", "pods", "statefulsets", "persistentvolumeclaims"]
verbs: ["get", "patch", "list", "watch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: endpoint-reader
subjects:
- kind: ServiceAccount
name: vernemq
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: endpoint-reader
I created an AWS EBS volume in the same region and subnet as the node group and referenced it in the PersistentVolume above.
The pod is not getting created; instead, when I run kubectl describe statefulset vernemq I get the error below:
Volumes:
vernemq-certifications:
Type: Secret (a volume populated by a Secret)
SecretName: vernemq-certifications
Optional: false
vernemq-passwd:
Type: Secret (a volume populated by a Secret)
SecretName: vernemq-passwd
Optional: false
Volume Claims: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 2m2s (x5 over 2m2s) statefulset-controller create Pod vernemq-0 in StatefulSet vernemq failed error: pods "vernemq-0" is forbidden: error looking up service account default/vernemq: serviceaccount "vernemq" not found
Warning FailedCreate 40s (x10 over 2m2s) statefulset-controller create Pod vernemq-0 in StatefulSet vernemq failed error: Pod "vernemq-0" is invalid: [spec.volumes[0].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type, spec.volumes[1].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type, spec.containers[0].volumeMounts[0].name: Not found: "vernemq-certifications", spec.containers[0].volumeMounts[1].name: Not found: "vernemq-passwd"]
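The second warning is pointing at the volumes section: each entry under volumes may specify exactly one volume source, but here persistentVolumeClaim and secret are combined in the same entry. A minimal sketch of the split (assuming the certificates and password file really do live in the two Secrets):
volumes:
  - name: vernemq-certifications
    secret:
      secretName: vernemq-certifications
  - name: vernemq-passwd
    secret:
      secretName: vernemq-passwd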

Kubernetes ConfigMap to write Node details to file

How can I use ConfigMap to write cluster node information to a JSON file?
The command below gives me the node information:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use a ConfigMap to write the above output to a text file?
You can save the output of the command to a file.
Then use that file (or the data inside it) to create a ConfigMap.
After creating the ConfigMap you can mount it as a file in your Deployment/Pod.
For example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: appname
  name: appname
  namespace: development
spec:
  selector:
    matchLabels:
      app: appname
      tier: sometier
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
      - env:
        - name: NODE_ENV
          value: development
        - name: PORT
          value: "3000"
        - name: SOME_VAR
          value: xxx
        image: someimage
        imagePullPolicy: Always
        name: appname
        volumeMounts:
        - name: your-volume-name
          mountPath: "your/path/to/store/the/file"
          readOnly: true
      volumes:
      - name: your-volume-name
        configMap:
          name: your-configmap-name
          items:
          - key: your-filename-inside-pod
            path: your-filename-inside-pod
I added the following configuration to the Deployment:
volumeMounts:
  - name: your-volume-name
    mountPath: "your/path/to/store/the/file"
    readOnly: true
volumes:
  - name: your-volume-name
    configMap:
      name: your-configmap-name
      items:
        - key: your-filename-inside-pod
          path: your-filename-inside-pod
To create the ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
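For the concrete command from the question, these two steps might look like this (a sketch; the ConfigMap name is a placeholder):
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' > node-info.json
kubectl create configmap your-configmap-name --from-file=node-info.json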
Or just create the ConfigMap directly with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name
  namespace: your-namespace
data:
  your-filename-inside-pod: |
    output of command
First, save the output of the kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  node-info.json: |
    {
      "array": [
        1,
        2
      ],
      "boolean": true,
      "number": 123,
      "object": {
        "a": "egg",
        "b": "egg1"
      },
      "string": "Welcome"
    }
Then remember to add the following lines below the spec section in the pod configuration file:
env:
  - name: NODE_CONFIG_JSON
    valueFrom:
      configMapKeyRef:
        name: example-config
        key: node-info.json
You can also use a PodPreset.
PodPreset is an object that enables you to inject information, e.g. environment variables, into pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: your-pod
  env:
    - name: DB_PORT
      value: "6379"
  envFrom:
    - configMapRef:
        name: etcd-env-config
        key: node-info.json
but remember that you have to also add:
env:
  - name: NODE_CONFIG_JSON
    valueFrom:
      configMapKeyRef:
        name: example-config
        key: node-info.json
section to your pod definition, matching your PodPreset and ConfigMap configuration.
You can find more information here: podpreset, pod-preset-configuration.

Helm appears to parse my chart differently depending on whether I use --dry-run --debug?

So I was deploying a new cronjob today and got the following error:
Error: release acs-export-cronjob failed: CronJob.batch "acs-export-cronjob" is invalid: [spec.jobTemplate.spec.template.spec.containers: Required value, spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"]
Here's some output from running Helm on the same chart, with no changes made, but with the --debug --dry-run flags:
NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
spec:
metadata:
name: acs-export-cronjob
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
template:
metadata:
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
annotations:
iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
spec:
restartPolicy: Never #<----------this is not 'Always'!!
serviceAccountName: acs-export-cronjob-sa
tolerations:
- key: sonic-node-group
operator: Equal
value: api
effect: NoSchedule
nodeSelector:
sonic-node-group: api
volumes:
- name: config
emptyDir: {}
initContainers:
- name: "get-users-vmargs-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
volumeMounts:
- mountPath: /config
name: config
- name: "get-users-yaml-appconfig-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
volumeMounts:
- mountPath: /config
name: config
containers: #<--------this field is not missing!
- image: <censored>.amazonaws.com/sonic/acs-export:latest
imagePullPolicy: Always
name: "users-batch"
command:
- "bash"
- "-c"
- 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
env:
- name: FRENV
value: "batch"
- name: STACKNAME
value: eu1-test
- name: SPRING_PROFILES
value: "export-job"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /config
name: config
resources:
limit:
cpu: 100m
memory: 1Gi
If you paid attention, you may have noticed line 101 of the debug output (I added the comment afterwards), which sets restartPolicy to Never, quite the opposite of the Always the error message claims it to be.
You may also have noticed line 126 of the debug output (again, I added the comment after the fact), where the mandatory field containers is specified, again much in contradiction to the error message.
What's going on here?
Hah! Found it! It was actually a simple mistake: I had an extra spec: metadata: section under jobTemplate which was duplicated. Removing one of the dupes fixed my issue.
I really wish Helm's error messages were more helpful.
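In other words, the nesting under jobTemplate changed roughly like this (a sketch with fields elided, based on the two outputs in this post):
# before: an extra metadata/spec pair directly under jobTemplate.spec
jobTemplate:
  spec:
    metadata:
      ...
    spec:
      template:
        ...
# after: jobTemplate.spec goes straight to template
jobTemplate:
  spec:
    template:
      ...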
The corrected chart looks like:
NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
spec:
template:
metadata:
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
annotations:
iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
spec:
restartPolicy: Never
serviceAccountName: acs-export-cronjob-sa
tolerations:
- key: sonic-node-group
operator: Equal
value: api
effect: NoSchedule
nodeSelector:
sonic-node-group: api
volumes:
- name: config
emptyDir: {}
initContainers:
- name: "get-users-vmargs-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
volumeMounts:
- mountPath: /config
name: config
- name: "get-users-yaml-appconfig-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
volumeMounts:
- mountPath: /config
name: config
containers:
- image: <censored>.amazonaws.com/sonic/acs-export:latest
imagePullPolicy: Always
name: "users-batch"
command:
- "bash"
- "-c"
- 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
env:
- name: FRENV
value: "batch"
- name: STACKNAME
value: eu1-test
- name: SPRING_PROFILES
value: "export-job"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /config
name: config
resources:
limit:
cpu: 100m
memory: 1Gi
This may be due to a formatting error.
Look at the examples here and here.
The structure is
jobTemplate:
  spec:
    template:
      spec:
        restartPolicy: Never
As per the provided output, you have spec and restartPolicy at the same level:
jobTemplate:
  spec:
    template:
      spec:
      restartPolicy: Never #<----------this is not 'Always'!!
The same applies to spec.jobTemplate.spec.template.spec.containers.
Presumably Helm falls back to some default values instead of yours.
You can also try generating the YAML file, converting it to JSON, and applying that.
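A minimal sketch of that last suggestion (assuming a Helm 2-era CLI and that the chart lives at ./generic-job; the file names are placeholders):
# render the chart locally instead of going through Tiller
helm template ./generic-job --name acs-export-cronjob -f values.yaml > rendered.yaml
# let kubectl do the YAML-to-JSON conversion and show exactly what the API server will see
kubectl apply --dry-run -f rendered.yaml -o json > rendered.json
# apply the rendered manifest once the structure looks right
kubectl apply -f rendered.yaml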

OpenEBS cStor volume is very slow on delete and overwrite

I have an OpenEBS setup with 3 data nodes and a default cStor storage class.
Creating a file works pretty well:
time dd if=/dev/urandom of=a.log bs=1M count=100
real 0m 0.53s
I can delete the file and create it again in roughly the same time.
But when I rewrite the existing file in place, it takes ages:
time dd if=/dev/urandom of=a.log bs=1M count=100
104857600 bytes (100.0MB) copied, 0.596577 seconds, 167.6MB/s
real 0m 0.59s
time dd if=/dev/urandom of=a.log bs=1M count=100
104857600 bytes (100.0MB) copied, 16.621222 seconds, 6.0MB/s
real 0m 16.62s
time dd if=/dev/urandom of=a.log bs=1M count=100
104857600 bytes (100.0MB) copied, 19.621924 seconds, 5.1MB/s
real 0m 19.62s
When I delete a.log and write it again, the ~167 MB/s is back. Only writing onto an existing file takes this long.
My problem is that I think this is why some of my applications (e.g. databases) are too slow. Creating a table in MySQL took over 7 seconds.
Here is the spec of my test cluster:
# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:
name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
name: openebs-maya-operator
namespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: openebs-maya-operator
rules:
- apiGroups: ["*"]
resources: ["nodes", "nodes/proxy"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["statefulsets", "daemonsets"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["resourcequotas", "limitranges"]
verbs: ["list", "watch"]
- apiGroups: ["*"]
resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"]
verbs: ["list", "watch"]
- apiGroups: ["*"]
resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshots", "volumesnapshotdatas"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: [ "get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["*"]
resources: [ "disks", "blockdevices", "blockdeviceclaims"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "castemplates", "runtasks"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorpoolinstances", "cstorpoolinstances/finalizers"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"]
verbs: ["*" ]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
- apiGroups: ["*"]
resources: [ "upgradetasks"]
verbs: ["*" ]
---
# Bind the Service Account with the Role Privileges.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: openebs-maya-operator
subjects:
- kind: ServiceAccount
name: openebs-maya-operator
namespace: openebs
- kind: User
name: system:serviceaccount:default:default
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: openebs-maya-operator
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: maya-apiserver
namespace: openebs
labels:
name: maya-apiserver
openebs.io/component-name: maya-apiserver
openebs.io/version: 1.3.0
spec:
selector:
matchLabels:
name: maya-apiserver
openebs.io/component-name: maya-apiserver
replicas: 1
strategy:
type: Recreate
rollingUpdate: null
template:
metadata:
labels:
name: maya-apiserver
openebs.io/component-name: maya-apiserver
openebs.io/version: 1.3.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: maya-apiserver
imagePullPolicy: IfNotPresent
image: quay.io/openebs/m-apiserver:1.3.0
ports:
- containerPort: 5656
env:
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: OPENEBS_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: OPENEBS_MAYA_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
value: "true"
- name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
value: "true"
- name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE
value: "quay.io/openebs/jiva:1.3.0"
- name: OPENEBS_IO_JIVA_REPLICA_IMAGE
value: "quay.io/openebs/jiva:1.3.0"
- name: OPENEBS_IO_JIVA_REPLICA_COUNT
value: "1"
- name: OPENEBS_IO_CSTOR_TARGET_IMAGE
value: "quay.io/openebs/cstor-istgt:1.3.0"
- name: OPENEBS_IO_CSTOR_POOL_IMAGE
value: "quay.io/openebs/cstor-pool:1.3.0"
- name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE
value: "quay.io/openebs/cstor-pool-mgmt:1.3.0"
- name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE
value: "quay.io/openebs/cstor-volume-mgmt:1.3.0"
- name: OPENEBS_IO_VOLUME_MONITOR_IMAGE
value: "quay.io/openebs/m-exporter:1.3.0"
- name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE
value: "quay.io/openebs/m-exporter:1.3.0"
- name: OPENEBS_IO_ENABLE_ANALYTICS
value: "false"
- name: OPENEBS_IO_INSTALLER_TYPE
value: "openebs-operator"
livenessProbe:
exec:
command:
- /usr/local/bin/mayactl
- version
initialDelaySeconds: 30
periodSeconds: 60
readinessProbe:
exec:
command:
- /usr/local/bin/mayactl
- version
initialDelaySeconds: 30
periodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:
name: maya-apiserver-service
namespace: openebs
labels:
openebs.io/component-name: maya-apiserver-svc
spec:
ports:
- name: api
port: 5656
protocol: TCP
targetPort: 5656
selector:
name: maya-apiserver
sessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-provisioner
namespace: openebs
labels:
name: openebs-provisioner
openebs.io/component-name: openebs-provisioner
openebs.io/version: 1.3.0
spec:
selector:
matchLabels:
name: openebs-provisioner
openebs.io/component-name: openebs-provisioner
replicas: 1
strategy:
type: Recreate
rollingUpdate: null
template:
metadata:
labels:
name: openebs-provisioner
openebs.io/component-name: openebs-provisioner
openebs.io/version: 1.3.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: openebs-provisioner
imagePullPolicy: IfNotPresent
image: quay.io/openebs/openebs-k8s-provisioner:1.3.0
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
exec:
command:
- pgrep
- ".*openebs"
initialDelaySeconds: 30
periodSeconds: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-snapshot-operator
namespace: openebs
labels:
name: openebs-snapshot-operator
openebs.io/component-name: openebs-snapshot-operator
openebs.io/version: 1.3.0
spec:
selector:
matchLabels:
name: openebs-snapshot-operator
openebs.io/component-name: openebs-snapshot-operator
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: openebs-snapshot-operator
openebs.io/component-name: openebs-snapshot-operator
openebs.io/version: 1.3.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: snapshot-controller
image: quay.io/openebs/snapshot-controller:1.3.0
imagePullPolicy: IfNotPresent
env:
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
exec:
command:
- pgrep
- ".*controller"
initialDelaySeconds: 30
periodSeconds: 60
- name: snapshot-provisioner
image: quay.io/openebs/snapshot-provisioner:1.3.0
imagePullPolicy: IfNotPresent
env:
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
exec:
command:
- pgrep
- ".*provisioner"
initialDelaySeconds: 30
periodSeconds: 60
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-ndm-config
namespace: openebs
labels:
openebs.io/component-name: ndm-config
data:
node-disk-manager.config: |
probeconfigs:
- key: udev-probe
name: udev probe
state: true
- key: seachest-probe
name: seachest probe
state: false
- key: smart-probe
name: smart probe
state: true
filterconfigs:
- key: os-disk-exclude-filter
name: os disk exclude filter
state: true
exclude: "/,/etc/hosts,/boot"
- key: vendor-filter
name: vendor filter
state: true
include: ""
exclude: "CLOUDBYT,OpenEBS"
- key: path-filter
name: path filter
state: true
include: ""
exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: openebs-ndm
namespace: openebs
labels:
name: openebs-ndm
openebs.io/component-name: ndm
openebs.io/version: 1.3.0
spec:
selector:
matchLabels:
name: openebs-ndm
openebs.io/component-name: ndm
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
name: openebs-ndm
openebs.io/component-name: ndm
openebs.io/version: 1.3.0
spec:
nodeSelector:
"openebs.io/nodegroup": "storage-node"
serviceAccountName: openebs-maya-operator
hostNetwork: true
containers:
- name: node-disk-manager
image: quay.io/openebs/node-disk-manager-amd64:v0.4.3
imagePullPolicy: Always
securityContext:
privileged: true
volumeMounts:
- name: config
mountPath: /host/node-disk-manager.config
subPath: node-disk-manager.config
readOnly: true
- name: udev
mountPath: /run/udev
- name: procmount
mountPath: /host/proc
readOnly: true
- name: sparsepath
mountPath: /var/openebs/sparse
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: SPARSE_FILE_DIR
value: "/var/openebs/sparse"
- name: SPARSE_FILE_SIZE
value: "10737418240"
- name: SPARSE_FILE_COUNT
value: "3"
livenessProbe:
exec:
command:
- pgrep
- ".*ndm"
initialDelaySeconds: 30
periodSeconds: 60
volumes:
- name: config
configMap:
name: openebs-ndm-config
- name: udev
hostPath:
path: /run/udev
type: Directory
- name: procmount
hostPath:
path: /proc
type: Directory
- name: sparsepath
hostPath:
path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-ndm-operator
namespace: openebs
labels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: 1.3.0
spec:
selector:
matchLabels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: 1.3.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: node-disk-operator
image: quay.io/openebs/node-disk-operator-amd64:v0.4.3
imagePullPolicy: Always
readinessProbe:
exec:
command:
- stat
- /tmp/operator-sdk-ready
initialDelaySeconds: 4
periodSeconds: 10
failureThreshold: 1
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# the service account of the ndm-operator pod
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: OPERATOR_NAME
value: "node-disk-operator"
- name: CLEANUP_JOB_IMAGE
value: "quay.io/openebs/linux-utils:3.9"
---
apiVersion: v1
kind: Secret
metadata:
name: admission-server-certs
namespace: openebs
labels:
app: admission-webhook
openebs.io/component-name: admission-webhook
type: Opaque
data:
cert.pem: <...pem...>
key.pem: <...pem...>
---
apiVersion: v1
kind: Service
metadata:
name: admission-server-svc
namespace: openebs
labels:
app: admission-webhook
openebs.io/component-name: admission-webhook-svc
spec:
ports:
- port: 443
targetPort: 443
selector:
app: admission-webhook
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-admission-server
namespace: openebs
labels:
app: admission-webhook
openebs.io/component-name: admission-webhook
openebs.io/version: 1.3.0
spec:
replicas: 1
strategy:
type: Recreate
rollingUpdate: null
selector:
matchLabels:
app: admission-webhook
template:
metadata:
labels:
app: admission-webhook
openebs.io/component-name: admission-webhook
openebs.io/version: 1.3.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: admission-webhook
image: quay.io/openebs/admission-server:1.3.0
imagePullPolicy: IfNotPresent
args:
- -tlsCertFile=/etc/webhook/certs/cert.pem
- -tlsKeyFile=/etc/webhook/certs/key.pem
- -alsologtostderr
- -v=2
- 2>&1
volumeMounts:
- name: webhook-certs
mountPath: /etc/webhook/certs
readOnly: true
volumes:
- name: webhook-certs
secret:
secretName: admission-server-certs
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: validation-webhook-cfg
labels:
app: admission-webhook
openebs.io/component-name: admission-webhook
webhooks:
# failurePolicy Fail means that an error calling the webhook causes the admission to fail.
- name: admission-webhook.openebs.io
failurePolicy: Ignore
clientConfig:
service:
name: admission-server-svc
namespace: openebs
path: "/validate"
caBundle: <...ca..>
rules:
- operations: [ "CREATE", "DELETE" ]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["persistentvolumeclaims"]
- operations: [ "CREATE", "UPDATE" ]
apiGroups: ["*"]
apiVersions: ["*"]
resources: ["cstorpoolclusters"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-localpv-provisioner
namespace: openebs
labels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
openebs.io/version: 1.3.0
spec:
selector:
matchLabels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
openebs.io/version: 1.3.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: openebs-provisioner-hostpath
imagePullPolicy: Always
image: quay.io/openebs/provisioner-localpv:1.3.0
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: OPENEBS_IO_ENABLE_ANALYTICS
value: "true"
- name: OPENEBS_IO_INSTALLER_TYPE
value: "openebs-operator"
- name: OPENEBS_IO_HELPER_IMAGE
value: "quay.io/openebs/openebs-tools:3.8"
livenessProbe:
exec:
command:
- pgrep
- ".*localpv"
initialDelaySeconds: 30
periodSeconds: 60
I am on Kubernetes:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
How can I investigate why it takes so long?
What could give me some more insight?
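One sketch of a more controlled overwrite test than dd (fio is assumed to be available inside the pod; the parameters are arbitrary):
fio --name=overwrite --filename=a.log --rw=write --bs=1M --size=100M --ioengine=libaio --direct=1 --loops=3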
I spoke to Peter over Slack. The following changes helped to improve the performance:
Increasing the queue size of the iSCSI initiator
Increasing the CPU from 2 to 6 for each of the three nodes
Creating my own StorageClass that uses ext3 instead of the default ext4
With the above changes, the subsequent write numbers were around 140-150 MB/s.
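For the third point, a minimal sketch of such a StorageClass (the StoragePoolClaim name and replica count here are assumptions, not taken from the setup above):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-ext3
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "3"
      - name: FSType
        value: "ext3"
provisioner: openebs.io/provisioner-iscsi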