Copying mounted files with a container command in Kubernetes is slow

I am creating an OrientDB cluster with Kubernetes (in Minikube) and I use StatefulSets to create pods. I am trying to mount all the OrientDB cluster configs into a folder named /configs. Using the container's command I copy the files after mounting into the standard /orientdb/config folder. However, since I added this copy step the pods are created at a slower rate, and sometimes I get exceptions like:
Unable to connect to the server: net/http: TLS handshake timeout
Before that, I tried to mount the configs directly into the /orientdb/config folder, but I got a permissions error; from my research it seems that mounting into the root folder is prohibited.
How can I approach this issue? Is there a workaround?
The StatefulSet looks like this:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: orientdbservice
spec:
  serviceName: orientdbservice
  replicas: 3
  selector:
    matchLabels:
      service: orientdb
      type: container-deployment
  template:
    metadata:
      labels:
        service: orientdb
        type: container-deployment
    spec:
      containers:
      - name: orientdbservice
        image: orientdb:2.2.36
        command: ["/bin/sh", "-c", "cp /configs/* /orientdb/config/ ; /orientdb/bin/server.sh -Ddistributed=true"]
        env:
        - name: ORIENTDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: orientdb-password
              key: password.txt
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: 2424
          name: port-binary
        - containerPort: 2480
          name: port-http
        volumeMounts:
        - name: config
          mountPath: /orientdb/config
        - name: orientdb-config-backups
          mountPath: /configs/backups.json
          subPath: backups.json
        - name: orientdb-config-events
          mountPath: /configs/events.json
          subPath: events.json
        - name: orientdb-config-distributed
          mountPath: /configs/default-distributed-db-config.json
          subPath: default-distributed-db-config.json
        - name: orientdb-config-hazelcast
          mountPath: /configs/hazelcast.xml
          subPath: hazelcast.xml
        - name: orientdb-config-server
          mountPath: /configs/orientdb-server-config.xml
          subPath: orientdb-server-config.xml
        - name: orientdb-config-client-logs
          mountPath: /configs/orientdb-client-log.properties
          subPath: orientdb-client-log.properties
        - name: orientdb-config-server-logs
          mountPath: /configs/orientdb-server-log.properties
          subPath: orientdb-server-log.properties
        - name: orientdb-config-plugin
          mountPath: /configs/pom.xml
          subPath: pom.xml
        - name: orientdb-databases
          mountPath: /orientdb/databases
        - name: orientdb-backup
          mountPath: /orientdb/backup
        - name: orientdb-data
          mountPath: /orientdb/bin/data
      volumes:
      - name: config
        emptyDir: {}
      - name: orientdb-config-backups
        configMap:
          name: orientdb-configmap-backups
      - name: orientdb-config-events
        configMap:
          name: orientdb-configmap-events
      - name: orientdb-config-distributed
        configMap:
          name: orientdb-configmap-distributed
      - name: orientdb-config-hazelcast
        configMap:
          name: orientdb-configmap-hazelcast
      - name: orientdb-config-server
        configMap:
          name: orientdb-configmap-server
      - name: orientdb-config-client-logs
        configMap:
          name: orientdb-configmap-client-logs
      - name: orientdb-config-server-logs
        configMap:
          name: orientdb-configmap-server-logs
      - name: orientdb-config-plugin
        configMap:
          name: orientdb-configmap-plugin
      - name: orientdb-data
        hostPath:
          path: /import_data
          type: Directory
  volumeClaimTemplates:
  - metadata:
      name: orientdb-databases
      labels:
        service: orientdb
        type: pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
  - metadata:
      name: orientdb-backup
      labels:
        service: orientdb
        type: pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
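One way to restructure this (a sketch, not from the original post) is to move the copy step into an initContainer, so the server process is the main container's only command and any slowness or failure in the copy shows up in the init container's status and logs instead of delaying the main container. The snippet reuses the volume names from the manifest above; the busybox image is an assumption:
initContainers:
- name: copy-configs
  # Assumed helper image; any image with a shell and cp works.
  image: busybox:1.36
  # Copy the projected config files into the shared emptyDir
  # before the OrientDB container starts.
  command: ["/bin/sh", "-c", "cp /configs/* /orientdb/config/"]
  volumeMounts:
  - name: config
    mountPath: /orientdb/config
  - name: orientdb-config-backups
    mountPath: /configs/backups.json
    subPath: backups.json
  # ...repeat the remaining /configs/* mounts from the main container...
containers:
- name: orientdbservice
  image: orientdb:2.2.36
  # With the copy step gone, no shell wrapper is needed.
  command: ["/orientdb/bin/server.sh", "-Ddistributed=true"]
As an aside, "Unable to connect to the server: net/http: TLS handshake timeout" is kubectl failing to reach the API server; in Minikube that usually points at VM resource pressure rather than anything inside the pod.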

Related

$(POD_NAME) in subPath of Statefulset + Kustomize not expanding

I have a StatefulSet with a volume that uses a subPath of $(POD_NAME). I've also tried $HOSTNAME, which doesn't work either. How does one set the subPath of a volumeMount to the name of the pod or to $HOSTNAME?
Here's what I have:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ravendb
  namespace: pltfrmd
  labels:
    app: ravendb
spec:
  serviceName: ravendb
  template:
    metadata:
      labels:
        app: ravendb
    spec:
      containers:
      - command:
        # ["/bin/sh", "-ec", "while :; do echo '.'; sleep 6 ; done"]
        - /bin/sh
        - -c
        - /opt/RavenDB/Server/Raven.Server --log-to-console --config-path /configuration/settings.json
        image: ravendb/ravendb:latest
        imagePullPolicy: Always
        name: ravendb
        env:
        - name: POD_HOST_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: RAVEN_Logs_Mode
          value: Information
        ports:
        - containerPort: 8080
          name: http-api
          protocol: TCP
        - containerPort: 38888
          name: tcp-server
          protocol: TCP
        - containerPort: 161
          name: snmp
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
          subPath: $(POD_NAME)
        - mountPath: /configuration
          name: configuration
          subPath: ravendb
        - mountPath: /certificates
          name: certificates
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 120
      volumes:
      - name: certificates
        secret:
          secretName: ravendb-certificate
      - name: configuration
        persistentVolumeClaim:
          claimName: configuration
      - name: data
        persistentVolumeClaim:
          claimName: ravendb
And the Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: pltfrmd
  name: ravendb
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /volumes/ravendb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: pltfrmd
  name: ravendb
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
$HOSTNAME used to work, but it doesn't anymore for some reason. I'm wondering if it's a bug in the hostPath storage provider?
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx
          mountPath: /usr/share/nginx/html
          subPath: $(POD_NAME)
      volumes:
      - name: nginx
        configMap:
          name: nginx
Ok, so after great experimentation I found a way that still works:
Step one, map an environment variable:
env:
- name: POD_HOST_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
This exposes the pod name as the environment variable POD_HOST_NAME, populated from the metadata.name field.
Then in your mount you do this:
volumeMounts:
- mountPath: /data
  name: data
  subPathExpr: $(POD_HOST_NAME)
It's important to use subPathExpr, as subPath (which used to expand variables) no longer does. subPathExpr will use the environment variable you created and expand it properly.
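Putting the two pieces together in one container spec (a minimal sketch; the names come from the manifest above): note that subPathExpr expands $(VAR) references against the container's environment, and it became generally available in Kubernetes 1.17.
containers:
- name: ravendb
  image: ravendb/ravendb:latest
  env:
  # Expose the pod's name to the container.
  - name: POD_HOST_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - mountPath: /data
    name: data
    # Each pod gets its own directory on the shared volume,
    # e.g. /data -> <volume>/ravendb-0 for the first replica.
    subPathExpr: $(POD_HOST_NAME)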

Apache Cassandra: Unable to gossip with any seeds on Kubernetes/ Ubuntu Server

I am trying to install DataStax Enterprise (DSE) Cassandra on Kubernetes on Ubuntu, but I am getting the below error in the pod logs.
Apache Cassandra: Unable to gossip with any seeds
Replicas: 2
Heap memory given: 8GB
RAM given: 16GB
Server: AWS EC2
Disk: local provisioning
Note: Minikube is not required for this process
Here's my cluster.yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  namespace: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 2
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: datastax/dse-server:6.7.7
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "8000m"
            memory: 16Gi
          requests:
            cpu: "8000m"
            memory: 16Gi
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - nodetool drain
        env:
        - name: MAX_HEAP_SIZE
          value: 8G
        - name: HEAP_NEWSIZE
          value: 8G
        - name: SEEDS
          value: "10.32.0.2,10.32.0.5,10.32.0.6"
        - name: CLUSTER_NAME
          value: "Demo Cluster"
        - name: DC
          value: "dc1"
        - name: RACK
          value: "rack1"
        - name: NUM_TOKENS
          value: "128"
        - name: DS_LICENSE
          value: "accept"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - "cp /ready_probe.sh /tmp/ready_probe.sh && chmod 777 /tmp/ready_probe.sh && /tmp/ready_probe.sh"
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: cassandra-data
          mountPath: /var/lib/cassandra
          subPath: cassandra
        - name: cassandra-data
          mountPath: /var/lib/spark
          subPath: spark
        - name: cassandra-data
          mountPath: /var/lib/dsefs
          subPath: dsefs
        - name: cassandra-data
          mountPath: /var/log/cassandra
          subPath: log-cassandra
        - name: cassandra-data
          mountPath: /var/log/spark
          subPath: log-spark
        - name: cassandra-data
          mountPath: /config
          subPath: config
        - name: cassandra-data
          mountPath: /var/lib/opscenter
          subPath: opscenter
        - name: cassandra-data
          mountPath: /var/lib/datastax-studio
          subPath: datastax-studio
        - name: script
          mountPath: /ready_probe.sh
          subPath: ready.sh
      volumes:
      - name: script
        configMap:
          name: script
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - ubuntuserver
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: local-storage
      resources:
        requests:
          storage: 128Gi
Can anyone please help me with that?
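One thing worth checking (a sketch of one possibility, not from the original post): the SEEDS list above is hard-coded pod IPs, and pod IPs change whenever pods are rescheduled; stale seed addresses are a classic cause of "Unable to gossip with any seeds". A common StatefulSet pattern is to seed on the stable per-pod DNS names provided by the headless service instead. This sketch assumes a headless service named cassandra in the cassandra namespace, and that the image resolves hostnames in SEEDS:
- name: SEEDS
  # Stable DNS names of the first pods, provided by the headless
  # service, instead of pod IPs that change across restarts.
  value: "cassandra-0.cassandra.cassandra.svc.cluster.local,cassandra-1.cassandra.cassandra.svc.cluster.local"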

Kubernetes: mysqld Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)

I have the same issue that I have seen other users report about permissions on the mysql folder in the percona image, but I have it in Kubernetes, and I am not sure exactly how to chown the volume before the container starts.
This is the YAML:
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    app: db
    k8s-app: magento
spec:
  selector:
    app: db
  ports:
  - name: db
    port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  selector:
    matchLabels:
      app: db
  serviceName: db
  template:
    metadata:
      labels:
        app: db
        k8s-app: magento
    spec:
      containers:
      - args:
        - --max_allowed_packet=134217728
        - "--ignore-db-dir=lost+found"
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: data
        env:
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_NAME
        - name: MYSQL_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_PASS
        - name: MYSQL_USER
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_USER
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_ROOT_PASS
        image: percona:5.7
        name: db
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
      restartPolicy: Always
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
Same issue, but in Docker:
Docker-compose : mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
How to fix it in Kubernetes?
I found this solution and it works:
initContainers:
- name: take-data-dir-ownership
  image: alpine:3
  # Give the `mysql` user (uid/gid 999) ownership of the mounted volume.
  # https://stackoverflow.com/a/51195446/4360433
  command:
  - chown
  - -R
  - 999:999
  - /var/lib/mysql
  volumeMounts:
  - name: data
    mountPath: /var/lib/mysql
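An alternative sketch (not from the answer above, swapping in a different mechanism): let the kubelet set group ownership on the volume through a pod-level securityContext instead of chown-ing in an init container. 999 is the mysql user/group ID the answer above uses; verify it matches your image:
spec:
  securityContext:
    # The kubelet applies this group to the volume and makes it
    # group-writable when the pod starts.
    fsGroup: 999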

How to use ytt to add a sidecar container to an existing Kubernetes Deployment?

I would like to add a fluent-bit agent as a sidecar container to an existing Istio Ingress Gateway Deployment that is generated via external tooling (istioctl). I figured using ytt and its overlays would be a good way to accomplish this since it should let me append an additional container to the Deployment and a few extra volumes while leaving the rest of the generated YAML intact.
Here's a placeholder Deployment that approximates an istio-ingressgateway to help visualize the structure:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  template:
    metadata:
      labels:
        app: istio-ingressgateway
    spec:
      containers:
      - args:
        - example-args
        command: ["example-command"]
        image: gcr.io/istio/proxyv2
        imagePullPolicy: Always
        name: istio-proxy
      volumes:
      - name: example-volume-secret
        secret:
          secretName: example-secret
      - name: example-volume-configmap
        configMap:
          name: example-configmap
I want to add a container to this that looks like:
- name: fluent-bit
  image: fluent/fluent-bit
  resources:
    limits:
      memory: 100Mi
    requests:
      cpu: 10m
      memory: 10Mi
  volumeMounts:
  - name: fluent-bit-config
    mountPath: /fluent-bit/etc
  - name: varlog
    mountPath: /var/log
  - name: dockercontainers
    mountPath: /var/lib/docker/containers
    readOnly: true
and volumes that look like:
- name: fluent-bit-config
  configMap:
    name: ingressgateway-fluent-bit-forwarder-config
- name: varlog
  hostPath:
    path: /var/log
- name: dockercontainers
  hostPath:
    path: /var/lib/docker/containers
I managed to hack something together by modifying the overlay files example in the ytt playground, which looks like this:
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "istio-ingressgateway"}}), expects=1
---
spec:
  template:
    spec:
      containers:
      #@overlay/append
      - name: fluent-bit
        image: fluent/fluent-bit
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 10Mi
        volumeMounts:
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true

#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "istio-ingressgateway"}}), expects=1
---
spec:
  template:
    spec:
      volumes:
      #@overlay/append
      - name: fluent-bit-config
        configMap:
          name: ingressgateway-fluent-bit-forwarder-config
      #@overlay/append
      - name: varlog
        hostPath:
          path: /var/log
      #@overlay/append
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers
What I am wondering, though, is what is the best, most idiomatic way of using ytt to do this?
Thanks!
What you have now is good! The one suggestion I would make is that, if the volumes and containers always need to be added together, they be combined into the same overlay, like so:
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "istio-ingressgateway"}}), expects=1
---
spec:
  template:
    spec:
      containers:
      #@overlay/append
      - name: fluent-bit
        image: fluent/fluent-bit
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 10Mi
        volumeMounts:
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      #@overlay/append
      - name: fluent-bit-config
        configMap:
          name: ingressgateway-fluent-bit-forwarder-config
      #@overlay/append
      - name: varlog
        hostPath:
          path: /var/log
      #@overlay/append
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers
This will guarantee that any time the container is added, the appropriate volumes are included as well.
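For completeness, a typical invocation (file names are assumptions): with the generated manifest in deployment.yaml and the overlay in overlay.yaml, run
ytt -f deployment.yaml -f overlay.yaml
ytt applies the overlay to every matching document across the supplied files and prints the result to stdout.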

Grafana dashboard not showing data (on EKS)

I have deployed Prometheus and Grafana on a Kubernetes cluster (EKS) with the following manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: kube-monitoring
  labels:
    app: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:6.3.2
        imagePullPolicy: IfNotPresent
        env:
        - name: GF_SECURITY_ADMIN_USER
          value: admin
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana-secret
              key: admin-password
        ports:
        - containerPort: 3000
        resources:
          limits:
            cpu: 500m
            memory: 2500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          exec:
            command:
            - wget
            - localhost:3000
            - --spider
          initialDelaySeconds: 30
          periodSeconds: 30
        volumeMounts:
        - mountPath: /var/lib/grafana
          subPath: grafana
          name: grafana-storage
          readOnly: false
        - mountPath: /etc/grafana/provisioning/datasources/
          name: grafana-datasource-conf
          readOnly: true
        - mountPath: /etc/grafana/provisioning/dashboards/
          name: grafana-dashboards-conf
          readOnly: false
        - mountPath: /var/lib/grafana/dashboards/0/k8s-cluster-usage
          name: grafana-dashboard-k8s-cluster-usage
          readOnly: false
        - mountPath: /var/lib/grafana/dashboards/0/k8s-cluster-nodes
          name: grafana-dashboard-k8s-cluster-nodes
          readOnly: false
        - mountPath: /var/lib/grafana/dashboards/0/k8s-core-dns
          name: grafana-dashboard-k8s-core-dns
          readOnly: false
      securityContext:
        runAsUser: 472
        fsGroup: 472
      restartPolicy: Always
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: pvc-grafana
      - name: grafana-datasource-conf
        configMap:
          name: grafana-datasource-conf
          items:
          - key: datasource.yaml
            path: datasource.yaml
      - name: grafana-dashboards-conf
        configMap:
          name: grafana-dashboards-conf
          items:
          - key: dashboards.yaml
            path: dashboards.yaml
      - name: grafana-dashboard-k8s-cluster-usage
        configMap:
          name: grafana-dashboard-k8s-cluster-usage
      - name: grafana-dashboard-k8s-cluster-nodes
        configMap:
          name: grafana-dashboard-k8s-cluster-nodes
      - name: grafana-dashboard-k8s-core-dns
        configMap:
          name: grafana-dashboard-k8s-core-dns
and the dashboard config is https://pastebin.com/zAYn9BhY (it's too long to paste inline).
Of these dashboards, Core DNS and Cluster Usage show proper data and graphs, but Cluster Nodes doesn't show any data; every metric says "No data points".
Can anyone help here?
Cluster Nodes won't show you any metrics, because you are probably missing metrics-server.
If you are starting with the whole Prometheus stack, I would consider using prometheus-operator deployed via Helm. It is a little overwhelming, but you can get started with it fairly easily, and prometheus-operator will deploy metrics-server too.
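A minimal sketch of that route (repo, release, and namespace names are assumptions; the chart was historically stable/prometheus-operator and is now published as kube-prometheus-stack):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install kube-prometheus prometheus-community/kube-prometheus-stack --namespace kube-monitoring
Among other things this deploys the operator, Prometheus, node-exporter, and kube-state-metrics, which is where node-level dashboards such as "Cluster Nodes" usually get their data.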